Investigation of Precursors in VLF Subionospheric Signals Related to Strong Earthquakes (M>7) in Western China and Possible Explanations: Earthquakes may disturb the lower ionosphere through various coupling mechanisms during their seismogenic and coseismic periods. The VLF signal radiated from ground-based transmitters is affected when it passes near the disturbed region above the seismogenic area, and this anomaly can be recorded by ground-based VLF receivers. In this paper, seismic anomalies before two strong earthquakes (M>7) that occurred in western China were detected using ground-based observations of the VLF signal, and the possible reasons for the anomalies were discussed using full-wave simulation. The amplitudes of the VLF signals observed on the links between the NOV and KHA transmitters and the VLF receivers at Ya'an and Tonghai show obvious anomalies in the nighttime fluctuation analysis. The simulated results demonstrate that the anomalies could have been induced by ascent or descent of the bottom height of the ionosphere, caused by depletion or increase of the D region electron density. The simulation also illustrates that a terminator time shift can be induced by descent of the bottom boundary of the ionosphere, owing to modal interference between different wave modes.

Introduction
VLF (Very Low Frequency) radio waves radiated from powerful ground-based transmitters are reflected alternately by the Earth and the lower ionosphere, forming a 'zigzag' path within the lithosphere-ionosphere waveguide [1]. Abnormal variation in the ionosphere, especially in the lower ionosphere, can result in abnormal variation of the amplitude and phase recorded by VLF receivers [2][3][4][5]. There are two methods to detect pre-seismic effects on VLF navigational transmitter signals, the terminator time (TT) shift and nighttime fluctuations (NF), which have been widely used over the past years [23,26]. The TT shift corresponds to shifts of the sunrise and sunset times in the continuous record of the amplitude and phase of the VLF signal before an earthquake, which indicates that variations of the D region of the ionosphere can exist before the earthquake. Studies of the 1995 Kobe earthquake [27], the 2004 Sumatra earthquake [28], the 2015 Nepal earthquake [29], and other earthquakes [25] have all reported this phenomenon. Additionally, some researchers have reported that nighttime fluctuations of VLF/LF transmitter signals can indicate seismic anomalies [28][29][30][31]. Statistical analyses also demonstrate that nighttime fluctuations of the received VLF signal correlate with strong earthquakes (M>6) [26,24]. The TT shift and NF of the amplitude or phase of VLF signals are usually attributed to variation of the bottom boundary of the ionosphere [26,29,32,33]. How this variation affects the received VLF signal still needs to be demonstrated. Lehtinen and Inan [34] proposed a full-wave finite element method to solve the electromagnetic field in the waveguide and the ionosphere simultaneously [35,36]; because it is based on the recursive calculation of reflection coefficients and mode amplitudes, it is inherently stable against the numerical 'swamping' problem. This method can be used to study the TT shift and NF of the VLF signal in the waveguide between the transmitter and the receiver.
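To make the numerical-stability point concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of recursive reflection-coefficient evaluation for a scalar wave at normal incidence in a stack of isotropic layers. The actual method of Lehtinen and Inan works with reflection matrices in an anisotropic magnetized plasma at oblique incidence; the sketch only illustrates why propagating bounded reflection coefficients through the stack, instead of raw field amplitudes, avoids the exponentially growing ('swamping') solutions.

```python
import numpy as np

def layered_reflection(n_layers, thicknesses, k0):
    """Toy scalar analogue of recursive reflection-coefficient evaluation.

    n_layers    : complex refractive index of each layer, bottom to top
                  (use n' - jn'' for a lossy layer with the e^{jwt} convention)
    thicknesses : layer thicknesses in metres, same length as n_layers
                  (the bottom entry is unused; the top layer acts as a half-space)
    k0          : free-space wavenumber in rad/m

    Returns the reflection coefficient seen from the bottom layer. Because only
    bounded ratios are propagated downward, no exponentially growing terms
    appear, unlike naive field matching across the layers.
    """
    kz = k0 * np.asarray(n_layers, dtype=complex)   # vertical wavenumbers (normal incidence)
    R = 0.0 + 0.0j                                  # nothing comes back down from the top half-space
    for i in range(len(n_layers) - 2, -1, -1):
        # Fresnel coefficient of the single interface between layer i and layer i+1.
        r = (kz[i] - kz[i + 1]) / (kz[i] + kz[i + 1])
        # Round-trip factor across layer i+1 (decays for a lossy layer).
        phase = np.exp(-2j * kz[i + 1] * thicknesses[i + 1])
        # Combine this interface with everything above it.
        R = (r + R * phase) / (1.0 + r * R * phase)
    return R

# Example: two vacuum layers below a lossy "ionosphere-like" half-space at 11.9 kHz.
k0 = 2 * np.pi * 11.9e3 / 3e8
print(abs(layered_reflection([1.0, 1.0, 1.2 - 0.3j], [0.0, 5e3, 5e3], k0)))
```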
We established a full-wave model similar to that of Lehtinen and Inan [34] to simulate the electromagnetic field radiated from ground-based VLF transmitters, which has been used to verify the electromagnetic data from the China Seismo-Electromagnetic Satellite (CSES) [37]. Two devastating earthquakes (the Ms 7.1 Yushu earthquake and the Ms 7.0 Lushan earthquake) occurred on April 14, 2010 and April 20, 2013 in western China, respectively, claiming thousands of lives. Electromagnetic field data from the DEMETER satellite have been used to extract earthquake-related anomalies in VLF signals from terrestrial VLF transmitters before these strong earthquakes [38,39]. However, very few studies have searched for anomalies in ground-based observations. In this paper, electromagnetic anomalies before these two earthquakes were studied by examining ground VLF receiver data with the NF method, and the factors that could induce these anomalies were investigated by full-wave simulation for the first time. Fortunately, since December 2009, we have operated two ALPHA monitoring stations, which record the amplitude and phase of the electric field radiated by the Russian ALPHA navigation system, comprising three ground VLF transmitters (KRA, NOV, KHA). The electric field data from these stations can be used to detect anomalies associated with these two earthquakes. In this paper, the ALPHA monitoring data were investigated to find anomalies before the Ms 7.1 Yushu earthquake in 2010 and the Ms 7.0 Lushan earthquake in 2013. We also constructed a full-wave model to analyze the possible factors that could induce the VLF signal anomalies. A brief description of the instruments, data, and full-wave method is presented in Section 2. The anomalies in the observed data before the Yushu and Lushan earthquakes are investigated by the NF method in Section 3. In Section 4, the full-wave model is used to simulate the possible factors causing the anomalies. Discussion and conclusions are presented in Sections 5 and 6.

Instruments and Data
The VLF monitoring stations are located at Ya'an (a city in Sichuan Province, China) and Tonghai [40]. Six observation paths are obtained by connecting the transmitting stations and the receiving stations. In addition, only the amplitude of the received signal is used in this paper, because the phase data are contaminated by environmental noise. As shown in Figure 1, there are six detecting paths between the three transmitters and Ya'an and Tonghai, where the red and blue ellipses denote the extent of the fifth Fresnel region at frequency F1 (11.9 kHz). The electromagnetic wave propagates through the Fresnel ellipse, whose foci are located at the transmitting and receiving points. In this study, the fifth Fresnel region is considered the effective detection region [24,41]. The maximum radius of the nth Fresnel region corresponds to the semi-minor axis of the Fresnel ellipsoid and can be written as r_n = (1/2)sqrt(n*lambda*d), where d is the distance from the transmitter to the receiver and lambda is the wavelength. Thus, a lower frequency corresponds to a longer wavelength and therefore a larger radius of the Fresnel region and a larger detecting range, which means the detecting range at F1 (11.9 kHz) is the largest. The instrument used in each monitoring station is called the "CJA-1 alpha amplitude and phase monitor". It can receive the three frequency signals from the NOV, KRA, and KHA transmitters simultaneously with high measurement accuracy. The specific working mode is described as follows.
The transmit cycle of these three transmitters is 3.6 s. Every cycle is divided into six periods (denoted T1-T6). The signals transmitted from KHA occupy T1 (at F1), T6 (at F2), and T5 (at F3); the signals from NOV occupy T4 (at F1), T5 (at F2), and T6 (at F3); and the signals from KRA occupy T6 (at F1), T1 (at F2), and T4 (at F3). In each period, the amplitude, phase, and SNR (signal-to-noise ratio) of the signals are recorded. It should be noted that the instruments record the data in Beijing time (UT+8 h).

Full Wave Method
A full-wave method is used to solve the Maxwell equations for plane waves varying as e^(jwt) in a horizontally stratified medium with a fixed dielectric permittivity tensor epsilon and permeability mu in each layer. The solution of the Maxwell equations is given as a linear combination of plane waves varying as e^(-j k_perp . r_perp), where k_perp is the horizontal component of the wave vector k, which is conserved by Snell's law inside each layer; we have

curl E = -j w mu mu0 H, curl H = j w eps0 epsilon E, (2)

where w is the angular frequency, mu is the permeability of the medium (mu = 1 for a non-magnetic medium), epsilon = eps0 (I + chi) is the dielectric tensor, and chi is the electric susceptibility tensor [42]. chi is determined by the electron density and collision frequency in the ionosphere, as well as by the geomagnetic field. In our simulation, the electron density is obtained from the International Reference Ionosphere (IRI) model [43], and the electron collision frequency (denoted by nu) is calculated from an exponential decay law with increasing height (denoted by h). The parameters of the geomagnetic field at the location of the VLF transmitter are calculated with the International Geomagnetic Reference Field (IGRF) model [44]. Eliminating the z components from Equation (2), the following first-order form of the Maxwell equations is obtained:

dF/dz = -j k0 T F, (3)

where F is the four-component vector of horizontal field components and k0 is the free-space wavenumber. The matrix T is different in every layer of the ionosphere but is the same in the waveguide. Thus, the reflection coefficient is the same in the waveguide, but the amplitude differs at different altitudes. The electromagnetic field in each layer can be obtained in the k (wave vector) domain by solving Equation (3) recursively [34,45], and different wave modes are represented by different selected k. On this basis, the electromagnetic field is solved both in the waveguide and in the ionosphere. It should be noted that we use a fixed dielectric permittivity and permeability in the horizontal direction in our model, because the results of the IRI model show that the horizontal gradients of the lower ionosphere at night along the VLF propagation paths are small. More details of the full-wave method are described in Lehtinen and Inan [34].

VLF Signal Analysis from the CJA-1 Monitoring Station
From Figure 1, we can see that the epicenter of the Yushu earthquake is located in the detecting regions between the NOV transmitter and Ya'an and Tonghai. As noted above, the detecting range is larger when the transmitting frequency is lower; therefore, the signal at frequency F1 (11.9 kHz) is analyzed first. To search for anomalies before the Yushu earthquake of April 14, 2010, data from April 1 to 20, 2010 are selected. For the Lushan earthquake of April 20, 2013, data from April 5 to 21, 2013 are selected. As mentioned above, only the amplitude of the electric field is analyzed, because the phase data are of poor quality.
Two well-documented techniques (terminator time and nighttime fluctuation), mentioned in the Introduction, are used to extract the seismic-related anomalies from the ground-based VLF data. It is often suggested that the nighttime fluctuation method is best suited for medium and long TRGCPs (Transmitter-Receiver Great Circle Paths), while the terminator time method is more suitable when the transmitter and the receiver are located close to each other (<1000 km) ([26] and references therein). Because the distances between the VLF transmitters and the receivers in this paper exceed 1000 km, only the nighttime fluctuation method is used to search for seismic anomalies of these two earthquakes.

Night Fluctuation (NF) Observation of the Yushu Earthquake
For the NF analysis, we took 7 hours of nighttime VLF amplitude data during 21:00-04:00 LT (22:00-05:00 Beijing time) for 20 days from 01 April 2010 to 20 April 2010. As explained and used by Maurya et al. [29] for the 2015 Nepal earthquakes, we estimated three statistical parameters for the NF analysis. First, we take the running mean of the data over 3 days (window length 3 days) following Maurya et al. [22] and then calculate the difference dA(t) for a particular day as dA(t) = A(t) - <A(t)>, where A(t) is the VLF amplitude at time t on that day and <A(t)> is the average value at the same time t over the 3-day window, for each day from 01 April to 20 April 2010. An example of A(t), <A(t)>, and dA(t) on the path between NOV and Ya'an is shown in Figure 2. The red solid line represents A(t) on the current day (April 7 in Figure 2); the blue solid line represents the average value <A(t)> of the previous three days. The black line is dA(t), and the circles in Figure 2b mark the data selected during 21:00-04:00 LT (22:00-05:00 Beijing time). The three parameters for the NF analysis are estimated from the difference dA(t) and are defined as: (1) trend (T), the average of the nighttime amplitude difference dA(t) for each day; (2) dispersion (D), the standard deviation of the nighttime amplitude difference dA(t) for each day; and (3) nighttime fluctuation (F), the sum of (dA(t))^2 over the relevant night hours, which gives one value per day (a short sketch of this computation is given below). It should be noted that some data (e.g., the nighttime data between 1:00 and 2:00 Beijing time on April 12, see Supplementary Figure 1) descend to the noise level because the transmitter was off at that time; these data have been excluded from our nighttime fluctuation analysis. First, we consider the paths between NOV and Ya'an and Tonghai at F1, because the epicenter of the Yushu earthquake lies directly within these two paths. The nighttime fluctuation analysis for the Yushu earthquake of April 14, 2010 is shown in Figure 3. Figures 3a-3f show the trend (T), dispersion (D), and fluctuation (F) for these two paths, respectively. The horizontal line in each panel denotes the two-standard-deviation criterion (2*sigma) used to define an anomalous day; in a normal distribution, the probability that a value exceeds the mean plus two standard deviations is less than 5%. The parameter fluctuation (F, Figures 3c,f) exhibits an increase exceeding the 2*sigma_F criterion line on April 7, 2010 on both paths between NOV and Ya'an and Tonghai. The parameters T and D (Figures 3a, 3b, and 3e) also exhibit similar increases exceeding the 2*sigma_T and 2*sigma_D criteria.
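The three NF parameters can be computed directly from the nighttime residuals dA(t). The sketch below is a minimal illustration under the definitions above; the array layout, the preceding 3-day window, and the synthetic data are our assumptions, not taken from the CJA-1 processing chain.

```python
import numpy as np

def nf_parameters(night_amplitude):
    """Compute the NF statistics per day from nighttime VLF amplitudes.

    night_amplitude : 2-D array, shape (days, samples_per_night), the amplitude A(t)
                      restricted to the 21:00-04:00 LT window, with noise-level
                      gaps already replaced by NaN.
    Returns trend T, dispersion D and fluctuation F, one value per day.
    """
    days = night_amplitude.shape[0]
    T, D, F = (np.full(days, np.nan) for _ in range(3))
    for d in range(3, days):                                  # need 3 preceding days
        background = np.nanmean(night_amplitude[d - 3:d], axis=0)   # <A(t)>
        dA = night_amplitude[d] - background                        # dA(t)
        T[d] = np.nanmean(dA)          # trend: nightly mean of dA(t)
        D[d] = np.nanstd(dA)           # dispersion: nightly std of dA(t)
        F[d] = np.nansum(dA ** 2)      # fluctuation: sum of dA(t)^2 over the night
    return T, D, F

def anomaly_threshold(x):
    """2-sigma criterion line used to flag anomalous days."""
    return np.nanmean(x) + 2 * np.nanstd(x)

# Synthetic example: 20 nights of 1-minute samples over 7 hours.
rng = np.random.default_rng(0)
A = 60 + rng.normal(0, 0.5, size=(20, 7 * 60))
A[6] += 2.0                            # inject one disturbed night
T, D, F = nf_parameters(A)
print(np.flatnonzero(F > anomaly_threshold(F)))   # indices of anomalous days
```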
It should be noted that the trend on the path between NOV and Ya'an in Figure 3d is small and does not exceed 2*sigma_T. This is because the amplitude during 22:00-24:00 BJT is less than the running mean value (dA(t) negative) but greater than the running mean value during 0:00-5:00 BJT (dA(t) positive, as shown in Figure 2), which makes the average of dA(t) (the trend between NOV and Ya'an) small during 22:00-05:00 BJT on April 7. The possibility of this behavior is demonstrated by our simulation in Section 4. These anomalies were also detected on these two paths at F2 and F3 on the same day by the nighttime fluctuation analysis (Supplementary Figure 2). We also checked the Kp and Dst indices for April 1-20 (Figure 4). The red dashed lines denote Kp = 3 and Dst = -30 nT, and the green histograms and curves indicate high geomagnetic and space weather activity. As can be seen, there was a moderate storm (-100 nT < Dst < -50 nT) during April 5-7. Generally, such moderate storms are very unlikely to affect the D region of the ionosphere [29]. The study by Kumar and Kumar [46] on geomagnetic storm effects indicated that only storms stronger than Dst = -140 nT produce a detectable effect on VLF signals. We also checked the VLF signals during storms from 2010 to 2012; no obvious anomalies were detected by the NF method. For the event that occurred on April 6 and 7, we also used COSMIC data to obtain the electron density in the lower ionosphere and checked its possible effect on the D region. The COSMIC data show no obvious anomalies in the lower ionosphere at nighttime on April 6 and 7. In addition, we also conducted the nighttime fluctuation analysis excluding the data during the geomagnetic storms (04/06-07 and 04/12). The results show that an anomaly occurred on April 8 (see Supplementary Figure 3) on the detecting paths between NOV and Ya'an and Tonghai. This indicates that the possible seismic anomalies may have started on April 6 and lasted until April 8. In sum, we can speculate that the anomalies of the ground VLF signal on April 7 are not related to geomagnetic and space weather activity and were possibly caused by seismic activity. Other factors that could induce the anomalies are discussed in Section 5.1. According to the formula of Dobrovolsky [40], the radius of the earthquake preparation zone is rho = 10^(0.43M) km. For the Yushu earthquake (M = 7.1), this gives roughly 1100 km (see the short calculation after this subsection). According to the distribution of transmitters and receivers in Figure 1, and considering that the signal-to-noise ratio is lower between KRA and Ya'an/Tonghai, we only selected the path between KHA and Ya'an to analyze seismic anomalies. The nighttime fluctuation analysis of this path is shown in Figure 5. The parameter fluctuation (F, Figure 5c) exhibits an increase exceeding the 2*sigma_F criterion line on April 10, 2010 at F1. The parameter T (Figure 5a) also exhibits a similar increase exceeding the 2*sigma_T criterion. The parameter D (Figure 5b) is close to, but does not strictly exceed, 2*sigma_D. However, parameter D is less effective for determining anomalies over a period, because it only represents the dispersion of dA(t) within one day. The parameter fluctuation of KHA-Ya'an at F2 and F3 (Figures 5f,i) exhibits increases exceeding the 2*sigma_F criterion on April 12, 2010. The other parameters T and D (Figures 5d, e, g, h) also exhibit similar increases exceeding 2*sigma, which are marked by red arrows in Figure 5.
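As a quick check of the detection geometry, the short calculation below evaluates the Dobrovolsky preparation radius and the fifth-Fresnel-zone half-width at F1 over a path length comparable to NOV-Ya'an (about 2,700 km, the value quoted in Section 4); the numbers are illustrative only.

```python
import math

def dobrovolsky_radius_km(magnitude):
    """Earthquake preparation-zone radius, rho = 10**(0.43*M), in km."""
    return 10 ** (0.43 * magnitude)

def fresnel_max_radius_km(n, freq_hz, path_km):
    """Maximum radius of the nth Fresnel zone, r_n = 0.5*sqrt(n*lambda*d)."""
    wavelength_km = 3.0e5 / freq_hz          # speed of light in km/s over frequency
    return 0.5 * math.sqrt(n * wavelength_km * path_km)

print(round(dobrovolsky_radius_km(7.1)))                # ~1130 km for the Yushu earthquake
print(round(dobrovolsky_radius_km(7.0)))                # ~1023 km for the Lushan earthquake
print(round(fresnel_max_radius_km(5, 11.9e3, 2700)))    # ~292 km half-width at F1
```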
Night Fluctuation (NF) Observation of the Lushan Earthquake
For the NF analysis of the Lushan earthquake, we used 7 hours of nighttime VLF amplitude data during 21:00-04:00 LT (22:00-05:00 Beijing time) for 16 days from 05 April 2013 to 21 April 2013. Because there are no recorded data from the Tonghai station, we only consider the path between NOV and Ya'an. Fortunately, the Ya'an station is very close to the epicenter of the Lushan earthquake, so remarkable electromagnetic anomalies were observed on the night before the Lushan earthquake, as shown in Figure 6. The red rectangular box denotes the nighttime fluctuation, and the red vertical bar marks the time of the Lushan earthquake. We conducted the same NF analysis as in the previous section. The nighttime fluctuation analysis for the Lushan earthquake of April 20, 2013 is shown in Figure 7. Figures 7a-7c show the trend (T), dispersion (D), and fluctuation (F), respectively. The parameters (T, D, F) exhibit increases exceeding the 2*sigma criterion line on April 19 and 20, 2013 on the path between NOV and Ya'an, corresponding to the fluctuation on the night of April 19 and before dawn on April 20. The anomalies detected for the Lushan earthquake are much more significant than those for the Yushu earthquake, probably because the receiver is very close to the epicenter of the Lushan earthquake. We also checked the Kp and Dst indices for April 5-25, 2013 (Figure 8). The pink dashed lines denote Kp = 3 and Dst = -30 nT, and the green histograms and curves indicate high geomagnetic and space weather activity. As can be seen, the period before the Lushan earthquake was geomagnetically quiet, which indicates that the detected anomalies are not related to geomagnetic or space weather activity and are highly likely related to the Lushan earthquake.

The Possible Reasons Causing the Anomalies
In this section, the full-wave model is used to find the possible reasons causing the anomalies. According to the results in Section 3, the largest anomalies for these two earthquakes occur on the path between NOV and Ya'an at F1 (11.9 kHz) and F3 (14.9 kHz). Thus, in this section, we conduct the simulation on this path at 11.9 kHz to seek possible explanations for these anomalies (the simulated result at 14.9 kHz is the same as that at 11.9 kHz). First, we simulate the Ez (vertical component of the electric field) amplitude over the 24 hours of April 9 and compare it with the observed data. The top panel of Figure 9 shows the continuous Ez amplitude observed by CJA-1 on April 9, 2010. It shows that the electric field is stronger at nighttime. There are two obvious troughs in amplitude at about 7:00 and 20:00, which are the terminator times corresponding to sunrise and sunset. The characteristic minima around local sunrise and sunset are generated by modal interference between the different modes [29]. The bottom panel shows the calculated Ez at 2,700 km from the NOV transmitter (comparable with the NOV-Ya'an distance) on the same day. The calculation also shows that the electric field is stronger at nighttime, and the difference between nighttime and daytime is about 1 dB, slightly smaller than the observed day-night variation in the top panel (note that the instrument reading is not the absolute value of the electric field).
Under the original IRI profile, the simulated terminator times are LT 3:00 and 21:00 (blue solid line in the bottom panel of Figure 9), because the daytime electron density referenced from IRI spans 4:00 to 20:00 LT (with the bottom height of the ionosphere at 65 km). If we keep the bottom height of the ionosphere at 80 km from LT 4:00 to 7:00 in the IRI profile, which is equivalent to shifting the sunrise time to LT 7:00, the simulated terminator time shifts to LT 7:00 (red solid line with pentagrams in the bottom panel of Figure 9). This indicates that the bottom boundary of the ionosphere controls the terminator time. Details of the terminator time shift are discussed in Section 5.2. Second, we conduct simulations to examine the factors that could induce the nighttime fluctuation. Figures 2, 3, 5, and 7 show anomalies of the Ez component obtained with the nighttime fluctuation method before the Yushu and Lushan earthquakes. Generally, this abnormal fluctuation is attributed to an ionospheric disturbance, which could arise for two possible reasons. One is a variation of the reflection height of the waveguide (the bottom height of the ionosphere): a variation in D region electron density makes the bottom boundary of the ionosphere lower or higher, where a descent of the bottom boundary can be caused by an increase of electron density and an ascent can be induced by a depletion of electron density in the D region. The other possible factor is an electron density variation higher in the ionosphere, which may be validated by variations in TEC and foF2. First, we set different bottom heights of the ionosphere (reflection heights of the waveguide) to simulate the variation of the electric field at the ground receivers, and then check the influence of this kind of disturbance on the nighttime ground electric field. The bottom height of the lower-ionosphere electron density obtained from IRI is 65 km from 4:00 to 20:00 LT, corresponding to a daytime electron density profile, and 80 km during 0:00-3:00 and 21:00-24:00 LT. Generally, the variation of the effective reflection height for VLF waves is about 2 km. Thus, in our simulation, we set the bottom height of the lower ionosphere to 78 km and 82 km during 22:00-02:00 LT to simulate the Ez component, corresponding to a 2-km variation of the effective reflection height of the lower ionosphere (a sketch of how such perturbation scenarios can be encoded is given below). Figure 10 shows the simulated Ez 2,700 km from NOV (comparable with the NOV-Ya'an distance) on April 7, 2010, under different reflection heights of the ionosphere during 22:00-2:00 LT. The blue solid line in Figure 10 is obtained under the original IRI profile (bottom height of the ionosphere at 80 km). The red solid line with pentagrams in Figure 10 is obtained with the reflection height set to 82 km, representing a depletion of D region electron density that raises the bottom boundary of the ionosphere. The black solid line with asterisks in Figure 10 is obtained with the reflection height set to 78 km, representing an increase of D region electron density that lowers the bottom boundary of the ionosphere. As can be seen, when the reflection height descends to 78 km, the electric field decreases during 22:00-00:00 LT but increases from 0:00 to 02:00 LT at this observation point; this simulated result is similar to the observation in Figure 3a. However, when the reflection height ascends to 82 km, the electric field decreases throughout the simulated nighttime.
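A minimal sketch of how these perturbation scenarios can be encoded as inputs to a full-wave run is shown below. The profile shape, the reference densities, and the unmagnetized scalar susceptibility are illustrative assumptions for this sketch only; the paper's simulations use IRI electron densities, an exponential collision-frequency law, and the full magnetized-plasma susceptibility tensor.

```python
import numpy as np

E_CHARGE, E_MASS, EPS0 = 1.602e-19, 9.109e-31, 8.854e-12

def nighttime_profile(h_km, bottom_km=80.0):
    """Toy nighttime D/E-region electron density (m^-3) with an adjustable bottom height.

    Setting bottom_km to 78 or 82 km mimics the +/-2 km reflection-height
    perturbation used in the simulations; the densities are illustrative only.
    """
    ne = 1.0e8 * np.exp(0.3 * (h_km - bottom_km))    # exponential rise above the bottom
    ne[h_km < bottom_km] = 0.0                        # no free electrons below the bottom
    return ne

def collision_frequency(h_km):
    """Electron collision frequency decaying exponentially with height (illustrative constants)."""
    return 1.0e11 * np.exp(-0.15 * h_km)

def susceptibility(ne, nu, freq_hz):
    """Cold-plasma electric susceptibility, neglecting the geomagnetic field (scalar chi)."""
    w = 2 * np.pi * freq_hz
    wp2 = ne * E_CHARGE ** 2 / (EPS0 * E_MASS)        # plasma frequency squared
    return -wp2 / (w * (w - 1j * nu))                 # the full model uses a tensor instead

h = np.arange(60.0, 101.0, 1.0)                       # altitude grid in km
for bottom in (78.0, 80.0, 82.0):                     # the three scenarios of Figure 10
    chi = susceptibility(nighttime_profile(h, bottom), collision_frequency(h), 11.9e3)
    print(bottom, np.round(np.abs(chi[h == 85.0][0]), 2))
```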
Furthermore, we calculate the difference in Ez amplitude between the case with the changed bottom height of the ionosphere and the original ionosphere profile (Figure 11). The black solid line with asterisks is the difference in Ez amplitude when the bottom height of the ionosphere is set to 78 km while the original is 80 km. When the bottom height of the ionosphere descends by 2 km, the Ez difference is less than 0 during 22:00-24:00 LT (i.e., below the original background value), then increases during 0:00-2:00 LT and becomes greater than 0 from 1:00-2:00 LT. Because the original nighttime IRI profile covers 21:00-3:00 LT (7 hours), we only simulate the electric field during 22:00-2:00 LT (5 hours); the electric field after 02:00 is not simulated. The difference is thus less than 0 in the early night and becomes greater than 0 before dawn. This trend is similar to the observation in Figure 11b, in which dA(t) (the difference between the observed value on the current day and the background value) is less than 0 in the early night and greater than 0 before dawn. The red solid line with asterisks is the difference in Ez amplitude when the bottom height of the ionosphere is set to 82 km while the original is 80 km; the difference is always less than 0 in this case. Thus, a more plausible explanation for this event is that the anomaly was induced by an increase of D region electron density caused by seismogenic activity, which lowered the bottom height of the lower ionosphere; however, this needs to be confirmed by more observations in future studies. Second, we set different electron densities in the ionosphere to simulate the variation of the electric field at the ground receivers, and then check the influence of this kind of disturbance on the nighttime ground electric field. Here, we increase and decrease the electron density (from 100 km altitude up to the F2 region) by a factor of three compared with the original IRI electron density during 22:00-02:00 LT. Although this is a dramatic variation for the nighttime electron density, the resulting variation in Ez amplitude is not significant compared with the results obtained when the bottom height of the ionosphere changes. When the electron density increases, the amplitude decreases at this observation point (2,700 km from the NOV transmitter), and vice versa (Figure 12). These simulated results also indicate that the anomalies detected by the nighttime fluctuation method are more plausibly induced by an increase of D region electron density, which lowers the bottom height of the lower ionosphere. Figure 12. The red solid line with asterisks represents the difference in Ez amplitude between the case with the electron density (from 100 km to the F2 region) set to three times the original and the original IRI profile. The blue solid line with asterisks represents the difference in Ez amplitude between the case with the electron density (from 100 km to the F2 region) set to one-third of the original and the original IRI profile.

Other Factors That May Induce Disturbances in the Ionosphere
Lightning, geomagnetic storms, solar flares, and other natural sources may induce disturbances in the lower ionosphere [29,[47][48][49]. From the Dst and Kp indices (Figure 4), we can see that there were two moderate geomagnetic storms (-100 nT < Dst < -50 nT) before the Yushu earthquake. Generally, moderate storms are highly unlikely to affect the D region of the ionosphere [29,49]. The study by Kumar and Kumar [46] on geomagnetic storm effects indicated that only storms stronger than Dst = -140 nT produce a detectable effect on VLF signals.
In this study, the minimum Dst during the geomagnetic storms was -80 nT, which means that these storms are highly unlikely to have affected the D region of the ionosphere. We also used COSMIC data to obtain the electron density in the lower ionosphere; the results show no obvious anomalies in the lower ionosphere at nighttime on April 7. Furthermore, we also conducted the nighttime fluctuation analysis excluding the data during the geomagnetic storms (04/06-07 and 04/12, 2010). The results show that an anomaly occurred on April 8 (see Supplementary Figure 3). This indicates that the possible seismic anomalies may have started on April 6 and lasted until April 8, 2010. The above characteristics therefore indicate that the anomaly found in the nighttime fluctuation analysis for the Yushu earthquake is unrelated to geomagnetic storms. In addition, geomagnetic and space weather activity was very quiet before the Lushan earthquake, so the anomalies before the Lushan earthquake should also be unrelated to geomagnetic storms. The early/fast events of lightning flashes produce a vertical electric field, which can map to the lower ionosphere and lower its effective reflection height for VLF waves [50]. Lightning flashes are very rare in our research region (only four events from February 2010 to April 2010, and almost no events in April 2013, along the detecting path between NOV and Ya'an, according to the following website: https://lightning.nsstc.nasa.gov/nlisib/nlissearch.pl?coords=?579,18); thus, the effect of lightning can be ignored in this study. As is known, solar flares can induce ionospheric D region perturbations that influence subionospheric VLF/LF propagation as anomalies in amplitude and/or phase [5,51]. We checked the solar activity from April 1-20 and did not find any X-ray flux burst (http://rwcc.bao.ac.cn/history/index.jsp). Thus, the anomalies detected in this study could not have been induced by solar activity.

Simulation of Terminator Time Shift
Many studies have used the terminator time shift method to identify earthquake-related anomalies. Some researchers suggest that a possible cause is an increase of D region electron density, which lowers the boundary of the ionosphere [33]. The lowering of the D region boundary changes the conditions for modal interference, resulting in a shift of the VLF signal minima [29]. We conducted a simulation to check this phenomenon. We selected the NWC transmitter (transmitting frequency 19.8 kHz), which has the maximum power in the southern hemisphere, as the source; the electron density and geomagnetic parameters are obtained at the NWC location (114°E, 22°S). The bottom height of the ionosphere was set to 65 km during LT 6:00-18:00, corresponding to daytime, and 80 km at other times, corresponding to nighttime. The blue solid line in Figure 13 shows the simulated Z component of the electric field 1,350 km from NWC under this IRI profile. Terminator times appear at LT 5:00 and 19:00, corresponding to sunrise and sunset, respectively. When we lower the bottom height of the ionosphere from 80 km to 66 km at LT 5:00 and 19:00, the calculated sunrise terminator time appears at LT 4:00 (red solid line with pentagrams in Figure 13), indicating that the terminator time shifts one hour earlier. The sunset terminator time appears at LT 20:00 (red solid line with pentagrams in Figure 13), indicating that the terminator time shifts one hour later.
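The link between reflection height and the position of the interference minima can be illustrated with a toy two-mode sum; the mode excitation factors and attenuation rates below are invented for illustration and are not taken from the paper's waveguide solution.

```python
import numpy as np

def two_mode_amplitude(distance_km, reflection_height_km, freq_hz=19.8e3):
    """Toy two-mode interference in an Earth-ionosphere waveguide of height h.

    Mode wavenumbers follow the ideal parallel-plate approximation
    k_n = k0*sqrt(1 - (n*c/(2*h*f))**2); deep amplitude minima occur where the
    two modes arrive out of phase, so changing h moves the minima along the path.
    """
    c = 3.0e5                                   # speed of light, km/s
    k0 = 2 * np.pi * freq_hz / c                # rad/km
    h, d = reflection_height_km, distance_km
    amp = 0.0j
    for n, (excitation, alpha) in enumerate([(1.0, 2e-4), (0.6, 6e-4)], start=1):
        kn = k0 * np.sqrt(1 - (n * c / (2 * h * freq_hz)) ** 2)
        amp += excitation * np.exp(-alpha * d) * np.exp(-1j * kn * d)
    return np.abs(amp)

# Moving the reflection height from 80 km (night) toward 66 km shifts the distance
# (and hence, along a fixed path, the local time) at which the minimum occurs.
d = np.linspace(500, 3000, 2501)
for h in (80.0, 74.0, 66.0):
    print(h, d[np.argmin(two_mode_amplitude(d, h))])
```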
The simulated result demonstrates that the terminator time shift can be induced by lowering the bottom boundary of the ionosphere; the underlying reason is modal interference of the VLF wave in the waveguide.

The Simulated Result at Other Locations
As is known, the amplitude of the electric field differs between ground locations owing to modal interference in the waveguide. Figure 14(a,b) shows the simulated Ez of the electric field 2,400 km and 3,000 km from the NOV transmitter. The intensity of Ez differs between the locations. The terminator time is slightly different because of different modal interference, but the main characteristics of sunrise and sunset are still evident in the figure. The electric field at these locations can also be disturbed by a variation of the bottom boundary of the ionosphere (red solid lines with pentagrams in Figure 14(a,b)). The red solid line with pentagrams in Figure 14(a) shows that the Ez component increases when the bottom height of the ionosphere descends to 63 km at LT 11:00. Such an increase of the electric field amplitude is possible on a longer propagation path with a descending reflection height, as illustrated by McRae and Thomson [51] using observed data from the NLK transmitter to the VLF receiver located at Dunedin. The red solid line with pentagrams in Figure 14(b) shows that the Ez component 3,000 km from NOV decreases when the bottom height of the ionosphere descends to 63 km at LT 11:00, compared with the original IRI profile.

Conclusion
In this paper, the instruments and data format of the ALPHA monitoring stations were described. The amplitude of the electric field recorded by the stations was investigated to detect precursors to the Ms 7.1 Yushu earthquake on April 14, 2010, and the Ms 7.0 Lushan earthquake on April 20, 2013. A full-wave model was used to investigate the possible factors that could induce seismic anomalies in the received VLF electric field. (1) The nighttime fluctuation analysis shows an obvious anomaly on April 7 (7 days before the Yushu earthquake) on the paths between the transmitter (NOV) and the receivers (Ya'an, Tonghai). (2) The nighttime fluctuation analysis shows a remarkable anomaly on the night of April 19 and before dawn on April 20 (1 day before the Lushan earthquake) on the path between the transmitter (NOV) and the receiver (Ya'an). (3) The anomaly is much more significant for the Lushan earthquake, which may be attributed to the VLF receiver being much closer to the epicenter of that earthquake. (4) The nighttime fluctuation analysis also shows two obvious anomalies on April 10 (4 days before the Yushu earthquake) and April 12 (2 days before the Yushu earthquake) on the path between the transmitter (KHA) and the receiver (Ya'an). (5) The simulated results illustrate that the electric field received from the VLF transmitters can change abnormally because of variation of the bottom boundary of the ionosphere or variation of the electron density in the ionosphere; however, the change induced by variation of the bottom boundary of the ionosphere is much more pronounced. The same conclusion was obtained from the simulated results at different ground receiver locations. (6) Combining the observations and simulations, the more plausible explanation is that the anomalies in this event were induced by an increase of D region electron density caused by seismogenic activity, which lowered the effective reflection height of the ionosphere.
(7) Our simulation demonstrates that the terminator time shift can be induced by a descent of the bottom boundary of the ionosphere, owing to modal interference between different modes. In this study, based on the analysis of subionospheric VLF signals, possible precursor activity of two strong earthquakes that occurred in western China (the Yushu and Lushan earthquakes) has been found. The difference in effective detection range determined by different detecting paths and different frequency signals may provide a basis for locating the ionospheric disturbance of an earthquake, which should be studied in detail in the future. The full-wave model was used to provide possible explanations for the precursors that occurred prior to the Yushu and Lushan earthquakes. Supplementary Materials: The following are available online at www.mdpi.com/2072-4292/12/21/3563/s1, Figure S1: The observed Ez amplitude in the path between NOV and Ya'an on April 5, Figure S2
Consistent Estimation of Partition Markov Models : The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain and how to estimate these parameters. In order to answer these questions, we build a consistent strategy for model selection which consist of: giving a size n realization of the process, finding a model within the Partition Markov class, with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, when n goes to infinity, L will be retrieved. We show an application to model internet navigation patterns. Introduction The Markov models have received enormous visibility for being powerful tools [1][2][3].In recent years, the theoretical advances have allowed for its users to identify the most suitable methods for estimating them.For instance, [4] shows that the Bayesian Information Criterion (BIC)- [5]-can be used to consistently choose a Variable Length Markov Chain model in an efficient way using the Context Tree Maximization (CTM) algorithm.See also [6,7].In this paper, we show that the criterion BIC is also consistent for estimating a more general Markovian family (the Partition Markov Models), which includes the Variable Length Markov Chain models and the complete Markov chains.We consider a discrete stationary process with finite alphabet A of size |A|.Markov chains of finite order are widely used to model stationary processes with finite memory.In databases with Markovian structure, it is frequently observed a pronounced degree of redundancy, which means that different sequences of symbols have the same effect over the law of the process.For example, in datasets coming from the linguistic field, we can observe words which are synonyms.In some cases, exchanging a word for a synonym, does not change the meaning of sentences.In a more general context, there are also sequences of several words, which are equivalent in that sense.See, for instance, [8,9].For this kind of data, a model should retrieve and use the redundancy to improve the quality of the estimate.The Partition Markov Model represents the redundancy through a partition of the state space (see [10]).See also [11] and contemporary literature of [10].Under the assumption of this family, we address the problem of model selection, showing that the model can be selected consistently using the BIC.We show that, in order to apply the BIC criterion, it is not necessary to find a global maximum inside the set of partitions, which will be impossible even for a moderate size of the state space.Instead, it is possible to start the searching process by an initial partition; for instance, the state space itself, and then coarsen the partition, step by step.This process is associated with a metric that governs the state space. 
The Partition Markov Models are being used and explored intensively: for instance, [12] combines two statistical concepts-Copulas and Partition Markov Models-with the purpose of defining a natural correction for the estimator of the transition probabilities of a multivariate Markov process.Moreover, Reference [13] presents a simulation study that identifies when this correction succeeds well.A second strategy to deal with this issue, is shown and applied to real data in [14].The idea is to combine through a copula the partitions coming from the marginal processes and the partition coming from the multivariate process.This strategy shows excellent theoretical properties that will be essential to increasing the predictive ability of the estimation.In [14], the strategy was applied to multivariate Brazilian financial data in order to show how this new estimator allows for considering a longer past (order of the process), in the estimation of the transition probabilities, in comparison with the allowed past in the Partition Markov Models, when the data size is not large enough to ensure reliable results.The application of Partition Markov Models has also been useful to reveal important facts in other areas, as shown in [15], where these models were applied to written texts of European Portuguese, in order to identify change points from the period 16th century-19th century. Here is a description of the issues addressed in this article, section by section.In Section 2, we introduce the concept of Markov chain with partition L, which is a partition of the state space defined through a stochastic equivalence between the strings of the state space (see [10]).In Section 3, we describe the model selection procedure for choosing the optimal partition, which is based on the BIC criterion and on the concept of good partitions of the state space, which were also introduced in this section.We introduce a distance between the parts of a partition, and this concept defines a metric on the state space and also allows it to build efficient algorithms for estimating the optimal partition (see [10]).In Section 3, we show that the optimal partition can be obtained through the BIC criterion, eventually almost surely, when the sample size tends to infinity.Section 4 shows the application to model navigation patterns on a website.We conclude this paper with a discussion in Section 5.The proof of the results introduced in this paper are included in Appendixes A and B. Preliminaries Let (X t ) be a discrete time order M Markov chain on a finite alphabet A, with M < ∞.Let us call S = A M the state space.Denote the string a m a m+1 . . .a n by a n m , where a i ∈ A, m ≤ i ≤ n.For each a ∈ A and s ∈ S, P(a|s) = Prob(X t = a|X t−1 t−M = s).Let L = {L 1 , L 2 , . . ., L |L| } be a partition of S, ∀a ∈ A and L ∈ L, define P(L, a) = ∑ s∈L Prob(X t−1 t−M = s, X t = a), P(L) = ∑ s∈L Prob(X t−1 t−M = s).If P(L) > 0, we define P(a|L) = P(L,a) P(L) .With the purpose of formulating the model, we introduce the following equivalence relation.Definition 1.Let (X t ) be a discrete time order M Markov chain on a finite alphabet A, with state space S = A M : (i) s, r ∈ S are equivalent (denoted by s ∼ p r) if P(a|s) = P(a|r) ∀a ∈ A. (ii) (X t ) is a Markov chain with partition L = {L 1 , L 2 , . . ., L |L| } if this partition is the one defined by the equivalence relationship ∼ p introduced by item (i). The equivalence relationship defines a partition on S. 
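A minimal sketch of Definition 1 follows: it builds the partition of S = A^M by grouping states whose transition rows coincide. The toy kernel mirrors Example 2 below, where only the string 011 has a distinct transition probability; the function names and probability values are ours, not the paper's.

```python
from itertools import product

A = ("0", "1")
M = 3
S = ["".join(p) for p in product(A, repeat=M)]      # state space A^M

def kernel(s):
    """Toy transition probabilities P(.|s): only the state 011 behaves differently."""
    return {"0": 0.9, "1": 0.1} if s == "011" else {"0": 0.3, "1": 0.7}

def partition_by_law(states, P):
    """Group states with identical transition rows (the relation s ~p r of Definition 1)."""
    parts = {}
    for s in states:
        key = tuple(round(P(s)[a], 12) for a in A)   # the row P(.|s) identifies the part
        parts.setdefault(key, []).append(s)
    return list(parts.values())

L = partition_by_law(S, kernel)
print(L)                        # [['000', '001', '010', '100', '101', '110', '111'], ['011']]
print(len(L) * (len(A) - 1))    # number of free parameters, |L|(|A|-1) = 2
```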
The parts of this partition are subsets of S with the same transition probabilities, i.e., s, r ∈ S are in different parts if, and only if, they have different transition probabilities.To understand more deeply the motivation of a model like this, we note that the full Markov chains show a restriction, in terms of point estimation, which is, for an order M model, that the number of parameters given by |A| M (|A| − 1) grows exponentially with the order M. Another limitation is that the class of full Markov chains is not very rich, since, due to the fixed the alphabet A, there is just one model for each order M and in practical situations a more flexible structure could be necessary.For an extensive discussion of these two restrictions, see [1].A well-known and richer class of finite order Markov models, introduced by [1,2], is composed of the Variable Length Markov Chains (VLMC).In the VLMC class, each model is identified by a prefix tree T called Context Tree.For a given model with a Context Tree T , the total number of parameters is |T |(|A| − 1).We will see later that the Definition 1(ii) supports both: complete Markov chains and VLMC, becoming a natural extension of the two possibilities. Remark 1.Given a Markov chain over the alphabet A = {a 1 , a 2 , ..., a |A| } with partition L = {L 1 , L 2 , . . ., L |L| } (Definition 1(ii)); in order to specify the process, it is necessary to estimate (|A| − 1) transition probabilities for each part in L. Thus, the set of parameters to estimate is {P(a i |L j ) : 1 ≤ i < |A|, 1 ≤ j ≤ |L|} and the total number of parameters for the model is |L|(|A| − 1).If the estimation of the transition probabilities is performed under any other conception-for instance, considering a complete Markov chain or a VLMC-the number of parameters to estimate will be higher than |L|(|A| − 1), since they do not consider that there are strings that share transition probabilities. The structure of a VLMC can be expressed by a partition in the sense described before.Each model in the family of VLMC models is identified by its Context Tree, and we will use this structure to establish the relation between VLMC and Partition Markov Models (see Example 1). Example 1.Let (X t ) be a finite order Markov chain taking values on A = {0, 1} and T a set of sequences of symbols from A such that no string in T is a suffix of another string in T , d(T ) = max l(s), s ∈ T , where l(s) is the length of the string s ∈ T .Consider d(T ) = 3 and T = {{0}, {01}, {011}, {111}}.Define a partition of A 3 as being , while L does not check that definition. In the next example, we can see a situation in which can be observed the economy in the number of parameters, achieved by a Partition Markov Model following Definition 1(ii) (see also Remark 1). Example 2. Let (X t ) be a finite order Markov chain taking values on A = {0, 1} with state space A 3 .Suppose that this chain follows the transition probabilities given by the Table 1.Considering the process as a full chain, we have eight parameters.Then, if we look more closely, a Context Tree is enough to describe the process, with just four parameters, because T = {{0}, {01}, {011}, {111}}.Moreover, if we analyze this situation from the perspective of Definition 1(ii), we note that only two parameters are needed to describe the source, since L = {{000, 001, 010, 100, 101, 110, 111}, {011}}, because just the string 011 has different transition probability to 0. Let x n 1 be a sample of the process X t , s ∈ S, a ∈ A and n > M. 
We denote by N n (s, a) the number of occurrences of the string s followed by a in the sample x n 1 , which is N n (s, a) = {t : M < t ≤ n, x t−1 t−M = s, x t = a} .In addition, the number of occurrences of s in the sample x n 1 is denoted by N n (s) and N n (s) = {t : M < t ≤ n, x t−1 t−M = s} .The number of occurrences of elements into L followed by a and the total number of strings in L are given by In order to simplify the notation, we use the same notation N n with different arguments, a string s or a part L. In addition, we note that N n (L) is a function of the partition L. As a consequence, if we write P(x n 1 ) = Prob(X n 1 = x n 1 ), we obtain under the assumption of a hypothetical partition L of S : The Bayesian Information Criterion (BIC) is defined through a modified maximum likelihood (see [4]).We will call maximum likelihood the maximization of the second term in the Equation ( 2) for a given observation x n 1 .We denote that term as ML(L, x n 1 ), and the BIC is given by the next definition. Definition 2. Given a sample x n 1 of the process (X t ), a discrete time order M Markov chain on a finite alphabet A with state space S = A M and L a partition of S, the BIC of the model given by Definition 1(ii), and according to the modified likelihood, Equation ( 3 Remark 2. The results of this paper will remain valid if we replace, in Definition 2, the constant for some arbitrary constant v, positive and finite. Below, we define some concepts that help, in practice, to limit the search for an ideal partition to a subset of possible partitions, with natural characteristics.Definition 3. Let (X t ) be a discrete time order M Markov chain on a finite alphabet A and S = A M the state space.Let L = {L 1 , L 2 , . . ., L |L| } be a partition of S : (ii) L is a good partition of S if for each i ∈ {1, . . ., |L|}, L i verifies item (i). Example 3. We will consider two situations: (ii) Consider the Example 1(ii), and the partition L is a good partition of S = {0, 1} 3 . If L is a good partition of S, we define for each part L ∈ L where s ∈ L. We introduce a notation that will be used in the next results. Notation 1. (a) Let L ij denote the partition (b) For a ∈ A, we write P(L ij , a) = P(L i , a) + P(L j , a) and P(L ij ) = P(L i ) + P(L j ).In addition, Note that, if L is a good partition and P(•|L i ) = P(•|L j ), then L ij is a good partition.We show a way to build partitions, from good partitions, which are candidates more suitable for checking the Definition 1(ii).This way of building partitions seeks to reduce the size of the partition, step by step. A Metric on the State Space The next result allows formulating the main findings of this section.Nonetheless, this result could be applied to partitions with at least two good parts, i.e., it is not necessary to have good partitions.In this section, we also define a measure to quantify the distance between the parts of a partition.This distance is based on the practical use of the next theorem and allows building an efficient algorithm for estimating the partition given by Definition 1(ii).For complementing, see [16]. Theorem 1.Let (X t ) be a Markov chain of order M over a finite alphabet A, S = A M the state space and x n 1 a sample of the Markov process.Let L = {L 1 , L 2 , . . 
., L |L| } be a partition of S and suppose that i and jexist, and i = j such that L i and L j verified the Definition 3(i) (are good parts).Then, P(a|L i ) = P(a|L j ) ∀a ∈ A if, and only if, eventually almost surely as n → ∞, where L ij is defined under L by Notation 1(a). Proof. See Appendix A.1. It is also possible to decide simultaneously if more than two good parts should be put together, as shown in the next corollary.Corollary 1.Let (X t ) be a Markov chain of order M over a finite alphabet A, S = A M the state space and x n 1 a sample of the Markov process.If L = {L 1 , L 2 , . . ., L |L| } is a partition of S with K 1 good parts, denoted by {L i k } K 1 k=1 , and T is an index set, T ⊆ {1, . . ., K 1 }, then, P(a|L i k ) = P(a|L i l ) ∀a ∈ A, ∀k, l ∈ T if, and only if, eventually almost surely as n → ∞, BIC(L, x n 1 ) < BIC(L T , x n 1 ), where L T denotes the partition which join the |T| good parts in ∪ k∈T L i k generalizing the Notation 1(a). Proof. Replace Equation (A3) in the proof of Theorem 1 by Applying log-sum inequality, the result follows. Remark 3. Let (X t ) be a Markov chain of order M over a finite alphabet A, S = A M the state space and x n 1 a sample of the Markov process.Let L = {L 1 , L 2 , . . ., L |L| } be a partition of S, given i, j ∈ {1, 2, . . ., |L|}, i = j, such that L i and L j verified the Definition 3(i)(are good parts).If P(a|L i ) = P(a|L j ) for some a ∈ A, then, eventually almost surely as n → ∞, BIC(L, x n 1 ) > BIC(L ij , x n 1 ), where L ij verified the Notation 1(a).Now, we can introduce a distance in L. This distance allows to establish a metric in the state space S. Definition 4. Let (X t ) be a Markov chain of order M, with finite alphabet A and state space S = A M , x n 1 a sample of the process and let L = {L 1 , L 2 , . . ., L |L| } be a good partition of S The next theorem shows that d L is a distance in L. Theorem 2. Let (X t ) be a Markov chain of order M over a finite alphabet A, and S = A M the state space and x n 1 a sample of the Markov process.If L = {L 1 , L 2 , . . ., L |L| } is a good partition of S, for each n, and for any i, j, k ∈ {1, 2, ..., |L|} : Some observations of practice order are appropriate at this time.Suppose the good partition of Theorem 2 is the space S itself, so each part of the partition is given by each string of S. Thus, the distance (Definition 4) defines the following relation of equivalence between strings of S, for each value n : The next result formalizes how the distance is related to the BIC criterion. Corollary 2. Let (X t ) be a Markov chain of order M over a finite alphabet A, with S = A M the state space and x n 1 a sample of the Markov process.Let L = {L 1 , L 2 , . . ., L |L| } be a partition of S, given i, j ∈ {1, 2, . . ., |L|}, i = j, such that L i and L j verified the Definition 3(i).(are good parts): Proof.From Equation (A2) in the proof of Theorem 1. The previous corollary provides the statistical interpretation of the distance. 
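To make the selection criterion concrete, the sketch below computes the counts N_n(L, a), the modified log-likelihood ML(L, x_1^n), and a BIC score for a candidate partition, and then compares BIC(L) with BIC(L_ij) to decide whether two parts should be merged, in the spirit of Theorem 1. Since the document does not reproduce Equation (3) in full, the penalty is written in the usual BIC form, ((|A|-1)|L|/2) log n; treat that constant as an assumption (cf. Remark 2).

```python
import math
import random
from collections import Counter

def counts(sample, M):
    """N_n(s, a): occurrences of the length-M context s followed by symbol a."""
    N = Counter()
    for t in range(M, len(sample)):
        N[(sample[t - M:t], sample[t])] += 1
    return N

def part_counts(N, part):
    """Aggregate counts N_n(L, a) and N_n(L) over the strings of one part L."""
    Na = Counter()
    for (s, a), c in N.items():
        if s in part:
            Na[a] += c
    return Na, sum(Na.values())

def log_ml(N, partition):
    """Modified maximum log-likelihood: sum over parts and symbols of N(L,a) log(N(L,a)/N(L))."""
    total = 0.0
    for part in partition:
        Na, NL = part_counts(N, part)
        total += sum(c * math.log(c / NL) for c in Na.values() if c > 0)
    return total

def bic(N, partition, n, alphabet_size):
    """BIC score with the assumed penalty constant 1/2."""
    return log_ml(N, partition) - 0.5 * (alphabet_size - 1) * len(partition) * math.log(n)

# Toy sample from an Example-2-style chain: keep 011 separate only if BIC prefers it.
random.seed(1)
x, probs = "000", {"011": 0.9}
for _ in range(20000):
    p0 = probs.get(x[-3:], 0.3)                      # P(0 | last three symbols)
    x += "0" if random.random() < p0 else "1"
N, n = counts(x, 3), len(x)
L = [frozenset({"000", "001", "010", "100", "101", "110", "111"}), frozenset({"011"})]
merged = [L[0] | L[1]]
print(bic(N, L, n, 2) > bic(N, merged, n, 2))        # True: keeping 011 apart wins
```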
Consistent Estimation of the Process's Partition In this section, we prove that the partition following Definition 1(ii), referred to herein as minimal good partition, can be obtained by maximizing the equation introduced in Definition 2, in the space of all possible partitions of the state space.For instance, the smaller good partition in the universe of all possible good partitions of S is the partition defined by the equivalence relationship in Definition 1.Note that, for a discrete time order M Markov chain on a finite alphabet A, with S = A M the state space, there exists one and only one minimal good partition of S. The next theorem shows that for large enough n, we obtain, through the BIC, the minimal good partition.Theorem 3. Let (X t ) be a Markov chain of order M over a finite alphabet A, with S = A M the state space and x n 1 a sample of the Markov process.Let P be the set of all the partitions of S. Define L * n = argmax L∈P {BIC(L, x n 1 ).} Then, eventually, almost surely as n → ∞, L * = L * n , where L * is the minimal good partition of S, following Definition 1(ii). From Corollary 2, algorithms can be formulated to obtain L * .See, for instance, Algorithm 3.1 in [10].For large enough n, the algorithm returns the minimal good partition as shown by the next result.Corollary 3. Let (X t ) be a Markov chain of order M over a finite alphabet A, S = A M the state space and x n 1 a sample of the Markov process.Ln , given by the Algorithm 3.1, [10] converges almost surely eventually to L * , where L * is the minimal good partition of S. Remark 4. Algorithm 3.1 [10] requires as initial input a good partition.In the case in which there is not previous information about a good partition or about the length of the memory, the initial good partition can be chosen as the set of sequences, satisfying the suffix property and appearing in the sample at least B times, where B is a positive integer, which corresponds to the first part of the Context Algorithm ( [2,3]). We note that Corollary 3 also applies to other clustering algorithms based on distances, such as single-linkage clustering.However, exploring this aspect is beyond the scope of this paper. Navigation Patterns on a Web Site (MSNBC.com) The MSNBC.com anonymous web data set consists of one million user sessions recorded in 24 h on the web site.The dataset can be retrieved from [17] The web pages on the site are divided into 17 categories: frontpage, news, tech, local, opinion, on-air, misc, weather, msn-news, health, living, business, msn-sports, sports, summary, bbs and travel. Each category will be a letter in the alphabet A, with total size equal to 17.Each user session corresponds to a sequence of symbols from the alphabet starting with the category during which the session is initiated on the MSNBC site.The sequence of categories which the user visits defines the string that finishes when the user leaves the MSNBC site.In Table 2, we show an illustrative sample of 12 user sessions.The issue that the model can clarify is to identify the strings that can be considered equivalent, in terms of the next step of the internet surfers.This information could be extremely important in determining the profile of users, in relation to the preferences of states in A. 
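As a sketch of the preprocessing such an analysis needs, the snippet below turns user sessions (sequences of category symbols) into order-M context counts of the kind used by the estimation above; the example sessions are invented placeholders, not rows of the actual MSNBC dataset.

```python
from collections import Counter

CATEGORIES = ["frontpage", "news", "tech", "local", "opinion", "on-air", "misc",
              "weather", "msn-news", "health", "living", "business", "msn-sports",
              "sports", "summary", "bbs", "travel"]          # the 17-letter alphabet A

def session_counts(sessions, M):
    """Count N(s, a): occurrences of an order-M context s followed by category a,
    accumulated over all user sessions (each session is one sample path)."""
    N = Counter()
    for session in sessions:
        for t in range(M, len(session)):
            N[(tuple(session[t - M:t]), session[t])] += 1
    return N

# Hypothetical sessions for illustration only.
sessions = [
    ["frontpage", "news", "news", "local"],
    ["weather", "weather", "misc", "local", "local"],
    ["sports", "msn-sports", "sports", "sports"],
]
N = session_counts(sessions, M=2)
print(N[(("weather", "misc"), "local")])   # 1
```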
The idealization that supports this application is that there are different sequences in the state space that share the same transition probability to the next symbol in the alphabet of the process.Sequences with such properties form a part of the minimal good partition, which completely describes the process.Furthermore, from this perspective, these sequences are equivalent to deciding the next symbol for the process.These sequences in our application are the paths used by internet surfers. We use the distance d L to find the minimal partition.It is known that there are several strategies based on a measure which allows us to reach this purpose, and one of them is the algorithm introduced in [10].We used three strategies, with input given by the set of strings (S): (i) Algorithm 3.1 introduced in [10]; (ii) Algorithm 3.1 [10] modified; and (iii) an agglomerative strategy.Option (ii) is composed of two stages.First, we join in the same part all the strings r and s of S, which show that d L (r, s) < , and this process generates an initial partition that is used as input of Algorithm 3.1 [10], which is the second stage.The agglomerative strategy of (iii) explores the ability of d L to build distances between strings of S and between groups of strings of S. Thus, in this strategy, all of the distances are computed joining groups of strings L i and L j if d L (i, j) < (|A|−1) 2 . We show in Table 3 that the agglomerative algorithm produces the best BIC value.We expose two cases with order M = 3 and M = 2, where the first is the order chosen as usual, 3 = log |A| (1.0 × 10 6 ) − 1.Let us look at some specific situations.For example, suppose that our interest is to investigate the parts that lead to the local state with probability greater than 0.6.There are seven different parts (using M = 3 and method (iii)) that fulfill this condition, and Table 4 shows the composition of each part and its probability of being local the next state. For instance, once the model was selected, in order to predict the next place that the user will visit, we first check during which of the 269 parts his/her path fall and then the corresponding probabilities are used.For example, if the user's path is weather.weather.misc,then the probability for the user to visit local is 0.7822 and this probability is shared by all three of the strings of L 5 .The full partition obtained by the algorithm and the set of transition probabilities associated to each part can be obtained from [18].We can draw some observations.For example, the transition probabilities of each part P(a|L i ) have been computed, using, in general, several strings, representing a natural improvement in the calculation of the transition probabilities.The reader can find many larger parts in [18], making this observation more incisive.On the other hand, the strings listed as members of the same part (see, for instance, several situations in Table 4), must be considered stochastically equivalent. 
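A small sketch of the prediction step described above follows: given a mapping from order-3 contexts to parts and the per-part transition probabilities, look up the user's last three pages and read off the next-page distribution. The mapping and probabilities below only reproduce the single weather.weather.misc example quoted in the text (probability 0.7822 of visiting local next) plus hypothetical filler entries; the full 269-part model is available from [18].

```python
# Part membership: order-3 contexts mapped to a part label (tiny hypothetical excerpt).
part_of = {
    ("weather", "weather", "misc"): "L5",
    ("local", "weather", "misc"): "L5",
    ("weather", "local", "misc"): "L5",
    ("frontpage", "news", "news"): "L1",
}

# Transition probabilities P(a | L): 0.7822 for local is the figure quoted in the
# text; the remaining values are filler for illustration.
prob_of = {
    "L5": {"local": 0.7822, "weather": 0.10, "frontpage": 0.1178},
    "L1": {"news": 0.55, "frontpage": 0.25, "local": 0.20},
}

def predict_next(path):
    """Return the next-category distribution for a user path, or None if the context is unseen."""
    context = tuple(path[-3:])                 # the order-3 state the user is in
    part = part_of.get(context)
    return prob_of.get(part)

path = ["frontpage", "weather", "weather", "misc"]
print(predict_next(path)["local"])             # 0.7822, shared by all strings of part L5
```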
Conclusions The development of the partition concept in Markov processes allows one to prove that, for a stationary, finite-memory process and a large enough sample, it is theoretically possible to consistently find a minimal partition representing the process, and that this can be accomplished in practice. In this paper, we show (in Theorem 3) that the Bayesian Information Criterion can be used to obtain a consistent estimation of the partition of a Markov process. We show that the use of this criterion is also convenient for producing a consistent strategy that allows for deciding whether a candidate partition is preferable to another candidate partition (Theorem 1). We also define a metric on the state space, allowing for the introduction of a definition of a distance between the parts of a partition (Theorem 2). This distance allows the construction of consistent estimation algorithms to identify the partition. Research in progress suggests that this measure can be harnessed for the development and implementation of robust estimation techniques, given that there are records (see [19]) of the need for such techniques for Markov processes. In summary, in this paper, in addition to responding positively to the question of whether the Bayesian Information Criterion is capable of allowing a consistent estimation of the partition of the Markov process, we also obtain that, in terms of the model selection procedure, the Bayesian Information Criterion corresponds to a distance in the state space of the Markov process.

Table 2. Sample of 12 sessions. Each line represents the path followed by a user.

Table 3. Number of parts (cardinal of L) and BIC value of the model (Definition 2), for memories 2 and 3, respectively. In (ii), ε = (|A|−1)
6,388.2
2017-04-06T00:00:00.000
[ "Computer Science", "Mathematics" ]
Synthesis and Thermal Properties of Acrylonitrile/Butyl Acrylate/Fumaronitrile and Acrylonitrile/Ethyl Hexyl Acrylate/Fumaronitrile Terpolymers as a Potential Precursor for Carbon Fiber A synthesis of acrylonitrile (AN)/butyl acrylate (BA)/fumaronitrile (FN) and AN/EHA (ethyl hexyl acrylate)/FN terpolymers was carried out by redox polymerization using sodium bisulfite (SBS) and potassium persulphate (KPS) as initiator at 40 °C. The effect of comonomers, BA and EHA and termonomer, FN on the glass transition temperature (Tg) and stabilization temperature was studied using Differential Scanning Calorimetry (DSC). The degradation behavior and char yield were obtained by Thermogravimetric Analysis. The conversions of AN, comonomers (BA and EHA) and FN were 55%–71%, 85%–91% and 76%–79%, respectively. It was found that with the same comonomer feed (10%), the Tg of AN/EHA copolymer was lower at 63 °C compared to AN/BA copolymer (70 °C). AN/EHA/FN terpolymer also exhibited a lower Tg at 63 °C when compared to that of the AN/BA/FN terpolymer (67 °C). By incorporating BA and EHA into a PAN system, the char yield was reduced to ~38.0% compared to that of AN (~47.7%). It was found that FN reduced the initial cyclization temperature of AN/BA/FN and AN/EHA/FN terpolymers to 228 and 221 °C, respectively, in comparison to that of AN/BA and AN/EHA copolymers (~260 °C). In addition, FN reduced the heat liberation per unit time during the stabilization process that consequently reduced the emission of volatile group during this process. As a result, the char yields of AN/BA/FN and AN/EHA/FN terpolymers are higher at ~45.1% and ~43.9%, respectively, as compared to those of AN/BA copolymer (37.1%) and AN/EHA copolymer (38.0%). The high chain stiffness contributed to the high T m value of polymer. The chain mobility (that can be determined by glass transition temperature, T g ) of PAN was also influenced by the chain stiffness attributed to the dipolar interaction between nitrile groups. Thus, a high T g indirectly indicates that PAN has a high T m that prevents PAN from undergoing melt processing [21]. Hence, to investigate the melt processing potential of PAN, T g of polymers was searched for to reflect the T m of polymers. Therefore, in this present work, two acrylate comonomers with different molecular size namely butyl acrylate (BA) and ethyl hexyl acrylate (EHA), were incorporated to disrupt the order of polymer chains in the PAN system. Acrylate comonomer acts as defects and helps to reduce the dipole-dipole interactions and long-range order present in the PAN system [22], thereby, reducing its glass transition temperature and lowering its processing temperature [23]. This will enable the melt processing of terpolymer. Although acrylate comonomer is able to reduce the T g [24], it has been reported that acrylate comonomer would result in copolymer with low char yield [22,25]. To overcome the stabilization temperature problem and the low char yield, the acidic comonomers that are commonly used [11,26,27] to facilitate the stabilization process can be replaced by fumaronitrile (FN) as a termonomer. We previously reported that the incorporation of FN slightly reduced the initial exotherm temperature during the stabilization process with a slower rate of nitrile cyclization. Hence, it significantly increased the char yield of PAN [28]. 
It is anticipated that the presence of two nitrile groups in FN is favorable in reducing the weight loss during the stabilization process, hence increasing the char yield during carbonization [28]. A high char yield indicates a high carbon content in the fibers, which leads to carbon fibers with good mechanical properties [22,25]. Overall, this study focused on producing PAN terpolymers with low Tg, which results in a better stabilization process, by incorporating acrylate comonomers into the PAN system. Figures 1 and 2 show the formation of poly(acrylonitrile/butyl acrylate/fumaronitrile) and poly(acrylonitrile/ethyl hexyl acrylate/fumaronitrile), respectively.

FTIR Spectroscopy The IR spectra of PAN, its copolymer and terpolymer are shown in Figure 3. In all cases, the bands in the region of 2943 cm−1 were assigned to C-H stretching in CH, CH2 and CH3. The bands at 1450 cm−1, 1353 cm−1 and 1204-1199 cm−1 were due to C-H vibrations of different modes. The band at 2244 cm−1 indicated the absorption of nitrile groups in the PAN homopolymer, copolymers and terpolymers [26]. The band in the region of 1632 cm−1 was assigned to the stretching of NH2 groups due to acrylamide formation by partial hydrolysis of acrylonitrile units during the polymerization process using the redox initiator [27]. The strong band in the range of 1735 cm−1 in the copolymer and terpolymer spectra was due to C=O stretching [26,29]. The disappearance of the bands at 2238-2239 cm−1, which are due to the stretching of the unsaturated nitriles of FN [30], confirms that FN was incorporated into the terpolymers. It was reported that fumaronitrile cannot homopolymerize but copolymerizes easily under free radical conditions [31].

Table 1 shows the conversions of polymerization for the PAN homopolymer, copolymers and terpolymers. The conversions of the AN/BA and AN/EHA copolymers are 75% and 77%, respectively, which are lower than that of the PAN homopolymer (85%), as expected. Similarly, the conversions of the AN/BA/FN and AN/EHA/FN terpolymers were also lower (61%-77%) than that of the PAN homopolymer. This is probably because the PAN homopolymer chains are free of the abnormalities or defects introduced when other monomers are incorporated during polymerization. In addition, the method used to obtain the PAN homopolymer by the redox route, as reported before [23], was the most suitable and was carried out under optimum conditions. Besides, redox polymerization carried out under mild conditions (lower activation energy) lowered the possibility of side-chain reactions, hence giving a high yield for the PAN homopolymer [32]. The amount of reacted comonomer and termonomer in the polymer increased as the amount of comonomer and termonomer in the feed was increased (Table 1). The conversion of the BA and EHA comonomers in the copolymer systems is higher (~90%) than that of AN (~70%). This is because the ester monomers are more reactive in radical copolymerization, as reported by other researchers [33].

Conversions and Composition Analysis In the terpolymer systems, the conversion of the comonomers (BA and EHA) is higher (85%-91%) than that of AN (55%-72%) and FN (76%-79%). The rate of copolymerization is greatly affected by the concentration and polarity of the monomers [16]. During the initial redox polymerization, propagation occurred in the aqueous phase, which also contains the reaction-initiating species.
Due to the higher solubility of AN and FN, these two monomers have a higher probability of combining with the initiating species (SBS and KPS) at this initial homogeneous stage [29]. When the propagating chains grow large enough, they precipitate, and polymerization takes place both in water and on the surface of the precipitated particles, which means that the polymerization has changed from a homogeneous to a heterogeneous reaction [34]. At this stage, AN and FN, which are more hydrophilic, remain in the aqueous phase and are now in contact with the growing polymer chains [34]. Meanwhile, the BA and EHA comonomers, which are more hydrophobic, become buried in the particle core of the oligomers and continuously propagate to form polymer [29]. Because BA and EHA have better chances to polymerize during the heterogeneous stage, the conversion of the BA and EHA comonomers is the highest compared to AN and FN, respectively.

Effect of Comonomer on Tg As shown in Table 2, the PAN homopolymer showed the highest Tg at 210 °C. This can be explained by the nitrile-nitrile dipolar interactions that provide a regular sequence along the PAN homopolymer chains, resulting in retardation of chain movement. However, the introduction of 10% BA and 10% EHA comonomers greatly lowered the glass transition temperatures (Tg) of the AN/BA and AN/EHA copolymers to 70 °C and 63 °C, respectively, compared to the PAN homopolymer (210 °C). Since the acrylate groups disturb the regular sequence along the PAN homopolymer chains, they reduce the dipolar interaction of the PAN chains. Fewer dipolar interactions increase the free volume of the PAN system, thus enhancing the mobility of the polymer chains [24,35] and thereby depressing the Tg. With the same amount of comonomer in the feed (10%), the Tg of the AN/EHA copolymer was lower at 63 °C compared to the AN/BA copolymer (70 °C). This is due to the larger size of the EHA acrylate group, which provides a larger interruption to the regular sequence of the PAN system and facilitates chain mobility by reducing the nitrile-nitrile dipolar interactions along the polymer chains. On the other hand, it was shown that the AN/EHA/FN 90/2/8 terpolymer gave a Tg value similar to that of the AN/BA/FN 90/4/6 terpolymer, at 67 °C. This indicates that a lower amount of the bulkier acrylate comonomer (EHA) provides a Tg comparable to that obtained with a higher amount of the smaller acrylate comonomer (BA). Incorporating the least amount of acrylate is favorable to minimize the interruption along the PAN homopolymer chains, since a high interruption of the nitrile sequence along the PAN chains leads to a low char yield after thermal stabilization [22]. A low Tg is favorable in this study because it reflects a low Tm, which lowers the processing temperature and thereby facilitates the melt extrusion/spinning of the PAN-based polymer [12,22]. A PAN system with the potential to undergo melt spinning is desirable to avoid the high expense of solvent recovery and recycling. In the AN/BA/FN and AN/EHA/FN terpolymers obtained, it was seen that as the amount of FN increased from 2 to 8 mol%, the Tg of the PAN system slightly increased. This could be attributed to potential interactions between the nitrile groups of FN and AN, which could reduce the chain mobility and lead to a higher Tg. FN initiates the cyclization of the nitrile groups at a lower temperature, as shown in Table 3. The Ti (initial cyclization temperature) for the AN/BA/FN terpolymers is about 228 °C, which is lower than that of the AN/BA copolymer (264 °C).
Similar observation was found in the case of AN/EHA/FN terpolymers in the sense that T i was lower (220 °C) as compared to that of the AN/EHA copolymer (263 °C). This behavior could be attributed to the interruption of polymer chain sequence provided by the FN group. Table 3. DSC data of PAN homopolymer, copolymers and terpolymers. The DSC thermogram that demonstrated the cyclization behavior of AN/BA/FN 90/4/6 terpolymer is shown in Figure 4. It reveals that AN/BA/FN terpolymer has a broader exothermic peak, as indicated by the greater value of ∆T (T f − T i ) in Table 3. Similarly, the ∆T of the AN/EHA copolymer was lower (43 °C) as compared to the value of AN/EHA/FN terpolymer (∆T = 70 °C). A broader peak suggests that terpolymer has a slower propagation reaction for producing a ladder-like polymer [10,36], whereas the peaks observed for the AN/BA (Figure 4) and AN/EHA copolymer ( Figure 5) were more intense. On the other hand, the total heat of exothermic reaction, ∆H in the case of terpolymers is lower as compared to that of copolymers. This observation suggests a different reaction mechanism that might have taken place during stabilization with a relatively much slower propagation [11] as a result of incorporation of FN. For instance, the ∆H/∆t for AN/BA 90/10 copolymer and AN/EHA 90/10 copolymer are 104 J· g −1 · min −1 and 122 J· g −1 · min −1 respectively, which are higher than that of AN/BA/FN 90/4/6 terpolymer (47 J· g −1 · min −1 ) and AN/EHA/FN 90/4/6 terpolymer (49 J· g −1 · min −1 ). This shows that the exothermic reaction in the case of terpolymers takes a much longer time to be completed with a lower heat liberation per time. This leads to a better heat distribution during stabilization process, thereby increasing the formation of a ladder-like structure that leads to a better quality of carbon fiber [11]. TGA Studies The TGA was carried out to obtain the thermal stability and get an estimate of the carbon yield. As the samples are subjected to heating, it starts to actively cyclize at temperatures greater than 220 °C, leading to char formation at the end of heat treatment. The results give a preliminary estimate of the carbon yield of the polymers [4,22]. As shown in Table 4, the char yield of PAN is low at 47.7% because the heat dissipation is more difficult in the PAN homopolymer due to its poor conduction of heat [22]. Hence, excessive localized heating can lead to chain scission and subsequently lowers the char yield. On the other hand, as the amount of comonomers (BA and EHA) increased from 2 mol% to 10 mol%, the char yields of AN/BA/FN and AN/EHA/FN terpolymers were significantly reduced to 38.0% and 37.1%, respectively. This is due to the acrylate comonomer units that act as defects in the PAN chains to reduce the formation of a ladder-like structure along the polymer chains, resulting in a lower char yield [22]. However, by incorporating 2-6 mol% of FN in the case of AN/BA/FN and AN/EHA/FN terpolymers, the char yield of terpolymers increases to 45.1% and 43.9%, respectively. Some interactions between FN units that have two nitrile groups each and AN system ease the stabilization process. This minimizes the chain scission reactions and the loss of volatile products, leading to lesser weight loss. The degradation behavior of the AN/BA copolymer and its terpolymer as well as AN/EHA copolymer and its terpolymer is shown in Figures 6-9. The TGA thermograms of copolymer and its terpolymer (Figures 6 and 8) showed a three-step weight loss. 
As shown in Figures 7 and 9, and listed in Table 4, the weight loss in the region of 30-100 °C is the vaporization of moisture. The first step is up to 250 °C, where the weight loss is not substantial. The degradation in this region is due to the evolution of HCN and NH 3 [11]. The second step is between 250 and 600 °C with a very rapid weight loss. This is due to the cyclization and oxidation reactions which are exothermic and contribute to the formation of ladder polymer structure in the polyacrylonitrile molecule [13]. The third step starts at 600 up to 950 °C with a quite steady weight loss. In the case of AN/BA copolymer, the DTG curve ( Figure 7) shows that the weight loss in the first step is 0.18%, while in the second step, starting from 250 to 600 °C, the weight loss is very high at 54.76% and appears as a broad DTG curve. The final step shows a steady and slow weight loss (7.06%) up to 950 °C. Overall, the total char yield is 38.0% as shown in Figure 6. Meanwhile, the AN/EHA copolymer showed a similar trend with the highest weight loss at 50.80% in the second step. Note that in Table 4 with the same feed amount of acrylate comonomer, the weight loss in the second step (part 1; 250-350 °C) for AN/EHA copolymer was slightly higher (35.80%) compared to that of the AN/BA copolymer (34.62%). This is probably due to the size of EHA comonomer which is bulkier compared to the BA; therefore, it provides more defects in the polymer system and increases the weight loss during degradation in the second step (part 1) that refers to the exothermic process during stabilization. As a result, the char yield of AN/EHA copolymer was slightly lower at 42.5% compared to that of the AN/BA copolymer (43.9%). However, as shown in Table 4, the AN/EHA/FN 90/2/8 obtained a comparable value of char yield with AN/BA/FN 90/4/6 (43.9%). This observation shows that the amount and size of acrylate group affect the degradation behavior of the PAN system. It is important to minimize the amount of acrylate group to reduce the weight loss during heat treatment of the PAN system. As shown in Table 4, the final char yields of AN/BA copolymer and AN/EHA copolymer are 38.0% and 37.1%, respectively, which are lower than that of PAN (47.7%). This is expected to be due to the effect of acrylate group that is anticipated to interrupt the nitrile sequence along the polymer chains, resulting in high weight loss after stabilization up to 950 °C. It should be noted that the role of acrylate comonomer as diluent is to enhance the chain mobility of the PAN system and, hence, reduces the T g value. On the other hand, for the AN/BA/FN 90/4/6 terpolymer (Figure 7), the weight loss in the first step is not substantial at only about 0.18%. The weight loss of the second step is 47.53%. The DTG curve shows that the degradation occurred at a faster rate as shown by the more intense but a smaller peak as compared to that of the AN/BA copolymer. Whereas, in the third step, the weight loss is about 8.43% and results in a total char yield of 43.9%, which is higher compared to AN/BA copolymer (38.0%). Likewise, the thermogram of AN/EHA/FN terpolymer ( Figure 9) exhibited a smaller peak during degradation in the second step in comparison to that of the AN/EHA copolymer. Thus, AN/EHA/FN terpolymer gave a higher char yield (42.5%) than that of the AN/EHA copolymer (37.1%). This result indicates that FN successfully facilitates the stabilization process and improves the thermal stability of AN/acrylate/FN. 
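The step-wise weight losses and char yields quoted above can be read directly from the TG curves. As a hedged illustration of that bookkeeping (the step boundaries of 250, 350, 600 and 950 °C are taken from the text, while the array names and the use of linear interpolation on the thermogram are assumptions), the values could be extracted as follows:

```python
import numpy as np

def weight_loss_steps(temperature_C, weight_pct):
    """Step-wise weight loss (%) and char yield from a TG thermogram.

    `temperature_C`, `weight_pct`: 1D arrays of the TGA temperature ramp and the
    sample weight in percent of the initial mass.
    Boundaries follow the text: first step up to 250 C, second step 250-600 C
    (split at 350 C into parts 1 and 2), third step 600-950 C.
    """
    def weight_at(T):
        # linear interpolation of the weight signal at temperature T
        return float(np.interp(T, temperature_C, weight_pct))

    w250, w350, w600, w950 = (weight_at(T) for T in (250.0, 350.0, 600.0, 950.0))
    w_start = float(weight_pct[0])
    return {
        "step 1 (30-250 C)": w_start - w250,
        "step 2, part 1 (250-350 C)": w250 - w350,
        "step 2, part 2 (350-600 C)": w350 - w600,
        "step 3 (600-950 C)": w600 - w950,
        "char yield at 950 C": w950,
    }
```

With the AN/BA copolymer curve, for example, this bookkeeping would reproduce the reported ~0.18% first-step loss, ~54.76% second-step loss and 38.0% char yield.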
While one of the nitrile groups involved in the fragmentation of the chain, another nitrile group undergoes cyclization process to form a ladder-like structure as stabilized fibers. This shows that the incorporation of FN into terpolymer increases the possibility to form a ladder-like structure during stabilization. A perfect formation of a ladder-like structure leads to a high char yield during carbonization of AN/acrylate/FN terpolymer. Char yield is very important because it provides a realistic indication of the final carbon content of carbon fiber [22]. Synthesis and Characterization Polymerization of acrylonitrile (AN) with butyl acrylate (BA) and ethyl hexyl acrylate (EHA) as comonomers and fumaronitrile (FN) as termonomer was prepared by redox method. Sodium bisulfite (SBS) and potassium persulfate (KPS) were used to initiate the polymerization. The synthesis of poly (AN/BA/FN) 90/4/6 (mole ratios) was carried out in a three-necked flask at 40 °C under nitrogen atmosphere. The flask was fitted with a condenser and the third neck was used for nitrogen purging. Deionized water (200 mL) was added into the flask, and the temperature was increased to 40 °C. After 30 min, AN monomer (172.8 mmol) was added into the reaction mixture followed by BA comonomer (7.7 mmol) and FN termonomer (11.5 mmol). Finally, SBS (1 mmol) and KPS (1 mmol) were added as redox initiators. The mixture was then stirred under nitrogen at 40 °C, and the polymerization was allowed to proceed for 3 h. The polymer formed was precipitated, filtered, washed successively with methanol and deionized water and dried under vacuum at 45 °C till a constant weight was obtained [23,28]. Polyacrylonitrile homopolymer and copolymer were also prepared in the same way. FTIR spectra of PAN homopolymer, copolymers and terpolymers were recorded on a Perkin Elmer GX infrared spectrophotometer using KBr pellets. Composition Analysis of Polymers The composition of reacted monomers in copolymers and terpolymers was determined from residual monomer concentration data. The residual monomer concentrations of withdrawn samples were obtained using a Gas Chromatography system. The polymers obtained were firstly mixed with a defined amount of methanol to precipitate and isolate the polymer from the reaction medium (water). The residual monomer remained in water. A defined portion of supernatant was injected for GC analysis [37]. The calibration curves of butyl acrylate (Figure 10), ethyl hexyl acrylate ( Figure 11) and fumaronitrile ( Figure 12) were obtained by using GC system. Undecane was used as an internal standard for calibration curve and sample analysis. The peak area served as calibration parameter. The composition of reacted acrylonitrile in copolymers and terpolymers was calculated using Equation (1): (1) Differential Scanning Calorimetry (DSC) Samples in powder form (~5) mg were used for DSC analysis. The glass transition temperature (T g ) of the samples was determined using a Mettler-Toledo Differential Scanning Calorimeter, Mettler Toledo International Inc., Greifensee, Switzerland. The heating rate administered was 10 °C· min −1 and the samples were heated from room temperature to 200 °C, cooled back to room temperature at the same rate, and reheated to 200 °C. The stabilization temperature was obtained by heating the samples from room temperature to 400 °C at 10 °C· min −1 . Thermogravimetric Analysis Thermogravimetric analysis was carried out using a Mettler-Toledo TGA instrument, Mettler Toledo International Inc. 
in nitrogen from room temperature to 950 °C at a heating rate of 10 °C· min −1 . A sample size of ~15 mg in the form of fine powder was used. To establish the relationship between the weight loss and temperature, TG thermograms and DTG curves were recorded. Conclusions AN/BA copolymer, AN/BA/FN terpolymer, AN/EHA copolymer and AN/EHA/FN terpolymer were successfully synthesized by redox method with high conversion of monomers. BA and EHA comonomers greatly reduced the T g of their copolymers respectively, and this indirectly indicates that acrylate group enhanced the mobility of the polymer chains and facilitated their flowability. Thus, AN/BA and AN/EHA copolymers have a potential to undergo melt processing. Incorporating a lower amount of a bulkier acrylate comonomer (EHA) results in similar T g to a higher amount of smaller acrylate comonomer (BA). This observation shows that the size of acrylate group affects the chain mobility and the defects on the PAN chains can be minimized by incorporating a bulkier acrylate group for further heat treatment to obtain char yield. Incorporating BA and EHA resulted in a lower char yield as compared to the PAN homopolymer. FN termonomer incorporated into the PAN system has been shown to facilitate the stabilization process to occur at a lower temperature with lower heat liberation. Therefore, the char yield of AN/BA/FN terpolymer is higher (45.1%) than that of AN/BA copolymer (38.0%). Similarly, the AN/EHA/FN terpolymer also showed higher char yield at 43.9% compared to the AN/EHA copolymer at 37.1%. This research suggests that the AN/BA/FN and AN/EHA/FN terpolymers system has a potential to undergo melt spinning and has a good thermal stability due to a higher char yield which reflects higher carbon content after the heat treatment. The amount of the acrylate group can be minimized to reduce the defect in the PAN system by incorporating a bulkier acrylate group.
5,188.4
2014-09-01T00:00:00.000
[ "Materials Science" ]
The Conserved Non-Coding Sequence 2 (CNS2) Enhances CD69 Transcription through Cooperation between the Transcription Factors Oct1 and RUNX1 The immune regulatory receptor CD69 is expressed upon activation in all types of leukocytes and is strongly regulated at the transcriptional level. We previously described that, in addition to the CD69 promoter, there are four conserved noncoding regions (CNS1-4) upstream of the CD69 promoter. Furthermore, we proposed that CNS2 is the main enhancer of CD69 transcription. In the present study, we mapped the transcription factor (TF) binding sites (TFBS) from ChIP-seq databases within CNS2. Through luciferase reporter assays, we defined a ~60 bp sequence that acts as the minimum enhancer core of mouse CNS2, which includes the Oct1 TFBS. This enhancer core establishes cooperative interactions with the 3′ and 5′ flanking regions, which contain RUNX1 BSs. In agreement with the luciferase reporter data, the inhibition of RUNX1 and Oct1 TF expression by siRNA suggests that they synergistically enhance endogenous CD69 gene transcription. In summary, we describe an enhancer core containing RUNX1 and Oct1 BSs that is important for the activity of the most potent CD69 gene transcription enhancer.

Introduction In mammals, 95% of the genome is noncoding, and 40% of the noncoding region plays a role in transcription regulation [1]. In combination with gene promoters, cis-regulatory elements contribute to gene transcriptional regulation as enhancers or repressors. Cis-regulatory elements are often identified as conserved noncoding sequences (CNS) with DNase I hypersensitivity. Cis-regulatory elements contain binding sites for trans-acting transcription factors (TFBSs) and serve as centers of epigenetic changes [2,3]. These elements can be located around or within genes and can affect their transcription from as far away as megabases [4]. The three-dimensional conformation of the genome seems to be important to promote contacts between distant elements and their target genes. Thus, these sequences form complex regulatory landscapes within the noncoding genome.

Design of dicer substrate small interfering RNA (DsiRNA). All DsiRNAs described were synthesized by IDT (Integrated DNA Technologies, Coralville, IA, USA). Three DsiRNAs were designed for each gene to be silenced, in addition to two further DsiRNAs and a negative control targeting NC1. The genome reference used for DsiRNA design was mm10. Table 3 provides the sequence and cross-reacting properties of the DsiRNAs against POU2F1 (Oct1) and RUNX1 mRNAs.

DsiRNA transfection and CD69 characterization. Mouse EL-4 T cells (10^5) were transfected with a 1 µM pool of three DsiRNAs against POU2F1 (Oct1) or RUNX1 in a 1:1:1 ratio, or with a negative control DsiRNA against NC1. RNA from the transfected cells was extracted, and qPCR was performed to assess the mRNA levels of RUNX1 and POU2F1 (Oct1). Transfected cells were cultured in a 96-well plate at 37 °C with 5% CO2 for 16 h and then stimulated or not with 10 ng/mL PMA and 500 ng/mL ionomycin for an additional 6 h. Twenty-two hours after transfection, CD69 surface expression was measured by flow cytometry as follows: cells were washed twice in cold 1X PBS and stained for 30 min at 4 °C with 0.5 µg of an α-mCD69/PE-Cy7 monoclonal antibody (Clone H1.2F3, eBioscience 25-0691-81, San Diego, CA, USA). Samples were acquired with a FACSCanto (Becton Dickinson, Franklin Lakes, NJ, USA) flow cytometer, and data were analyzed using FlowJo software.
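The text states that knockdown efficiency was assessed by qPCR of RUNX1 and POU2F1 (Oct1) mRNA, but the quantification method is not specified. The sketch below therefore assumes the common 2^-ΔΔCt relative-quantification scheme with an unspecified housekeeping gene as reference; all names and Ct values are illustrative, not taken from the study.

```python
def relative_expression(ct_target, ct_reference, ct_target_nc1, ct_reference_nc1):
    """Relative mRNA level of a silenced gene versus the NC-1 control.

    Assumes the standard 2^-ddCt method:
      dCt  = Ct(target) - Ct(reference housekeeping gene)
      ddCt = dCt(silenced sample) - dCt(NC-1 control sample)
    Returns the fold expression remaining after silencing (1.0 = no knockdown).
    """
    d_ct_sample = ct_target - ct_reference
    d_ct_control = ct_target_nc1 - ct_reference_nc1
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative values only: a remaining expression of ~0.3 would indicate ~70% knockdown.
print(relative_expression(ct_target=26.4, ct_reference=18.1,
                          ct_target_nc1=24.6, ct_reference_nc1=18.0))
```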
The percentage of inhibition of CD69 expression with respect to its expression in the NC-1 control was represented using GraphPad Prism 7 (version 7, San Diego, CA, USA).

Data representation and statistical analysis. Luciferase assay and flow cytometry data were plotted with GraphPad Prism 7. Significant differences between multiple conditions were tested with a nonparametric one-way ANOVA test performed with GraphPad Prism 7.

Mapping of Transcription Factor Binding Sites within Mouse CNS2 Enhancer In a previous data-mining study of the regulatory features of the CNSs of the CD69 gene, we proposed that CNS2 is the putative main enhancer, based on the enrichment of chromatin accessibility, modifications and bound TFs [34]. Given the availability of new ChIP-seq data, in the present work we updated the map of TFBSs in the human locus using ChIP-seq data from human hematopoietic cell lines available from the ENCODE consortium (Figure 2A) and also mapped bound TFs to the mouse locus using data from murine primary B and T cells available from the ChIP Atlas and ChIPBase v2.0 databases (Figure 2B) in order to compare them. These maps showed that the human CD69 3′ untranslated region, intron 1, promoter, CNS1, CNS2 and CNS4, and the murine 3′ untranslated region, promoter, CNS1 and CNS2, have a higher density of TFBSs (TFBSs per 100 bp) than the rest of the noncoding sequence within the CD69 locus. Thus, the 3′ untranslated region, promoter, CNS1 and CNS2 of both species contain numerous TFBSs, which suggests that they may play a particularly important role in the transcriptional regulation of the CD69 promoter. In agreement with our previous observations, mouse CNS2 also shows a higher total number of TFBSs compared to the other regulatory regions within the CD69 locus. In our previous work [34], we subdivided CNS2 into different subregions based on the presence of TFBSs conserved between six mammal species and tested their contribution to the enhancer capacity of CNS2. However, that characterization left undefined regions where bound TFs have subsequently been identified and could play a relevant role. Moreover, considering that some regulatory mechanisms may have diverged during evolution and that, consequently, some important TFBSs might not be conserved, in the present work we subdivided the mouse CNS2 into five regions based on the abovementioned updated ChIP-seq data, taking into account all bound TFs, including those bound to sites that are not conserved between the six mammal species (Figure 3A). Regions II and III contain more than 70% of TFs bound to non-conserved sites, while regions I and IV contain TFs bound only to conserved BSs. Region V has intermediate characteristics.

Transcriptional Enhancer Capacity of the CNS2 Region and Its Enhancer Core Activity Then, we assessed the contribution of the different regions to the mouse CNS2 enhancer capacity using a luciferase reporter assay in Jurkat cells stimulated or not with PMA plus ionomycin. In this experiment, we compared the capacity of the complete CNS2 sequence (Prom+I-V) to enhance transcription from the mouse CD69 promoter with those of different CNS2 fragments: I+II+III, II+III and I+II (Prom+I-III, Prom+II-III and Prom+I-II). As shown in Figure 3B, regions II+III have an inducible enhancer activity similar to that of regions I+II+III, while regions I+II have a lower enhancer potential.
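A minimal sketch of the percentage-inhibition calculation described above is given here; the choice of the median fluorescence intensity of the CD69-PE-Cy7 channel as the expression readout is an assumption (per-sample percentages of CD69+ events exported from FlowJo would be handled the same way), and the numbers are illustrative.

```python
def percent_inhibition(expression_silenced, expression_nc1):
    """Percentage inhibition of CD69 expression relative to the NC-1 control.

    `expression_silenced`, `expression_nc1`: CD69 expression readouts (e.g. median
    fluorescence intensity) for the TF-silenced sample and for the NC-1
    negative-control sample, measured under the same stimulation condition.
    """
    return 100.0 * (1.0 - expression_silenced / expression_nc1)

# Illustrative numbers only: a silenced sample at 60% of the control signal
# corresponds to the ~40% reduction reported for the individual knockdowns.
print(percent_inhibition(expression_silenced=1500.0, expression_nc1=2500.0))  # -> 40.0
```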
Therefore, regions II+III seem to contain the most of the relevant enhancing features for transcription for the CD69 promoter upon PMA/ionomycin activation, while those of regions I, IV and V have a more modest contribution. While region III contains three overlapping BSs, region II has two separate subregions with TFBSs: One with an Oct1 BS and another with GATA+MyoD TFBSs. We further assessed the contribution of the individual regions I, II and III, as well as the subregions of region II separately or in combination with the region I or III, but keeping, within the tested fragment, the original sequence with the endogenous TFBSs distribution. Figure 4 shows that region II (Prom+II) significantly increases CD69 promoter transcriptional activity upon activation, but neither region I (Prom+I) nor region III (Prom+III) increases CD69 promoter transcriptional activity upon activation ( Figure 4B). The~60 bp sequence of region II containing the Oct1 TFBS (Prom+Oct1) significantly increases the inducible enhancer activity in a similar manner as region II (Prom+II), while the~65 bp sequence containing GATA+MyoD TFBSs (Prom+GATA+MyoD) did not significantly increase the inducible enhancer activity. These results suggest that the subregion that contains the Oct1 TFBS constitutes the most relevant regulatory feature of region II. Moreover, considering that regions I and III alone do not show enhancing activity, these data indicate that the~60 bp sequence containing the Oct1 TFBS is the shortest sequence identified within regions I+II+III that has transcriptional enhancer activity, likely acting as the enhancer core of CD69 transcription. Furthermore, the combination of this sequence with region I (Prom+I+Oct1,~200 bp) had a transcriptional enhancing capacity as high as half of that of region I-III and similar to that of regions I and II together (Figures 3B and 4B). Although the combination of region III with the~65 bp sequence GATA+MyoD (Prom+GATA+MyoD+III,~190 bp) increased the transcriptional activity twofold compared to the sequences containing the BS of each TF individually, it did not reach transcription levels significantly higher than those of the promoter alone. Since GATA and MyoD BS are between Oct1 and RUNX1, it is not possible to test Oct1+RUNX1 without altering the original TFBSs distribution. Thus, considering that regions II+III have almost as much enhancer activity as regions I-III ( Figure 3B), it is suggested that the Oct1 TFBS also needs to cooperate with region III to enhance transcription. Thus, the Oct1 TFBS seems to act as the enhancer core of CD69 transcription and can synergize with either region I or region III or with both of them to achieve greater enhancement of transcription from the CD69 promoter. Altogether, these data suggest a cooperative modular model between the TFs in region II and those in regions I and III. RUNX1 and Oct1 Silencing Affects the Transcriptional Regulation of CD69 Given the synergy observed between the Oct1 BS and region III and the fact that the latter contains a BS for RUNX1, an important TF during development and homeostasis of immune cells, we evaluated the relative contribution of Oct1 and RUNX1 to CD69 transcriptional regulation. To do so, we silenced them individually or in combination using siRNA in the mouse T cell line EL-4 and analyzed the effect of silencing on surface CD69 protein in unstimulated cells or cells activated with PMA/Ion. 
Under both conditions, inhibition of RUNX1 and Oct1 individually resulted in a CD69 expression reduction of ~40%, and the joint inhibition of both TFs did not further reduce CD69 expression (Figure 5). Therefore, RUNX1 and Oct1 contribute to enhancing CD69 expression both at resting state and after stimulation. Moreover, the fact that the effect of their individual silencing was the same as that of their combined deletion suggests that they need each other to increase CD69 expression, which supports the cooperative model of enhancing transcription.

Discussion In this work, we deepened the understanding of the enhancer function of the mouse conserved noncoding sequence 2 (CNS2) in the transcriptional regulation of the CD69 promoter. Using ChIP-seq data, we mapped the TFBSs within CNS2 and used the TFBS distribution to define different regions. Then, we tested the effect of these regions on transcription induced by the CD69 promoter in response to PMA/ionomycin activation in luciferase reporter assays. In this way, we defined a minimal sequence with intrinsic enhancer capacity that contains a conserved Oct1 BS and that could further enhance transcription when accompanied by either a 3′ flanking region containing a conserved SRF BS or a 5′ flanking region containing conserved RUNX1, GABPA and Elk-1 TFBSs, and could enhance transcription to even higher levels in the presence of both of them. These results suggest cooperation between these TFs. We further tested the effect of Oct1 and RUNX1 silencing on CD69 protein expression. In these experiments, the observed synergy seemed even more striking, because the effect of each TF depended on the presence of the other. In previous studies, we observed that, among the different CNSs, CNS2 had the most potent enhancer activity on CD69 transcription. In the present work, the mapping of TFBSs on the CD69 locus showed that several of the TFs that have been shown to bind to CNS2 also bind the promoter region, which suggests their importance in the transcriptional regulation of CD69. Despite the fact that there are fewer data available for the TFs bound to the mouse CD69 locus than to the human CD69 locus, mouse CNS2, like its human counterpart, is the region with the highest enrichment of TFBSs. Occupancy by specific TFs has been described as a predictor of the enhancer activity of a given region [38]. Thus, these results highlight the importance of CNS2 as a transcriptional enhancer of CD69. The present results regarding the enhancer capacity of the CNS2 regions are in agreement with our previous work, in which the region with the highest enhancer capacity comprises regions I, II and III described in the current study [34]. In the present study, the further subdivision of this region into smaller regions allowed us to identify an Oct1 BS-containing minimum enhancer core that can cooperate with either an adjacent region at the 3′ end (region I, containing an SRF TFBS) or at the 5′ end (region III, containing a RUNX1 TFBS), or with both of them, to reach greater enhancer activity.
Interestingly, although region I did not have enhancer activity per se, it could increase that of Oct1 BS. This could be explained by an interaction of TFBSs in region I with the transcriptional machinery that bounds to the region containing the Oct1 BS. Altogether, these data suggest a cooperative modular model of transcription factors that synergistically enhances transcription. Cooperation between transcription factors may occur based on their physical association, a mechanism frequently described for enhancer sequences that affect the transcriptional behavior of several immune system genes [39][40][41]. Although Oct1 has been widely studied as a regulator of transcription in many cell types [42,43], its precise role is still unclear. Oct1 is known to have both activating [44][45][46] and silencing activity [47][48][49]; moreover, this TF has been described to play a role in CD4 + T cell differentiation [50] and memory formation [51], orchestrating the fate and function of CD4 + effector T cells as a switchable stabilizer of the repressed and inducible state [52], acting at hypersensitivity sites that are distant from promoters of target genes in several cases. RUNX1 has been implicated in the development and homeostasis of different immune subpopulations [53]. As is the case for CNS2, many enhancers are located far from their target genes, but are able to specifically communicate with distal promoters over large distances [54][55][56] through the formation of a chromatin loop [57,58]. Thus, chromosome topology is critical for the interactions between distant enhancers and promoters. However, the relevance of particular interactions for target gene transcription is difficult to assess because they have not always been related to functional outcomes [59,60]. Because of the lack of a genomic context in reporter plasmids, luciferase assay data do not reflect the effects of chromosome topology on transcription. However, the luciferase assay can still be used as a predictor of the regulatory capacity of elements that can be further characterized in a more physiologic context. With that aim, we tested the effects of RUNX1 and Oct1 silencing on the expression of the endogenous CD69 gene. Silencing of these TFs separately in the mouse EL-4 T cell line reduced CD69 expression as much as their joint silencing, suggesting that they can enhance CD69 expression, but only if the other TF is also present. This result suggests a mode of action that is completely synergic. Given the predominant enhancer role of CNS2 in CD69 transcription and, within it, of the Oct1 TFBS, and its observed cooperation with region I, we believe that these results might reflect, to some extent, the role of Oct1 and RUNX1 acting on the BSs described in regions I and II of CNS2. However, we cannot rule out that they might also be influenced by the action of these TFs on other BSs within the CD69 locus. The ChIP-seq databases show two RUNX1 BS found in CD69 promoter sequence. However, since these regions have a much more modest enhancing capacity, we propose that the main enhancer effect of RUNX1 is through the BS in CNS2. Future experiments using ChIP, TF silencing and TFBS mutagenesis might shed more light on the role of the studied TF in CD69 transcription in the genomic context. In summary, our results strengthen evidence of the role of CNS2 as an essential enhancer of mouse CD69 promoter transcription. 
Despite the need for further studies on the role of CNS2 in the genomic context, to our knowledge, no detailed studies on the composition and function of regulatory elements within this enhancer have been published. Our work is, therefore, a step forward in understanding how CNS2 regulates CD69 expression. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.
3,966.2
2019-08-28T00:00:00.000
[ "Biology" ]
Self-assembly of polymer-like structures of magnetic colloids: Langevin dynamics study of basic topologies Abstract We study the self-assembly of colloidal magnetic particles permanently cross-linked into polymer-like structures with different topologies, that we call supracolloidal magnetic polymers (SMPs). In order to understand the influence of the interparticle permanent links, we investigate SMPs holding the main topologies observed in the self-assembly of non-cross-linked magnetic particles via grand canonical Monte Carlo simulations: chains, rings and simple branched structures. Here, using molecular dynamics simulations, we focus on systems of SMP pairs. Our results evidence that the presence of crosslinkers leads to the formation of new types of aggregates, not previously observed for individual magnetic colloids.

Introduction The fundamental understanding of the self-assembly properties of colloidal systems is one of the key topics in current research on novel microstructured soft materials and technologies. Available experimental techniques allow the synthesis of colloidal particles with sizes ranging from the micrometre to the nanometre scale, from a wide variety of substances and with a high control on their chemical and physical properties. This places colloidal particles among the most versatile building blocks in bottom-up strategies for the design of advanced materials, including cutting-edge approaches such as those based on hierarchical self-assembly [1,2]. Colloids with defined self-assembly properties are particularly useful for the design of stimuli-responsive soft materials, i.e. soft matter systems that experience a controlled change in their properties as a response to some particular external drive [3]. One of the most important cases is the combination of magnetic colloidal particles with carrier substances, like liquids and/or polymers, in order to create materials responsive to external magnetic fields. Whereas many substances common in soft matter systems are significantly sensitive to other stimuli such as temperature, pH, electric fields or chemical reagents, their magnetic properties are practically negligible under most conditions of interest for technological applications. The introduction of micro- or nanoparticles of magnetic substances - typically, solid metallic compounds - is, therefore, essential for the creation of magnetoresponsive soft materials. The fact that only the magnetic particles within such systems are sensitive to the external stimulus represents an advantage for many practical applications, providing a high control on the response of the material [4,5]. Magnetic colloidal particles and materials based on them have been intensively studied for already several decades and keep attracting a growing research interest, mainly stimulated by their unique properties and the development of the experimental techniques for the synthesis of colloidal hybrid materials. The simplest and most extensively studied magnetoresponsive soft materials are ferrofluids and magnetorheological fluids, which are dispersions of magnetic colloidal particles in a liquid carrier fluid [6][7][8][9]. These materials show strong changes in their rheological properties under the influence of external magnetic fields and/or cooling.
Such macroscopic changes emerge as a consequence of the spontaneous self-assembly or field-induced assembly of the magnetic particles into chain-like structures - typically, linear chains, rings and branched chains - driven by the strongly anisotropic nature of their magnetic interactions [10][11][12][13][14]. This characteristic self-assembly behaviour of magnetic colloidal particles is also determinant for the properties of more complex soft magnetic materials, including polymer hybrid materials such as magnetic gels and elastomers [15][16][17]. The aforementioned soft magnetic materials - conventional magnetic fluids, gels and elastomers - are usually obtained by means of relatively simple experimental techniques - basically, the dispersion of a large amount of magnetic colloidal particles in a liquid carrier and/or the filling of a polymer matrix with them. This provides a limited control on the detailed microstructure of such systems. A much more sophisticated approach is the creation of small aggregates of magnetic particles with a well-defined structure that can later be used as building blocks of more complex systems, following the bottom-up approach to materials synthesis. The most straightforward structures for such small building blocks are the aforementioned self-assembled chains of magnetic colloidal particles. In particular, linear polymer-like chains are relatively easy to obtain under external fields. These systems were initially studied as micromechanical sensors [18], microfluidic artificial cilia, propellers and pumpers [19][20][21][22] and contrast agents in magnetic resonance imaging diagnostics [23]. They have also been proposed for other applications, like magnetically tunable photonic crystals [24], or as building blocks of more complex structures such as polymer brush-like magnetoresponsive coatings [25][26][27][28]. In some of these examples, the magnetic chains can keep their linear structure with the sole help of the magnetic interactions between the particles. However, in many cases it is convenient to permanently stabilise the chain-like structures by cross-linking the neighbouring particles with polymers. This provides a much higher resistance to mechanical stresses, a significantly increased magnetic response [29] and a total independence from the self-assembly conditions of the particles after the cross-linking, with the only potential downside of a much higher rigidity of the chains. The latter property, however, can be tuned by selecting the composition and length of the polymer crosslinkers [30,31]. Even though this tuning has been achieved to date only for microparticles, cutting-edge experimental techniques like DNA-designed self-assembly [32,33] or in situ polymer mineralisation [34,35] open up the possibility to also create rather flexible linear polymer-like chains of magnetic nanoparticles. Such techniques may also allow the permanent stabilisation of small polymer-like aggregates with morphologies more complex than the simple linear chain. From now on, we will use the generic term 'supracolloidal magnetic polymers' (SMPs) to name these systems. The detailed characterisation of the equilibrium structures of SMPs has only very recently started to be thoroughly explored. The rich structural behaviour predicted even for single linear chains [36][37][38] and their polymer brush-like arrangements [26][27][28] points to these materials as promising building blocks of sophisticated self-assembled supracolloidal systems with strong magnetic response.
Inspired by the possibility of the experimental achievement of more complex morphologies for these stabilised chain-like aggregates, here we extend our preliminary work on the self-assembly properties of pairs of linear chains and rings of cross-linked ferromagnetic colloids [39] to the rest of the most basic structures, observed experimentally in dispersions of free magnetic particles and characterised by simple models of conventional ferrofluids, like the system of dipolar hard spheres [14]. Based on this latter system, very recently we presented a grand canonical Monte Carlo simulation study aimed at understanding the properties of single clusters of ferromagnetic colloids [40]. In that study, we could estimate the probabilities of finding a cluster of a given size and/or topology depending on temperature. Our simulations confirmed the importance of the competition between four basic morphologies: chain, ring, three-point branched structure - or 'Y'-junction - and the four-point branched structure - or 'X'-junction [40]. We also discovered that, to observe branching or ring formation even at relatively low temperature, the cluster should contain not less than seven to eight dipolar hard spheres. Examples of these self-assembled clusters of 20 dipolar spheres are provided in Figure 1. Based on these findings, we decided to study SMPs of several lengths, to allow for variations in the configurational entropy. The idea to permanently cross-link the four basic self-assembled structures observed for magnetic colloidal particles in dispersion is based on the assumption that such clusters, formed by spontaneous or simple field-induced self-assembly, might be experimentally easier to stabilise with polymer cross-linking techniques. In order to study the self-assembly properties of such prospective supracolloidal building blocks, here we perform extensive computer simulations using a bead-spring model of SMPs for every one of the four basic morphologies: linear chain, ring, X-junction and Y-junction. We focus on the equilibrium structures driven by the magnetic interactions of pairs of identical small aggregates in the absence of external fields, analysing the dependence of their self-assembly behaviour on their morphology, size and magnetic strength. The work is organised as follows. First, we introduce our bead-spring model of SMPs and the details of the simulation approach. Then, the simulation results obtained for the equilibrium structures of two SMPs of given topology are presented and analysed, discussing the different self-assembled structures observed depending on the interaction range between the two stabilised aggregates. Finally, the summary of the work can be found in the last Section.

A model of stabilised polymer-like aggregates of magnetic colloids We assume the magnetic colloids in our system to be identical spherical solid bodies of a ferromagnetic material, composed of a single magnetic domain, that are coated with a thin layer of soft polymer material in order to individually stabilise them in the carrier fluid and to allow their cross-linking. Under this assumption, the magnetic properties of the colloids can be represented by a permanent point magnetic dipole, μ, fixed at the centre of the sphere. Therefore, any pair of magnetic particles i and j, carrying magnetic moments μ_i and μ_j, respectively, interact by means of the conventional dipole-dipole potential,

U_dd(r_ij) = (μ_i · μ_j)/r^3 − 3 (μ_i · r_ij)(μ_j · r_ij)/r^5,   (1)

where r_ij = r_i − r_j is the displacement vector between the centres of the particles and r = |r_ij|.
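A minimal NumPy sketch of the pair energy in Equation (1), written in the reduced units used later in the paper, is given below; the function name and the unit convention without the μ0/4π prefactor are assumptions consistent with such reduced-unit models.

```python
import numpy as np

def dipole_dipole_energy(mu_i, mu_j, r_i, r_j):
    """Equation (1): point dipole-dipole interaction energy in reduced units.

    mu_i, mu_j : dipole moment vectors of particles i and j
    r_i, r_j   : particle centre positions; r_ij = r_i - r_j
    """
    r_ij = np.asarray(r_i, dtype=float) - np.asarray(r_j, dtype=float)
    r = np.linalg.norm(r_ij)
    return (np.dot(mu_i, mu_j) / r**3
            - 3.0 * np.dot(mu_i, r_ij) * np.dot(mu_j, r_ij) / r**5)

# Head-to-tail contact of two unit dipoles at distance sigma = 1 gives the
# well-known minimum of -2 mu^2:
print(dipole_dipole_energy([0, 0, 1], [0, 0, 1], [0, 0, 1.0], [0, 0, 0.0]))  # -> -2.0
```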
For the modelling of the non-magnetic interactions in our system of SMPs, we choose a coarse-grained approach based on a bead-spring representation, as is frequent in simulations of polymer-magnetic colloidal particle composites [41][42][43][44][45][46]. Specifically, the magnetic colloids are modelled as soft spheres of characteristic diameter σ, interacting with a Weeks-Chandler-Andersen (WCA) pair potential [47],

U_WCA(r; ε_s, σ, r_cut) = U_LJ(r) − U_LJ(r_cut) for r < r_cut, and 0 for r ≥ r_cut,   (2)

where U_LJ(r) = 4ε_s[(σ/r)^12 − (σ/r)^6] is the conventional Lennard-Jones (LJ) potential with potential well depth ε_s. The LJ potential in Equation (2) is truncated at the position of its minimum, r_cut = 2^(1/6) σ, and shifted by its corresponding depth, U_LJ(r_cut), to make the interaction purely repulsive. Finally, the effects of the polymer crosslinkers that stabilise the aggregates of magnetic particles into polymer-like chain structures are modelled by means of a particular spring-like bonding potential that takes into account the mechanical constraints introduced by the crosslinkers on both the centre-to-centre distance between the bonded particles and the relative orientations of their dipole moments. This potential, Equation (3), includes two terms. In this expression, K_S is the general prefactor that determines the energy scale of the interaction, μ̂_i = μ_i/|μ_i| and μ̂_j = μ_j/|μ_j| are the unit vectors parallel to each dipole moment, and r_max is the maximum allowed centre-to-centre distance. The first term in this expression, introduced in our preliminary works [29,36], is mainly intended to penalise the misalignment of the dipoles from their head-to-tail arrangement. It is basically equivalent to linking the particles with a harmonic spring of elastic constant K_S whose ends are attached to points on the particle surfaces - i.e. to points at distance σ/2 from their centres - corresponding to the projection of the head of one of the dipoles and the tail of the other one, respectively. Therefore, the first term in expression (3) defines a maximum of two bonding points for each particle, but does not limit the number of neighbours linked to them. The second term of the bonding potential corresponds to an isotropic attraction, defined as the standard finitely extensible non-linear elastic (FENE) bonding potential. As pointed out above, the FENE term simply prevents the bonding distance from going beyond r_max. This term is important to avoid unrealistic arrangements of particles from different aggregates due to the unlimited extensibility of the bond allowed by the first term. The modelling approach described above allows us to easily represent any of the basic branched and non-branched chain morphologies to be studied here. Such morphologies are sketched in Figures 2(b) to 2(f), in this case without showing the bonding springs. The depicted examples correspond to the minimal sizes - measured as the number of particles in the aggregate, L - explored in this work for every given morphology. For symmetry reasons, non-branched structures - linear chains and rings - and four-point branched ones - X-junctions - are uniquely defined by their backbone shape. Three-point branched structures, instead, allow the definition of two different morphologies with the same shape, as shown in Figures 2(e) and (f), which correspond to different orientations of the dipoles at the branching points. We name the case in which the central particle is bonded to one dipole head and two dipole tails the type 1 Y-junction (Y1), whereas the case of bonding to two dipole heads and one tail is named the type 2 Y-junction (Y2).
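Since Equation (3) itself is not reproduced in the text, the sketch below implements only its verbal description: a harmonic spring of constant K_S between the surface point at the head of dipole i and the surface point at the tail of dipole j, plus a standard FENE attraction capping the centre-to-centre distance at r_max. The exact prefactors and functional form of Equation (3) are assumptions here, not the authors' definitive expression.

```python
import numpy as np

def bond_energy(r_i, mu_hat_i, r_j, mu_hat_j, K_S=30.0, sigma=1.0, r_max=1.5):
    """Sketch of the two-term crosslinker bond described in the text (Equation (3)).

    First term: harmonic spring of constant K_S between the point on particle i's
    surface along its dipole head, r_i + (sigma/2) mu_hat_i, and the point on
    particle j's surface at its dipole tail, r_j - (sigma/2) mu_hat_j.
    Second term: a FENE attraction that forbids centre-to-centre distances
    beyond r_max. Prefactor choices are illustrative assumptions.
    """
    head_i = np.asarray(r_i, dtype=float) + 0.5 * sigma * np.asarray(mu_hat_i, dtype=float)
    tail_j = np.asarray(r_j, dtype=float) - 0.5 * sigma * np.asarray(mu_hat_j, dtype=float)
    spring = 0.5 * K_S * np.sum((head_i - tail_j) ** 2)

    r = np.linalg.norm(np.asarray(r_i, dtype=float) - np.asarray(r_j, dtype=float))
    if r >= r_max:                        # the FENE bond cannot be stretched beyond r_max
        return np.inf
    fene = -0.5 * K_S * r_max**2 * np.log(1.0 - (r / r_max) ** 2)
    return spring + fene
```

The default values K_S = 30 and r_max = 1.5 follow the parameters quoted in the Simulation details section.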
The simulation approach employed to study the self-assembly behaviour of these five different morphologies is described in the next Section.

Simulation details With the model introduced above, we performed molecular dynamics simulations in the canonical ensemble. No periodic boundary conditions were applied. We used a Langevin thermostat in order to approximate implicitly the effects of the thermal fluctuations of the background fluid on the magnetic beads [48,49]. This means that the motion of each bead i is governed by the translational and rotational Langevin equations, obtained by adding a stochastic term and a friction term to the Newtonian equations of motion:

m_i (dv_i/dt) = F_i − Γ_T v_i + ξ_i,T,   (4)

I_i · (dω_i/dt) = τ_i − Γ_R ω_i + ξ_i,R,   (5)

where F_i and τ_i are the total force and torque acting on the particle, v_i and ω_i are its translational and angular velocities, m_i is the particle mass and I_i its inertia tensor. Finally, Γ_T and Γ_R are the translational and rotational friction constants, and ξ_i,T and ξ_i,R are a Gaussian random force and torque, respectively, fulfilling the usual fluctuation-dissipation rules. In this work, we used unitary inertia tensors in order to ensure isotropic rotations in the system. Since here we are only interested in the equilibrium behaviour and not in the system dynamics, the friction constants in Equations (4) and (5) can be chosen arbitrarily. We chose Γ_T = 1 and Γ_R = 3/4, values known to provide a fast relaxation in this type of simulations [50,51]. Our coarse-grained modelling approach makes convenient the use of reduced units. System energies are measured in the scale of the soft-core repulsions, ε_s = 1. In all the simulations, we used a reduced system temperature T = 1, defined as the ratio between the actual thermal energy and the chosen energy scale. Distances and masses are measured in units defined by the characteristic diameter of the beads, σ = 1, and their mass, m = 1. Finally, based on our previous studies on linear chain structures [29,36], we chose K_S = 30 and r_max = 1.5 for the bonding interactions. The systems we simulated here consist of two identical aggregates with a given size, L, and squared dipole moment, μ², in a simulation box with open boundaries. The aggregates are placed symmetrically in the simulation box at a minimum separation distance d between them, with an initial configuration corresponding to perfectly ideal arrangements of each shape. In order to control their relative separation during the simulation, the central particle of each aggregate is permanently fixed at its initial position, forbidding its displacements but not its rotations. This central particle is the middle one in linear chains, the particle at the branching position in the X, Y1 and Y2 shapes, and an arbitrary one in rings. The choice to fix these particular particles was made because of the branched structures, in which the middle particle corresponds to the branching point, i.e. to the particle which, in comparison to other particles in the SMP, is least likely to form an extra bond. The simulation protocol consisted of an initial warm-up of the initial configurations at T = 4, performing 10^5 integration steps with a time step δt = 5 · 10^−5. After the warm-up, the systems were equilibrated at T = 1 for 9 · 10^5 integration steps, using a time step δt = 5 · 10^−3. Finally, a production cycle of 3 · 10^6 steps was performed, in which the system configurations were measured at intervals of 10^5 steps.
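The production runs were performed with ESPResSo; purely as an illustration of the translational part of the Langevin scheme in Equation (4), the sketch below uses an explicit Euler-type update with a random-force amplitude sqrt(2 Γ_T k_B T / δt) that follows the standard fluctuation-dissipation relation. Both the integrator choice and the data layout are assumptions for illustration, not a description of ESPResSo's internal algorithm.

```python
import numpy as np

def langevin_translation_step(r, v, force, dt=5e-3, gamma_T=1.0, kT=1.0, mass=1.0, rng=None):
    """One explicit-Euler update of Equation (4) for all beads at once.

    r, v, force : (N, 3) arrays of positions, velocities and deterministic forces.
    The random force amplitude sqrt(2 * gamma_T * kT / dt) enforces the
    fluctuation-dissipation balance for the chosen friction constant.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, np.sqrt(2.0 * gamma_T * kT / dt), size=v.shape)
    a = (force - gamma_T * v + noise) / mass
    v_new = v + a * dt
    r_new = r + v_new * dt
    return r_new, v_new
```

The default dt, gamma_T, kT and mass values mirror the reduced-unit parameters quoted above for the production stage.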
With this protocol we studied the effects of different morphologies, sizes and dipole moments of the aggregates on the self-assembly behaviour of identical pairs as a function of their relative distance. By considering only two SMPs and fixing the distance between them, we make an idealised approximation to the conditions corresponding to different concentrations of monodisperse SMP solutions. We sampled all the morphologies introduced above, using sizes L = 9 and L = 25 for the X-junction shape and L = 10 and L = 25 for the rest, three different values for the dipole moments, μ² = {3, 5, 8}, and relative separation distances within the interval d/L ∈ [0.1, 1.1]. The small size of the systems under study allowed us to avoid the use of any approximate method for the calculation of the long-range magnetic interactions, which were computed by simple direct sum. The simulations were carried out with the package ESPResSo 3.3.1 [52,53]. Results and discussion We start the discussion by analysing the probability of two identical SMPs, being in equilibrium at a relative separation d/L, to form a number C of energy-driven close contact connections between their magnetic particles. This simple parameter will help us to identify the different self-assembled structures that may appear. In order to compute C, we first define the centre-to-centre separation threshold for two particles to be considered in close contact as r_ij ≤ 1.2. Since the only attractive interaction in the system corresponds to favourable arrangements of the magnetic dipoles, we also impose the condition that the dipole-dipole pair energy of the involved particles, given by Equation (1), is non-positive, U_dd(r_ij) ≤ 0. Finally, in this calculation we do not distinguish the connections formed between pairs of particles belonging to the same or to different SMPs, but we exclude any pair corresponding to permanently bonded particles. Thus, C gives us a quantitative indication of the self-assembly in the systems. Figure 3 shows the probability distributions of two linear SMPs to form C energy-driven connections, as a function of their relative separation d/L, for several selected values of dipole moment and SMP size. Several typical configurations for L = 10 are also provided inside the plots. One can see that for μ² = 3, independently of L and the distance between the central particles, the two SMPs are not forming any energy-driven connections with significant probability. Only for d/L = 0.1 do the central particles, fixed next to each other, reorient their dipole moments in such a way that their interaction is favourable. From this observation, one can expect a very weak self-assembly in dispersions of linear SMPs with μ² = 3, even in a broad range of concentrations. The same conclusion also holds for SMPs with size L = 25; however, their distributions tend to be slightly more irregular due to their larger configurational entropy. This weak dependence on the size of the SMPs within the sampled range of parameters has also been observed for the rest of the systems under study. Thus, without loss of qualitative generality, from now on we will discuss only the behaviour of systems with the shortest SMPs. As is expected, with growing magnetic interaction strength the probability of forming self-assembled connections increases. As one can see for μ² = 5, at large distances two linear SMPs undergo a closure transition by forming two independent rings. 
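In practice, C can be extracted directly from the stored configurations. The sketch below is a minimal, illustrative Python/NumPy implementation of the counting rule defined above (the r_ij ≤ 1.2 contact threshold, the non-positive point-dipole pair energy of Equation (1), and the exclusion of permanently bonded pairs); the helper names and the toy configuration are ours, not part of the original analysis code.

```python
import numpy as np
from itertools import combinations

def dipole_pair_energy(r_vec, mu_i, mu_j):
    """Point dipole-dipole pair energy in reduced units."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (mu_i @ mu_j - 3.0 * (mu_i @ r_hat) * (mu_j @ r_hat)) / r ** 3

def count_connections(pos, mu, bonded_pairs, r_contact=1.2):
    """Number C of energy-driven close contacts: pairs closer than r_contact,
    with attractive (non-positive) dipolar energy, excluding permanent bonds."""
    bonded = {tuple(sorted(p)) for p in bonded_pairs}
    c = 0
    for i, j in combinations(range(len(pos)), 2):
        if (i, j) in bonded:
            continue
        r_vec = pos[j] - pos[i]
        if np.linalg.norm(r_vec) <= r_contact and dipole_pair_energy(r_vec, mu[i], mu[j]) <= 0.0:
            c += 1
    return c

# Toy check: two short head-to-tail dimers lying side by side give C = 2.
pos = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1.1, 0.], [1., 1.1, 0.]])
mu = np.sqrt(5.0) * np.array([[1., 0., 0.], [1., 0., 0.], [-1., 0., 0.], [-1., 0., 0.]])
print(count_connections(pos, mu, bonded_pairs=[(0, 1), (2, 3)]))
```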
Also significant is the probability of finding a ring and an open chain configuration at large separation distances. As soon as the distance between the central particles of the two linear SMPs becomes small enough for their ends to touch, the chains form a single ring, which then evolves into a double loop with decreasing d/L. By increasing further the value of μ², the probability of observing isolated open chains vanishes. Snapshots of the characteristic configurations observed for μ² = 8 are provided separately in Figure 4 for the sake of clarity. From these results we can conclude that, for highly diluted systems, mainly open or closed loop structures of isolated SMPs would be observed for moderate to strong dipole-dipole interactions, with open structures being less probable as the latter become stronger. In general, it is reasonable to expect that the self-assembly in highly diluted systems of SMPs of any morphology will correspond mainly to internal rearrangements of the crosslinked backbone of individual clusters - typically, into closed loop structures - rather than the formation of aggregates of different SMPs. In contrast, for high concentrations of SMPs one can expect the inversion of this tendency, with the observation of a significant degree of external self-assembly - i.e. the self-assembly of different SMPs into larger aggregates - and the multi-loop topology as a dominant structural motif. The next structure we analyse is the ring SMP. In Figure 5 one can see the probability distributions for two ring SMPs to form C self-assembled connections as a function of d/L. The fact that nearly head-to-tail cross-linked dipolar particles keep an almost zero net magnetic moment when arranged into rings or other closed structures makes the self-assembly of these SMPs essentially negligible. Here, this behaviour is observed for all sampled conditions. At this point, the comparison of linear and ring SMPs provides a first indication of what can be one of the main factors determining the different degrees of self-assembly corresponding to each topology: the number of open chain ends. The important role of the open chain ends in the self-assembly behaviour of the different SMPs is further emphasised by the results obtained for pairs of X-junction morphologies. These results are summarised in Figures 6 and 7. The probability distributions of C as a function of d/L, shown in Figure 6, indicate the existence of a richer variety of self-assembled structures, even at low μ². This is due to the fact that the X-junction morphology has, for example, twice as many open ends as the linear SMP. This increases the number of different ways to form self-assembled motifs. In this regard, one can think of an analogy with chemical valency or with the self-assembly valency in patchy colloids [54,55]. In addition, a small SMP with X-junction shape is less likely to get its open ends involved in the formation of internal self-assembled structures - typically, as already mentioned above, closed loops - than linear chains or Y-junctions of similar size, keeping instead its free ends available for the self-assembly with other SMPs. The reason for this is related to the relatively shorter length of linear segments with open ends in the X-junction structure, which demands a higher local bending of the chain backbone in order to connect such open ends to other particles within the same SMP and form internal loops. 
Since the backbone has a finite intrinsic rigidity, set by the chain bonds and the dipole-dipole interactions [29,56], such bending can be unattainable for SMPs that are too small. Therefore, small X-junction morphologies are particularly favourable for external self-assembly. Figure 6 also shows how the probability to form energetically favourable connections increases with growing μ². In this case, the observation of the characteristic self-assembled structures, whose main examples are shown in Figure 7, is particularly interesting. The maximum separation distance at which external self-assembly can be observed is d/L ∼ 0.6. At larger distances and with growing μ², the free ends within the same SMP tend to connect to each other in pairs, forming closed internal loops. For μ² = 8, the most probable configuration is the one resulting from the self-assembly of the four free ends into two pairs, forming a double loop structure. At intermediate distances, 0.3 ≲ d/L ≲ 0.6, the external self-assembly of two pairs of free ends becomes dominant, so that the two SMPs form a large central closed loop, whereas the remaining free ends tend to form small internal loops. The most interesting structural motif is observed in a narrow separation region around d/L ∼ 0.28 for μ² = 5 and d/L ∼ 0.36 for μ² = 8: here, a cage-like structure is formed by means of the external self-assembly of the four pairs of free ends. This peculiar structure suggests its potential use as a magnetically controllable container, able to enclose some small material by spontaneous self-assembly and release it when an external magnetic field, strong enough to disassemble the SMPs' aggregate, is applied. Finally, at shorter separation distances, there is a clear jump in the number of connections, led by the self-assembly of the two SMPs into a nearly flat, full contact structure with all dipoles arranged in antiparallel pairs. Finally, we analyse the Y-junction morphologies. The first important observation is that, for the range of parameters sampled here, we found no significant differences in the probability distributions of self-assembled connections corresponding to Y1 and Y2 morphologies. It is also worth underlining that the grand canonical simulations mentioned above [40] show that the probabilities for free particles to spontaneously form Y-junctions of each type are identical. On the other hand, one can expect that the application of external fields will impose different orientations on each type of Y-junction, but analysing the response of the different morphologies to external fields is beyond the scope of this work. Therefore, regarding the present discussion, we will consider the Y1 and Y2 morphologies as indistinguishable. Figures 8 and 9 correspond to the results obtained for the Y2 case. The probability distributions for the self-assembled connections, C, as a function of the relative separation, shown in Figure 8, display a behaviour similar to that observed for X-junctions, but with a slight shift towards lower values of this parameter. Roughly, one can see that the Y-junction is quantitatively in between the linear chain and the X-junction behaviours, further supporting the idea of the number of free chain ends as one of the main factors in the self-assembly of these systems. For instance, except for very short separation distances, at μ² = 3 we can see that C is distributed around a central value C̄ ∼ 1, whereas for X-junctions we observed C̄ ∼ 2 and for linear chains and rings C̄ ∼ 0. 
Figure 9 shows examples of the different characteristic structures for μ² = 5 and μ² = 8. At large separation distances, Y-junction SMPs tend to form one closed internal ring, keeping one free chain end. At intermediate distances, the external self-assembly of two free ends from each SMP to form a large closed loop is also observed, whereas the remaining free end of each cluster is not involved in any self-assembly. At shorter distances the self-assembled structures become more complex, with the two pairs of free ends still forming a closed loop but also involving the remaining free ends, which tend to connect to the central particle of the opposite SMP. This is favoured by the increase of μ² and the decrease of d/L. The resulting structure is close to the cage-like aggregate observed for X-junctions, and its appearance is clearly signalled for μ² = 8 by the shift of the C distribution at d/L ∼ 0.36. Finally, at very close separation distances, the Y-junction SMPs can form full contact antiparallel structures or nearly flat multi-loop shapes. Conclusions and outlook In this work, we have undertaken a first step in the analysis of the self-assembly behaviour of SMPs by performing computer simulations for different morphologies and selected values of the strength of their magnetic interactions. For their morphologies, we considered the typical basic structures observed in dispersions of free magnetic colloidal particles. For simplicity, we focused on the behaviour of a pair of identical small SMPs, mimicking the effect of the concentration by fixing the characteristic separation distance between them. Our results predict that the number of free chain ends of a given SMP morphology is determinant for the degree of self-assembly and the complexity of the resulting structures, playing a role that we can describe as a self-assembly 'valency'. The ring morphology - with no free chain ends, i.e. with zero self-assembly valency - is essentially inert, not participating in any self-assembly unless it is forced to interact with other aggregates by imposing a very close proximity. Linear chains - with valency 2 - tend to form simple closed rings or, for very high magnetic interactions and/or densities, multi-loops. Y-junction SMPs - with valency 3 - have a richer self-assembly behaviour, forming internal or external single rings at large separation distances and multi-loop, nearly cage-like structures at short separation distances. Finally, the X-junction - with valency 4 - is the morphology most prone to form self-assembled structures, including clear cage-like shapes at short separation distances. In contrast to the linear chain and ring structures, well known in the context of dispersions of free magnetic particles, the cage-like structures observed for Y- and X-junction SMPs are, to the best of our knowledge, magnetically self-assembled morphologies not reported before. The cage-like structures formed by X-junction SMPs are particularly interesting for technological applications, as they can be conceived as magnetically actuated containers that can enclose and release small cargos. This observation points to SMPs as promising building blocks of advanced magnetoresponsive microstructured materials. We hope this will encourage further research efforts on these systems. Currently, we are planning to extend the present study by analysing non-homogeneous systems. By combining different SMP morphologies, we expect to find more novel self-assembled structures. 
The response of these systems to external fields will also be studied. Finally, an extension to larger systems, with actual dispersions of SMPs at different concentrations, will also be performed.
6,806.8
2018-04-13T00:00:00.000
[ "Materials Science", "Physics" ]
Different roles of conserved tyrosine residues of the acylated domains in folding and activity of RTX toxins Pore-forming repeats in toxins (RTX) are key virulence factors of many Gram-negative pathogens. We have recently shown that the aromatic side chain of the conserved tyrosine residue 940 within the acylated segment of the RTX adenylate cyclase toxin-hemolysin (CyaA, ACT or AC-Hly) plays a key role in target cell membrane interaction of the toxin. Therefore, we used a truncated CyaA-derived RTX719 construct to analyze the impact of Y940 substitutions on functional folding of the acylated segment of CyaA. Size exclusion chromatography combined with CD spectroscopy revealed that replacement of the aromatic side chain of Y940 by the side chains of alanine or proline residues disrupted the calcium-dependent folding of RTX719 and led to self-aggregation of the otherwise soluble and monomeric protein. Intriguingly, corresponding alanine substitutions of the conserved Y642, Y643 and Y639 residues in the homologous RtxA, HlyA and ApxIA hemolysins from Kingella kingae, Escherichia coli and Actinobacillus pleuropneumoniae, affected the membrane insertion, pore-forming (hemolytic) and cytotoxic capacities of these toxins only marginally. Activities of these toxins were impaired only upon replacement of the conserved tyrosines by proline residues. It appears, hence, that the critical role of the aromatic side chain of the Y940 residue is highly specific for the functional folding of the acylated domain of CyaA and determines its capacity to penetrate target cell membrane. Except for the presence of the unique ' AC-to-Hly-linking segment' , the domain structure of the Hly moiety of CyaA exemplifies the general domain organization of other known RTX hemolysins/cytolysins that share a fair extent of sequence homology and differ primarily in the number of repeat sequences forming their RTX domains 1 . Results The aromatic ring of the tyrosine residue 940 plays a crucial role in the biological activity of Bordetella CyaA toxin. We have previously observed that alanine or proline substitutions of the tyrosine residue 940 (Y940) in the acylated domain of CyaA did not interfere with the CyaC-mediated posttranslational acylation of the toxin, but still ablated the capacity of recombinant CyaA toxin to penetrate target cell membrane and deliver its AC domain into cells 12 . It was thus important to exclude that this loss of toxin activity might be an artifact caused by extraction of the recombinant CyaA proteins from E. coli-produced inclusion bodies by denaturating solutions of 8 M urea and subsequent calcium-driven refolding of the recombinant toxin variants in target cells suspensions. Therefore, we examined whether the Y940 substitutions will affect also the activities of CyaA toxin variants secreted by Bordetella pertussis and B. bronchiseptica. Towards this aim, mutations encoding the various Y940 substitutions were introduced by allelic exchange into the cyaA genes on B. pertussis and B. bronchiseptica chromosomes and the corresponding variants of secreted CyaA toxins were extracted from bacterial cell surface and used in toxin activity assays. Both B. pertussis and B. bronchiseptica intact CyaAs showed a similar capacity to deliver the AC domain across the plasma membrane of sheep erythrocytes and human THP-1 monocytes (Supplementary Figure S1). As further shown in Fig. 
1a,b, the wild-type bacteria producing intact CyaA, or the mutant secreting the CyaA-Y940F toxin variant, exhibited a hemolytic phenotype on Bordet-Gengou (BG) agar plates. In contrast, the B. pertussis and B. bronchiseptica mutants secreting the CyaA-Y940A or CyaA-Y940P toxins formed non-hemolytic colonies indicative of loss of hemolytic activity the Y940A and Y940P CyaAs. Indeed, when the respective secreted toxins were extracted from the surface of producing bacteria, only the CyaA and the CyaA-Y940F toxin variant were able to bind and penetrate erythrocytes or CR3expressing THP-1 macrophages in the in vitro activity assay and the CyaA-Y940A and CyaA-Y940P proteins were inactive (Fig. 1c-f). Finally, the loss of biological activity of CyaA upon Y940A and Y940P substitutions was examined by in vivo intranasal infection experiments in Balb/cByJ mice. As shown in Table 1, the mutant B. pertussis strains secreting the CyaA Y940A and CyaA Y940P toxin variants exhibited a decreased virulence with about an order of magnitude higher LD 50 value (lethal dose for 50% of infected animals) than CyaA or CyaA-Y940F -secreting B. pertussis strains. All these results thus demonstrate that the presence of the aromatic ring as the side chain of residue 940 of the CyaA protein is critical for the cytotoxic activity of CyaA on target cells in vitro and for its biological function as a key virulence factor of B. pertussis in vivo in the mouse model of lung infection. www.nature.com/scientificreports/ The aromatic side chain of the tyrosine residue 940 is essential for folding of the acylated domain of CyaA. We hypothesized that substitutions of Y940 by alanine or proline residues affects the folding and formation of a functionally important structure in the CyaA molecule. To test this hypothesis and analyze the structural consequences of Y940 substitution, we constructed a truncated CyaA-derived polypeptide RTX719 (Fig. 2a), in which the fatty acyl-modified segment of CyaA comprising the Y940 residue was fused to a truncated RTX domain capable of vectorial calcium-driven folding 22 . Subsequently, the Y940A and Y940P substitutions were introduced into the RTX719 construct, the proteins were produced in E. coli cells in the presence of the CyaC acyltransferase and purified close to homogeneity from urea extracts by chromatography on DEAE-Sepharose (Supplementary Figure S2). The calcium-dependent folding of the RTX719 variants was then analyzed by circular dichroism (CD) spectroscopy. The urea-denatured proteins were refolded in the Ca 2+ -free buffer and titrated by stepwise addition of Ca 2+ ions. The far-UV CD spectra revealed that the RTX719 construct as well as its Y940A and Y940P variants undergo a Ca 2+ -dependent structural transition from an unfolded to a folded conformation that is characterized by a prominent negative peak in the spectra at 218 nm, which corresponds to a parallel β-roll structure (Fig. 2b). Intriguingly, the Ca 2+ -induced assembly of the Y940A and Y940P mutants was initiated and reached completion at lower Ca 2+ ion concentrations, as that needed for completion of folding of the intact RTX719 construct (Fig. 2b, Supplementary Figure S3). Moreover, the on-column refolding characteristics of the RTX719 variants during Superdex HR200 gel permeation chromatography differed substantially. As documented in Fig. 
2c, a major fraction of the on-column refolded RTX719 eluted as monomeric protein with a retention time of 32 min, while the Y940A and Y940P proteins formed predominantly oligomers eluting with retention time of 25 min. Moreover, the far-UV CD spectrum of the eluted RTX719 monomers exhibited a single negative peak at 218 nm typical of the β-stranded protein skeletons (Fig. 2d). In contrast, the spectra of the monomeric forms of the Y940A and Y940P protein variants eluted in the minor fraction at 32 min (black arrows over chromatograms in Fig. 2c and black curves in Fig. 2d) exhibited two negative peaks in the spectra at 206 and 218 nm, revealing the presence of a mixture of α-helices and β-sheets in their structures. Taken together, these data clearly indicated that the conserved tyrosine 940 residue plays a key role in Ca 2+ -induced folding of the acylated domain of CyaA and its replacement by a non-aromatic residue results in misfolding and aggregation of the RTX719 construct. Substitutions of the conserved tyrosine residues in the acylated segments of other RTX hemolysins do not affect their posttranslational acylation but proline substitutions impair their toxin activites. The Y940 residue is located in a predicted β-strand structure within the acylated segment of CyaA and appears to be highly conserved within the homologous segments of other pore-forming RTX hemolysins (Fig. 3). To analyze whether the conserved tyrosine residues might play a role also in the functional folding and cytotoxic activities of other hemolysins, we replaced the Y642 residue of RtxA from Kingella kingae, the Y643 residue of HlyA from Escherichia coli, and the Y639 residue of ApxIA from Actinobacillus pleuropneumoniae with phenylalanine, alanine, and proline residues, respectively. The toxin variants were produced in E. coli BL-21 cells and purified close to homogeneity from urea extracts by Ni-NTA chromatography using the purpose-introduced 6xHis affinity purification tags. As verified by tandem mass spectrometry (Supplementary Table S1), the substitutions of the conserved tyrosine residues did not affect the ability of the co-expressed cognate acyltransferase RtxC, HlyC, and ApxIC to recognize and posttranslationally acylate the respective segments of proRtxA, proHlyA, and proApxIA, which were quantitatively modified by myristoylation and hydroxymyristoylation on the corresponding lysine residues. As shown in Fig. 4a, the HlyA Y643A and HlyA Y643F variants exhibited a near intact hemolytic activity on sheep erythrocytes, whereas the activity of the HlyA Y643P construct was strongly reduced over a wide range of toxin concentrations. Similarly, the cytotoxicity of the HlyA Y643P variant towards LFA-1-positive THP-1 cells was markedly reduced and its cytotoxic effect was observed only at the highest concentration used, whereas the cytotoxic capacity of the HlyA Y643A and HlyA Y643F variants was similar to that of intact HlyA toxin (Fig. 4b). Consistent with this, the HlyA, HlyA Y643A, and HlyA Y643F proteins exhibited similar overall membrane activity on artificial lipid bilayers prepared from crude asolectin, whereas the HlyA Y643P variant formed pores with a significantly lower overall membrane activity than intact HlyA (Fig. 4c). As shown in Fig. 4d,e, the porecharacteristics, such as mean single pore conductance and lifetime of pores formed by all three HlyA mutants were comparable to those of intact HlyA. 
These data indicated that the presence of the aromatic ring in the side chain of the conserved Y643 residue of HlyA was as such not essential for the formation of the α-hemolysin pores, since the HlyA Y643A variant and the intact HlyA toxin exhibited comparable pore-forming membrane activities. We further analyzed whether the low pore-forming (hemolytic) and cytotoxic potency of the HlyA Y643P variant was due to a reduced membrane-binding capacity. Towards this aim, we prepared a series of HlyC-activated chimeric molecules in which the full-length HlyA (residues 1-1024) carrying the substitutions Y643A, Y643F, or Y643P was fused to the AC domain and the AC-to-Hly linker segment of CyaA (N-terminal residues 1-501) (Fig. 4f). The highly active N-terminal AC enzyme domain in the hybrid proteins then allowed the quantification of the specific capacity of the molecules to associate tightly with the cell membrane. As shown in Fig. 4g, the cell binding capacity of CyaA 1-501 /HlyA 1-1024 with the Y643F and Y643A substitutions was almost indistinguishable from the cell binding capacity of the intact chimera. In contrast, the cell binding capacity of the CyaA 1-501 /HlyA 1-1024 Y643P construct was reduced by ~70% compared to the intact chimera. These data strongly suggest that the low pore-forming and cytolytic activity of the HlyA Y643P mutant was due to the reduced membrane association capacity of the construct. A tyrosine residue pair plays a role in membrane penetration of HlyA. In contrast to CyaA, where the conserved Y940 is adjacent to a hydrophilic S939 residue, a pair of hydrophobic aromatic residues is present in the acylated segment of HlyA (Fig. 3b). To analyze whether the adjacent Y642 residue synergizes with the conserved Y643 residue in maximizing the lytic activity of HlyA on target cells, we produced HlyA-derived constructs carrying both tyrosine residues substituted by a pair of alanine (HlyA Y642A + Y643A) or proline (HlyA Y642P + Y643P) residues, and a further construct that had Y642 replaced by a proline residue (HlyA Y642P). As shown in Fig. 5, the hemolytic and cytotoxic activities of intact HlyA and of the doubly substituted HlyA Y642A + Y643A variant were similar, suggesting that there was no functional impact of a combined tyrosine-to-alanine substitution at positions 642 and 643 of HlyA. However, the hemolytic and cytotoxic activities of the HlyA Y642P protein were reduced over a range of toxin concentrations (Fig. 5). Moreover, the Y642P + Y643P double substitution completely abolished the cytolytic activity of the fully acylated HlyA Y642P + Y643P protein (Supplementary Table S1) on both erythrocytes and THP-1 macrophages (Fig. 5). In conclusion, the nil effect of the double Y642A + Y643A substitutions on HlyA cytotoxic activities strongly suggests that the aromatic side chains of Y642 and Y643 are not critical for membrane insertion and subsequent lytic activity of HlyA. Intriguingly, the complete loss of toxin activity occurred only upon a double Y642P + Y643P substitution, presumably destroying the local structure of the HlyA segment. A single substitution of the tyrosines does not affect the formation of secondary structures of the acylated segment of HlyA. To assess the structural impact of tyrosine residue substitutions in the acylated domain of HlyA, we constructed a truncated HlyA-derived polypeptide carrying the C-terminal residues 419-1024 (HlyA419, Fig. 6a). 
Subsequently, the Y642P, Y643P and Y643A substitutions were introduced into the HlyA419 construct and the corresponding proteins were produced in E. coli in the presence of HlyC acyltransferase and purified from the urea extracts on Ni-NTA Sepharose (Supplementary Figure S4). Far-UV CD spectra of the Hly-derived constructs revealed that all four proteins undergo a similar Ca 2+ -dependent structural change from disordered conformation to β-roll secondary structures (Fig. 6b). These data show that in contrast to a similar CyaA719 construct, the substitutions of the tyrosine residue of the acylated segment of www.nature.com/scientificreports/ HlyA with alanine or proline residue do not affect any importantly the Ca 2+ -dependent formation of secondary structures in the truncated HlyA419 fragment. Proline, but not alanine substitutions of the conserved tyrosine residues strongly impair the hemolytic activity of the RtxA and ApxIA hemolysins. Finally, the hemolytic activity of the RtxA and ApxIA hemolysins bearing susbtitutions of the conserved tyrosine residues of the acylated domains were analyzed. As shown in Fig. 7a, while the Y642F variant of RtxA was fully active and the Y642A substitution affected the hemolytic activity of RtxA only modestly, the capacity of the RtxA Y642P construct to lyse erythrocytes was strongly impaired. Similarly, the ApxIA Y639P construct was unable to lyse erythrocytes even at high toxin concentrations, while the hemolytic activity of the ApxIA Y639A variant was reduced by a factor of ~ 2 compared to the intact toxin or its ApxIA Y639F variant (Fig. 7b). In summary, these data indicate that the aromatic side chain of the conserved Y642 and Y639 residue is important for the full hemolytic activity of RtxA and of ApxIA hemolysins on sheep erythrocytes. Discussion We show here that the aromatic ring of the side chains of the conserved tyrosine residues located in the acylated segments of four RTX toxins play a different roles in their biological activities. While replacement of the aromatic ring of Y940 residue of CyaA by the alanine or proline residues led to self-aggregation of the truncated RTX719 protein and loss of toxin activities of the full-length CyaA, the elimination of the tyrosine residue at position 643 of HlyA (Y643A) did not significantly affect the folding of the acylated segment and subsequent cytotoxic activity of the α-hemolysin. Interestingly, for the quite homologous RtxA and ApxIA hemolysins, the presence of the aromatic ring of the conserved tyrosine residues were still required for a full hemolytic activity of the RtxA (Y642A) and of ApxIA (Y639A) toxins. First, we re-examined the toxin activities of the CyaA variants secreted by corresponding mutant strains of the closely related B. pertussis and B. bronchiseptica This was important to do, as CyaA is translocated from the cytosol of Bordetella directly into the external medium through a T1SS channel-tunnel assembly spanning the bacterial envelope and undergoes a vectorial co-secretional folding initiated by the first-emerging C-terminal folding scaffold upon exit from the bacterial cell due to binding of extracellular calcium ions 19 . It was thus crucial to verify that also such excreted and vectorially folded molecules of CyaA bearing the Y940A and Y940P substitutions will be devoid of membrane penetrating activity, like the recombinant forms of the toxin produced and acylated in E. coli cells. Indeed, as shown in Fig. 
1, this was the case for the corresponding mutant forms of CyaA secreted by both Bordetella species. Recently, Fukui-Miyazaki and coworkers presented that B. bronchiseptica CyaA, in contrast to B. pertussis CyaA, exhibits weak cAMP intoxication of nucleated cells due to phosphorylation of the S375 residue that is bound by the host factor 14-3-3 and results in abrogation of the AC penetration activity 53 . Working with the same human THP-1 cell line and parental B. bronchiseptica RB50 strain as Fukui-Miyazaki et al., we were unable to reproduce the reported striking difference in the specific capacity of B. pertussis and B. bronchiseptica CyaA proteins to increase cAMP in target cell cytosol and we found that the two proteins exhibited comparable cAMPelevating activity in macrophage cells, as previously also reported by Henderson and colleagues 54 . Moreover, using LC FT-ICR-MS analysis, we checked the amino acid residues at position 375 of purified CyaAs by peptide mass mapping (Supplementary Figure S5) and found that B. pertussis CyaA carried F375, whereas B. bronchiseptica CyaA carried S375, as previously described 53 . Our mass spectrometry analysis also identified other substitutions in the CyaA sequences, namely at positions 370, 800, 808, 910, and 978, located in the N-terminal part of B. pertussis and B. bronchiseptica CyaAs (Supplementary Figure S5), confirming that we are working with the same CyaA proteins as Fukui-Miyazaki et al. We have further shown that the binding of B. bronchiseptica CyaA on myeloid cells is inhibited by the monoclonal antibody M1/70 against the CD11b subunit of CD11b/ CD18, as has been repeatedly demonstrated for recombinant CyaA 23,24 . All these data indicate that B. pertussis and B. bronchiseptica CyaAs use the same mechanisms required for receptor binding, membrane insertion, and subsequent penetration into CD11b/CD18-positive cells. We have previously shown that the recombinant CyaA Y940A and CyaA Y940P variants overproduced in E. coli exhibit very low membrane binding and membrane penetration capacity similarly as non-acylated proCyaA 49,55 . However, analysis of the acylation status of the CyaA Y940A and CyaA Y940P variants showed that the residues K860 and K983 were correctly modified by the CyaC acyltransferase as intact CyaA 12 . This excluded the possibility that the absence of fatty-acyl chains induces destabilization of the apolar segments of the hydrophobic domain and of most of the acylation region 56 . Similarly, all three other RTX hemolysins characterized here and their corresponding mutant variants were properly acylated by a combination of myristoyl and hydroxymyristoyl chains, ruling out the possibility that the reduced cytotoxic capacity of the hemolysin mutants was due to their aberrant acylation. This is also the first report showing that A. pleuropneumoniae ApxIA hemolysin, like other bona fide hemolysins 2 , is acylated by a mixture of myristoyl and hydroxymyristoyl fatty acyl chains. The role of aromatic residues having a specific affinity for a membrane region near the lipid carbonyl has been shown in the membrane insertion of numerous bacterial toxins [57][58][59] . In case of CyaA, however, the presence of the aromatic side chain at position 940 plays rather a role in the proper folding of secondary structures of the acylated segment, preventing the formation of inactive aggregates 60 , than in the direct anchoring of the toxin into the lipid bilayer. 
While the Y940 residue could be functionally replaced by an aromatic phenylalanine residue, lacking hydroxylation in the para position of the aromatic ring, its replacement by an aromatic tryptophan residue resulted in reduction of the cell-binding and membrane penetration capacity of the CyaA Y940W variant (Supplementary Figure S6). This could be due to the presence of the larger indole ring of the tryptophan residue, which might interfere with functional folding of the acylated domain of the toxin. The membrane penetration process of CyaA may further proceed by the insertion of several putative transmembrane α-helices located in the N-terminal part of the hemolysin moiety of the toxin, comprising the hydrophobic pore-forming domain and the 'AC-to-Hly' linker segment 6,7,10,12-15,61. Consistent with our previously published results with recombinant CyaA 12, our recent results confirm that the tyrosine-to-phenylalanine substitution of the conserved tyrosine of the acylated domain plays no role in membrane insertion and penetration of different RTX hemolysins. However, unlike for CyaA, the tyrosine-to-alanine substitution reduces the cytolytic capacity of the RtxA and ApxIA hemolysins by no more than a factor of ~2. A negligible, if any, reduction of the pore-forming and cytotoxic capacity of HlyA Y643A also fits well with the results showing that membrane binding of HlyC-activated CyaA 1-501 /HlyA 1-1024 Y643A resembles the membrane binding capacity of the intact chimeric CyaA 1-501 /HlyA 1-1024 construct. These data, together with data from CD spectra, show that the presence of the aromatic ring in position 643 is not essential for proper folding of the acylated segment of HlyA and subsequent insertion of HlyA into the lipid bilayer. However, the tyrosine-to-proline substitution most likely disrupts the secondary structure(s) of the acylated segment involved in membrane insertion, as the specific hemolytic and/or pore-forming activity of HlyA Y643P, ApxIA Y639P, or RtxA Y642P was drastically reduced compared to intact HlyA, ApxIA or RtxA. This is most likely due, at least in the case of HlyA, to the reduced membrane binding capacity of the HlyA Y643P construct, since the pore properties, namely pore conductance and pore lifetime, of the HlyA mutant variants were similar to those of intact HlyA. Comparable pore characteristics may also indicate that the membrane-interacting pore-forming domain of HlyA Y643P is properly folded. Surprisingly, the large reduction in the ability of HlyA Y643P to insert into the plasma membrane was not reflected in the calcium-dependent folding of the truncated variant of HlyA analyzed by CD spectroscopy. This may indicate that the changes in the secondary structure of the HlyA419 Y643P construct are not as dramatic as in the similar CyaA719 Y940P construct. The sequence vicinity of the conserved Y940 residue in the acylated segment of CyaA differs from that of the other RTX hemolysins. In contrast to CyaA, where the conserved Y940 is adjacent to a hydrophilic S939, the Y638-Y639, Y642-Y643, and F641-Y642 pairs are present in the acylated segments of ApxIA, HlyA, and RtxA, respectively. The complete loss of hemolytic and cytotoxic activity of the HlyA Y642P + Y643P variant compared with the HlyA Y642P and HlyA Y643P constructs may indicate that the non-conserved Y642, together with the conserved Y643, may maximize the cytolytic and cytotoxic effects of HlyA. 
Our data also suggest that bona fide hemolysins, such as HlyA, RtxA, and ApxIA, may interact with the plasma membrane of target cells in a different manner than CyaA and that, unlike Y940 in CyaA, the aromatic rings of Y639 in ApxIA, Y643 in HlyA or Y642 in RtxA are not involved in the folding of the acylated segment preceding the membrane insertion of the hemolysin molecule. Sequence alignments revealed a number of other conserved or non-conserved aromatic residues within the acylated segment of RTX hemolysins 62, which may be involved in the efficient folding of the acylated segment and the correct positioning of the hemolysin molecule towards the lipid bilayer, followed by the insertion of the acylated segment, or of its part, into the target cell membrane. Construction of RtxA, HlyA, ApxIA, RTX719, HlyA419 and CyaA 1-501 /HlyA 1-1024 variants. Plasmid pT7rtxC-rtxA was used for co-expression of the rtxC and rtxA genes for production of the recombinant RtxC-activated RtxA toxin equipped with a C-terminal double 6xHis purification tag 35. To produce HlyC-activated HlyA with a C-terminal double 6xHis tag, the plasmid pT7hlyC-hlyA was used 65. For production of the ApxIC-acylated ApxIA toxin with 6xHis tags on both the N-terminal and C-terminal ends, the pET28bapxIC-apxIA construct was used 38. Plasmid pT7hlyC-cyaA 1-501 -hlyA 1-1024 was used to produce the HlyC-activated CyaA 1-501 /HlyA 1-1024 hybrid molecule 66. The expression vector encoding the RTX719 protein was derived from pT7CT7ACT1-ΔNdeI, a bicistronic vector encoding the structural cyaA gene and the cyaC gene for the dedicated acyltransferase 67. For construction of pT7CT7-RTX719, the PCR fragments were amplified from pT7CT7ACT1-ΔNdeI using the forward 5'-ATA CAT ATG CAT CAT CAT CAT CAT CAT GAA AAG CTG GCC AAC GAT TAC -3' and the reverse 5'-CCA GAG CTC GTT GTC CTG G-3' primers. Oligonucleotide-directed PCR mutagenesis was performed to construct: (i) pT7rtxC-rtxA-derived plasmids for the expression of RtxC-acylated RtxA mutant variants with single substitutions Y642A, Y642F, or Y642P; (ii) pT7hlyC-hlyA-derived plasmids for the expression of HlyC-activated HlyA mutants with single/double substitutions Y642P, Y643A, Y643F, Y643P, or Y642A + Y643A and Y642P + Y643P; (iii) pET28bapxIC-apxIA-derived constructs for the production of ApxIC-acylated ApxIA variants harboring single substitutions Y639A, Y639F, or Y639P; and (iv) pT7hlyC-cyaA 1-501 -hlyA 1-1024 -derived plasmids for the expression of HlyC-activated CyaA 1-501 /HlyA 1-1024 hybrid molecules carrying single substitutions Y643A, Y643F, or Y643P. The pT7CT7ACT1 plasmid 67, harboring the cyaC and cyaA genes, was used to generate a construct for the expression of the HlyC-activated HlyA fragment harboring residues 419-1024 (HlyA419) and equipped with an N-terminal hexahistidine (6xHis) purification tag. For this purpose, the cyaC ORF in pT7CT7ACT1 was replaced from its start to stop codon by the coding sequence of hlyC and, similarly, the cyaA ORF was replaced by a hlyA sequence encoding residues 419-1024 of HlyA. The hlyC and hlyA sequences were PCR-amplified from the plasmid pT7hlyC-hlyA 65 and, in addition, the hlyA sequence was fused in frame at the N-terminus to a sequence encoding a 6xHis purification tag, to yield the pT7CT7hlyC-N-6xHis-hlyA419-1024 plasmid. Site-directed mutations (Y642P, Y643P and Y643A) were introduced into the hlyA fragment of pT7CT7hlyC-N-6xHis-hlyA419-1024 by site-directed PCR mutagenesis as previously described 10.

Figure 7. The aromatic ring of the conserved tyrosine is essential for the full hemolytic activity of RtxA and ApxIA. The hemolytic activity of RtxA (a) and ApxIA (b) was analyzed as described in Fig. 4. Each point represents the mean ± SD of three independent determinations performed in duplicate with two independent toxin preparations. Significant differences are indicated by asterisks (**p < 0.01; ***p < 0.001; ****p < 0.0001).

cAMP determination. THP-1 cells were incubated at 37 °C with CyaA for 30 min in D-MEM, the reaction was stopped by addition of 0.2% Tween-20 in 50 mM HCl, samples were boiled for 15 min at 100 °C, neutralized by addition of 150 mM unbuffered imidazole and cAMP was measured by a competitive immunoassay as previously described 70. Activity of CyaA was taken as 100%. Hemolytic activity on sheep erythrocytes. Hemolytic The instrument was operating in survey LC-MS mode and calibrated online using Agilent tuning mix, which results in mass accuracy below 2 ppm. After analysis, spectra were processed using the Data Analysis 4.4 software package (Bruker Daltonics) and extracted data were searched against the FASTA of the single corresponding toxin molecule (ApxIA, UniProtKB: P55128; HlyA, UniProtKB: P08715; RtxA, UniProtKB: A0A1X7QMH9) using Linx software (RRID:SCR_018657). The Linx algorithm was set for fully tryptic restriction with a maximum of 3 missed cleavages and variable modification for methionine oxidation along with lysine acylation ranging from C12 to C18, including monosaturated and hydroxylated variants. The acylation status of lysine residues was determined by comparison of relative intensity ratios between acylated peptide ions and their unmodified counterparts 2. CD spectroscopy. The far-UV CD spectra were recorded at 25 °C on a Chirascan-plus spectrometer (Applied Photophysics, USA) in rectangular quartz Suprasil cells of 1-mm path length (110-QS, Hellma, Germany). Protein samples were diluted in 5 mM Tris-HCl (pH 8.0) and 20 mM NaCl in the absence or presence of CaCl2 and measured at wavelengths from 203 to 260 nm at a scan speed of 1 nm/s. Ca2+-induced structural changes were monitored by stepwise titration of protein samples with increasing concentrations of CaCl2. The spectra of the buffers were subtracted from the protein spectra and the molar residue ellipticity (Θ) was expressed in millidegrees square centimeter per decimole [mdeg cm² dmol⁻¹]. Gel filtration. Gel filtration chromatography was performed on a Superdex 200HR gel filtration column (GE Healthcare, UK) connected to an AKTAprime Plus liquid chromatography system (GE Healthcare, UK). The column was equilibrated with a buffer containing 20 mM Tris-HCl (pH 8.0), 150 mM NaCl and 2 mM CaCl2 before the denatured proteins (400 µg) in 50 mM Tris-HCl (pH 8.0) and 8 M urea were loaded onto the column and refolded at a flow rate of 0.5 ml/min. Planar lipid bilayers. Measurements on planar lipid bilayers were performed in Teflon cells separated by a diaphragm with a circular hole (diameter 0.5 mm) bearing the membrane 15. RTX proteins were prediluted in TUC buffer (50 mM Tris-HCl (pH 8.0), 8 M urea, and 2 mM CaCl2) and added to the grounded cis compartment with positive potential. 
The membrane was formed by the painting method using soybean lecithin in n-decane-butanol. The membrane current was recorded with salt bridges (applied voltage, 50 mV), amplified by LCA-200-100G and LCA-200-10G amplifiers (Femto, Berlin, Germany), and digitized by use of a LabQuest Mini A/D convertor (Vernier, Beaverton, OR). For lifetime determination, approximately 400 individual pore openings were recorded and the dwell times were determined using QuB software with a 100 Hz low-pass filter. The kernel density estimate was fitted with an exponential function using Gnuplot software. The relevant model was selected by the χ² value. Statistical analysis. Results were expressed as the arithmetic mean ± standard deviation (SD) of the mean. Data availability All data generated or analysed during this study are included in this published article (and its Supplementary Information file).
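The dwell-time analysis described above (a kernel density estimate of the recorded open-pore dwell times, fitted with an exponential and judged by its χ² value) was performed with QuB and Gnuplot; a rough Python equivalent, assuming SciPy is available and using synthetic dwell times in place of the recorded ones, could look as follows.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
dwell_times = rng.exponential(scale=0.8, size=400)  # placeholder for ~400 measured dwell times (s)

# Kernel density estimate of the dwell-time distribution.
kde = gaussian_kde(dwell_times)
t = np.linspace(0.01, dwell_times.max(), 200)
density = kde(t)

# Single-exponential model fitted to the KDE curve.
def single_exp(t, tau):
    return np.exp(-t / tau) / tau

(tau_fit,), _ = curve_fit(single_exp, t, density, p0=[dwell_times.mean()])

# Chi-square of the fit, used to compare candidate models (e.g. one vs. two exponentials).
model = single_exp(t, tau_fit)
chi2 = np.sum((density - model) ** 2 / np.maximum(model, 1e-12))
print(f"fitted mean open lifetime tau = {tau_fit:.3f} s, chi2 = {chi2:.4f}")
```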
6,851.6
2021-10-06T00:00:00.000
[ "Chemistry", "Biology" ]
Early Ovariectomy Results in Reduced Numbers of CD11c+/CD11b+ Spleen Cells and Impacts Disease Expression in Murine Lupus Ninety percent of those diagnosed with systemic lupus erythematosus are female, with peak incidence between the ages of 15 and 45, when women are most hormonally active. Despite significant research effort, the mechanisms underlying this sex bias remain unclear. We previously showed that a functional knockout of estrogen receptor alpha (ERα) resulted in significantly reduced renal disease and increased survival in murine lupus. Dendritic cell (DC) development, which requires both estrogen and ERα, is impacted, as is activation status and cytokine production. Since both estrogen and testosterone levels have immunomodulating effects, we presently studied the phenotype of NZM2410 lupus-prone mice following post- and prepubertal ovariectomy (OVX) ± estradiol (E2) replacement to determine the impact of hormonal status on disease expression and DC development in these mice. We observed a trend toward survival benefit in addition to decreased proteinuria and improved renal histology in the early OVX, but not late OVX- or E2-repleted WT mice. Interestingly, there was also a significant difference in splenic DC subsets by flow cytometry. Spleens from NZM mice OVX'd early had a significant decrease in proinflammatory CD11c+CD11b+ DCs (vs. unmanipulated WTs, late OVX- and E2-repleted mice). These early OVX'd animals also had a significant increase in tolerogenic CD11c+CD8a+ DCs vs. WT. These data join a growing body of evidence that supports a role for hormone modulation of DCs that likely impacts the penetrance and severity of autoimmune diseases, such as lupus. Keywords: systemic lupus erythematosus, mouse models, estrogen, ovariectomy, dendritic cells Introduction Systemic lupus erythematosus (SLE) is the prototypic autoimmune disease characterized by production of autoantibodies and immune complex-mediated end-organ damage. 
Nine out of 10 patients diagnosed with lupus are female, thus biologic sex is important in disease development. The cause of the sex difference in SLE is likely multifactorial, including the sex chromosomes, sex hormones, and their receptors. Perhaps the strongest epidemiological data to support a role for hormone impact on disease is the fact that incidence of disease is highest during the reproductive years when women are most hormonally active (in contrast to pre-menarche and post-menopause, when lupus incidence is lower and the female:male ratio is less profound). The SELENA trial showed that use of oral contraceptive pills in SLE patients did not enhance flare rates; however, estrogen replacement therapy in post-menopausal women did result in a slight increase in mild lupus flares, indicating that changing/replacing estrogen levels in an estrogen-deficient setting can impact disease (1,2). In murine models of lupus, manipulation of sex hormones also has significant impact on disease expression. NZB/NZW mice and some of the derived NZM strains, including NZM2410, have significant female predominance of disease (3). Both genders of MRL/lpr mice develop severe lupus; however, there is a trend toward earlier and more severe disease in females (4). In classic experiments by Roubinian et al. and later by Tarkowski et al., manipulation of sex hormones and castration led to significant effects on disease expression (5)(6)(7)(8)(9)(10). Specifically, castration of male NZB/NZW or MRL/lpr mice led to female-like disease and ovariectomy (OVX) with androgen replacement in female mice led to disease protection. Protection in male mice was limited to those in which gonadectomy was performed at 4 weeks of age or earlier (prepubertal). Interestingly, castration after onset of sexual maturity had little impact on disease. Administration of pharmacological doses of estrogen led to significant enhancement of disease in ovariectomized females, castrated males, and unmanipulated females (6). These experiments are suggestive that there is an imprinted component to female sex that is fixed at the time of sexual maturity. This imprinting appears manipulatable since lupus can be induced by repleting estrogen. Once the imprinting has occurred, however, taking away estrogen does not appear to be an effective therapeutic intervention, although there is modest data to support androgens as a treatment. Estrogen has pleiotropic effects on many different cell types, including immune cells. For example, estrogen treatment of macrophages is proinflammatory and induces expression/release of TNFα and nitric oxide (NO) (11)(12)(13)(14)(15)(16). In contrast, estrogen given to macrophage in the setting of LPS treatment results in estrogen partially inhibiting the release of NO and TNFα (15). This estrogen-mediated immune suppression was mediated via the activation of ERα. Short-term administration of exogenous estrogen appears immune suppressing, while long-term exogenous or endogenous estrogen is proinflammatory (17). In some lupus-prone strains, exogenous estradiol, via ERα, promotes survival of DNA-reactive B cells and decreases the activation threshold for BCR signaling, leading to increased autoantibody formation (18)(19)(20). While the ability of estrogen to impact B cell activation requires classic ERα action, estrogen effects on B cell survival do not (21). 
This may also explain data in NZM2410 ERαKO female mice that have high levels of both serum estradiol and anti-dsDNA but are still protected from renal disease and have a significant survival benefit (4). It seems clear that estrogen and ERα have differential effects on immune cells that may also be dose and time dependent. In lupus, dendritic cells (DCs) play a critical role in disease pathogenesis and progression. For example, mice deficient in DCs (conventional DCs and most plasmacytoid DCs) are protected from developing lupus nephritis. DC-deficient mice have similar autoantibody levels and immune complex deposition as their WT littermates, but lack progression of renal damage despite ongoing immune complex deposition. As noted above, we previously showed that female NZM2410 ERαKO mice also had significantly less renal disease and prolonged survival compared to WT littermates despite similar serum levels of autoantibodies and glomerular immune complex deposition. We subsequently showed that DCs from ERαKO mice have a blunted inflammatory response (e.g., IL-6, IL-1β, and IFNα production) to toll-like receptor (TLR) ligands (22). These findings suggested that an important effect of estrogen and ERα in lupus may be on the innate immune system and local tissue response to inflammation. Despite significant research effort, the mechanisms underlying estrogen and its receptors effects on disease remain unclear. Since estrogen and ERα are known to modulate DC development in vitro (23)(24)(25)(26), we and others are interested in the impact of estrogen on DCs in SLE, especially since estrogen influences DC development and function primarily via ERα (24,26). Due to the importance of DCs in SLE pathogenesis, we investigated the effect of early and late OVX ± estradiol (E2) repletion on both lupus disease phenotype and DC development in NZM2410 mice. Mice Mice were maintained at the Ralph H. Johnson VAMC Animal Facility (Charleston, SC, USA). Animal protocols followed the principles outlined in the Guide for the Care and Use of Laboratory Animals, and were approved by MUSC's IACUC. The NZM2410 mouse strain was acquired from Jackson Laboratory (Bar Harbor, ME, USA) and was maintained on a 12-h light/dark cycle with access to food and water ad libitum. All experimental mice (n = 34) were female and littermates where possible. Mice were ovariectomized at 4 or 8 weeks of age. One group of mice OVX' d at 4 weeks of age subsequently received 0.1 mg, 90-day time release 17β estradiol pellet (Innovative Research, Sarasota, FL, USA, Catalog number: NE-121) that was implanted subdermally. Mice were sacrificed with isoflurane and cervical dislocation at 32 weeks of age or when they reached sacrifice requirements (>10% loss of weight, >500 mg urine protein as assessed by dipstick, or upon recommendation by the animal facility). serum anti-dsDna, serum estradiol, and serum Testosterone Serum was collected throughout the experiment and at time of sacrifice. Serum anti-dsDNA was measured by ELISA assay, as previously described (4). Estradiol levels were assessed via ELISA (Calbiotech, San Diego, CA, USA), with an assay sensitivity of 3 pg/ml; precision: 3.1% (intra-assay), 9.9% (inter-assay). Testosterone serum levels were assessed by radioimmunoassay (RIA) at the University of Virginia Center for Research and Reproduction Ligand Assay and Analysis core. Urine Protein excretion Mice were housed in metabolic cages for 24 urine hour collection at 2-week intervals starting at 10 weeks of age until sacrifice. 
To prevent bacterial growth, antibiotics (ampicillin 25 μg/ml, gentamicin 50 μg/ml, and chloramphenicol 200 μg/ ml) were added to the collection tube. After 24 h, urine quantity was determined and samples were frozen at −20 for future analysis via mouse albumin ELISA with known standards. spleen and Kidney Processing and renal Pathology Spleens were harvested and kept on ice during processing. The spleens were processed through 40 μm strainers and depleted of red blood cells with red blood cell lysis buffer (144 mM NH4Cl and 17 mM Tris, pH 7.6). Spleen cells were washed twice with cold RPMI before being stained for flow cytometry analysis. One kidney was divided evenly for renal pathology and immunofluorescence. One half was snap frozen in liquid nitrogen and stored at −80°C for immunofluorescence analysis, the other half was fixed with buffered formalin, embedded in paraffin, and then sectioned and stained with hematoxylin and eosin. Kidney sections were analyzed in a blinded fashion by Dr. Phillip Ruiz (Department of Pathology, University of Miami School of Medicine, Miami, FL, USA) and graded on glomerular hypercellularity, segmental mesangial expansion, neutrophils/cell debris, crescent formation, and necrosis. These grades were combined for a total glomerular and interstitial pathology score. Deposition of IgG and complement component C3 was analyzed on the half kidney snap frozen in liquid nitrogen. Deposition was assessed by immunofluorescence by incubating slides with rabbit anti-mouse IgG FITC (Sigma) and sheep anti-mouse C3 FITC (Sigma). IgG and C3 were graded 0-3 for intensity of staining, as previously described (4). staining and Flow cytometry Spleen cells (4 × 10 6 per sample) were isolated in staining buffer (0.5% BSA and 0.02% sodium azide in 1× PBS) and stained with CD11c-Brilliant violet 605 (1:100), CD8a-Brilliant violet 421 (1:100), and CD11b-PE (1:400). Cells were incubated with antibodies for 30 min on ice in the dark. All antibodies were purchased from Biolegend (San Diego, CA, USA). Viability was assessed using LIVE/DEAD Fixable Dead Cell stain (Life Technologies, Carlsbad, CA, USA) at a concentration of 50 μl/million cells. Cells were washed twice with staining buffer and resuspended in 0.3 ml of staining buffer for flow cytometry. Cells were acquired on an LSRFortessa cell analyzer (BD Biosciences, San Jose, CA, USA), and analysis was performed using FlowJo software (FlowJo LLC, Ashland, OR, USA). statistical analysis Kruskal-Wallis one-way analysis of variance and post hoc Dunn's multiple comparison test were utilized to test for significance when comparing treatment groups. SEMs were reported where applicable. p Values ≤0.05 were considered significant. Log rank analysis was used to compare trends in animal survival. resUlTs effects of early vs. late Ovariectomy on survival, Body, and Organ Weights in nZM2410 Mice NZM2410 mice underwent four different treatments: (1) no OVX and no E2 pellet (NONP), (2) late OVX at 8 weeks (post-puberty) with no E2 pellet (O8NP), (3) early OVX at 4 weeks (pre-puberty) with no E2 pellet (O4NP), or (4) early OVX at 4 weeks with 0.1 mg E2 via sustained release pellet (O4 + E2). Similar to our previous work (4), unmanipulated NZM2410 (NONP) female mice began to die at 23 weeks of age and only 33% survived to 32 weeks (study endpoint). In contrast, as shown in Figure 1, 71% of mice OVX' d before puberty (O4NP) survived to 32 weeks of age. 
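For context, the group comparisons used throughout these Results (Kruskal-Wallis with Dunn's post hoc test across the four treatment groups, and log-rank analysis of survival trends) can be reproduced along the following lines. This is only an illustrative sketch with placeholder measurements, assuming the third-party packages scikit-posthocs and lifelines in addition to SciPy; it is not the analysis code used in this study.

```python
import numpy as np
from scipy.stats import kruskal
import scikit_posthocs as sp                    # assumed available: pip install scikit-posthocs
from lifelines.statistics import logrank_test   # assumed available: pip install lifelines

# Placeholder terminal proteinuria values (mg/dl) per treatment group.
groups = {
    "NONP":  [310, 450, 120, 500, 280],
    "O8NP":  [260, 300, 410, 150, 220],
    "O4NP":  [ 90, 140,  60, 200, 110],
    "O4+E2": [240, 330, 180, 400, 260],
}

h_stat, p_value = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.3f}")

# Dunn's post hoc pairwise comparisons (p-values adjusted, e.g. by Bonferroni).
print(sp.posthoc_dunn(a=[groups[k] for k in groups], p_adjust="bonferroni"))

# Log-rank comparison of survival between two groups
# (weeks of age; 1 = died, 0 = censored at the 32-week endpoint).
t_nonp, e_nonp = [23, 26, 28, 32, 32], [1, 1, 1, 0, 0]
t_o4np, e_o4np = [30, 32, 32, 32, 32], [1, 0, 0, 0, 0]
res = logrank_test(t_nonp, t_o4np, event_observed_A=e_nonp, event_observed_B=e_o4np)
print(f"log-rank p = {res.p_value:.3f}")
```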
Mice that were either OVX'd late (post-puberty), or early but estradiol repleted (O4 + E2) were less protected, with only 42 and 50% surviving to 32 weeks of age, respectively. Overall, there was not a significant difference in survival between the four treatment groups (Figure 1A). At the time of sacrifice, all mice were weighed and total body, spleen, and kidney weights were documented. Neither OVX nor presence of the E2 pellet significantly altered the weights of treated mice compared to the unmanipulated NONP mice (Figure 1B). Spleen weights were variable within all four treatment groups, but no significant differences were observed between groups (Figure 1C). Although O4NP mice trended toward lower kidney weights compared to the other groups (Figure 1D), there was again no significant difference observed. Overall, OVX with or without E2 repletion did not significantly alter disease penetrance in NZM2410 mice; however, early OVX did appear to impact disease and offer modest protection. Serum Anti-dsDNA Autoantibody and Proteinuria Levels in OVX'd NZM2410 Terminal anti-double stranded DNA (dsDNA) antibody levels were assessed via ELISA. O8NP mice had anti-dsDNA levels similar to unmanipulated NZM. Interestingly, we observed increased anti-dsDNA levels in both groups of early OVX'd mice, which was significantly different in the O4NP mice (vs. O8NP) (Figure 2B). Similar results were observed in anti-dsDNA levels at the 18-week time point (Figure 2A). At 18 weeks of age, when onset of lupus-like kidney disease typically becomes evident in NZM2410 mice, O8NP and O4 + E2 mice had not developed proteinuria by albumin ELISA; unexpectedly, several O4NP mice had early levels of proteinuria, similar to unmanipulated NZM mice at that time point (data not shown). Nephritis in these mice did not progress; in fact, proteinuria in the same three O4NP animals was slightly reduced at the terminal time point. In the other WT mice, terminal albuminuria levels increased over time as expected, reflecting an increased number of animals developing lupus renal disease (Figure 3A). It was unexpected to find both higher levels of anti-dsDNA and earlier onset of proteinuria in the O4NP animals (which had the best overall survival), and suggested that early OVX resulted in a disease phenotype that is less severe/less progressive, rather than delayed in onset. Glomerular IgG and C3 deposition did not differ significantly between the treatment groups (Figures 3B,D). H&E stained kidney samples were prepared and renal pathology scores were determined by a blinded pathologist who used multiple standard parameters to assess disease involvement of glomeruli, tubules, interstitium, and vessels. In agreement with the improved survival and reduced levels of terminal proteinuria observed, O4NP mice appeared to have less severe kidney disease by renal pathology score compared to the other treatment groups, although this decrease was not statistically significant (Figure 3C). Overall, all four groups showed evidence of renal disease by proteinuria, deposition scores, and overall renal pathology scores, although O4NP mice were modestly protected. Hormone Levels in NZM2410 Mice with and without E2 Estradiol levels in mice can range from <5 pg/ml (old, sick, or nonbreeding) to >1000 pg/ml (young pregnant). In this study, serum 17β estradiol levels for all treatment groups were assessed by the current gold standard 17β-estradiol ELISA (27).
The O4 + E2 treatment group was OVX' d and had estrogen replaced via subcutaneous E2 pellet to account for potential effects of varying endogenous estrogen levels in cycling NZM2410 mice. The E2 pellet in O4 + E2 maintained an average of 3.8 pg/ml estrogen during the experiment, significantly higher than estrogen levels in the OVX' d unrepleted treatment groups, as expected ( Figure 4A). O4 + E2 mice maintained a level of physiological estrogen that is normally seen in mouse strains, with one individual having a much higher level of estrogen than the rest of the cohort. NONP mice had low normal estrogen levels which could be caused by a variety of factors including age, disease state, and hormonal cycle, since NZM2410 begin to manifest disease symptoms as early as 16 weeks of age. Terminal testosterone levels were quantified with RIA. NONP mice trended toward higher levels of testosterone, as would be expected in gonadintact cycling female mice, compared to the three OVX' d treatment groups, but no significant differences were observed ( Figure 4B). Flow cytometry Dendritic cells are dysregulated in SLE in both human disease and murine models of lupus. Since estrogen is required for normal DC development, we investigated the impact of post-vs. prepubertal OVX ± estrogen on DC numbers in lupus-prone NZM2410 mice. DC numbers were analyzed by flow cytometry using ex vivo splenocytes harvested at the time of sacrifice. We found that the O4NP mice, which tended toward a survival benefit, had a significant reduction in the number of CD11c+/ CD11b+ cells (vs. NONP, O8NP, and O4 + E2) ( Figure 5A). This was particularly surprising in the case of the profound difference between O4NP and O8NP, where the only difference between groups was the timing of the OVX, suggesting that hormone levels during pubescence are critical to determining the developmental program of hematopoietic progenitors. This concept is further supported by the increase in a different myeloid subset (CD11c+/ CD8a+) from spleens of O4NP mice vs. NONP ( Figure 5B). Importantly, CD11c+/CD8a+ DCs have been associated with promotion of regulatory T cells and mediation of tolerance in multiple tissue types and may partially explain the trend toward disease protection in the O4NP mice. DiscUssiOn Sex biases in autoimmunity have been observed in both humans and in rodent models of disease (28). SLE is a disease with a profound sex bias. Although there are data to support a role for sex chromosomes, sex hormones, and their receptors in disease expression, a clear etiology for this health disparity is not apparent. It is known that both estrogens and androgens can modulate the expression of autoimmunity. As in human disease, the striking sex difference in most murine lupus strains includes females developing earlier, more severe disease. This can be modified in both sexes by castration and differential hormone replacement, and the timing of this appears to affect disease differentially in males (10,(29)(30)(31). While there is somewhat conflicting data from prior studies of OVX, estrogen replacement, and estrogen receptor knockout on autoimmunity and immune functions in lupus animals, there is more agreement on the generally protective role for androgens in disease expression. In seminal work by Roubinian et al., prepubertal castration of male NZB/ NZW F1 (B/W) lupus-prone mice resulted in accelerated disease development, with both accelerated autoantibody production and mortality. 
In contrast, prepubertal castration of female mice did not change mortality, although it did have minor effects on autoantibody production and class switching (10). The effect on castrated males and females was much greater when they were treated with androgen vs. estrogen therapy: kidney disease was attenuated and survival improved when castrated females were treated with androgen, while castrated male B/W mice treated with estrogen had accelerated disease severity similar to that of WT females (29). Thus, androgen was protective while estrogen promoted disease. Anti-estrogen treatment with tamoxifen has also been shown to reduce autoantibody production and mortality in B/W mice (32,33). These studies were some of the first to show evidence that androgens, estrogens, and antiestrogens can modulate autoimmunity and disease expression in lupus. The purpose of the current study was to determine the impact of early (prepubertal) vs. late (postpubertal) OVX on the disease phenotype and effects on DC development/subsets in female lupus-prone mice. Figure 3 | Renal pathology in NZM2410 mice. (A) Terminal albumin levels did not show significance across experimental groups; O4NP mice trended toward reduced levels. (B,D) C3 deposition was assessed via immunofluorescence after incubating slides with sheep anti-mouse C3 antibody. Slides were scored on a 0-3.5 scale. No significant difference was seen between groups. IgG deposition was assessed in a similar manner to C3 with slides being stained using rabbit anti-mouse IgG antibody. Again, no significant difference in treatment groups was observed. (C) Histology was scored based on a variety of parameters: glomerular hypercellularity, segmental mesangial expansion, neutrophils/cell debris, crescent formation, and necrosis. Though the O4NP mice trended toward lower histology scores, no significant difference was observed in the treatment groups. All results shown as mean ± SE using Kruskal-Wallis non-parametric test with post hoc Dunn's multiple comparison test, p < 0.05. We utilized NZM2410 mice, a recombinant inbred strain of New Zealand Mixed mice that have a highly penetrant disease with early onset. We have been using these mice for many years to study the effects of estrogen receptor alpha (ERα) on disease expression. We previously showed that backcrossing ERαKO mice onto an NZM2410 background resulted in reduced kidney disease and significantly prolonged survival in female, but not male mice (4). Of note, ERβKO NZM mice were not similarly protected. Thus, it has not been shown that deficiency of ERβ, or the early lack of estrogen alone (such as in castrated female B/W mice), significantly impacts disease, although we and others have shown that deficiency of a functional ERα does impact disease (4,34). Since the timing of castration in males leads to variable protection in some lupus strains, we undertook this study in WT NZM female mice to ovariectomize either before or after puberty (±estrogen replacement in the early OVX group). The results are consistent with earlier experiments that showed no significant impact on the development of lupus nephritis with OVX alone.
Differences were noted, however, in the fact that estrogen replacement with a 17β-estradiol pellet, which resulted in physiological doses, did not exacerbate disease over WT animals. Additionally, anti-dsDNA levels were not different between the estrogen-treated group and the other groups, but surprisingly, were significantly different between the early and late OVX groups. This result supports the concept that there exists a time point during the transition between adolescence and adulthood, as in males, in which hormone levels critically impact development of autoimmunity. It is important to separate autoantibody production from disease; however, since the animals with higher anti-dsDNA levels trended toward reduced kidney disease and improved survival. This is similar to what we observed in our previous study of NZM ERαKO mice (higher anti-dsDNA, lower disease scores), lending credence to the concept of a two-step process, whereby autoantibodies are necessary but not sufficient for disease (a concept also supported epidemiologically by the large number of normal human females that have autoantibodies but no evidence of disease). Since most testosterone in females is secreted by the ovaries, it is also important to mention that OVX' d mice have reduced levels of both testosterone and estrogen, as evidenced by serum levels in our OVX' d animals. One limitation of this study is that testosterone and estrogen levels were examined late: at 18 weeks (data not shown) and at the terminal endpoint. We chose these time points to reflect hormone levels during early and late lupus disease onset and to show evidence of successful (long-lasting) estradiol replacement. However, the drawback is that these levels do not reflect hormone levels during puberty onset, at which point testosterone and estrogen levels would have been different between the castrated mice (early OVX) and the gonad-intact (late OVX) mice. Also, testosterone levels were assessed by RIA, which is relatively sensitive and specific, but estrogen levels (using ELISA) are notoriously difficult to measure with accuracy. We did see lower estrogen levels in our unmanipulated animals than we expected, likely due to increasing disease activity and abnormal ovarian cycling at that late time point (these animals do not breed well beyond 20 weeks). Importantly, estrogen has different effects depending on timing, concentration, and cell type. For example, estrogen can help break tolerance by promoting survival of autoreactive B cells and leading to increased autoantibody production in a murine lupus model (19,35). In other settings, such as experimental autoimmune encephalitis (EAE), which is thought to be more T-cell driven, estrogen is neuroprotective/anti-inflammatory via ERα signaling in astrocytes, which are CNS APCs expressing MHCII, B7, and CD40, and are critical for T cell activation (36,37). It is also likely that estrogen impacts immune cell types differentially depending on the milieu. Dendritic cells are immune cells that are important for both maintenance of tolerance and an appropriate innate immune response. As with most immune cells, the development status of DCs is critical in influencing its cellular functions. Immature DCs mostly induce tolerance (since they express fewer surface molecules that give a second stimulatory signal), whereas mature DCs function to induce immune responses by stimulating proliferation of naïve antigen-specific CD4+ T cells or CD8+ T cells (38, 39). 
It is well known that DCs in SLE are abnormal in both number and function (40)(41)(42). Additionally, previously published studies by our lab and others revealed a critical role for estradiol and ERα in modulating DC number and function in normal and lupus-prone animals (22,25,26,43). Estrogen also influences maturation and function of human monocyte-derived DCs (44). One limitation to this study is that, although we focused on DCs, it is likely that our isolated cells, whether spleen derived or BM derived, did have some frequency of monocytes and macrophages present. We and others are moving toward better markers to gate or sort more specifically on DC subsets, but in this experiment, our markers are inadequate to identify DCs only. Because dysregulated DC activation and function are implicated in the pathophysiology of SLE, and estrogen plays a role in SLE pathogenesis, this line of study is vital to our understanding of sex bias in this disease. In the current study, we observed that the single act of ovariectomizing female lupus-prone mice prior to puberty resulted in near depletion of absolute numbers of CD11c+/CD11b+ DCs from the spleen. This effect appears to be due to a lack of estrogen at a critical time point, since replacement of estradiol by pellet in early OVX'd mice does not result in the same reduction of this DC subset. Perhaps more interesting is the observation that a different subset of DCs (CD8a+) was not reduced in O4NP mice, and in fact trended toward an increase, suggesting that the "imprinting" that took place caused a shift in the DC developmental program, rather than simply being a survival signal. It is possible that levels of ERα were also altered at that time (although we do not have tissue from this early time point) since estrogen is known to increase ERα expression. One could speculate that when estrogen increases at the time of pubescence, it causes an increase in ERα expression that elicits a more proinflammatory environment that then informs DC development. In summary, we investigated the effect of early OVX in WT lupus-prone animals and found the disease phenotype to be consistent with earlier reports in other lupus strains, except that autoantibody levels were actually increased in the (protected) early OVX'd mice, at both disease onset and late disease. Although there were not significant differences in survival, there was a trend toward a survival benefit in the early OVX'd animals that were not estrogen repleted. These animals also had a significant reduction in CD11c+/CD11b+ DCs from spleen, which normally make up approximately 70% of murine spleen DCs (45,46). Conversely, tolerogenic CD11c+CD8a+ DCs were not reduced, but rather increased in O4NP mice, suggesting a mechanism for the protected phenotype. These data also suggest that the major impact of estrogen on DCs is early, specifically during the critical hormonal changes that take place at the time of puberty. Author Contributions MC and GG directed the work, contributed to designing the study, and reviewed/interpreted the data. MC and JW conducted all studies with help from JE (breeding, genotyping, bleeding and urine collection), JS (IHC/flow cytometry), and EC (ELISAs). MC and JW prepared the manuscript and figures. GG contributed to manuscript revisions. All authors read and approved the final manuscript.
6,665.4
2016-02-15T00:00:00.000
[ "Biology", "Medicine" ]
Signaling Strategy in Spermatozoa Activation of Sea Urchin, C. elegans and Human: Three Different Players for the Same Melody
Introduction Many cellular systems undergo a process of activation during their metabolic activity. For instance the contraction of striated muscle, the release of synaptosomes from the presynaptic endplate, the degranulation (exocytosis) of sensitized mast cells, the activation of mammalian eggs, the activation of visual transmission in mammalian rods, and the activation of male gametes to reach full fertilizing ability all require an activation process. This last event, in particular, has attracted the attention of researchers since its discovery in the 1950s because of its important implications both for basic (developmental biology, endocrinology, biochemistry) and applied science (andrology, male infertility, contraception). Thus sperm activation has been studied in several organisms of different phyla, and the data about this important topic are growing day by day. The availability of bioinformatics tools has provided scientists with new resources to investigate this phenomenon in its complexity. Overcoming the reductionist approach used until now, the use of bioinformatics resources makes possible the aggregation of molecular data into explicative models. The use of computational modeling to explore the biochemical architecture of mammalian sperm activation, thus allowing the quantitative analysis of this complex signaling system, has recently been introduced [1] and validated [2] by our group. In this context it could be very interesting to model and compare the sperm activation process in organisms showing completely different ecology, reproductive strategy and sexual behavior, as is the case for sea urchin, C. elegans and Human. Sea urchins are members of the Phylum Echinodermata. They are dioecious, having separate male and female sexes, and have five gonads (fivefold symmetry) with a single duct, rising from the upper pole to open at a gonopore lying in one of the genital plates surrounding the anus [3]. The gonads are lined with muscles underneath the peritoneum, which allow the animal to squeeze its gametes through the duct and into the surrounding sea water, where fertilization takes place [3]. Sea urchin spermatozoa acquire motility only once they reach sea water [4]. To increase the success rate of fertilization the male gametes are attracted by the molecules dispersed by the homologous oocytes. The outer investment of oocytes, the egg jelly, contains short sperm-activating peptides (SAPs) that bind to specific sperm receptors and dramatically alter the metabolic rate and motility of spermatozoa, driving them towards the oocytes [5]. When sperm encounter the egg jelly, the exocytosis of the acrosomal vesicle, the acrosome reaction (AR), occurs and the pHi-dependent polymerization of actin leads to the extension of the acrosomal tubule, which exposes a new bindin-covered membrane that will fuse specifically, through a receptor, to the egg [6,7]. The nematode C. elegans is an unsegmented, 1 mm long, vermiform and bilaterally symmetrical worm, living in the ground, having hermaphrodites and males.
In particular the 99.95% of the individuals are hermaphrodite and the 0.05% of the population is composed by males. Males have a single-lobed gonad, a vas deferens, and a tail specialized for mating [8]. Hermaphrodites have two ovaries, an oviduct, a spermatheca (a kind of chamber where oocytes are fertilized by sperm), and a single uterus. C. elegans eggs are laid by the hermaphrodite and, after hatching, they pass through four juvenile stages (L1-L4). Hermaphrodites produce all their sperm in the L4 stage (150 sperm per gonadal arm), then they switch over to produce oocytes. The sperm are stored in the same area of the gonad as the oocytes until the first oocyte pushes the sperm into the spermatheca. The male can inseminate the hermaphrodite, which will preferentially use male sperm (both types of sperm are stored in the spermatheca). When self-inseminated C. elegans will lay approximately 300 eggs, on the contrary when inseminated by a male, the number of eggs can exceed 1.000 [8]. C. elegans spermatozoa are characterized by the absence of the flagellum and the lack of the acrosome, while they content many membranous vesicles, the Membranous Organelles (MOs). The motility is acquired by a process (the spermiogenesis or sperm activation), in which the spherical spermatid extends a pseudopod, conferring to the cell an amoeboid motility. This event is made possible by an important reorganization of the cytoskeleton (the C. elegans spermatozoa content MSP instead of actin) and of membrane involving the remodeling of specific microdomains [9]. In Human the reproduction is obtained by internal fertilization and the spermatozoa were ejaculated in vagina. Male gametes travel along the female genital tract until they reach the oviduct, where they reside for a relatively long period (from hours to days) thus acquiring the fertilizing ability. This process, called "the capacitation", involves virtually all the cellular structures: the intracellular calcium concentration rises [10,11], the protein phosphorylation pattern changes [12,13], the actin cytoskeleton reorganizes [14], the plasma membrane and the outer acrosome membrane become more instable and gradually acquire the ability to fuse each other [15,16], the spermatozoa motility is hyperactivated [17]). Aim of the present work was to carry out a comparison among the different biochemical events occurring during sperm activation in these organisms, adopting the graph theory formalism. The molecules that interact each other during sperm activation in sea urchin, C. elegans and Human were represented by a mathematical object called "graph" [18], constituted by a variable number of nodes (the molecules) linked by edges (the interactions) and originating a network, thus, the statistical analysis of the main topological parameters of the networks was carried out. Database realization, network construction The data about human sperm activation were obtained from the already realized database [1] after updating. Since at present a database containing the data about sperm activation of sea urchin (the available data are in particular referred to S. purpuratus,) and C. elegans does not exist, a new database was realized using Microsoft Office Excel 2003. The available information was obtained from peer-reviewed papers from PubMed (www.ncbi.nlm.nih.gov/pubmed/). As a reference were used the data published during latest 10 years. 
In some cases the record did not represent a single molecule but complex events, such as "membrane fusion" or "protein tyrosine phosphorylation", because all the single molecular determinants of the phenomenon are still unknown. The database fields were: Source molecule: representing the molecule source of interaction. Interaction: representing the nature of interaction (activation, inhibition). Target molecule: representing the molecule target of the interaction. Biological function: representing the functional meaning or the context of interaction (glycolysis, lipid remodelling, oxidative phosphorylation). Species: representing the species in which the interaction was described. Reference: representing the bibliographic source of information. Notes: representing all the notations such as the presence of synonyms or the intracellular location, if relevant, or the explanation of complex cellular events. The data, extracted from the databases, were used to build the networks using the Cytoscape 2.6.3 software (www.cytoscape.org). The network representing the elements common to the three networks was realized using the Cytoscape plugin Network Analyzer (function "Compute intersection" of the "Compare two networks" menu). This new network was called the CE (Common Elements) network. The networks were spatially represented using the Cytoscape Degree Sorted Circle Layout: all nodes with the same number of links are located together around the circle (Cytoscape User Manual, http://www.cytoscape.org/manual/Cytoscape2_6Manual.html). The node size was proportional to the number of connections and the node color gradient was dependent on the closeness centrality. This parameter is computed as Cc(n) = 1/avg(L(n, m)), where L(n, m) is the length of the shortest path between two nodes n and m. The closeness centrality of each node ranges from 0 to 1 and it is a measure of how fast information spreads from a given node to the other nodes. Data analysis The statistical and topological analyses of the networks were carried out with the Cytoscape plugin Network Analyzer (www.med.bioinf.mpi-inf.mpg.de/netanalyzer/help/2.6.1/index.html), in agreement with Bernabò [1,2], considering the networks as directed. Results The networks representing spermatozoa activation of sea urchin, C. elegans and Human were built (node diameter proportional to the number of links, node color depending on the closeness centrality, Degree Sorted Circle Layout) and their main topological parameters are summarized in Table 1. In all cases the distribution of node linkages follows a power law, represented by the generic equation y = a·x^(−b). The r, R^2 and b coefficients of each network are tabulated in Table 2. The clustering coefficient distribution does not follow a power law, as demonstrated by the results of power-law fitting of the clustering coefficient distribution (r = 0.598 in sea urchin; r = 0.100 in C. elegans; r = 0.183 in Human).
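As a rough illustration of the construction step described in the Methods above, the database records (source molecule, interaction, target molecule) can be loaded into a directed graph, and the closeness centrality Cc(n) = 1/avg(L(n, m)) and the Table-1-style summary parameters can then be computed per network. The sketch below is a hedged networkx alternative, not the authors' Cytoscape/Network Analyzer workflow, and the example records are invented for illustration only.

```python
# Hedged sketch: build a directed interaction network from (source, interaction, target)
# records and compute closeness centrality plus summary parameters. Example edges are
# hypothetical, not entries from the authors' database.
import networkx as nx

records = [
    ("SAP receptor", "activation", "cGMP"),
    ("cGMP", "activation", "[Ca2+]i"),
    ("[Ca2+]i", "activation", "PKC"),
    ("PKC", "activation", "motility"),
    ("[Ca2+]i", "activation", "motility"),
]

G = nx.DiGraph()
for source, interaction, target in records:
    G.add_edge(source, target, interaction=interaction)

# Closeness centrality; on a DiGraph networkx uses incoming shortest-path distances,
# so pass G.reverse() instead if outgoing distances are wanted.
closeness = nx.closeness_centrality(G)

# Summary parameters analogous to those reported for each network (Table 1 style).
U = G.to_undirected()
giant = U.subgraph(max(nx.connected_components(U), key=len))   # for distance-based metrics
summary = {
    "nodes": G.number_of_nodes(),
    "edges": G.number_of_edges(),
    "clustering_coefficient": nx.average_clustering(U),        # mean of CI = 2nI/k(k-1)
    "diameter": nx.diameter(giant),
    "avg_n_neighbours": sum(d for _, d in U.degree()) / U.number_of_nodes(),
    "char_path_length": nx.average_shortest_path_length(giant),
}
print(summary)
print({node: round(c, 3) for node, c in closeness.items()})
```

Note that diameter and characteristic path length are taken here on the largest connected component of the undirected projection, which is a simplifying assumption rather than the paper's exact procedure.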
The number of nodes represents the total number of molecules involved; the number of edges represents the total number of interactions; the clustering coefficient is calculated as CI = 2nI/k(k−1), where nI is the number of links connecting the kI neighbours of node I to each other; the network diameter is the largest distance between two nodes; the Averaged n° neighbours represents the mean number of connections of each node; the Char. path length gives the expected distance between two connected nodes. The most connected nodes (the hubs) in the networks are shown in Table 3. In all the three networks about 40-45% of nodes have two links. A new network showing the common elements present in the three networks has been created by computing the intersection of the existing networks (CE network). The CE network is represented in Figure 4 and its topological parameters are shown in Table 4. The nodes belonging to this network are involved in energy metabolism (glycolysis and mitochondrial oxidative phosphorylation) or in signal transduction ([H+]i, PKA, PKC, [Ca2+]i). Discussion This work was carried out to model and compare the signaling machinery of the sperm activation processes in organisms belonging to different Phyla such as sea urchin, C. elegans and Human. To this aim, the biological networks representing these events were realized and their main topological parameters were analyzed. This bioinformatics-based approach is a powerful tool to describe the biological organization and function of cellular components as well as to understand the principles driving their evolution [19], and has already been used to represent the biochemical events involved in human [1] and boar [2] spermatozoa activation. First of all, it was evident that all the networks have virtually the same topological features (see Table 1). This finding is per se really interesting and poses an important question: why do such different organisms, living in completely different environments and displaying different biological characteristics (see Table 5), share a common topology for sperm activation? To answer this question it is necessary to evaluate the biological significance of the networks' topological parameters. The most elementary characteristics are the node degree and the clustering coefficient. The first, also known as "connectivity", k, indicates how many links the node has to other nodes. It makes it possible to define the node degree distribution, P(k), which represents the probability that a selected node has exactly k links. The clustering coefficient, CI = 2nI/k(k−1), where nI is the number of links connecting the kI neighbours of node I to each other, measures the network's tendency to develop clusters of nodes: the higher the clustering coefficient, the greater the presence of clusters. On the basis of these topological indexes, networks are classified into random networks, scale-free networks and hierarchical networks. In random networks, described by the Erdös-Rényi (ER) model, the node degrees follow a Poisson distribution; as a consequence most nodes have approximately the same number of links (which defines the network's scale) and the clustering coefficient is independent of the node degree.
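One way to check which of these classes an observed network belongs to is to fit its empirical degree distribution P(k) to the power law y = a·x^(−b) reported in the Results, rather than to the Poisson distribution expected for a random network. The sketch below is a simplified stand-in for the fitting actually performed in the paper (which tabulates r, R² and b per network); the helper name and the log-log regression approach are illustrative choices made here.

```python
# Simplified stand-in for the reported power-law fitting: estimate the exponent b of
# P(k) ~ a*k**(-b) by linear regression in log-log space and return the correlation
# coefficient r of the fit (comparable to the r values quoted per network).
import numpy as np
from collections import Counter

def power_law_fit(degrees):
    counts = Counter(d for d in degrees if d > 0)
    k = np.array(sorted(counts), dtype=float)              # distinct node degrees
    pk = np.array([counts[int(x)] for x in k], dtype=float)
    pk /= pk.sum()                                          # empirical P(k)
    slope, intercept = np.polyfit(np.log10(k), np.log10(pk), 1)
    r = np.corrcoef(np.log10(k), np.log10(pk))[0, 1]
    return 10 ** intercept, -slope, r                       # a, b, r of y = a*x**(-b)

# Example use with the degree sequence of the graph G from the previous snippet:
# a, b, r = power_law_fit([d for _, d in G.degree()])
```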
The scale-free networks (Barabási-Albert [20], BA, model) are characterized by a power-law degree distribution of the number of links per node: a relatively small number of nodes is highly connected (the hubs) and most nodes are scarcely linked, so there does not exist a "typical" node (scale-free topology). In this case the clustering coefficient (CC) is independent of the number of links per node. In hierarchical networks the scale-free topology and the local clustering coexist: the clustering coefficient is higher in the most linked nodes and, consequently, its distribution follows a power law. The analysis of topological parameters carried out on the networks of sperm activation processes of sea urchin, C. elegans and Human showed that these networks are ascribable to the scale-free class, as demonstrated by the power law that links the number of edges to the node frequency and by the dispersion of the clustering coefficient, in agreement with the BA model. From a biological point of view, it is possible to speculate that this particular behavior could offer an important evolutionary advantage: robustness against random failure. In fact a random perturbation, in most cases, will involve the most frequent typology of nodes, i.e. the less connected ones, with negligible consequences on network architecture [1,20,21], that is on whole cellular function. In our model the probability that one of the hubs will be involved in a random damage is <5%. Another interesting finding is the very low values of the clustering coefficient, which suggest that the architecture of the signaling system privileges the rapidity of signal transduction instead of redundancy. Moreover the values of the characteristic path length, which measures how many links it is necessary to pass through to travel between two nodes, ranging between 6.5 and 8.1, are in line with this hypothesis. These two parameters, taken together with the finding that the highest percentage of nodes display two edges (one input and one output), seem to strengthen the idea that sperm activation is a highly efficient signaling process. In fact: (i) each node receives an input signal and transfers an output response; (ii) any molecule can interact with any other in a small number of passages, thus the loss of information due to signal decrease is minimized and signal efficiency is maximized; (iii) any local perturbation in the signaling system could reach the whole network in a short time, increasing the system's responsiveness to intracellular and extracellular stimuli. (Table values preserved from the original layout: diameter 23; averaged n° neighbours 2.508; characteristic path length 8.158.) In the same context it is important to point out that the terminal events (such as "protein phosphorylation" or "membrane fusion" or "motility") are highly linked. Reasonably this is due to the redundancy of biochemical signaling, as a safety strategy to overcome partial failure of the system. The most linked node was, in all the cases, the same: [Ca2+]i.
This finding is not surprising, in fact, as in many other cellular systems, in spermatozoa activation Ca 2+ behaves as a second messenger. More in detail in sea urchin the Ca 2+ enters the sperm cell through voltage dependent channels (Cav1.2 or 1.3) or through cAMP or cGMP (SpHCN1 or 2) gated channels. In the intracellular signalling cascade this ion is involved in the control of cAMP, cGMP and in the PIP2/ IP3 pathway [22]. In C. elegans [Ca 2+ ] i is controlled by membrane channels and by sequestering the ion in intracellular stores and it is involved in PIP2/IP3 signaling pathways regulating the MOs fusion and the onset of motility [9]. In Human it is known that during capacitation the [Ca 2+ ] i increases and the capacitation does not take place in Ca 2+ absence [10,23]. Four major Ca 2+ clearance mechanisms are described in mammalian spermatozoa, two acting on the plasma membrane and two acting on intracellular organelles. The plasma membrane Ca 2+ -ATPase exports a cytoplasmic Ca 2+ ion and imports one or two extracellular protons at the expense of ATP. When [Ca 2+ ] i is elevated, the plasma membrane Na + -Ca 2+ exchanger operates in forward mode exporting an intracellular Ca 2+ ion and importing approximately three Na + ions at the expense of the Na + gradient [24,25]. The best characterized organellar clearance mechanism involves the sarcoplasmic-endoplasmic reticulum Ca 2+ -ATPase pumps and the mitochondrial Ca 2+ uniporter [25]. During the sperm capacitation the Ca 2+ acts converting extracelluar stimuli to chemical response in a myriad of molecular system, such as, protein kynase C (PKC), protein kynase A (PKA), actin, and many others. The other most connected nodes belong to different classes: • Molecules involved in energetic balance, such as ATP. It is the main energetic source for spermatozoa, where the metabolic energy production derives exclusively from the glycolysis and from mitochondrial oxidative phosphorylation [26]. • [H + ] i : in sea urchin and C. elegans the pH is a key element in control of many biochemical events, such as the membrane polarization, the rearrangement of cytoskeleton proteins (MSP) and the motility [9,27]. It is important to note that in Human and mammals spermatozoa the pH modification is an early event in the acquisition of full fertilizing ability [13]. • Molecules playing a key role in signal transduction and amplification: cAMP, cGMP. These two cyclic nucleotides are ubiquitous second messengers and concur to the control of cellular signal transduction by modulating the activity of membrane channels, kinases, phosphatases and many other molecules. The analysis of topological parameters of the CE networks, showed that the elements common to the three networks are those representing the scaffold of spermatozoa energetic metabolism (in all the cases the energy is provided, as ATP, by glicolysis and mitochondrial phosphorylative oxidation) or the signaling system [Ca 2+ ] i , [H + ] i , PKA, PKC. From these data, it is evident that spermatozoa activation in sea urchin, C. elegans and Human recognizes similar biochemical determinants. 
This finding strengths that some signaling molecules and metabolic processes (such as [Ca 2+ ] i , glycolysis, mitochondrial oxidative phosphorylation) have an ubiquitous role in spermatozoa signaling and biochemistry; in this context it will be interesting to compare the studied networks with the data concerning different cellular systems undergoing an activation process (for instance neurons, leucocytes, myocardiocytes) More interesting is the evidence that the networks topology of the three networks is virtually identical displaying important features: -robustness against random failure; -signaling fastness, rapidity and efficiency. Since, as Dobzhansky [28] famously stated, "nothing in biology makes sense except in the light of evolution", it is possible to hypothesize that the similarity in the architecture of this process in such different organisms could be due to the maintenance of similar ancestral mechanisms that offer important evolutionary advantages. In our opinion these findings in one hand strength the usefulness of bioinformatics modeling in the study of the biochemical asset of different organisms from a comparative and evolutionary point of view, in the other hand could contribute to the knowledge of the spermatozoa activation, i.e. of a process of pivotal importance for the survival of living beings.
5,378.8
2011-11-18T00:00:00.000
[ "Biology" ]
Establishment of in-house assay for screening of anti-SARS-CoV-2 protein inhibitors Developing a potent antiviral agent to combat Coronavirus Disease-19 (COVID-19) is of critical importance as we may be at risk of the emergence of new virus strains or another pandemic recurrence. The interaction between the SARS-CoV-2 spike protein and Angiotensin Converting Enzyme 2 (ACE2) is the main protein-protein interaction (PPI) implicated in the virus entry into the host cells. Spike-ACE2 PPI represents a major target for drug intervention. We have repurposed a previously described protein-protein interaction detection method to be utilized as a drug screening assay. The assay was standardized using Chitosan nanoparticles (CNPs) as the drug and SARS-CoV-2 spike-ACE2 interaction as the PPI model. The assay was then used to screen four natural bioactive compounds: Curcumin (Cur), Gallic acid (GA), Quercetin (Q), and Silymarin (Sil), and their cytotoxicity was evaluated in vitro. Production of the spike protein and the evaluation of its activity in comparison to a standard commercial protein was part of our work as well. Here we describe a novel simple immunofluorescent screening assay to identify potential SARS-CoV-2 inhibitors that could assess the inhibitory effect of any ligand against any PPI. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s13568-024-01739-8. Introduction Coronavirus,the virus that caused the 2020 pandemic, also known as severe acute respiratory syndrome Coronavirus-2 (SARS-CoV-2) has significantly phased out.However, the efforts to develop and repurpose drugs to be used for the treatment of Coronavirus disease-19 are still in progress (Lai and Hsueh 2023;Murakami et al. 2023;Saravolatz et al. 2023).To date, the choices for antivirals against SARS-CoV-2 are very limited with only a few FDAapproved drugs.Remdesivir is a broad-spectrum antiviral that inhibits viral replication by terminating ribonucleic acid (RNA) transcription.It was the first proposed as a target for the discovery of antiviral agents (Jia et al. 2021). Many research groups have sought to inhibit this PPI using different natural compounds and drugs.Screening methods adopted by several research teams varied, but many used commercial ELISA kits, a sensitive and relatively affordable method (Ahamad et al. 2022;Bojadzic et al. 2021;Tedesco et al. 2022).However, importing such kits to low-income countries is unaffordable and time-consuming. For this reason, we focused on developing an in-house immunofluorescent assay to screen several inhibitors, with the possibility of repurposing this assay for the screening of ligands, e.g., small molecules, against any other PPI.We sought to exploit a protein-protein interaction detection method that we have developed to detect spike-ACE2 PPI which allowed us to evaluate the binding kinetics of that PPI (Emam et al. 2022).To further increase the affordability of our assay, we opted to produce the spike protein in the lab and use it in the screening assay. Natural bioactive compounds such as phytochemicals are usually preferred as a treatment option, when applicable, due to their relatively few side effects and sustainable production.Many efforts were made to investigate herbs, herbal extracts, or active compounds for anti-COVID-19 activity (Al-Harrasi et al. 2022;Bizzoca et al. 2022;Chakravarti et al. 2021;Khan et al. 
2021).In a previous report, we demonstrated that silymarin, which is a flavonolignan found in Silybum marianum (Milk thistle), has an antiviral effect against SARS-CoV-2 and speculated that its mechanism of action may be blocking the viral entry (Loutfy et al. 2022).Gallic acid is a bioactive compound classified as phenolic acid found in many plant families.Gallic acid has shown anti-hepatitis C virus (HCV) and anti-herpes simplex virus-2 (HSV-2) activities in previous literature (Govea-Salas et al. 2016;Jadel Müller Kratz et al. 2008).Quercetin (Q) is a flavonol, which is a polyphenol compound found in plants, with a wide pharmacological activity spectrum ranging from anti-viral and anti-carcinogenic activity to general health improvement and psycho-stimulation ascribed to its potent antioxidant effect (Li et al. 2016;Xu et al. 2019).Quercetin showed an ability to bind to spike protein and ACE2 in silico (Hiremath et al. 2021;Pan et al. 2020).Curcumin (Cur) is an active compound derived from Curcuma longa.In several clinical trials, it was found to ameliorate inflammation in COVID-19 patients, and it was also proposed to be used as an adjuvant drug in COVID-19 treatment (Rattis, Ramos, and Celes 2021;Vahedian-Azimi et al. 2022). Here, we have extended the application of our previously published protein-protein interaction detection method (Emam et al. 2022), to screen these selected compounds.The assay was standardized using chitosan nanoparticles (CNPs), a known anti-COVID-19 compound (Loutfy et al. 2022), and the spike-ACE2 interaction model.It was applied to screen the four promising natural compounds: Curcumin (Cur), Gallic acid (GA), Quercetin (Q), and Silymarin (Sil), to elucidate their potential inhibitory activity against spike-ACE2 PPI.We also produced the spike protein whose activity was evaluated and compared with the commercial protein. Plasmid preparation and transfection The plasmid expressing C-terminal Twin strip tagged spike protein was amplified and purified with a maxiprep kit following the manufacturer's instructions.HEK293T cells were seeded in a 6-well plate, 6 × 10 5 cells per well, in DMEM complete media.The following day, the cells were transfected with the plasmid using Lipofectamine 3000 reagent, following the manufacturer protocol.The cells were harvested 72 h post-transfection and prepared for sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis. Western blot For the preparation of cell lysates, cells were washed twice with ice-cold PBS, harvested, re-suspended in Laemmli's buffer, and boiled for 5 min.Samples were separated on 9% SDS-PAGE and transferred onto polyvinylidene difluoride (PVDF) membranes (Amersham).The membrane was then blocked with 5% non-fat dry milk in TBS and incubated with primary antibody (1:1000) overnight at 4 °C.Following incubation with HRP-labelled secondary antibody (1:1000) at room temperature, the signal was detected using an ECL kit (Amersham) and visualized using Autoradiography. 
Protein production For protein production, HEK293T cells were transfected at 70% confluency in 6 × 10 cm plates.Cells were collected 48 h post-transfection and lysed using NP-40 lysis buffer (25 mM Tris-HCl pH 7.4, 150 mM NaCl, 1% NP-40, 5% glycerol) supplemented with protease cocktail inhibitor, and the protein-containing supernatant was collected.Spike protein was purified from the supernatant via affinity purification column and quantified using BCA assay and confirmed with western blot.Produced spike protein was stored at -20 °C in 50% glycerol and thawed upon usage. Cell cytotoxicity using MTT assay MTT assay was performed on Vero cells to evaluate the cytotoxicity of the selected compounds as previously described (Loutfy et al. 2022;Mosmann 1983).Briefly, 10 4 cells per well were seeded in 96-well plates in DMEM complete media.The following day, cells were treated with different concentrations of the compounds in triplicates.After 48 h, MTT dye was added to the cells (0.5 mg/mL) and incubated for 2 h.The plate was then decanted and 100 µL of DMSO was added per well.Absorbance readings were obtained at 570 nm using a microplate reader (CLARIOstar plus, BMG LabTech, Germany).The cell viability was then expressed as the percentage of the untreated control and CC 50 (defined as the concentration that reduced cell viability by 50%) values of the tested compounds were calculated by Graph-Pad Prism 8 software. Validation of spike protein binding activity To validate the binding activity of the produced spike protein (S P ) versus the commercial spike protein (S C ) we allowed it to interact with ACE2 using our previously published protein-protein interaction detection method (Emam et al. 2022).Briefly, small glass slides of equal size, 1 cm x 1 cm x 1 mm, were cleaned, coated with APTES, and then activated with EDC-NHS.ACE2 protein was immobilized on the slides, 100 nM on each slide, then they were rinsed with PBS.Both S P and S C were diluted to 2 concentrations 10 nM and 100 nM.Each slide was then incubated with one of the spike proteins followed by the anti-spike antibody (10 nM), then the FITC-labelled secondary antibody (10 nM) with rinsing between each incubation.Two negative controls were included in each run to test for specificity.The negative control was subjected to all the mentioned steps excluding the addition of spike protein. Fluorescence intensity readings were recorded before and after adding spike protein and after the final wash by transferring slides into 24-well plates and using a microplate reader (CLARIOstar plus, BMG LabTech, Germany) for signal detection. Standardization of the in-house immunofluorescent assay for drug screening Chitosan nanoparticles were used as a drug for the optimization of the assay conditions.CNPs preparation and characterization were previously described (Loutfy et al. 2022).The modified assay was performed following the same procedures as described in the previous section except for an extra incubation step between the immobilized protein (ACE2) and CNPs for 40 min, followed by rinsing with PBS.In the current assay, various concentrations of CNPs (1,5,10,20,30, and 50 µg/ml) were used (Emam et al. 2021). 
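As an illustration of the MTT read-out described above, the absorbance at 570 nm can be converted to percent viability of the untreated control and the CC50 estimated with a four-parameter logistic fit. The paper used GraphPad Prism 8 for this step; the snippet below is a hedged scipy-based sketch in which all doses and absorbance values are placeholders, not the study's measurements.

```python
# Hypothetical sketch of the viability calculation and CC50 estimation described above.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, cc50, hill):
    # Four-parameter logistic: tends to `top` at low dose and `bottom` at high dose.
    return bottom + (top - bottom) / (1.0 + (conc / cc50) ** hill)

conc = np.array([5, 10, 25, 50, 100, 200], dtype=float)          # µg/mL, placeholder doses
abs_treated = np.array([0.82, 0.80, 0.71, 0.55, 0.38, 0.20])      # placeholder A570 means
abs_control = 0.85                                                 # untreated control A570

viability = 100.0 * abs_treated / abs_control                      # % of untreated control
params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.0, 100.0, 50.0, 1.0], maxfev=10000)
print(f"Estimated CC50 ≈ {params[2]:.1f} µg/mL")
```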
Following standardization, we screened the selected compounds: Curcumin (Cur), Gallic acid (GA), Quercetin (Q), and Silymarin (Sil).Taking into consideration the CC 50 values of each compound, different concentrations ranging from 5 µg/mL to 500 µg/mL were used.The IC 50 of each compound, which represented 50% inhibition of the substrate (ACE2) binding to its ligand (or to the spike protein), was determined using GraphPad Prism 8 software. In each run, positive and negative controls were performed.However, the controls were counterintuitive as the negative control included the addition of spike & ACE2 proteins and the antibodies, producing a high signal.While for the positive control, we excluded the addition of spike protein, subsequently, producing no signal. Production of SARS-CoV-2 spike protein As an affordable alternative to the commercial spike protein (S C ), we produced spike protein (S P ) by transfecting HEK293T cells with spike ectodomain-expressing plasmid.The efficiency of transfection and spike protein expression was determined using western blot analysis.Major bands were observed in lanes A and B (transfected cells) (Fig. 1) at 180 kD corresponding to full-length SARS-CoV-2 spike protein.While no bands were observed in lane C showing the non-transfected negative control.Accordingly, we performed transfection on a larger scale and purified 920 µg of S P .(Fig. S1) shows the western blot original autoradiogram. Validation of spike protein (S P ) binding activity to ACE2 In order to compare the binding activity of both the produced S P and the S C to the ACE2 protein, we tested each at 2 concentrations: 10 and 100 nM via immunofluorescent assay.Both S P and S C produced similar signals when detected at each of the 2 concentrations (Fig. 2), proving that S P binding activity remained intact after the purification process. Determination of the cytotoxicity of the four natural compounds using MTT CNPs, Cur, GA, Q, and Sil were tested and their CC 50 values were detected at 175, 13, 18, 73, and 65 µg/mL, respectively (Table 1). Standardization of in-house immunofluorescent assay for the determination of the inhibitory effect of selected compounds As the ability of CNPs to inhibit spike-ACE2 interaction was previously identified (Loutfy et al. 2022), they were used in the standardization of the new immunofluorescent assay for drug screening.Different concentrations of CNPs were screened using the modified protocol of our protein-protein interaction detection assay (Emam et al. 2022).The signal was decreased after the rinsing of the labeled-secondary antibody step, in which the variation in fluorescence signals was linearly dependent on the concentration of the CNPs (Fig. 3A) proving the specificity of the observed binding. This was further proved by the controls.The negative control (containing spike and ACE2) curve shows a high fluorescence intensity that corresponds to the binding between the ACE2 and the spike protein.In contrast, the positive control (containing ACE2 only) curve shows decreased fluorescence intensity which indicates the absence of spike-ACE2 interaction (Fig. 3B).Therefore, inhibition of spike-ACE2 interaction resembles the positive control here. 
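To make the control scheme just described concrete, the fluorescence reading of each drug-treated slide can be converted to percent inhibition of spike-ACE2 binding by scaling it between the negative control (spike + ACE2, maximal signal) and the positive control (ACE2 only, background). The numbers below are placeholders chosen only to illustrate the arithmetic; the IC50 is then obtained from a dose-response fit of these values, e.g. with the same four-parameter logistic used for CC50 in the previous sketch.

```python
# Illustrative conversion of raw fluorescence readings to % inhibition of spike-ACE2
# binding, using the assay's counterintuitive controls. Values are placeholders.
import numpy as np

f_negative_ctrl = 12000.0        # spike + ACE2: full binding signal (placeholder)
f_positive_ctrl = 1500.0         # ACE2 only: background signal (placeholder)
f_sample = np.array([10500.0, 8300.0, 5600.0, 3100.0, 1900.0])   # drug-treated slides

inhibition = 100.0 * (f_negative_ctrl - f_sample) / (f_negative_ctrl - f_positive_ctrl)
print(np.round(inhibition, 1))   # % inhibition at each tested concentration
```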
Screening of four bioactive natural compounds using the standardized immunofluorescent assay Results showed that the signal representing spike-ACE2 PPI was markedly hindered after treatment with any of the tested compounds.Hence the decrement of the signal corresponded to the increase in the compounds' concentrations, indicating their ability to disrupt spike-ACE2 PPI.The compounds' inhibitory effect was represented as IC 50 values (Fig. 4).CNPs, Cur, GA, Q, and Sil IC 50 values were determined at 12. 69, 1.352, 4.943, 8.503, and 21.05 µg/mL, respectively. Discussion The development of new laboratory methods that are simple enough to facilitate the drug discovery process is important and of high value, especially during pandemics when multiple research groups are working simultaneously and require affordable methods. This study aimed to explore the applicability of our immunofluorescent protein-protein interaction (PPI) detection method for drug screening (Emam et al. 2022).Focusing on the well-known SARS-CoV-2 Spike protein and Angiotensin Converting Enzyme 2 (ACE2) interaction, we standardized the assay using chitosan nanoparticles (CNPs) as a known inhibitor (Loutfy et al. 2022).The successful screening of CNPs against Spike-ACE2 interaction allowed us to calculate the IC 50 value, representing 50% inhibition of tested protein-protein interaction.We also sought to reduce screening costs by producing the spike protein (S P ) in HEK293T cells, ensuring its binding activity to ACE2 was comparable to the commercial spike protein (S C ). It has been reported that some natural compounds are promising candidates for curing many diseases.Their significant pharmacological effects and mode of action as well as their low toxicity allowed them to be thought of as treatments for COVID-19 (Al-Harrasi et al. 2022;Bizzoca et al. 2022;Chakravarti et al. 2021;Khan et al. 2021).Therefore, After the standardization of the screening assay and spike protein production, we selected four natural compounds, Curcumin (Cur), Gallic acid (GA), Quercetin (Q), and Silymarin (Sil), as candidates to be screened using our assay.We assessed their cytotoxicity using MTT assay on Vero cells to obtain CC 50 values, After cytotoxicity evaluation of the selected natural compounds, they were screened using the standardized protocol for drug screening using our immunofluorescent assay.All the screened compounds were able to effectively disrupt the Spike-ACE2 interaction, with Curcumin demonstrating the strongest inhibitory effect, having the lowest IC 50 value (1.352 µg/mL) which is even lower than that of our positive control, CNPs (12.69 µg/ mL).This is supported by previously reported data about the anti-viral effect of Curcumin against viruses, especially SARS-CoV-2.Curcumin demonstrated antiviral activity against influenza A virus, SARS-CoV, human immunodeficiency virus (HIV), enterovirus 71 (EV71), herpes simplex virus (HSV), hepatitis C virus (HCV), human papillomavirus (HPV), Zika virus, and Chikungunya virus; Multiple mechanisms contributed to its antiviral activity (Babaei, Nassiri-Asl, and Hosseinzadeh 2020;Soni et al. 2020).Furthermore, in-silico studies predicted the binding of curcumin with receptors that mediate viral entry of SARS-CoV-2 into host cells (Soni et al. 2020). 
This approach provides an easy and rapid screening tool for anti-SARS-CoV-2 candidates. However, the main limitation of our assay lies in its inapplicability to High Throughput Screening (HTS). This limitation arises from its reliance on glass slides that need to be transferred to a plate for microplate reader analysis, unlike ELISA assays that can be performed directly in a 96-well plate. Despite this limitation, our study contributes a valuable, cost-effective, and efficient screening method for drugs under investigation.

In summary, our study successfully demonstrated the feasibility of utilizing our previously established immunofluorescent assay for drug screening against the spike-ACE2 interaction. We produced a cost-effective in-house spike protein, which was highly comparable to the commercial one. The screening of natural compounds, particularly Curcumin, showed promising inhibitory effects with acceptable safety profiles. While the assay has limitations concerning HTS, its cost-effectiveness and reproducibility make it a valuable tool for targeted drug screening in the context of a specific protein-protein interaction. Further research is needed to confirm the in vitro and in vivo antiviral effects of the screened compounds and to determine their potential as additional drugs for COVID-19 treatment.

Fig. 1 Spike protein expression. Expression of the spike protein was detected using western blot analysis after transfection. Lanes (A) and (B) represent samples from transfected HEK293T cells, showing bands at 180 kDa corresponding to the spike protein. Lane (C) represents the negative control.
Fig. 2 Validation of SP binding activity as compared to SC. The fluorescent intensity readings were detected at two concentrations for each protein and plotted with Origin software version 2018.
Fig. 3 Standardization of the in-house immunofluorescent assay. (A) The effect of different CNP concentrations on the signal intensity; the reduction in the signal corresponds to the increase in CNP concentration. (B) The negative and positive controls: the positive control (containing ACE2 only) shows no signal while the negative control (containing spike and ACE2) shows a high signal.
Fig. 4 Inhibitory effect of screened compounds. The drug-response curve was calculated for each drug with GraphPad Prism 8 software. IC50 values of CNPs, Cur, GA, Q, and Sil were detected at 12.69, 1.352, 4.943, 8.503, and 21.05 µg/mL.
Table 1 CC50 values of the screened compounds on Vero cells.
3,854.2
2024-09-16T00:00:00.000
[ "Medicine", "Chemistry" ]
Symbol-value association and discrimination in the archerfish

One of the most important aspects of mathematical cognition in humans is the ability to symbolically represent magnitudes and quantities. In the last 20 years it has been shown that not only humans but also other primates, birds and dolphins can use symbolic representation of quantities. However, it remains unclear to what extent this ability is spread across the animal kingdom. Here, by training archerfish to associate variable amounts of rewards with different geometric shapes, we show for the first time that lower vertebrates can also associate a value with a symbol and make a decision that maximizes their food intake based on this information. In addition, the archerfish is able to understand up to four different quantities and organize them mentally in an ordinal manner, similar to observations in higher vertebrates. These findings point in the direction of the existence of an approximate magnitude system in fish.

Introduction

From counting eggs [1] to picking the pool with the most fish [2], to fighting an intruder lion only if enough allies are present [3], all of these behaviors reveal an innate notion of quantities in animals from insects [4] to primates [5]. Furthermore, more complex numerical abilities such as ordinal judgment or the addition of two quantities have been extensively investigated in species like birds, bees and non-human primates [6][7][8] and were found in many cases to equal at least the abilities of human infants [9]. Fishes also demonstrate a natural ability to assess quantities. For example, it has been demonstrated that in various species such as mosquitofish or angelfish, an individual will spontaneously join the bigger of two shoals in an attempt to avoid predators, but only when the difference in the number of individuals is significant and enables an easy discrimination [10][11][12]. In addition, other fish can also discriminate small quantities of objects [13,14], recognize the door with the correct number of symbols [15] or encode ordinal information [16]. They are able to perform above chance even when not given access to all the items together [17] or when they are deprived of the sense of sight [13]. All the previous studies used relative numerosity judgments with non-symbolic stimuli (although see [16]). Eliminating the possible influence of continuous magnitudes in numerosity comparison tasks is very difficult. Efforts were made to negate this influence by cleverly manipulating the relationship between number and continuous magnitudes so that continuous magnitudes would not be a predictive cue of numerosity. However, it has been demonstrated that even under such conditions, continuous magnitudes might influence performance [18]. This is true for our study as well, although our work is the first to use abstract symbols associated with quantities. Homo sapiens has developed advanced mathematical abilities in a way not observed in other animals, and it started by simply associating a quantity with a symbol. This involves the capacity to assign a value to an abstract symbol and then manipulate these representations as though they were the quantities themselves. Being able to tell, for example, which of two Arabic numerals is larger is not innate but requires learning. Traditionally it was claimed that this ability emerged with language and thus could not develop in animals lacking language [19,20].
Recently, however, studies have shown that a few animals can represent numerosities in a symbolic manner [21][22][23][24][25][26][27]. Monkeys were shown to be able to choose between two alternatives and pick the one that provided the largest food reward [8,28,29]. Chimpanzees can add Arabic numerals to obtain a reward [5]. Pigeons can associate a number of seeds with a symbol and peck the one that leads to a larger reward [30], and a parrot was reported to have learned to associate the vocalization of numerals with the corresponding number of objects [31]. All these results have contributed to a better understanding of the evolution of mathematical cognition. However, it remains unclear how widespread the ability to associate quantities and symbolic representations is among vertebrates. In particular, we do not know whether evolutionary branches that diverged long ago from mammals and birds also possess this ability. This question is interesting since it may provide clues to the anatomical locations in the brain that support these cognitive capabilities. However one should keep in mind that it is also possible that numerical abilities may be the result of convergent evolution and that different clades developed different mechanisms to solve numerical problems. To address this question, we used the unique capability of the archerfish (Toxotes chatareus) to shoot a jet of water at targets presented on a computer monitor. We trained the fish to associate a quantity of food pellets with symbols and choose the target that was associated with the highest value of reward. Animals All the experiments were approved by the Ben-Gurion University of the Negev Animal Care and Use Committee and were in accordance with the laws of the State of Israel. A total of twelve adult archerfish, Toxotes chatareus, 6 to 12 cm, 10 to 15 g, were used for this study. They were purchased from a local animal seller. The fish were kept individually in water tanks (30 cm X 50 cm X 40 cm) filled with brackish water (2-2.5 g of a red sea salts mix per 1 liter of water) at 25˚-28˚C. The room was illuminated with artificial light on a 16/8 h day/night cycle. The water was filtered and oxygenated by an air pump. Training The fish were trained to spit at targets presented on a LCD monitor (VW2245-T, 21.5", BenQ, Taiwan) placed 45 cm above the water level on transparent tempered glass (Fig 1). The food consisted of pellets of similar size of about 3 mm long (Red Parot Granulat, Tropical, Poland) allowing for clear monitoring of the amount received on each trial. Food pellets were simultaneously dropped manually by the experimenter on the side closest to her, and freely derived in the water until the fish ate them. Depending on the experiment, the fish received between 30 and 100 (+/-20%) food pellets. They got most of their food intake from the experiments and only received the equivalent of 10-20 food pellets during the weekend. From experience, we estimate that the fish reach satiety after eating 100 +/-20 food pellet per day and would stop spitting when reaching that level. Sessions were usually a minimum of 48 hours apart to avoid overfeeding and to maintain the fish in a responsive state. They were not fed between sessions during the week. Initially, the fish were trained to shoot water at images of insects on the screen until they understood the task and shot at the targets every time. Aims at targets were rewarded with one food pellet. 
Then the images were replaced by a single black shape (square, triangle or circle alternatively). Again, the fish were rewarded with one food pellet for any shot at a target. Stimuli were presented using PowerPoint presentations (Microsoft, Seattle, WA, USA). Not all the fish were able to perform the whole set of experiments but since it was not critical for our study that the same fish performed all the different experiments, it did not influence our work. Some of them (4) died of natural causes. Some fish were dismissed because they stopped spitting or they lost their ability to shoot straight (6). This phenomenon is well known in the archerfish research community. Once a fish was dismissed, it was put in the aquarium with the fish for other projects. All the experiments were recorded with a HD video camera (Handycam, HDR-CX240, Sony, Japan) at 25 frames per second. The videos were then manually analyzed with Virtual-Dub to measure reaction times. Reaction time was defined as the time difference between the onset of the targets and the beginning of the shot of water, similarly to common practice in fish research [32,33]. Because we signal the start of an experiment with a blinking square, fish are already paying attention to the screen when the targets appear. Hence there is no "dead time" when the fish needs to notice the stimuli like in Mamuneas' experiment [32] for example, and thus the reaction time can be a good measure of cognitive performance. Behavioral experiments A session was conducted during a single day and consisted of 36 trials; the fish generally responded 70% to 100% of the trials. The appearance of the targets was preceded by a black square (5 cm in height) that blinked three times in the center of the screen, signaling the beginning of a new trial to the fish. The targets were left on the screen for a maximum of two minutes or until the fish shot at one target. The stimuli were shapes (square, circle and triangle) of the same height (2 cm), which were either green, red or black during the experiments, depending on the fish's preference (see below). The value associated with each target varied across experiments. The positions of the target changed randomly between trials and sessions to avoid bias toward specific locations. They were positioned far enough from each other (minimum 3 cm) to eliminate any doubt as to which target was selected. The experiment was conducted until the fish attained a steady state, defined as at least five days with success rate above 60%. Exception was made for the third experiment with quantities where most of the fish's success rate stabilized around 50%. Not all the fish were involved in every experiment with the different quantities but they always went through an extinction phase (see below) between experiments. Natural preference In the first phase, we measured natural preferences for shapes and colors. These sessions consisted of 75 trials, where all three shapes-circle, square and triangle-were presented simultaneously during 15 seconds, at random positions to avoid bias for location. All the targets were the same color during one trial and the colors-green, red and black-were alternated every trial (Fig 2A), all colors appearing in equal proportions. Each shot at any target was rewarded with one food pellet, and the color and the shape chosen were recorded. This phase lasted in general from 5 to 10 days, and provided a dataset of 150 to 250 choices. 
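As a side note, the frame-by-frame reading of reaction times described above reduces to a simple conversion from annotated frame indices to seconds at 25 fps. The frame numbers in the minimal sketch below are hypothetical.

```python
# Minimal sketch: converting manually annotated video frames (25 fps) into
# reaction times, as described above. Frame indices here are hypothetical.
FPS = 25.0

def reaction_time(onset_frame: int, shot_frame: int) -> float:
    """Time (s) between target onset and the beginning of the water shot."""
    return (shot_frame - onset_frame) / FPS

# Example annotations: (frame of target onset, frame where the shot starts)
trials = [(120, 162), (480, 515), (910, 968)]
times = [reaction_time(on, shot) for on, shot in trials]
print([f"{t:.2f} s" for t in times])   # ['1.68 s', '1.40 s', '2.32 s']
```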
All but three fish preferred triangles over squares over circles at a rate that greatly exceeded chance ( Fig 2B). In addition, each fish had a preferred color (green, red or black) which was more or less evenly distributed among the fish. Fish 10 and 11, whose natural preferences were not as strong as the other fish, went through a short session (5 days) of preference reinforcement: the favorite target was rewarded with one food pellet and the least one with none. They reached a choice rate of 92% and 100% respectively. Persistence of natural preference A comparison of the percentage of choices of the preferred target during the first and the second halves of the session showed that the natural preference remained strong over time although the fish received an equal reward for all targets (Fig 2C). We ran a two tails paired ttest to determine whether the initial and final choice rate were similar. We obtained a p-value of 0.7701, which indicates that the initial and final choice rate are indeed similar. These results confirm that the natural preference for shapes is persistent over time. Learning & value association After the natural preference of each fish was determined, the experiments were conducted with its favorite color (green, black or red) and only two shapes (except for the last experiment), usually the circle and the triangle, depending on which one the fish preferred the most and the least. The positions of the targets followed the rules described earlier. In all these experiments, all the fish started with a clear preference for one target. It usually took a minimum of 6 sessions and up to 30 for them to learn to choose the other target. To make sure that the fish could actually learn to choose their least favorite targets if it was rewarded, we first conducted a classical conditioning procedure. Here the least favorite target was rewarded with one food pellet whether the favorite one yielded no reward. During the experiments with two shapes, each target lead to a reward, the least favorite always being associated with the highest reward and the favorite with the lowest. The values of the rewards depended on the experiment. During the first experiment, we tested the ability for the fish to discriminate between 1 and 3 associated food pellets. In the second experiment, we tested it they could choose between 2 and 4, and in the latest, in order to check if they could discriminate when the difference between rewards was only one food pellet, we tested them with 3 and 4. In the experiment with three shapes, the rewards were inversely proportional to the preference for the targets (1 for the triangle, 2 for the square and 3 for the circle). All the different combinations of targets were presented in equal proportion: the three pairs triangle-square, triangle-circle and square-circle, as well as the three shapes together. Extinction In order to control that the fish were able to unlearn something they just learnt, and since we needed to be able to conduct several experiments with the same fish for this study and for future research, we subjected the fish to an extinction procedure between experiments [34]. During these sessions, intercalated between the experiments, the natural preference was reinforced again, in the opposite direction from the experiment they had just completed. This was obligatory since if we had simply switched the rewards associated with the targets, we would have ended up rewarding the favorite target with the highest amount of food. 
This congruency would have made it hard to determine whether the observed effect could be attributed to the magnitude of the reward or to the natural preference for the shape itself. In this phase, the fish were presented with the same targets but this time they were rewarded with one food pellet for their initially favorite target and zero for the other one, thus re-inducing their natural preference. This enabled us to run several experiments with the same fish by always associating the least preferred target with the highest amount of reward to avoid a congruence effect. The summary of the participation of every fish to the different experiments is presented in Table 1. The fish always went through an extinction period between two experiments. The results of the many extinction procedures that were performed are not shown for the sake of clarity. The order of the experiments goes from left to right. Statistical analysis All statistical analyses were performed on Matlab. For the three experiments with two shapes, the chance level was 50%. For the experiment with three shapes, the chance level was set at 45.83%. This is because each session consisted of an equal proportion of each combination (1 versus 2, 1 versus 3, 2 versus 3, and the three shapes representing of 1, 2 and 3 altogether). The chance level for this experiment was thus the average of three time 50% and once 33%. To determine the statistical significance of each experiment, we calculated the probability of obtaining the observed results by chance. Since each fish started with a strong preference for one target, we used its initial success rate-the choice rate during the first day-as the probability of success in the binomial distribution. We then simulated 10000 experiments of 36 shotsa number identical to the number of daily trials-and calculated the probability to observe a success rate equal or better than the one observed during the last day of the experiment. In this way, the p values were calculated as 'number of simulated distributions with number of correct responses > = observed number of correct responses'. A p-value smaller than 0.005 indicates that we can reject the hypothesis that the observed initial and final results are identical. Learning and extinction After establishing the fish's natural preference for colors and shapes (see Method), the goal of this session was to test their ability to learn to spit at a target they did not choose spontaneously and then check if they were able do unlearn and go back to their initial preference. In the learning phase where only the least favorite target was rewarded, the fish started with a clear preference (Fig 3A) with at least 65% of the shots in the first session toward their preferred target. Nevertheless, all the fish learned to shoot at the rewarded target more than 70% Fig 3B). Similarly, in the extinction phase, all the fish became proficient in five to seven sessions and went back to their natural preference ( Fig 3C) with a minimum of a 70% choice of the favorite target (p<0.0001, Fig 3D). Two shape-value association. In these experiments with two targets, we tested whether the fish could associate different quantities of reward with each shape. In the first experiment, we tested if the fish would prefer three food pellets rather than one. Fig 4A shows that over the course of the experiment the fish changed their preference to shoot at the circle (rewarded with three) significantly more often than the triangle. 
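The p-values quoted in the results (for example the p<0.0001 reported next) come from the Monte Carlo procedure described in the statistical analysis above. A minimal sketch of that procedure is shown below; the success rates used in the example are placeholders.

```python
# Sketch of the Monte Carlo procedure described in the statistical analysis:
# simulate 10,000 sessions of 36 trials using the fish's initial (first-day)
# success probability, and ask how often the simulated score reaches the score
# observed on the last day. The example rates are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_p(initial_rate: float, observed_last_day: int,
                  n_trials: int = 36, n_sim: int = 10_000) -> float:
    """P(simulated correct responses >= observed) under the initial preference."""
    simulated = rng.binomial(n_trials, initial_rate, size=n_sim)
    return float(np.mean(simulated >= observed_last_day))

# Example: a fish that started at 5% correct and ended with 29/36 correct (~80%).
print(monte_carlo_p(0.05, 29))   # effectively 0 -> p < 0.0001

# Chance level for the three-shape experiment: the average over the four
# equally frequent conditions (three pairs at 50% and the triplet at 1/3).
chance = (3 * 0.5 + 1 / 3) / 4
print(f"{chance:.4f}")   # 0.4583
```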
The average choice rate on the last day was around 80%, which is significantly different from the initial 5% (p<0.0001, Fig 4A inset). Symbol-value association in the archerfish Thus fish appeared to be able to distinguish between one and three food pellets, and change their shape preference accordingly. After an extinction phase (see Methods), the fish had to choose between targets associated with two (triangle) and four food pellets (circle). Again, the results clearly showed that the fish were able to shift their preference (Fig 4B), since they preferred the circle at the end of the experiment (76% as compared to 12% initially, p<0.0001) with results significantly above chance (Fig 4B inset). Next, we tested if the fish could still discriminate when the difference between targets was reduced to only one food pellet. Thus, with the same setup we examined whether the fish could distinguish between three and four (Fig 4C), after an extinction phase. Here, we observed a variety of learning capabilities in the fish population. One fish clearly learned to choose the most highly rewarded target (Fish 4, Fig 4C), whereas another fish did not change its preference, with an initial and final rate of about 40% (Fish 8, Fig 4C). Another three fish changed their preference from the original to about an equal proportion for both targets. Although the final success rate (53%) in this experiment was lower than in the previous ones with a difference of two food pellets, it was still significantly higher than the initial rate (15%, p<0.0001). Three shape-value association. To test the ability of the fish to associate values with symbols in a more complex setup, we tested the fish with three shapes associated with three amounts of reward. Spitting at the triangle was associated with one food pellet, two when the square was chosen and three for the circle. The results showed that although the initial success rate was close to chance, the fish eventually learned to choose the most rewarded targets with an average rate of about 70% (p<0.0001, Fig 5A). When breaking down the results between the three pair comparisons (1 versus 2, 1 versus 3 and 2 versus 3, Fig 5B) and the one with three shapes (Fig 5C), it appears clearly that Fish 4 was able to choose the most rewarded target in all conditions. Fish 11 and 12 succeeded significantly above chance with three targets but their overall results for the two shapes combination do not allow us to clearly state that they succeeded. This more detailed depiction of results for the pair conditions reveals that only Fish 4 ( Fig 6A) shows an upward trend for each combinations, whereas the two other fish seem to oscillate (Fig 6B and 6C). It is also important to notice that since we conducted the experiment until we saw a steady state in the overall score rather than in each subcategory separately, we cannot state that the fish were unable to pick the correct answer in the three two-shape combinations. On the contrary, the fact that Fish 4 succeeded suggests that given enough time, the others might also be able to reach a stable success rate above chance level in every condition. Discussion & conclusion Previous studies revealed that animals such as primates or birds are able to understand and manipulate symbolic representation of quantities [21][22][23][24][25][26][27]. Here we show for the first time that fish are also able of such an abstraction. 
Building on the natural preference for shapes in archerfish provided a powerful tool to investigate decision making, and made shifts in preference more salient. Almost all the fish became proficient in the tasks and successfully shifted away from their initial preference toward the target associated with the largest food reward. This suggests that fish are able to associate different abstract symbols with different quantities of reward. The different experiments with two targets revealed that the fish are not only able to discriminate between 'one' and 'many' but also between two quantities bigger than one, and do so even when the difference between the two is smaller. While the final choice rates in the third experiment (3 versus 4) were roughly equal, this result nevertheless indicates that fish perceived a difference between the two values. Indeed, had they estimated the rewards as equal, they would have kept choosing the triangle just like they did during the natural preference phase, when all targets led to the same reward (Fig 2). Thus this experiment indicates that the fish can discriminate between quantities that are separated by only one unit, at least when the rewards are smaller or equal to four. To determine whether the fish were able to associate more than two values with symbols, we conducted the three shapes experiment. The difficulty of the decision in this experiment was due to the fact that the combination of symbols changed every trial. Thus, the fish needed an organized representation of the values associated with each shape in order to be able to mentally compare them to each other in every set of choice. Although the progress from the initial to final success rates was considerably lower than in the previous experiments, it remained significantly higher than chance indicating that fish can compare more than two values and organize them mentally. Success rates depend on ratio of rewards Current theories about the existence of an organized representation of magnitudes in humans and animals [35,36] are based on the existence of a correlation between the success rate when comparing two values and the ratio between those two. Studies have shown in humans and other primates that accuracy decreases and latency increases as the ratio between two values gets larger [35,37] and it is true even during comparison [36,38] of symbolic representations. Fig 7A shows the average success rate over the last three days of each experiment of twoshape comparison. We also show the results of the fourth experiment (with three shapes) for the three conditions with two shapes. Scores for the two experiments of 1 versus 3 (first and fourth experiment) are identical and their representations on the graph are not distinguishable. The other scores also follow closely the linear regression. The only outlier is the result for 1 versus 2 in the three-shape experiment that falls below the general trend. However, the mean score for the two experiments with the 0.5 ratio (green dot) falls exactly on the predictive curve. Interestingly, the final success rates per pair for Fish 4 (Fig 6A) also follow exactly this prediction and Fish 12 ( Fig 6C) seems to be on the same track. The analysis of the success rates in pair choices thus suggests the existence of a ratio effect in the archerfish as predicted by Weber's law [39], hinting the existence of an organized system to represent magnitudes. To confirm this intuition we also analyzed reaction times. If the fish made a binary decision ('less' vs. 
'more'), the reaction time should not change as the distance between the associated rewards varies. By contrast, if fish are able to map the values associated with each target on a mental magnitude line [9], reaction time should rise as the comparison becomes harder, as observed in humans and other animals [40,41]. Fig 7B shows the average reaction times across all fish during the last day of the experiment as a function of the ratio of rewards. We did not include the last experiment with the three shapes in this analysis since the number of choices increased the complexity of the Symbol-value association in the archerfish experiment exponentially and thus impacted the reaction time of the fish. This analysis reveals that as the ratio of reward approaches one, the reaction time increases, indicating that the task became harder for the fish and confirming the existence of an organized mental representation of values. The results are also coherent with symbol discrimination in humans [42], where the time to determine which one of two Arabic numbers is the biggest also depends on their ratio. Continuous magnitudes as representation of quantities It was thought for a long time that humans and some animals possess a discrete representation of quantities [9,36]. However, recent studies suggest that it is virtually impossible to remove all influence of continuous magnitudes [43], and thus it seems more likely that animals and humans developed instead a continuous representation of magnitudes. We try here to identify the potential magnitudes used by the fish to determine the value of rewards associated with a target. Yet, although it appears clearly that the archerfish successfully did so, our experiments did not permit to determine which parameters the fish used to encode those values. Our behavioral experiments were not based on a simple visual comparison task as was conducted with other fish [10][11][12][13]15]. Indeed, the two values to be compared, the number of food pellets, are never present simultaneously. Therefore, the fish need to encode and store the value they attribute to each symbol in order to mentally compare it with the values they will encounter later. Our experiments show some similarities with the one conducted by Dadda [17] where mosquitofish saw members of a group sequentially and not all at once. However, since this setup did not use symbols, the mosquitofish could easily base their decision on some continuous magnitude directly linked to the stimulus such as the time it takes to see all the individuals. It seems very unlikely that the archerfish based their choice on only one continuous magnitude linked to the amount of food pellets. Indeed, the food repartition was influenced by too many factors. Since the experimenter always dropped the food on the same side of the aquarium, the fish rapidly learned to position themselves where the pellets would fall, just as they do when they hunt [44]. Thus, they often caught some food when it reached the water or even right before, while the other pieces started floating and spreading randomly immediately because of the flow generated by the air pump. In addition, the fish are close to the surface when they expect food, giving them a maximum viewing angle of 90˚ [45] that does not encompass the whole aquarium's surface at once, potentially preventing them from seeing all the food. Therefore, it is likely that if the fish used visual continuous magnitudes, they needed to use all available variables (density, covered area, etc.) 
and not only one to evaluate the amount of food present. It is also possible that the fish uses additional magnitudes to determine which symbol yields more food. For example, satiety could be a parameter, or the effort required to eat all the food pellets (time, physical effort. . .). However, the random dispersion of the food pellets renders those magnitude very noisy, which is coherent both with the observed ratio effect observed and the literature [36]. Although many factors can influence the continuous magnitudes, since we do not have a way to equate all of them (time to grab, dispersion, satiety . . .) and they are correlated with the number of food pellets, it is likely that the fish used those magnitudes as cues to determine how much food they got. We can thus only conclude that the fish possess a magnitude sense [46,47] of the amount of reward. The influence of continuous magnitudes is not crucial for the conclusions of the current work since it is possible that the symbols are connected with both number and continuous magnitudes, as demonstrated in humans [48]]. Effect of learning on success rate It is worthwhile noticing that not all the fish showed a similar level of success in the different tasks. Several factors could explain this effect: it is probable that just like in humans [49], there is a range of cognitive abilities and not all the fish present the same aptitudes. A recent review of cognitive experiments performed with fish also suggests the existence of such a phenomenon [50]. Additionally, they did not all perform the whole set of experiments. Only Fish 4 went through each step and it is interesting to see that it is also the fish with the best results in the two last experiments. This could indicate that the ability to associate a value with a shape can be trained and improved and that with enough practice, archerfish could succeed in tasks more complex than the ones tested here. To conclude, our experiments revealed that archerfish are able to associate values of rewards with abstract symbol, compare them and consistently choose the more rewarding target. We also saw that their success rate and reaction time are correlated with the ratio between the rewards. These results suggest that archerfish possess an organized mental representation of magnitudes that enables them to perform comparisons, rather than a simple binary notion of 'more' and 'less'. Supporting information S1 Data. Experimental raw data. One line correspond to a session for tabs 1-6 and first part of tab 7. Second part of tab 7 contains the details of each session. Tab 8 contains the number of shot during the last session as well as the reaction times. (XLSX)
7,085.4
2017-04-05T00:00:00.000
[ "Biology" ]
Effects of Flow Baffles on Flow Profile, Pressure Drop and Classification Performance in Classifiers This paper presents a study of the use of flow baffles inside a centrifugal air classifier. An air classifier belongs to the most widely used classification devices in mills in the mineral industry, which is why there is a great interest in optimizing the process flow and pressure loss. Using Computational Fluid Dynamics (CFD), the flow profile in a classifier without and with flow baffles is systematically compared. In the simulations, turbulence effects are modeled with the realizable k–ε model, and the Multiple Reference Frame approach (MRF) is used to represent the rotation of the classifier wheel. The discrete phase model is used to predict the collection efficiency. The effects on the pressure loss and the classification efficiency of the classifier are considered for two operating conditions. In addition, a comparison with experimental data is performed. Firstly, the simulations and experiments show good agreement. Furthermore, the investigations show that the use of flow baffles is suitable for optimizing the flow behavior in the classifier, especially in reducing the pressure loss and therefore energy costs. Moreover, the flow baffles have an impact on the classification performance. The impact depends on the operation conditions, especially the classifier speed. At low classifier speeds, the classifier without flow baffles separates more efficiently; as the speed increases, the classification performance of the classifier with flow baffles improves. Introduction The air classification of gas-particle flows is an essential step in mineral, pharmaceutical, food, coal and cement industries [1][2][3][4][5]. Especially for high product flows and fine target products, centrifugal classifiers are commonly installed. The general function of a classifier is to separate particles into coarse and fine particle fractions. This is achieved by rotating the classifier wheel, whereby centrifugal forces act against the drag forces, and particles are classified according to size. Coarse particles are thus rejected at the outer edge of the classifier wheel, while finer particles are transported inwards with the air to the product outlet [6]. In order to reduce the process stages in a system, grinding and classification usually take place in a single apparatus. This enables continuous operation, since particles that are too coarse are returned directly to the grinding process after being rejected at the classifier. The high mass flows cause high operational energy costs. Therefore, interest in optimizing the process is of great importance. For this purpose, several experimental and numerical studies have been carried out in the past. A detailed description of the flow profile in the classifier was first given by Toneva et al. [7]. According to this, the flow profile shows three characteristic regions. The first region is located between the classifier blades. Here, the flow forms a forced vortex with high tangential velocity. Depending on the speed of the classifier, a dead zone is created, which decreases the classification performance of the apparatus. Between the classifier blades and the center of the classifier, two other regions appear with opposite physical behaviors. At the inner edge of the classifier blades lies the second region, where the tangential velocity increases with decreasing radius. In the center of the classifier is the third region, which is in contrast to the second region. 
This has similar characteristics to a forced vortex, and the tangential velocity decreases rapidly. The size of the individual regions depends on the process parameters and the design of the classifier. Many researchers have now confirmed Toneva's studies [8,9]. For example, Stender et al. [8] were able to gain optical access to the processes in the classifier. By using a camera, the real particle movement during classification was observed for the first time. In recent years, research has mainly focused on the optimization of geometric aspects of the classifier blades or the differences between vertical or horizontal classifiers [10][11][12][13][14]. For this purpose, CFD was primarily chosen, as it is a cost-effective and time-saving tool and provides a sufficiently accurate representation of characteristic values such as pressure drop and classification efficiency [15][16][17]. Nevertheless, the inner region of the classifier, characterized by Toneva as the second and third regions, has always been neglected during optimization. This zone, however, requires special attention, since high velocity and pressure changes occur in this region. The potential to optimize the flow profile and reduce pressure losses shows studies of Guizani et al. [18]. During the investigations, the fine material outlet of a classifier was changed, which significantly reduced the pressure loss. This work investigates the effect of flow baffles inside the classifier on the flow profile, pressure drop and classification efficiency. For this purpose, different classifier geometry are simulated. The geometries are different inside the classifier, namely no and twelve baffles are integrated. The baffles are attached to the rotor so that they have the same speed as the classifier. The speed of the classifier is varied, as the performance of the baffles also depends on it. The Multiple Reference Frame model (MRF) represents the rotation of the classifier. To evaluate the classification efficiency, the discrete phase model (DPM) is implemented to estimate particle trajectories. In addition, experimental tests for both types of classifiers are performed. Finally, this works compares two operating conditions for experimental and simulation results for validation. Experimental Set-Up A schematic view of the complete process is shown in Figure 1. In the process, air is recirculated in the system with a fan. To prevent water from accumulating, a part of the air is exchanged. The main part of the process is the mill and the classifier. The mill itself consists of four rollers held in a fixed position on a rotating bowl. The mill and the classifier are also illustrated in Figure 1. Air enters the mill from below via a nozzle ring and transports comminuted particles thrown off by the rotating bowl to the top. The classifier is above the mill. A first classification takes place in the transport section between the mill and the classifier, since very coarse particles do not reach the top due to gravity. Finer particles pass with the air through the static blades and reach the classifier. The rotating classifier rejects coarser particle due to centrifugal forces, while finer particles pass inside with the air and reach the fines. The coarse particles move in a circular path around the classifier and slowly sediment to the bottom, where the particles are fed again into the grinding process. In addition, solid material is fed onto the grinding bowl via a screw conveyor. 
A cyclone separates the fine particles from the air after the classifier, where a sample of the fine product is taken and analyzed. The particle size distribution of the sample is measured by laser diffraction with a Mastersizer 2000 provided by Malvern Panalytical. Measuring points for the pressure drop are set up in front of the static guide vanes and after the classifier. Static pressure is measured with a barometer that uses a liquid column; water is used as the liquid. The static pressure relative to atmospheric pressure is determined for the two measurement points, and then the difference is formed. The left-hand side of Figure 1 illustrates the measuring points of the pressure drop measurement. The fluctuations of the measurement were about 0.2 mbar, i.e., negligibly small compared to pressure losses of 30 mbar. This was due to the uniform velocities in these geometric sections. Due to the low fluctuations, the measured values could be used for comparison with the simulations. It was expected that the influence of the baffles on the performance of the classifier would depend on the operating conditions, especially the classifier speed. Therefore, the classifier was operated without and with 12 baffles at two different operating conditions. The flow baffles were located inside the classifier and were attached to the classifier and shaft. They can also be seen in Figure 1. The operating conditions differed in terms of classifier speed, rotating bowl speed, solid feed, volume flow and processed material. A detailed description of the operating conditions is shown in Tables 1 and 2. The classifier has a diameter of 1.6 m and consists of 48 blades. The static blades have an outer diameter of 2.1 m and comprise 60 blades. The mill has 4 rollers. The grinding bowl diameter is 1.85 m. The focus of the investigations was on the classifier; therefore, this study neglected the influence of the mill, and no further information on the mill will be provided.

Numerical Set-Up

The basic task of the numerical investigations is to determine the impact of baffles within the classifier on the flow field, the pressure drop and thus the grade efficiency of the apparatus. The high complexity and size of the classifier, together with the high number of particles, require certain assumptions due to the high computational effort.
Firstly, the flow in a classifier is typically a highly swirling flow due to the rotating parts in the classifier. This is reflected in the Reynolds number, which is around 9.5 × 10^5 in the classifier outlet area and characterizes the flow as turbulent. The realizable k-ε model, an approach using the Reynolds-Averaged Navier-Stokes equations (RANS), serves as a model to predict the turbulent character of the flow regime. The realizable k-ε model is appropriate for fully turbulent rotating flows in which high strain rates and streamline curvature occur [11,15]. Secondly, the rotation of the classifier needs special attention. The Multiple Reference Frame (MRF) approach allows one to simulate the interaction between moving and stationary parts. The MRF model treats the flow differently depending on the region. The rotating sections are frozen in position, and the model solves the Navier-Stokes equations including the centrifugal and Coriolis forces. Rotating walls rotate as a rigid body, and standing walls receive the no-slip condition. The MRF model does not represent non-stationary effects due to rotation. Nevertheless, the MRF model is suitable for the simulation of a classifier due to its advantages of being robust and requiring no tight coupling between moving and stationary parts [7,18]. Furthermore, air is assumed to be an incompressible, isothermal, Newtonian fluid. For adequately describing the dispersed phase, the Discrete Phase Model (DPM) is used. The particle-laden flow is only studied in the upper part of the machine in the area of the classifier, which is why the particles are released above the mill. Particles which are deposited at the classifier are detected, as well as particles falling down to the mill. The fine particles passing through the classifier are also detected. This enables the creation of classification efficiency curves for the investigated classifier. In the experiments, only the fines were available for analysis. A discrete number of particles represents the solid phase. Particles are tracked through the air flow. Particle-particle interactions as well as particle comminution or agglomeration are not considered. The trajectory of a particle is given by the integration of the momentum balance of a particle. The equation of motion for an individual particle in the rotating region can be written as

$$m_P \frac{d\vec{u}_P}{dt} = \frac{3\, m_P\, \rho_F\, C_D}{4\, \rho_P\, x} \left|\vec{v}_{rel}\right| \vec{v}_{rel} + m_P\, \vec{g} - 2\, m_P \left(\vec{\Omega} \times \vec{u}_P\right) - m_P\, \vec{\Omega} \times \left(\vec{\Omega} \times \vec{r}\right) \quad (1)$$

The terms on the right-hand side represent the drag force, the gravity force, the Coriolis force and the centrifugal force. Coriolis and centrifugal forces result from the rotation of the classifier. $x$ is the diameter of a particle, $\vec{u}_P$ the particle velocity, $\rho_P$ the density of the particle, $C_D$ the drag coefficient, $\rho_F$ the density of the air and $\vec{v}_{rel}$ the relative velocity between the fluid and the particle. $\vec{g}$ is the gravity vector, $m_P$ the mass of a particle, $\vec{\Omega}$ the angular velocity and $\vec{r}$ the radial position vector. Other forces such as the Saffman lift force, the virtual mass force and the Basset term can be neglected. The drag model uses an assumption of solid spheres:

$$C_D = \begin{cases} \dfrac{24}{Re_P}\left(1 + 0.15\, Re_P^{0.687}\right) & Re_P \le 1000 \\[4pt] 0.424 & Re_P > 1000 \end{cases} \quad (2)$$

The particle Reynolds number $Re_P$ is

$$Re_P = \frac{\rho_F \left|\vec{v}_{rel}\right| x}{\mu_F}$$

where $\mu_F$ is the dynamic viscosity of the air. The particle Reynolds number is in the range of 0.1 to 5 in the classifier outlet area. Moreover, a stochastic dispersion model considers the effect of turbulence during the particle tracking. Therefore, the velocity fluctuation is perturbed in a random direction with a Gaussian random number. Particle-wall collisions are elastic and specular. Additionally, the approach neglects the influence of particle rotation.
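To make the particle model concrete, the sketch below evaluates the right-hand side of Equations (1) and (2) for a single particle. It is not the authors' OpenFOAM solver; the material properties and the example particle state are illustrative only.

```python
# Minimal sketch (not the authors' OpenFOAM solver) of the per-particle force
# balance used in the DPM description above: Schiller-Naumann-type drag plus
# gravity, Coriolis and centrifugal terms in the rotating region.
# All numerical values are illustrative.
import numpy as np

RHO_F = 1.2        # air density, kg/m^3
MU_F = 1.8e-5      # air dynamic viscosity, Pa*s
RHO_P = 2700.0     # particle density, kg/m^3 (illustrative)
G = np.array([0.0, 0.0, -9.81])

def drag_coefficient(re_p: float) -> float:
    """Solid-sphere drag: Schiller-Naumann below Re_P = 1000, constant 0.424 above."""
    if re_p <= 1000.0:
        return 24.0 / re_p * (1.0 + 0.15 * re_p ** 0.687)
    return 0.424

def particle_acceleration(x, u_p, u_f, omega, r):
    """du_p/dt for a particle of diameter x at position r in the rotating frame."""
    v_rel = u_f - u_p
    re_p = RHO_F * np.linalg.norm(v_rel) * x / MU_F
    c_d = drag_coefficient(max(re_p, 1e-12))
    drag = 3.0 * RHO_F * c_d / (4.0 * RHO_P * x) * np.linalg.norm(v_rel) * v_rel
    coriolis = -2.0 * np.cross(omega, u_p)
    centrifugal = -np.cross(omega, np.cross(omega, r))
    return drag + G + coriolis + centrifugal

# Example: a 10 µm particle near the classifier cage at 500 rpm.
omega = np.array([0.0, 0.0, 2 * np.pi * 500 / 60])
a = particle_acceleration(10e-6, np.zeros(3), np.array([5.0, 0.0, 0.0]),
                          omega, np.array([0.4, 0.0, 0.0]))
print(a)
```

With the example values, the particle Reynolds number is about 3, i.e., within the 0.1 to 5 range quoted above, so the Schiller-Naumann branch of the drag law applies.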
In order to investigate the classification efficiency of the classifier, 10^7 spherical particles were tracked in the simulation. The feed zone was above the mill with an initial velocity of 4 m/s. The air outlet and the mill area were escape boundaries, which means that the calculation stopped once particles reached them. The solid loading and the volume flow were assumed to be the same as in the experiments. In the experiments, the material was comminuted in the mill, which is why the particle distribution reaching the classifier was not known. Therefore, a fictive particle distribution in the particle size range from 1 to 500 µm was specified in the simulation. The continuous operation of the plant and the impossibility of simulating the comminution process did not allow for a direct comparison between experimental and numerical results; only a qualitative comparison takes place in the results. All simulations were performed with the software environment OpenFOAM-6. A solver was adapted according to the equations described above, and two-way coupling between the continuous and dispersed phase was included. Standard no-slip boundaries were applied to all walls, including the classifier. As boundary conditions, a relative total pressure of 0 Pa was specified at the outlet of the classifier. Furthermore, the same volume flow as used in the experiments was defined at the mill inlet. An overview of the selected boundary conditions is shown in Table 3. The geometry used in the simulation was a faithful representation of the original geometry, except for minor details such as welds, minor edges or closed volumes that are not of practical importance to the flow and were omitted to limit model complexity. Nonetheless, due to its complexity, the two grids, without and with baffles, consisted of around 15 million elements. The grids were composed of an unstructured, hexahedral mesh with prism layers created by the OpenFOAM meshing tool snappyHexMesh. The value of y+ was between 30 and 300 for all walls and thus fit the requirements of the turbulence model. The mesh quality for both grids was as follows: maximum skewness of 2.8, non-orthogonality of less than 66° and an aspect ratio of less than 17. An overview of the complete grid and of the classifier with baffles is illustrated in Figure 2. Three different meshes were generated. Table 4 compares the pressure loss from the simulation with experimental data for the three grids; the comparison is for the second operating condition. The standard deviation with respect to the pressure loss in the experiment was calculated for all grids.

Table 4. Comparison of pressure loss for the three grids between simulation and experiment at the second operating condition in Table 2, classifier without baffles.

In order to investigate mesh sensitivity, further values were studied. Firstly, the average radial and tangential velocities between the classifier blades were compared. Secondly, we investigated the velocities in the transport section between mill and classifier.
Thirdly, the resulting overall pressure drop in the classifier and mill was plotted against experimental data. This last comparison also determined the accuracy of the numerical model. The comparison of measured and simulated values is shown for the two operating conditions for the classifier without baffles in Tables 5 and 6. The results show that the pressure loss was slightly underestimated in the simulation. The standard deviation was around 10%, which is probably due to the fact that turbulence effects were modelled with the realizable k-ε approach; however, a complete resolution of turbulence effects is too time-consuming.

Results

The time-averaged velocity distributions inside the mill and classifier with no and with twelve baffles were investigated, and the CFD results for the second operating condition are reported in Figure 3. The second operating condition had a higher classifier speed of 500 rpm, and the outer edge of the classifier rotated at about 42 m/s. Since only the inner area of the classifier was changed, the flow profiles in the grinding area did not differ between the two cases. The air was guided through the nozzle ring after the inlet, causing the air in the outer area to rise upwards to the classifier. The highest velocities occurred between the classifier blades and the inner edge of the classifier cage. In the center of the classifier, the velocity decreased. In general, the flow profiles for both geometries were very similar; differences only became apparent when taking a closer look within the classifier wheel.
Therefore, Figure 4 shows the velocity profiles for the tangential, axial and radial components in an axial section through the classifier and the static blades. Figure 4 also shows the resulting pressure drop for the investigated geometries.

Figure 4. Flow field at the second operating condition in Table 2; from top to bottom: the tangential velocity, the axial velocity, the radial velocity and the static pressure; (a) classifier without baffles; (b) classifier with baffles.

Without baffles inside the classifier, the flow profile with the three regions described by Toneva [7] was formed. The tangential velocity reached its maximum on the inner edge of the classifier cage. Moreover, the forced vortex inside the classifier was strongly impressed, and the tangential velocity dropped sharply with decreasing radius. This vortex led to a huge reduction in static pressure. In addition, the flow formed a vortex between the classifier blades. Behind the trailing blades, the radial velocity was shown by blue zones, indicating a radial velocity outwards; in front of the leading blades the air flows inwards, indicated by red. Positive axial velocities occurred mainly outside of the static blades, between the static blades and the classifier, and inside the classifier. In the center of the classifier, negative axial velocities were present. The flow baffles changed the typical flow profile, which resulted in a slight reduction of the maximum tangential velocity. However, negative tangential velocities inside the classifier and negative axial velocities could be minimized or avoided. This significantly reduced the pressure loss in the classifier with baffles. A qualitative indication of the reduction in pressure drop is provided by Tables 4 and 5. There, the pressure loss in the experiment and the simulation is presented for both classifiers and operating conditions. The tables show, firstly, that, as already mentioned above, the pressure drop in the simulation was always lower than in the experiment and, secondly, that the pressure drop for both operating conditions could be significantly reduced by using flow baffles. For the first operating condition, at low classifier speed and high solid loading and volume rate, the reduction was around 12%; at the second operating condition, at higher classifier speed and lower solid loading and lower volume flow, the pressure reduction was around 26%. Thus, thirdly, the tables show that the reduction in pressure drop depends on the operating conditions: the higher the classifier speed and the lower the volume flow, the greater the effect of the baffles. Fourthly, the simulation predicted the pressure loss reduction very well. In addition to the pressure loss, the classification efficiency is an important parameter to evaluate the classifier performance. In the simulations, particles in the size range of 1-500 µm were tracked. The number of particles varied in the range of 100,000 to 1,000,000. The particle distribution in the feed as well as the number of tracked particles had only a minor influence on the classification efficiency in the classifier. Since the particle distribution reaching the classifier in the experiments after grinding is unknown, an exact validation of experiment and simulation was not possible.
Particles were fed between the classifier and the mill in the simulation, and their pathlines were recorded through the classifier. The tracking of a particle ended when it left the classifier through the air exit or was rejected by the classifier and sedimented towards the mill rollers. As an example, two different particle trajectories, for a coarse particle of 50 µm and a fine particle of 10 µm, for the classifier with twelve baffles and the second operating condition are shown in Figure 5. Figure 5. Particle trajectories for the second operating condition in Table 2 and the classifier with twelve baffles: blue 10 µm and red 50 µm. The particle trajectories refer to the absolute motion of a particle, which is why they also pass through rotating walls such as the classifier blades and baffles in Figure 5. As expected, the fine particle enters the fine material while the coarse particle is rejected several times at the classifier. The particle trajectories can be used to calculate the grade efficiency T(x), which describes what fraction of the feed ends up in the coarse material after classification. The grade efficiency therefore results in

T(x) = \frac{m_C\, q_{3,C}(x)}{m_{Feed}\, q_{3,Feed}(x)},

where m_C is the coarse material mass, m_Feed is the feed material mass, q_{3,C} is the particle size density distribution of the coarse material and q_{3,Feed} is the particle size density distribution of the feed. Figure 6 shows the classification efficiency curves for both operating conditions and classifier types in the simulation. At the second operating condition, a finer separation is achieved because the speed of the classifier is much higher and therefore greater centrifugal forces act on the particles.
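For illustration, the following minimal C++ sketch (not the code used in this work; the bin structure and variable names are assumed for the example) evaluates the grade efficiency per size bin from the tracked particle data. Since q_3 of a bin is the bin mass divided by the respective total mass, the expression above reduces to the coarse mass fraction of each bin, which is what the loop computes.

#include <vector>

// Hypothetical per-bin bookkeeping of the DPM tracking results: for each size
// class x_j, the mass fed to the classifier and the mass rejected to the
// coarse stream.
struct SizeBin { double x; double massFeed; double massCoarse; };

// Grade efficiency T(x_j) = m_C * q3_C(x_j) / (m_Feed * q3_Feed(x_j)).
// Because q3 of a bin is its mass divided by the respective total mass, the
// totals cancel and T(x_j) is simply the coarse mass fraction of the bin.
std::vector<double> gradeEfficiency(const std::vector<SizeBin>& bins) {
    double mFeed = 0.0, mCoarse = 0.0;
    for (const SizeBin& b : bins) { mFeed += b.massFeed; mCoarse += b.massCoarse; }

    std::vector<double> T;
    T.reserve(bins.size());
    for (const SizeBin& b : bins) {
        double q3Feed   = b.massFeed   / mFeed;
        double q3Coarse = b.massCoarse / mCoarse;
        T.push_back((mCoarse * q3Coarse) / (mFeed * q3Feed));   // == massCoarse / massFeed
    }
    return T;
}

Plotting T over x for both classifier types yields curves of the kind shown in Figure 6, including the fish-hook behaviour at small x when fine particles are dragged into the coarse stream.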
It is also noticeable that at low classifier wheel speed, i.e., the first operating condition, the classifier without baffles separates more effectively, while at high speed the grade efficiencies of the two separators are almost the same. Furthermore, the fish-hook effect is more pronounced in the classifier without baffles than in the classifier with baffles at the second operating point. In the experiments, only the fines were available for analysis. Therefore, two values were measured to evaluate the grade efficiency in the experiments. Firstly, the Blaine value was used, which is a standardized measure for the degree of fineness of a material and is determined via the specific surface area [19]. The higher the Blaine value, the finer the material and thus the better the separation of the classifier. Secondly, the residue on the sieve was compared between the classifiers with and without baffles. Tables 7 and 8 summarize the measured values for the two classifier types and operating conditions. Figure 6. Classification efficiency curves of the two types of classifier at two operating conditions in the simulation.

Table 7. Blaine value and residue on sieve in the experiment at the first operating condition.
                               Without Baffles    With Baffles
Blaine value (cm²/g)           3680               3580
Residue on 125 µm sieve (%)    8.9                9.5

Table 8. Blaine value and residue on sieve in the experiment at the second operating condition.
                               Without Baffles    With Baffles
Blaine value (cm²/g)           9005               9275
Residue on 32 µm sieve (%)     0.7                0.4

The experiments confirmed the results obtained from the simulation for the grade efficiency curves.
At the first operating condition, the classifier without baffles separated better, which is why the Blaine value was higher and the residue on the 125 µm sieve lower. At the second operating condition, the classifier with baffles achieved the better separation result. The following section therefore focuses on the improved classification performance of the classifier with flow baffles at higher speeds. Equation (5) describes the theoretical cut size of a spherical particle which has no interaction with other particles. The theoretical cut size x_{t,Th} results from the balance of centrifugal force and drag force and is

x_{t,Th} = \sqrt{\frac{18\,\eta\, v_r\, r}{\rho_P\, v_\varphi^2}},   (5)

where η is the dynamic viscosity of the air, v_r is the radial velocity, r is the radius, ρ_P is the density of the solid particle and v_φ is the tangential velocity. Therefore, the cut size primarily depends on the radial and tangential velocities in the classifier. Classification takes place between the classifier blades. Figure 7 shows the radial velocity profile between two classifier blades. The classifier rotates clockwise; negative velocities point inwards and positive velocities point outwards. A vortex forms behind the leading blade, which constricts the inward radial transport and significantly increases the radial velocity in the zones through which particles reach the inside. This effect increases with increasing classifier speed. Therefore, higher classifier speeds increase not only the tangential velocity but also the radial velocity. However, since the tangential velocity has a quadratic influence on the cut size, its influence is in general greater. Many studies have already investigated the flow profile and the influence of several parameters such as classifier speed or flow rate [7,9]. Therefore, this work focused exclusively on the effects of the flow baffles.
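As a cross-check of the cut-size expression quoted above, the force balance behind it can be written out explicitly. The sketch below assumes an ideal spherical particle in the Stokes drag regime and neglects the air density against the particle density, so prefactors may differ from the exact form of Equation (5) used in this work:

\frac{\pi}{6}\, x^{3}\, \rho_{P}\, \frac{v_{\varphi}^{2}}{r} \;=\; 3\pi\,\eta\, x\, v_{r}
\quad\Longrightarrow\quad
x_{t,\mathrm{Th}} \;=\; \sqrt{\frac{18\,\eta\, v_{r}\, r}{\rho_{P}\, v_{\varphi}^{2}}}\,.

This makes the qualitative statements above explicit: the cut size scales with the square root of v_r and inversely with v_φ, so the tangential velocity enters quadratically and dominates the trend with classifier speed.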
Figure 4 shows that the installation of flow baffles reduced the tangential velocity inside the classifier, which also slightly reduced the tangential velocity between two classifier blades and thus deteriorated the classification efficiency of the classifier. With increasing speed, however, this effect decreased, since the relative reduction of the tangential velocity became smaller and smaller. The right-hand side of Figure 7 shows that the baffles had a positive effect on the inward constriction of the radial transport; for comparison, the radial velocity between two classifier blades (left-hand side of Figure 7) is plotted for both classifier types. This effect improves the classification efficiency of the classifier and becomes more important at higher classifier speeds. Both effects are superimposed and explain why the classifier without baffles separates better at lower speeds, while the classifier with flow baffles has advantages in classification at high speeds. Discussion In this paper, the air flow inside a classifier was numerically investigated by applying Computational Fluid Dynamics (CFD). The DPM was employed to predict the motion of particles. Two different geometries at two operating conditions were compared to evaluate the effect of flow baffles within the classifier on the flow profile, pressure loss and classification efficiency. For validation, a comparison with experimental data was carried out; the experimental results confirm those from the simulation. The results allow the following conclusions to be drawn. Firstly, flow baffles inside the classifier break up the vortex formation associated with an abrupt drop in tangential velocity inside the classifier. Therefore, a significant reduction in pressure drop is achieved. The reduction of the pressure loss depends on the operating conditions and reaches about 25% at high classifier speeds. Secondly, flow baffles also influence the grade efficiency of the classifier, as they have an impact on the radial and tangential velocities between the classifier blades. Whether they improve or impair the separation is therefore a function of the operating conditions: at low classifier speeds, the classifier without flow baffles separates better; at higher classifier speeds, the classifier with flow baffles separates better. Finally, the use of flow baffles depends on the operating conditions but is particularly suitable for high classifier speeds, which are usually associated with high energy costs. CFD is a suitable tool for such investigations because it is cost-effective and time-saving and provides a sufficiently accurate representation of the characteristic values. With the help of numerical simulation, further investigations into the number and design of the flow baffles should be carried out in order to further improve the flow processes in the classifier and reduce operating costs.
8,313.2
2021-07-15T00:00:00.000
[ "Engineering" ]
Engine Using Intel Many Integrated Core Architecture. This paper describes the acceleration of the most computationally intensive kernels of the Blender rendering engine, Blender Cycles, using the Intel Many Integrated Core architecture (MIC). The proposed parallelization, which uses OpenMP technology, also improves the performance of the rendering engine when running on multi-core CPUs and multi-socket servers. Although GPU acceleration is already implemented in Cycles, its functionality is limited. Our proposed implementation for the MIC architecture contains all features of the engine with improved performance. The paper presents a performance evaluation for three architectures: a multi-socket server, a server with a MIC accelerator (Intel Xeon Phi 5110P) and a server with a GPU accelerator (NVIDIA Tesla K20m). Introduction The progress in High-Performance Computing (HPC) plays an important role in science and engineering. Computationally intensive simulations have become an essential part of the research and development of new technologies. Many research groups in the area of computer graphics deal with problems related to the extremely time-consuming process of image synthesis of virtual scenes, also called rendering (Shrek Forever 3D required about 50 million CPU rendering hours [CGW]). Besides the use of HPC clusters for speeding up computationally intensive tasks, hardware accelerators are being extensively used as well. They can further increase the computing power and efficiency of HPC clusters. In general, two types of hardware accelerators can be used: GPU accelerators and MIC coprocessors. The new Intel MIC coprocessor is composed of up to 61 low-power cores and is efficient in terms of both energy and performance when compared to multi-core CPUs. It can be used as a standalone Linux box or as an accelerator to the main CPU. The peak performance of the top-of-the-line Xeon Phi is over 1.1 TFLOPS (10^12 floating-point operations per second) in double precision and over 2.2 TFLOPS in single precision. The MIC architecture can be programmed using both shared memory models such as OpenMP or OpenCL (which provides compatibility with codes developed for GPUs) and distributed memory models such as MPI. The implementation presented in this paper has been developed and tested on the Anselm Bullx cluster at the IT4Innovations National Supercomputing Centre in Ostrava, Czech Republic. The Anselm cluster is equipped with both Intel Xeon Phi 5110P and Tesla K20m accelerators [AHW]. For production runs, once the algorithm is fully optimized, the new IT4Innovations Salomon system will be used. Salomon will be equipped with 432 nodes, each with two Intel Xeon Phi 7120P coprocessors [SHW]. Rendering Equation and Monte Carlo Path-Tracing Method In 1986 Kajiya first introduced the rendering equation in computer graphics [KAJ]. One common form of this equation is

L_o(x, ω_o) = L_e(x, ω_o) + ∫_Ω f_r(x, ω_i, ω_o) L_i(x, ω_i) (ω_i · n) dω_i,   (1)

where ω_o is the direction of the outgoing ray, ω_i is the direction of the incoming ray, L_o is the outgoing spectral radiance from point x in direction ω_o, L_e is the emitted spectral radiance from point x in direction ω_o, Ω is the unit hemisphere around the normal vector n centered at x, over which we integrate, L_i is the spectral radiance arriving at x from direction ω_i, f_r(x, ω_i, ω_o) is the bidirectional reflectance distribution function (BRDF) at point x from direction ω_i to direction ω_o, and ω_i · n is the cosine of the angle between ω_i and the surface normal. The rendering equation is fundamental to all image-synthesis algorithms based on the ray-tracing principle, such as Path-Tracing.
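To make the connection between Eq. (1) and the Path-Tracing method concrete, the following short C++ sketch estimates the hemisphere integral by Monte Carlo sampling; it is an illustrative toy (uniform hemisphere sampling, abstract callbacks), not an excerpt from Blender Cycles.

#include <functional>

// One-bounce Monte Carlo estimate of Eq. (1):
//   L_o = L_e + (1/N) * sum_j f_r(w_j) * L_i(w_j) * (w_j . n) / pdf(w_j),
// with directions w_j drawn uniformly over the hemisphere, pdf = 1/(2*pi).
struct Vec3 { double x, y, z; };

double estimateOutgoingRadiance(
    int samples,
    double emitted,                                        // L_e(x, w_o)
    const std::function<Vec3()>& sampleHemisphere,         // draws w_i around n
    const std::function<double(const Vec3&)>& brdf,        // f_r(x, w_i, w_o)
    const std::function<double(const Vec3&)>& incoming,    // L_i(x, w_i)
    const std::function<double(const Vec3&)>& cosTheta)    // w_i . n
{
    const double pdf = 1.0 / (2.0 * 3.14159265358979323846);
    double sum = 0.0;
    for (int j = 0; j < samples; ++j) {
        Vec3 wi = sampleHemisphere();
        sum += brdf(wi) * incoming(wi) * cosTheta(wi) / pdf;
    }
    // The estimator converges as O(1/sqrt(samples)); in Path-Tracing, L_i is
    // itself estimated recursively along the sampled ray.
    return emitted + sum / samples;
}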
Solving the rendering equation is computationally expensive in general. The most common solution methods are based on numerical estimation of the integral (1). One of the most commonly used methods for the numerical solution of equation (1) is the Monte Carlo (MC) method. More information about this method can be found in the dissertation thesis of Lafortune [LAF]. MC is also employed in areas other than computer graphics, e.g. in statistics [GRE]. Quasi-Monte Carlo and Sobol's Sequence One of the drawbacks of MC is its slow convergence, which is O(1/√N), where N is the number of simulations. For this reason, techniques that speed up the convergence and the effectiveness of the whole computation have been developed. The deterministic form of the Monte Carlo method is called quasi-Monte Carlo (QMC) and its convergence rate is O((log N)^s / N), where s is the dimension of the integral. In order for O((log N)^s / N) to be smaller than O(1/√N), s needs to be small and N needs to be large [LEM]. In the QMC method pseudo-random numbers are replaced by quasi-random numbers that are generated by deterministic algorithms. A typical property of such numbers is that they fill the unit square more uniformly. This property is called low-discrepancy (LD). More details about it can be found in [MOR], [NIE]. Let x_1, ..., x_N be a sequence of points in the s-dimensional space [0, 1]^s; then the approximation is expressed as

∫_{[0,1]^s} f(u) du ≈ (1/N) Σ_{i=1}^{N} f(x_i).

Well-known types of LD sequences are Halton's, Faure's, Sobol's, Wozniak's, Hammersley's and Niederreiter's sequences. Blender uses Sobol's sequences [JO3], [JO8], since they fit its needs: they run well on the CPU and accelerators, support high path depth and allow adaptive sampling. Due to these properties, Sobol's sequences can be used for progressive sampling. Unlike Halton's sequence, which can also be used for progressive sampling, Sobol's sequences are not correlated in higher dimensions and so do not need to be scrambled. The algorithm for generating Sobol's sequences is explained in [BRA]. Any number from any dimension can be queried without per-path precomputation. Each dimension has its own sequence, and when rendering the i-th pass, Blender gets element i from the sequence. A sequence is defined by a 32 × 32 binary matrix, and getting the i-th element in the sequence corresponds to multiplying the matrix by i. With binary operations this ends up being quite quick. These matrices are not as simple to compute, but the data to generate them up to dimension 21201 is available online. Blender currently uses 4 dimensions initially to sample the subpixel location and lens, and 8 numbers per bounce, so that limits us to a maximum path depth of 2649 [BLS]. Implementation and Parallelization for MIC Accelerator The implementation presented in this paper is based on the source code of Blender version 2.73, which has been obtained from [BLE]. The code of Blender is compiled using the GNU compiler gcc/4.9.0, and the library running on the MIC coprocessor was compiled using the intel/14.0.1 compiler. In order to enable the newly developed OpenMP- and MIC-accelerated rendering engines, a new OMP computing device has been added to the GUI settings of Blender. Parallelization for Multi-core CPUs with Shared Memory The core of the Cycles engine implements the quasi-Monte Carlo method with Sobol's sequence. The rendered scene is represented as a C++ global variable. If GPU or MIC acceleration is used, this global variable has to be transferred to the accelerator before rendering (solving the rendering equation (1)) is started.
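Returning to the Sobol' lookup described earlier in this section, the binary matrix-vector product can be sketched as below; the direction numbers used here are the trivial first-dimension ones (assumed for the example, reproducing the base-2 radical inverse), whereas Blender loads precomputed tables for the higher dimensions.

#include <cstdint>
#include <cstdio>

// The i-th Sobol' sample of one dimension is the GF(2) product of a 32x32
// binary matrix with the index i, i.e. the XOR of the matrix columns
// ("direction numbers") selected by the set bits of i.
static uint32_t sobolSample(uint32_t index, const uint32_t directions[32]) {
    uint32_t result = 0;
    for (int bit = 0; index != 0u; index >>= 1u, ++bit)
        if (index & 1u)
            result ^= directions[bit];
    return result;
}

int main() {
    // Direction numbers of the first dimension: column j has only bit (31-j)
    // set, so the sample equals the bit-reversed index (van der Corput).
    uint32_t dir[32];
    for (int j = 0; j < 32; ++j) dir[j] = 1u << (31 - j);

    for (uint32_t i = 0; i < 8; ++i)
        std::printf("%u -> %.5f\n", i, sobolSample(i, dir) / 4294967296.0);
    return 0;
}

Because each dimension only needs its own 32 direction numbers, any element of any dimension can be computed in a few bitwise operations, which is what makes the per-pass lookup described above cheap.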
The synthesized image of size x_r × y_r is decomposed into tiles of size x_t × y_t (see Fig. 1). The way the rendering algorithm itself is executed inside a tile differs for each computing device. We compare the performance of the following computing devices: CPU (POSIX threads for the CPU only; this is the original computing device used by Blender Cycles), OpenMP (for CPU and MIC; the computing device newly developed by our group), OpenCL (for CPU, MIC and GPU) and CUDA (for GPU only). Original Implementation. The original computing device from Blender Cycles uses POSIX threads for parallelization. Parallelization is done in the following way: the synthesized image of resolution x_r × y_r is decomposed into tiles of size x_t × y_t. Each tile is then computed by one POSIX thread/one CPU core. The situation is shown in Fig. 1. Fig. 1. The decomposition of the synthesized image with resolution x_r × y_r into tiles of size x_t × y_t by the original implementation. One tile is computed by one POSIX thread on one CPU core for x_t × y_t pixels. This is an example of CPU16 - see Section 3. OpenMP Parallelization of Intra-Tile Rendering for CPU and MIC Architecture The newly proposed OpenMP parallelization is implemented in the OMP computing device. A hierarchical approach is used where each POSIX thread forked by the Cycles renderer is further parallelized using OpenMP threads. In order to provide enough work for each core of the MIC coprocessor, and to fully utilize the CPU hardware, we need to decompose the larger tile x_t × y_t into smaller sub-tiles x_o × y_o (this is an example of OMP15MIC or OMP8CPU2MIC - see Section 3). OpenMP parallelizes the calculation loop across the sub-tiles of a tile in such a way that a single OpenMP thread processes a single sub-tile. An example is shown in Fig. 2. Fig. 2. The decomposition of the synthesized image with resolution x_r × y_r into tiles of size x_t × y_t and into sub-tiles of size x_o × y_o by the OpenMP implementation. One tile x_t × y_t is computed by eight threads using eight cores. Each sub-tile x_o × y_o is computed by one OpenMP thread. This is an example of OMP8CPU2 - see Section 3. The computation time of each sub-tile is different. To achieve effective load balancing we have to adjust the workload distribution by setting the OpenMP runtime scheduler to schedule(dynamic, 1), and the values of x_o and y_o have to be set to small numbers (e.g. x_o = 32 and y_o = 32). This setup produces the most efficient work distribution among processor cores and therefore minimizes processing time. As of now, POSIX threads are used to control the tiles, but in our future work we would like to use MPI processing for computer clusters, so that a single image can be processed by multiple computers or computing nodes of an HPC cluster. For this scenario the POSIX threads will be replaced by MPI processes. The acceleration of the rendering process using Intel Xeon Phi accelerators is built on a similar basis as the approach for multi-core CPUs. In this case one POSIX thread is used to control a single MIC accelerator. This means that one accelerator works on a single tile and multiple accelerators can be used in parallel to render multiple tiles. The architecture of the Intel Xeon Phi is significantly different from regular Xeon processors. It is equipped with 61 processing cores, where each core can run up to 4 threads, which gives 244 threads in total.
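A schematic of the intra-tile OpenMP loop described in this subsection is given below; function and parameter names are placeholders, not the actual Cycles interfaces. The MIC-specific considerations continue in the next paragraph.

#include <omp.h>
#include <algorithm>

// Hypothetical per-sub-tile rendering kernel; in the real engine this would
// evaluate the rendering equation for every pixel of the sub-tile.
void render_subtile(int x0, int y0, int width, int height) {
    (void)x0; (void)y0; (void)width; (void)height;   // stub for the sketch
}

// One POSIX thread owns a tile of xt x yt pixels and splits it into sub-tiles
// of xo x yo pixels; each sub-tile is processed by one OpenMP thread.
void render_tile_omp(int tileX, int tileY, int xt, int yt, int xo, int yo) {
    const int nx = (xt + xo - 1) / xo;    // sub-tiles in x
    const int ny = (yt + yo - 1) / yo;    // sub-tiles in y

    // schedule(dynamic, 1): an idle thread immediately grabs the next
    // sub-tile, balancing the strongly varying per-pixel cost.
    #pragma omp parallel for collapse(2) schedule(dynamic, 1)
    for (int j = 0; j < ny; ++j)
        for (int i = 0; i < nx; ++i)
            render_subtile(tileX + i * xo, tileY + j * yo,
                           std::min(xo, xt - i * xo),    // clip at the tile border
                           std::min(yo, yt - j * yo));
}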
This means that in order to fully utilize the hardware and to provide enough work for each core to enable load balancing, each tile has to be significantly larger than in the case of CPU processing (see Fig. 3). In addition, the computation of each pixel takes a different time. To enable load balancing using the OpenMP runtime engine we have to set the scheduler to schedule(dynamic, 1). Using a coprocessor with separate memory also requires data transfer between the CPU memory and the memory of the coprocessor. Blender uses a complex C++ structure that represents the entire scene (KernelGlobals); therefore, in order to transfer the data, it has to be converted to a binary array (mic_alloc_kg()). When the computation ends, the allocated memory of the coprocessor is freed (mic_free_kg()). Parallelization by OpenCL and CUDA The OpenCL computing device can be used for multi-core CPUs as well as for MIC or GPU accelerators. The scene decomposition is similar to the OpenMP processing for MIC. Only one POSIX thread with a large tile is used for optimal performance, for the reasons discussed previously. The parallelization is based on task parallelism, where a separate task is created for computing each pixel (see Fig. 4). The original code had to be modified in order to run on Intel Xeon Phi devices. Unfortunately, rendering with textures did not work on the MIC coprocessor. Due to this, shading and advanced shading had to be disabled. We do not intend to fix this malfunction, because OpenCL is not a target platform for us (OpenCL is limited like CUDA and is hard to use for development and optimization). As the OpenCL and CUDA programming models are very similar, so is the decomposition. There is again one POSIX thread for the main computation per accelerator. On the GPU, the rendering kernel uses a single CUDA thread to render a single pixel of a tile. The GPU needs thousands of threads for good performance; this is the reason why we need the large tile (see Section 3). The GPU acceleration is a part of the original render engine. When compared to our proposed approach it has limited functionality: the maximum number of individual textures is limited, Open Shading Language (OSL) is not supported, Smoke/Fire rendering is not supported, the GPU has smaller memory than the MIC, and the GPU does not support cooperative rendering with the CPU. We need all the features of the CPU implementation - that is the reason why combining the CPU and GPU is not useful for us. Benchmark In this paper two scenes are used for all benchmarks: one scene with and one without textures. The first scene, with a Tatra T87, has 1.2 million triangles and uses HDRI lighting (see Fig. 5). The other scene, with bones, was generated from CT images and has 7.7 million triangles (see Fig. 5). It does not use textures. This scene is used to evaluate the performance of the OpenCL implementation for Xeon Phi, as that implementation does not support textures. The benchmark was run on a single computing node of the Anselm supercomputer equipped with two Intel Sandy Bridge E5-2470 CPUs (used by the CPUn and OMPn engines - n is the number of cores used) and one Intel Xeon Phi 5110P (MIC) or one NVIDIA Tesla K20m coprocessor (GPU). In the next test we exploited the CPU-MIC hybrid system (OMP15MIC). Two large tiles are computed at the same time (2 POSIX threads were created, one for the CPU and one for the MIC). The first tile is computed by the CPU using 15 OpenMP threads and the other tile is computed by the MIC coprocessor (one core is used to manage the MIC); see the results in Tables 1 and 4. Another test we performed was a combination of OMP8 and CPU2.
That means the computation of two tiles was parallelized (2 POSIX threads were created) and each tile was computed by 8 OpenMP threads; see the results in Tables 1 and 4. We also combined CPU2 (2 tiles, each with one POSIX thread), OMP8 (each POSIX thread runs 8 OpenMP threads) and MIC; see the results in Tables 1 and 4. This combination is designed for systems with multiple MIC accelerators (this will be the architecture of Salomon - see Section 5). The division of the image into tiles has to be made carefully: note the long runtime when CPU16 is used and the tile size is 1024 × 1024. In this example 12 cores are idle (see Tables 1 and 4). Benchmark 1: Tatra T87 For this scene a resolution of 2048 × 2048 px was used. First we compared the calculation for different tile sizes at the same resolution (see Table 1). In the next two tests the resolution and the number of samples were changed (see Tables 2 and 3). Benchmark 2: Bones The original implementation of OpenCL does not work on the Intel Xeon Phi. The problem is with shading and advanced shading, which had to be disabled. For this reason, we created a new scene just for OpenCL testing. Table 4 compares runtimes (in minutes) for different numbers of tiles. Table 5 shows the results using OpenCL. Conclusion Both benchmarks show that the best performance is obtained when the combination of CPU and MIC coprocessor (OMP15MIC) is used. We can see from the results that the MIC coprocessor behaves like a 6-core CPU. On the other hand, the MIC accelerator adds only a 1.37× speedup over the CPU implementation, which is less than expected. We would expect at least the same performance boost as in the case of GPU accelerators. The reason why the Intel MIC does not provide the expected performance boost is insufficient vectorization in the code that evaluates the rendering equation (the 512-bit registers, able to hold 16 floats, are currently wasted). We also show that the combination of full CPU utilization and MIC (OMP15MIC) has advantages over GPU parallelization. The advantages are as follows:
- GPU parallelization does not use all CPU cores;
- the GPU does not offer all features of our MIC implementation, which has an identical feature set to the original CPU implementation;
- the combination of 2 POSIX threads, each thread running on one socket, and 2 MICs is designed for systems with multiple MIC accelerators (this will be the architecture of Salomon - see Section 5).
Future Work In future work we will focus on vectorization of the code to improve its performance on Intel Xeon Phi devices. We will also modify the existing implementation of the Path-Tracing method using MPI technology. Our new benchmarks will target the new supercomputer Salomon, which will be equipped with two Intel Xeon Phi coprocessors per node.
4,004
2015-09-24T00:00:00.000
[ "Computer Science" ]
Note on initial conditions for small-field inflation We show that initial conditions for small-field inflation can be determined quantum mechanically by introducing a suitable flattened region in the scalar potential. The inflaton is then driven towards the slow-roll attractor solution exponentially fast, desensitising inflation from the initial velocity and partially evading the so-called overshoot problem. We give an explicit example in the context of hilltop inflation by introducing an ultra slow-roll plateau around the maximum of the potential and analyse its effect on the phase-space trajectories. Introduction Inflation is the dominant solution to the shortcomings of the standard cosmological model [1][2][3][4][5], providing us with a theoretical explanation of the observed Gaussian, adiabatic and scale-invariant Cosmic Microwave Background (CMB) temperature fluctuations. A fundamental aspect of the inflationary dynamics is the problem of initial conditions [6]: how likely is it for inflation to begin in a given potential landscape? Among the questions regarding the initial state of the Universe lies the issue of fine-tuning the phase-space patch from which a successful slow-roll trajectory begins. If this patch is too small compared to the available phase space, then the model under consideration is plagued by the so-called overshoot problem [7]. Inflationary models can be roughly divided into the two broad categories of large-field and small-field models, depending on the distance in field space that the inflaton has to trace during its slow roll from an initial value, associated with the period when the modes responsible for the CMB anisotropies exit the Hubble horizon, to a final value signalling the onset of reheating; that is, along the 60 e-folds required to solve the horizon and flatness problems (or somewhat fewer, down to about 50, when the scale of inflation is at lower energies, down to the TeV range). The fine-tuning of the initial phase-space patch is then easy to treat in large-field models like chaotic inflation: no matter how rare a causally connected flat patch is, if there is one - and there must be one, statistically speaking - then it will inflate and result in our observable Universe. Attractive as they might be from this perspective, large-field models face other issues. For example, the framework used to address inflation, General Relativity (GR) itself, should be viewed as an effective field theory, with the Planck mass M_Pl ≃ 2.4 × 10^18 GeV the ultraviolet (UV) cutoff marking the quantum gravity regime. The fact that the field has to trace a trans-Planckian distance thus has implications for the validity of inflation as an effective process. Recently, this has led to a quite active discussion on the possibility of realising inflation in a quantum gravity framework (the swampland program; for reviews see [8,9]). On the other hand, in small-field inflation, where the coarse-graining of spacetime into causally connected patches is less refined, it becomes less probable for the right initial condition to be met and consequently for inflation to begin. A realisation of this is the overshoot problem in hilltop inflation (or, in general, inflection-point inflation): if the kinetic energy is too large or the initial inflaton value is not close enough to the maximum, then inflation lasts for much less than the required 60 e-folds. In this note, we focus on small-field models and consider a flattened region of the potential around the point where CMB scales exit the horizon.
Along this ultra flat patch, the classical dynamics is driven by the friction due to the expanding spacetime, which eventually forces the field to a halt, no matter how large the kinetic energy is. This shares many features with the plateau models studied in [10]. However, here we extend the idea and consider a plateau region as a generic module of any small-field model. We show that as the velocity diminishes, at some point the contribution of the long-wavelength quantum fluctuations becomes dominant and one can set initial conditions quantum mechanically. This is an initial condition in the sense that, given a long enough plateau, all the previous dynamics is rendered irrelevant, since the field eventually becomes dominated by the fluctuations. Restricting the initial field value to be close to the end of this plateau allows us to compute the root-mean-square values of the fluctuations via the standard quantisation procedure around de Sitter spacetime, which uniquely fixes the initial velocity. Considering the phase space from this point onwards, there is no overshoot. However, depending on the ratio of the kinetic to potential energy, the field might have travelled a trans-Planckian distance until the velocity diminishes enough so that the long-wavelength fluctuations take over, which is where we set initial conditions for slow-roll inflation. By adding such a plateau module, either far away from the end of inflation or on the hilltop, we thus turn a small-field model into a large-field one, trading the overshoot problem for a trans-Planckian displacement. There are several mechanisms leading to asymptotic plateaux or flattened hilltops, which we briefly outline. Alternatively, in hilltop scenarios, starting at the maximum of the potential with classically zero speed may be motivated by symmetry, if at high temperatures this point is a symmetric minimum which became a maximum after spontaneous symmetry breaking occurring in a different field direction when the temperature cooled down. In this case, our calculation of initial conditions is still valid. Before passing to the main subject let us first set the notation that we will use in the rest of the paper by discussing a few general points. Preliminaries/Notation The background FLRW spacetime, sourced by the homogeneous inflaton field φ̄(t), is described by a line element given by

ds² = a²(τ)(−dτ² + δ_ij dx^i dx^j),

where τ ∈ (−∞, 0) is the conformal time, defined as a(τ)dτ = dt. The Hubble expansion rate is given by H ≡ ȧ/a and the e-fold number is defined as dN ≡ H dt, i.e. N = ln a up to a constant. Depending on the situation, and since there is a one-to-one map among them, we will use the cosmic time (t), the conformal time (τ) and the e-fold number (N) as time variables interchangeably (where there is no confusion). We denote cosmic-time derivatives with overdots and derivatives with respect to the e-fold number with a prime. Moreover, when referring to specific points in field space - corresponding to specific moments - we will further replace the time variable with an index; to summarise, for example, φ̄(N_i) ≡ φ̄_i and φ̄′(N_i) ≡ φ̄′_i. Organisation of the paper We begin in Sec. 2 by describing the problem of initial conditions in small-field hilltop inflation and discuss our proposal to compute the initial conditions quantum mechanically around de Sitter space (subsection 2.1). A possible implementation is by introducing an extended flat region around the maximum that could be a consequence of effective interactions of the inflaton with other fields present in the underlying fundamental theory of gravity at high energies (subsection 2.2). In Sec.
3, we study the implications of such a flat region resulting in an ultra slow-roll phase (subsection 3.1) and perform a quantum computation of the initial conditions for inflation at horizon exit, which should be somewhat earlier than the beginning of the usual classical slow-roll trajectory (subsection 3.2). In Sec. 4, we present an explicit example in the hilltop framework and work out its implications and predictions. Finally, we conclude in Sec. 5. Overshoot in small-field models In order to exemplify the problem, we will consider a well-studied small-field model, hilltop inflation [11]. In this scenario the potential has a maximum V_m, located at φ̄ = φ̄_m, around which we may expand the potential as

V(φ̄) ≃ V_m + (1/2) V″(φ̄_m) (φ̄ − φ̄_m)² + …,   (2.1)

with V″(φ̄_m) < 0 and η ≡ M_Pl² V″/V. Focusing on the initial dynamics of the field we may neglect the higher-order terms in the potential. We further assume that |η| ≪ 1 so that the field slowly rolls down the potential for at least 60 e-folds. The dynamics is thus specified, in cosmic time, by

φ̄̈ + 3Hφ̄̇ + ∂V/∂φ̄ = 0   (2.2)

and

3M_Pl² H² = φ̄̇²/2 + V(φ̄).   (2.3)

Being second order in derivatives, the system classically requires two independent initial conditions (φ̄_i, φ̄̇_i) to specify a solution. However, since the potential is such that the field is slowly rolling and the expansion of spacetime induces friction via the term 3Hφ̄̇, the initial velocity is irrelevant, or, put in other words, the system flows to an attractor solution for which |φ̄̈| ≪ H|φ̄̇| and H² ∝ V. Nevertheless, if |φ̄̇_i| is large and/or φ̄_i is slightly displaced from the maximum, the system might never reach this attractor within the 60 observable e-folds. This is the so-called overshoot problem [7] which plagues small-field inflation models. For instance, a numerical evaluation of (2.2) and (2.3) with (φ̄_i, φ̄̇_i) = (φ̄_m, 10^-4 M_Pl²) reveals that in this case the field spends only 25 e-folds in the slow-roll phase, which is much less than the 60 e-folds required for successful inflation. Invoking symmetries One way to evade this is to assume that the aforementioned symmetry at high energies classically selects the initial condition (φ̄_i, φ̄̇_i) = (φ̄_m, 0), i.e. the field is lying at the symmetric point with vanishing initial velocity. This configuration leads to a dS background with φ̄(N) = φ̄_m and 3M_Pl² H² = V_m. During inflation the homogeneous field undergoes quantum perturbations, resulting in a weakly inhomogeneous field which sources the observed CMB temperature anisotropies and eventually the large-scale structure of the Universe. In the case under consideration, the long-wavelength Fourier modes of these fluctuations will spontaneously destabilise the initial background configuration. Since the only energy scale in the problem is the dS radius, set by H, the average position/velocity of the long modes will induce a quantum initial condition for the background, of order the dS temperature T_dS = H/2π. Now φ̄̈/(Hφ̄̇) ∼ e^{-3N}, such that after a couple of e-folds we have φ̄̈/(Hφ̄̇) ≪ 1 and the attractor is reached. Solving the system (2.2) and (2.3) numerically we can verify that indeed in this case the field spends around 60 e-folds slowly rolling before it reaches the minimum of the potential at zero energy, where the neglected terms become relevant, stabilising the potential and triggering a reheating phase. However, in the most general situation where inflation is embedded in some ultraviolet framework, there might be more energy scales, and hence, from a bottom-up point of view, the initial velocity should be considered as a free parameter setting the UV cutoff of the effective field theory description, with an upper bound given by the Planck scale.
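The overshoot behaviour described above can be reproduced with a few lines of code. The C++ sketch below (illustrative parameter values; the plateau height, the step size and the bookkeeping criterion are assumptions, not taken from the figures of this work) integrates the exact background equation in e-folds for the quadratic hilltop expansion and counts the e-folds of accelerated expansion; for large initial velocities the count falls well short of the required 60.

#include <cmath>
#include <cstdio>

// Exact background equation in e-folds (units M_Pl = 1):
//   phi'' = -(3 - eps) * (phi' + V'(phi)/V(phi)),  eps = phi'^2 / 2,
// for the hilltop expansion V = V_m * (1 - |eta| * phi^2) around phi_m = 0.
int main() {
    const double eta = 0.02;      // |eta|, curvature of the hilltop
    const double Vm  = 3.0e-12;   // plateau height (assumed, GUT-scale H ~ 1e-6)
    const double dN  = 1.0e-4;

    const double phidot0 = 1.0e-4;                              // initial cosmic-time velocity
    const double H0 = std::sqrt((0.5 * phidot0 * phidot0 + Vm) / 3.0);
    double phi  = 0.0;                                          // start at the maximum
    double dphi = phidot0 / H0;                                 // phi' = phidot / H

    double N = 0.0, inflating = 0.0;
    while (N < 200.0) {
        const double V = Vm * (1.0 - eta * phi * phi);
        if (V <= 0.0) break;                                    // expansion of the potential no longer valid
        const double dV  = -2.0 * eta * Vm * phi;
        const double eps = 0.5 * dphi * dphi;                   // Hubble slow-roll parameter
        if (eps < 1.0) inflating += dN;                         // accelerated expansion

        const double ddphi = -(3.0 - eps) * (dphi + dV / V);
        dphi += ddphi * dN;                                     // simple Euler step
        phi  += dphi  * dN;
        N    += dN;
    }
    std::printf("e-folds with eps < 1: %.1f (integration stopped at N = %.1f)\n", inflating, N);
    return 0;
}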
As mentioned in the previous paragraph, this brings us back to the overshoot problem. Initial decelerating phase In order to ameliorate this problem we will consider a situation where the field begins in a decelerating phase such that, whatever the initial velocity is, it (classically) gets dissipated away, rendering any previous dynamics irrelevant. At this point, when the background field lies nearly frozen at some position, we can consider the long-wavelength modes of the quantum fluctuation as setting initial conditions for the later evolution. There are a number of ways to slow down the inflaton field, like particle production [21,22] or the presence of a nearly flat region of the potential, which can also be attributed to interactions with heavy degrees of freedom [23] (see also [24] for recent developments in a supergravity framework). Here, we will consider the latter possibility: an extended ultra flat region around the point where we set initial conditions. A simple example of this sort, due to the presence of a heavy field (see footnote 3), is given in Ref. [23] (see also [25]): consider a light inflaton ϕ coupled to a heavy scalar χ. Assuming M_χ ≫ H, minimising along the heavy direction and considering the large-field region ϕ ≫ M_χ/g results in an effective single-field potential which is quite flat around the region where one sets the initial condition, far (at least 60 e-folds) from the point marking the end of inflation. Such potentials arise typically in brane inflation [26,27]. A similar effect of flattening the potential far away from the minimum can also be obtained in the Palatini formulation of GR, where the metric and the connection are treated as independent variables. Although pure gravity is equivalent in both formulations, this is not the case when scalar fields are present with non-minimal couplings, as well as in the presence of an R² term (see [28] for a review). In this case, it turns out that the Palatini formulation of a model described by a potential V(φ) leads to the creation of a plateau in the Einstein-frame potential at large field values, whose height is controlled by the R² coupling α, providing new possibilities for slow-roll inflation [29,30]. Indeed, such plateau models have been shown to be free of fine-tuned initial conditions [31]. Another mechanism leading to the flattening of the potential around extrema - hence appropriate for hilltop scenarios - is the running of the inflaton mass under renormalisation-group flow [32,33]. This induces a dependence of the curvature of the potential on the field value and, according to the UV embedding of inflation, it can be made small enough so that the dynamics around the maximum exhibits a decelerating phase. Finally, the higher-order terms neglected in (2.1) may have a similar effect. Footnote 1: The exact values, which we compute in the next section, are given in terms of the dS temperature T_dS = H/2π. Furthermore, there is a time dependence in both φ̄_i and φ̄̇_i stemming from well-known properties of field theory on dS space [12][13][14][15][16][17][18][19][20]. Footnote 2: The parameter |η| here is fixed by the spectral index of the curvature power spectrum measured by Planck to |η| ∼ 0.02. Footnote 3: As pointed out in [23], a heavy field coupled to the inflaton does not guarantee the flatness of the potential; in a realistic case, details of the UV physics, such as e.g. the compactification mechanism, might spoil this feature.
Our purpose here is not to study a particular inflationary potential but to argue that a decelerating phase as a module of an inflationary model desensitises inflation from UV completions, in the sense that the initial kinetic energy is dynamically set by the Hubble scale. Perturbation theory along such extended flat regions in the inflaton potential has been considered in the literature for several purposes, among which are the production of primordial black holes [34][35][36][37], if this plateau lies close to the end of inflation, as well as the study of ultra slow-roll (USR) inflation [38,39], whose non-attractor nature serves as an interesting single-field counterexample [40] to the local non-Gaussianity consistency condition [41]. Here, we will use the friction-driven USR motion in order to dynamically set quantum initial conditions for small-field models given by Eq. (2.5). We thus trade the problem of a fine-tuned initial phase-space patch for the presence of a long and flat enough region on the hilltop; in the latter case, though, the flatness of the potential might be justified from a UV point of view. 3 Quantum initial conditions on ultra slow-roll plateaux 3.1 Background dynamics We may define the USR motion along the ultra flat valley by the condition that the potential gradient is negligible compared to the Hubble friction,

|∂V/∂φ̄| ≪ 3H|φ̄̇|.   (3.1)

The width of this region may be determined by the breakdown of this condition at φ̄ = φ̄_c,

|∂V/∂φ̄|_{φ̄_c} ≃ 3H|φ̄̇_c|,   (3.2)

such that for N < N_c the dynamics is governed approximately by

φ̄̈ + 3Hφ̄̇ ≃ 0   (3.3)

and the Friedmann Eq. (2.3), with V_m now specifying the plateau's height. Fixing (φ̄_m, φ̄̇_m) at some point along this flat region, the system can be integrated exactly; switching to the e-fold time variable, the field approaches a constant,

φ̄(N) ≃ φ̄_m + (φ̄′_m/3)(1 − e^{-3(N−N_m)}),   (3.4)

while the velocity decays as

φ̄′(N) ≃ φ̄′_m e^{-3(N−N_m)}.   (3.5)

From the velocity in (3.5), we see that whatever value φ̄̇_m has, the Hubble friction will (asymptotically) force the field to a halt [38], say at position φ̄_0. The higher the initial velocity, the longer it takes for this to happen. Moreover, from the Friedmann equation this implies that the spacetime approaches dS even faster. To quantify this, let us parametrise the inflationary scale via the Lyth bound [42] as H ≃ 10^-5 √r M_Pl, with r the tensor-to-scalar ratio, and assume that for most of the 60 observable e-folds the field is slowly rolling down its potential. Since the potential will eventually dominate the inflationary motion, from (3.5) the kinetic energy dilutes as φ̄̇²/2 ∝ e^{-6(N−N_m)}. Moreover, as already mentioned, the most conservative upper bound we can impose on the initial velocity is the Planck scale, |φ̄̇_m| ≤ M_Pl². The interval ∆N_0 ≡ |N_m − N_0|, during which the field decelerates while the Hubble rate approaches a constant, can be estimated from the condition that the kinetic energy has dropped to the level of the potential, φ̄̇²(N_0)/2 ≃ V_m, during which the classical solution (3.4) will have travelled a distance ∆φ_0. For the extremal case φ̄̇_m = M_Pl² and for r ∼ 0.01, we obtain ∆N_0 ≃ 4 and ∆φ_0 ∼ 10 M_Pl. Alternatively, for φ̄̇_m² < 2V_m, this happens within ∆N_0 ≲ 1, while ∆φ_0 ≲ M_Pl. We thus see that such an extended plateau of at least 4 e-folds is enough to dissipate away even a Planckian kinetic energy. Due to the shift symmetry of the potential, we may now choose this terminal point as the origin of the inflationary trajectory, i.e. set N_0 = φ̄_0 = 0, such that, for times 0 < N < N_c, the solution (3.4), (3.5) reduces to an essentially frozen background, φ̄(N) ≃ 0 and φ̄̇(N) ≃ 0, on a dS spacetime with 3M_Pl²H² ≃ V_m (3.11). As we will show in the next section, the quantum fluctuations then provide a natural initial condition for the slow-roll phase, which removes the fine-tuning of the initial phase space to an adequate degree.
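The quoted estimates can be recovered from the following back-of-the-envelope arithmetic (a sketch assuming that during the decelerating phase the kinetic energy dilutes as a^{-6} and that the field moves by about \sqrt{6}\,M_{Pl} per e-fold while the kinetic energy dominates; the coefficients of the full solution may differ):

\frac{\dot{\bar\phi}_m^2}{2}\, e^{-6\Delta N_0} \simeq V_m
\;\Longrightarrow\;
\Delta N_0 \simeq \frac{1}{6}\ln\!\frac{\dot{\bar\phi}_m^2}{2V_m},
\qquad
\Delta\phi_0 \lesssim \sqrt{6}\, M_{Pl}\, \Delta N_0 .

For \dot{\bar\phi}_m = M_{Pl}^2, H \simeq 10^{-5}\sqrt{r}\,M_{Pl} and r \simeq 0.01 one has V_m = 3M_{Pl}^2H^2 \simeq 3\times10^{-12} M_{Pl}^4, giving \Delta N_0 \simeq 4.3 and \Delta\phi_0 \simeq 10\, M_{Pl}, in line with the numbers quoted above.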
However, as already mentioned, the price to pay for inserting such a decelerating plateau is that we have effectively turned a small-field model into a large-field one: even if during the 60 e-folds of slow-roll inflation - which begins at some time N_i ≥ 0 - the field variation remains sub-Planckian (thus avoiding a large tensor background), we see that from φ_m till φ_0 the field can traverse a region of 10 M_Pl. Even though, due to the flatness of this region, the potential and kinetic energy differences always remain bounded by the Planck scale, a trans-Planckian field displacement can be problematic [43]. This is an ongoing issue affecting all large-field models which we will not address in this paper. Before discussing the perturbations, let us summarise the time scales involved in the problem (see also Fig. 1):
- N_m: some arbitrary early time when the effective description of the system becomes operative. We take φ_m lying along a potential plateau with some velocity |φ̄̇_m| ≲ M_Pl² setting the UV cutoff.
- N_0: the time when the initial kinetic energy becomes smaller than the potential and the spacetime reaches dS. ∆N_0 ≡ |N_0 − N_m| defines the interval during which the field is decelerating. Since a Planckian kinetic energy takes around 4 e-folds to dissipate, in order to cover all of phase space we impose that this region lasts at least 4 e-folds. Exploiting the shift symmetry along the nearly flat region we set φ_0 = N_0 = 0.
- N_c: the time when the curvature of the potential becomes relevant, marking the end of the USR phase. At this point the dynamics transits to slow-roll.
- N_i: the time when the observable CMB scales leave the Hubble horizon. This is where we set the initial conditions for the following approximately 60 e-folds of inflation. N_i can, in principle, be anywhere after N_0 (see however the discussion at the end of Sec. 3).
Dynamics of fluctuations The long-wavelength k/a < H modes of δφ and its conjugate momentum δφ̇ can be treated as classical Gaussian random fields, which may be identified with their root-mean-square values, δφ_rms ≡ √⟨δφ²⟩ and δφ̇_rms ≡ √⟨δφ̇²⟩ (3.12). As argued in the previous section, at any time N_i ∈ [0, N_c], when φ̄ = φ̄_i, the background inflaton, that is, the collection of long-wavelength Fourier modes of the field (2.4), is dominated by the fluctuations (3.12): φ̄_i ≃ δφ_rms and φ̄̇_i ≃ δφ̇_rms. Therefore, if we can compute these we may set the field and velocity values at time N_i. This is an initial condition in the sense that, given the presence of the plateau, any field value before N_0 = 0 is irrelevant; thus the interval [0, N_c] spans the initial patch of the phase space. N_i can thus be thought of as just a coordinate charting the one-dimensional phase space, such that for any N_i ∈ [0, N_c] there is a fixed trajectory, which, as we show in the next section, does not overshoot. As already stressed, this trades the problem of a finely-tuned initial phase-space patch for that of a nearly flat region around the maximum, rather than solving it. Doing so, though, allows one to argue that the latter situation can be justifiable on theoretical grounds. For example, in the context of the discussion around Eq. (2.7), one can ask what particle masses and couplings would be necessary in order to make inflation a generic outcome, i.e. in order to sustain at least 4 e-folds of USR before the 60 e-folds of slow-roll inflation.
In order to compute the rms values of (3.12), we may notice that around N_0 the fluctuation δφ(t_i, x), and thus φ(t_i, x) itself, can be treated as a quantum field on de Sitter space [44] sourced by V_m as in (3.11). We may then consider the coincident limit of the 2-point field and velocity correlation functions at some time t_0 = 0, when the two fluctuations are in causal contact, as defining the background rms values at some later time t_i, when the modes are well beyond the Hubble scale:

φ̄_i² ≡ ⟨φ²(t_i, x)⟩,   φ̄̇_i² ≡ ⟨φ̇²(t_i, x)⟩.   (3.14)

We may now proceed with the canonical quantisation of a scalar tachyon on a fixed dS spacetime, obeying φ̈ + 3Hφ̇ − ∇²φ/a² − m²φ = 0 with a potential of the form (2.1), in the standard way:

φ(τ, x) = ∫ d³k/(2π)³ [ a(k) (v_k(τ)/a(τ)) e^{i k·x} + h.c. ],

where the creation/annihilation operators satisfy canonical commutation relations, [a(k), a†(q)] = (2π)³ δ³(k − q), while the amplitude v_k(τ), obeying a Bunch-Davies asymptotic condition, reads

v_k(τ) = (√π/2) √(−τ) H_ν^{(1)}(−kτ).   (3.17)

Here, H_ν is the Hankel function of the first kind and we have defined

ν ≡ √(9/4 + µ²),   µ ≡ m/H,   (3.18)

with the tachyon mass m related to |η| in (2.1) as m² = 6|η|H². The coincident-limit correlators defining the rms values in (3.14) now read

⟨φ²⟩ = ∫ (dk k²/2π²) |v_k/a|²   (3.19)

and

⟨φ̇²⟩ = ∫ (dk k²/2π²) |d(v_k/a)/dt|².   (3.20)

In the strictly massless limit, the field correlator is IR divergent [12][13][14][15][16][17][18][19][20], while the velocity one is not. Indeed, for any non-vanishing velocity the field will reach an infinite value as kτ → 0. For a non-vanishing mass, the massive and tachyonic cases are distinct. Had we been discussing a (non-tachyonic) light field, the Bessel index ν in Eq. (3.18) would have been ν → ν̃ = √(9/4 − µ²) ≃ 3/2 − µ²/3 and for µ = 0 the integral (3.19) could have been evaluated exactly with the use of a UV cutoff. In other words, for a potential with positive mass squared the field will travel only a finite distance before it rolls back down towards zero. In our case, though, a tachyonic instability of the form (2.1) renders both correlators IR divergent: such a potential will eventually drive the field and its velocity to infinity. Here, of course, the potential is tachyonic only locally around the maximum, and at some point higher-order terms will regularise the correlator. Nevertheless, for our purposes the dS horizon and the finite duration of the dS phase - see Fig. 1 - provide a natural cutoff which regularises the divergence, even without considering higher terms. In order to see this, let us parametrise the physical momentum range as Λ_IR ≤ k/a ≤ Λ_UV. Now consider initial (UV) and final (IR) physical time-slices marked by t_0 and t_i and a time t ∈ [t_0, t_i]. These are to be identified with the corresponding times summarised in Fig. 1, so we can set t_0 = 0 and normalise the scale factor to a_0 = 1. The smallest upper bound for the co-moving momentum in the interval [0, t_i] is thus k < Λ_UV, since 1 < a < a_i. Similarly, the largest lower bound is given by Λ_IR a_i < k. Thus the co-moving momentum cutoffs read Λ_IR a_i ≤ k ≤ Λ_UV. Let us further refine these. Assuming that the Hubble rate is constant for t ∈ [0, t_i], the size of the horizon is λ_H ∼ 1/H throughout the whole evolution. The co-moving wavelength that reaches this scale at t_i is given by λ_UV = λ_H/a_i, while the one that is comparable to the horizon size at t_0 = 0 is λ_IR = λ_H. Then, the IR cutoff should be set such that physical scales that are larger than the horizon during the entire interval [0, t_i] do not affect the Minkowski limit. That is, we exclude scales that never were in causal contact for t < t_i. For the UV cutoff the reasoning is similar; we are interested in computing the 2-point correlator at the final time-slice t_i.
The modes that contribute to the homogeneous background are super-horizon at this point. Therefore, k_UV should be chosen such that the physical scales which are always sub-horizon during t ∈ [0, t_i] do not contribute to the background at time t_i. Therefore,

k_IR ≃ a_0 H = H,   k_UV ≃ a_i H.

With proper cutoffs defined and the mode function (3.17) at hand, we may now compute the coincident-limit 2-point field and velocity correlators at time t_i. In the massless limit - we discuss finite (tachyonic) mass corrections right below - the Hankel function acquires a simple form, H_{3/2}^{(1)}(x) = −√(2/πx) (1 + i/x) e^{ix}, leading to [20]

⟨φ²⟩ ≃ (H/2π)² [ N_i + (1 − e^{-2N_i})/2 ]   (3.25)

and

⟨φ̇²⟩ ≃ (H²/4π)² (1 − e^{-4N_i}),   (3.26)

where we have switched to the e-fold number as a time variable. For µ = 0 the field correlator is indeed divergent in the IR (defined by N_i → ∞), while the velocity is regular. However, in our set-up the dS dynamics only holds as long as the potential is flat enough such that the USR condition (3.1) is met. Therefore, the time t_c - defined in (3.2) - provides a maximal IR cutoff. Also note that, since the cutoffs properly take into account the existence of the dS horizon, we need to keep the low-energy cutoff in the φ² correlator even if it is IR regular, which will also serve to regularise the aforementioned divergence due to the finite tachyonic mass corrections. Equations (3.25) and (3.26) can be viewed as a parametric form of the phase-space curve given by (φ̄_i(N_i), φ̄̇_i(N_i)) (3.27). Before passing to a specific example, a few comments are in order. Firstly, note that another way to obtain the qualitative form of the correlators is the stochastic formalism introduced in [18]. This approach captures correctly the cutoff dependence of the 2-point functions (3.25) and (3.26) but disagrees in the numerical coefficients as a result of the different treatment of short modes [20]. Secondly, in Ref. [20], the field and leading-order velocity correlators (3.25), (3.26) were derived in the context of a scalar field on an expanding spacetime for a general UV cutoff. Here, we have placed the computation in the context of initial conditions for inflation, via the argument presented in the previous section, while taking into account the physical meaning of the cutoffs and the tachyonic nature of the mass correction. By doing so, one can then view the IR cutoff as a parameter running along the one-dimensional phase space (3.27), setting the initial point of the observable inflationary trajectory. Thirdly, when the mass is non-vanishing, the time dependence of the field and velocity correlators (3.25) and (3.26) gets modified and consequently the phase-space curve (3.27) gets deformed, albeit by a small amount. One may compute the corrections in µ order by order [20] upon using known expressions for the derivatives of the Bessel functions with respect to the index. Alternatively, one can choose values for µ and N_i and perform the integrals numerically. For example, the value of the mass that we will use in the next section in the context of hilltop inflation is µ² = 0.06, while N_i runs from zero to its maximum value N_c, with N_c = 4. The corrections to the correlators in this case are less than 10% of the leading µ = 0 values, which induces a negligible deformation of the phase-space curve, as can be seen in Fig. 2. Nevertheless, in what follows we shall take the mass corrections into account by performing the integrals (3.19) and (3.20) numerically. Finally, note that our discussion is valid only when the field lies in the plateau region, i.e. for initial time 0 < N_i < N_c.
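As a numerical illustration of the cutoff prescription above, the following C++ sketch integrates the standard massless de Sitter mode functions (assumed here; they are not copied from this work) between k_IR = a_0 H and k_UV = a_i H and prints the resulting rms field and velocity as functions of N_i:

#include <cmath>
#include <cstdio>

// Massless-limit mode functions on de Sitter space (H = 1 units):
//   |delta phi_k|^2    = (H^2 / 2k^3) * (1 + k^2 tau^2)
//   |delta phidot_k|^2 = (H^4 tau^4 / 2) * k
// integrated with the measure dk k^2 / (2 pi^2) between the comoving cutoffs.
int main() {
    const double pi = 3.14159265358979323846;
    const double H  = 1.0;

    for (double Ni = 0.5; Ni <= 4.01; Ni += 0.5) {
        const double ai   = std::exp(Ni);
        const double taui = -1.0 / (ai * H);
        const double kIR  = H, kUV = ai * H;

        double phi2 = 0.0, dphi2 = 0.0;
        const int    nSteps = 20000;
        const double dlnk   = std::log(kUV / kIR) / nSteps;
        for (int n = 0; n < nSteps; ++n) {
            const double k     = kIR * std::exp((n + 0.5) * dlnk);
            const double mode  = (H * H / (2.0 * k * k * k)) * (1.0 + k * k * taui * taui);
            const double dmode = 0.5 * H * H * H * H * taui * taui * taui * taui * k;
            const double meas  = k * k * k * dlnk / (2.0 * pi * pi);   // dk k^2/(2 pi^2)
            phi2  += meas * mode;
            dphi2 += meas * dmode;
        }
        std::printf("N_i = %.1f :  phi_rms = %.3f H,  phidot_rms = %.4f H^2\n",
                    Ni, std::sqrt(phi2), std::sqrt(dphi2));
    }
    return 0;
}

The field rms grows roughly as (H/2π)√N_i while the velocity rms saturates, which is the behaviour described above; adding the tachyonic mass only deforms these curves below the 10% level quoted above for µ² = 0.06.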
Moreover, N_i must be small (an order-1 number) since otherwise, as we comment in the next section, the scalar metric fluctuations induced by δφ dominate the background spacetime and one has to resort to a quantum treatment of gravitational dynamics. Therefore, our results hold as long as N_i (the time when CMB scales exit the horizon) is not far from N_0 (where the potential becomes the dominant contribution to the Hubble flow, which is where we set our zero). Thus the problem discussed here is the one of a fine-tuned initial kinetic energy. Quadratic hilltop with quantum initial conditions As mentioned, our purpose is to argue that a USR plateau as a module of a small-field model can ease the initial condition problem by allowing sufficient time for the friction to slow down the background, so that the quantum contributions (3.25) and (3.26) become dominant. Indeed, as argued around Eq. (3.10), a plateau of at least 4 e-folds (during which the field excursion is large) can dissipate away initial kinetic energies of the order of the Planck scale. As discussed in Sec. 2, there are a few mechanisms that lead to such a flattened region away from the end of inflation, including interactions with massive fields, quantum corrections, etc. Let us note that in any case the USR dynamics must eventually come to an end, giving way to the slow-roll phase [40,45]. The reason is that during the USR plateau the Hubble slow-roll parameter decays exponentially, ε ∝ e^{−6N} (4.1). Upon defining the curvature perturbation R in the comoving (δφ = 0) gauge, we may compute its dimensionless power spectrum and observe that, due to Eq. (4.1), it receives an exponential enhancement of e^{6N}. This stems from the exponential growth of the would-be decaying super-horizon modes. Indeed, solving the Mukhanov-Sasaki equation [46-48] on the USR background, one finds, on super-horizon scales, R = C_1 + C_2 ∫ dt/(a^3 ε) (schematically). In the slow-roll case, where ε ≪ 1 is constant, the C_2 mode would decay rapidly, leaving the standard frozen contribution C_1, which sets the typical amplitude of a fluctuation. On the USR background, however, from (4.1) we see that this would-be decaying contribution now grows exponentially as e^{3N}, leading to a strong enhancement of the power spectrum. In order not to violate the CMB normalisation, we thus need to connect the USR initial phase to a slow-roll one; if that is not the case, the growth during the 60 e-folds would imply an inflationary energy scale so low that it would have been already ruled out by surveys [40,45]. Let us thus denote the end of the USR phase by N_c, defined in (3.2), such that ∆N_c ≡ N_c − N_i characterises how many of the observable 60 e-folds are spent in this phase. Since the modes freeze at the end of this period, during the transition to the slow-roll regime, the final value of the power spectrum reads as in (4.5), where ε_c denotes the value of ε at the end of the USR phase. The observed value of the power spectrum is P_R ≈ 2.2 × 10^{−9}, while ε_c can be fixed by the tensor-to-scalar ratio r. For ∆N_c ∼ 60, (4.5) gives too small a Hubble rate. The non-attractor USR inflation thus can only last for a limited number of e-folds [40,45]: ∆N_c ≲ 4. In order to study the transition to the slow-roll (SR) phase (a transition which can also generate non-Gaussianity [45,49]), we may follow [49] and expand the potential around the transition point φ_c, where we have to match with observations, fixing ε_c and η_c. In this note, however, we will assume the existence of a plateau smoothly extending the maximum at the hilltop into a region where (3.1) holds.
Such a region may be the result of a running mass due to quantum corrections, or of the higher-order terms that stabilise the potential far from the maximum. We may now observe that hilltop inflation, by construction, begins in such a USR phase, i.e. there always exists a finite region around the maximum for which the condition (3.1) is satisfied; this region, however, is too short to serve as a decelerating patch. Therefore, we may set the point where the plateau is "glued" to the hilltop to be φ_0 = 0 and expand the potential around there. In other words, whatever the reason for an extended plateau around the maximum is, at φ = 0 we consider that the theory is well described by the standard quadratic tachyonic potential of Eq. (4.6), where we have assumed a small but non-vanishing gradient. The dimensionless parameters α and µ are related to the slow-roll parameters at the origin as in (4.7) and (4.8). Around the maximum of the scalar potential, where V(φ) is given by (4.6), the Klein-Gordon equation can be solved exactly, giving Eq. (4.9) (its structure is sketched at the end of this section). This solution satisfies the initial conditions φ(N_i) = φ_i and φ'(N_i) = φ'_i, with (φ_i, φ'_i) given by (3.25) and (3.26). Note that the classical solution is now uniquely determined by a single parameter, the initial time N_i charting the curve (3.27). Let us define the ratio β(N) of the potential gradient ∂V/∂φ to the friction term, evaluated along φ(t), such that the USR condition (3.1) is satisfied for β ≪ 1. By using the above classical solution, we obtain the explicit expression (4.10). As the field evolves, the curvature of the potential becomes more important, with the transition from ultra slow-roll to slow-roll happening when β(N_c) = 1. The transition time N_c can be determined analytically, Eq. (4.11). Since our quantum computation holds for 0 ≤ N_i ≤ N_c, we have to impose ∆N_c > 0. In Fig. 3 we show the allowed values for N_i and α that meet this condition, accommodating a possibly observed USR phase for fixed µ^2 = 0.06 (or equivalently η_V ≈ −0.02). For each pair of points (N_i, α), the shading corresponds to the value of ∆N_c obtained from (4.11). As shown there, in order to have ∆N_c > 0, the slope α needs to satisfy the constraint (4.12). By using (4.8) and r = 16 ε_V, the existence of an observable USR phase gives the constraint (4.13) on the tensor-to-scalar ratio. Hence, for H around the GUT scale, the tensor-to-scalar ratio must have a vanishingly small value. We explore this case in Fig. 4 by considering the parameter space (N_i, µ^2) in the r = 0 limit. The shading corresponds to the value of ∆N_c obtained from (4.11). The maximum duration of the USR phase for µ^2 = 0.06 can be determined by taking the limit N_i → 0, in which we get ∆N_c^max ≈ 1. In order to study the dynamics, we may introduce phase-space variables (x, q) such that x(N_i) = x_i and q(N_i) = q_i, with the latter defined in (3.28). In Fig. 5, we plot the solution of the equation of motion with initial conditions (x_i, q_i) lying along the (mass-corrected) curve (3.27), as dictated by the quantum fluctuations. As can be seen, the phase-space trajectories begin in the non-attractor phase and rapidly (within 4 e-folds) transit to the slow-roll profile q ∝ |η_V| x, which lasts for 60 e-folds, avoiding the overshoot issue [50] and hence desensitising hilltop inflation from the initial kinetic energy.
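For reference, the structure of the dynamics in this section can be summarised in two standard steps. This is a sketch assuming constant H and the quadratic tachyonic form of (4.6), with the gradient source term left schematic:

% (i) USR decay of the kinetic energy: V' ~ 0 gives \ddot\phi = -3H\dot\phi, so \dot\phi \propto e^{-3N} and
\epsilon \equiv \frac{\dot\phi^2}{2H^2 M_{\rm Pl}^2} \propto e^{-6N},
\qquad
\mathcal{P}_{\mathcal{R}} = \frac{H^2}{8\pi^2\,\epsilon\,M_{\rm Pl}^2} \propto e^{6N},
\qquad
\int^{t}\frac{dt'}{a^3\epsilon} \propto e^{3N} .

% (ii) Exact solution around the maximum, in e-fold time, with \mu = m/H:
\phi'' + 3\phi' - \mu^2\phi = \mathrm{const},
\qquad
\phi(N) = \phi_* + A\,e^{\lambda_+(N-N_i)} + B\,e^{\lambda_-(N-N_i)},
\qquad
\lambda_\pm = -\tfrac{3}{2} \pm \sqrt{\tfrac{9}{4}+\mu^2},

with A and B fixed by (φ_i, φ'_i), and φ_* the stationary point shifted by the gradient. For µ ≪ 1, λ_+ ≈ µ^2/3 and λ_− ≈ −3: the inherited velocity decays as e^{−3N} (whence ε ∝ e^{−6N} above), while the tachyonic roll develops only over ∼ 3/µ^2 e-folds, which is what makes the plateau an efficient decelerating patch.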
Conclusions Hilltop models are attractive from an effective field theory perspective but suffer from fine-tuning; for a successful (60-e-fold-long) phase of slow-roll inflation, one has to begin inside a very small patch of the available phase-space. The reason why this happens is easy to understand: in small-field models there is not enough time for the Hubble friction to drive the dynamics towards the slow-roll attractor, hence one has to set initial conditions very close to it in order for this to happen within 60 e-folds. In this work, we have considered the presence of an ultra-flat region around where one sets initial conditions as a generic module of any small-field model. If this plateau is long enough, the classical field comes to a halt exponentially fast, at which point the super-horizon quantum fluctuations dominate the background dynamics. The latter can be treated as quantum fields on de Sitter space whose correlators can be computed via the standard quantisation procedure. If CMB scales exit the Hubble horizon shortly after the classical background has frozen, the rms values of the quantum fluctuations provide initial conditions for the field and its velocity, in the sense that whatever their values were before that becomes dynamically irrelevant. Setting initial conditions in this way reduces the phase-space to one dimension and partially evades the overshoot problem, in the sense that once the initial position is chosen close to the would-be maximum, the velocity is automatically fixed. A plateau of around 4 e-folds is enough to dissipate away even a Planckian initial kinetic energy, at the cost of a trans-Planckian field displacement during this phase (although this takes place before the CMB scales exit the horizon). We have thus recast the initial velocity problem in small-field models as a condition on the flatness of the potential around the initial field value, by considering a "large-field" module, which is well motivated in embeddings of inflation in UV frameworks featuring a variety of additional degrees of freedom.
8,894.2
2020-08-06T00:00:00.000
[ "Physics" ]
Design of Mobile-Based Parent-Student Contribution Payment Information System. Introduction Technology is a necessary tool for facilitating any human work. Information technology is currently advancing at such a quick pace that it influences many areas of human existence and has become a supportive instrument that can assist the precise and efficient flow of information [1]. The advancement of science and technological sophistication in this era gradually but surely proves that the verses of the Al-Quran are truly amazing. That is why science is important; knowledge may be accessed anywhere, including in an educational institution. With the knowledge obtained from educational institutions, students are not enslaved by technology. To achieve the intended educational goals, it is crucial to have components that are interconnected with one another. This is driven by human understanding that is increasingly modern and advanced. An excellent system is one that is simple to use and that many people utilize. Well-developed methods are widely used in a variety of professional fields [2]. One of these fields is educational institutions. Educational institutions carry out a variety of activities directly related to the institution. In the academic department, examples include the new student registration system, the system for paying parental contributions, the process of borrowing and returning library books, and the procedure for accepting new students. For this reason, educational institutions need a good system to manage all of their data. In the teaching and learning process, each student provides school administration funds, which are called School Committee funds [3]. In general, there are three sources of school education administration funds, namely governmental, parental, and social [4]. The contribution of the parents of these students is a fund donated for the ongoing educational activities in an institution [5]. Usually, the parents' contribution payment system still uses the old approach, namely using a book as a payment record. With the advancement of technology and the sophistication of computers, it is preferable for schools to use computers or laptops, as well as smartphones or mobile devices, as tools for collecting contributions from students' parents. In this way, technological developments have a good impact on schools by improving the existing systems at the school.
One of the most recent approaches to parental contribution payments is Web-based. Apart from the web, payments of students' parents' contributions can be mobile-based. This system is an information system: a set of integrated components that create information useful to its users. With an information system, data and information are stored in a database, so that the data becomes organized and is easier for users to retrieve when needed [6]. A mobile-based information system makes things easier for users because it can be accessed anywhere and anytime on Android-based mobile devices [7]. Android is a mobile operating system based on a modified version of Linux [8]. PHP programming can be used to build an Android-based system. PHP (Hypertext Preprocessor) is a web-based programming language that can process data in real time. PHP is a server-side embedded scripting language, which means that any syntax or program code you create will be executed by the server yet can be placed in a standard HTML page [9]. MySQL is used as the database. Information systems that have been designed must be able to run well. At the site where the research was carried out, contributions were still recorded manually in a book, so there are many risks in recording these contributions: for example, incorrect records of who has paid, or lost proof of dues. Therefore, the writer is interested in building an application for paying student contributions using PHP programming and MySQL as the database. Type of Research This research belongs to the type of research and development (R&D). R&D is research that produces a particular product and tests the validity and effectiveness of the product in its application [10]. The R&D research steps used refer to the software development life cycle (SDLC) model.
Research Procedure System Development Life Cycle (SDLC) is the process of developing or modifying a software system using models and methodologies that people have used to develop previous software systems [11]. SDLC has several models for executing this process. Among the various available models, the author applied the waterfall model. According to Pressman, the waterfall model is a systematic, sequential model for building software. The actual name of this model is the "Linear Sequential Model", and it is often referred to as the "Classic Life Cycle" or the waterfall method [12]. Pressman's perspective, as cited in Permadi Setiawan's journal article, suggests that the waterfall model envisions a systematic and sequential approach to system development. The system development begins by defining requirement specifications and proceeds through the stages of planning, modeling, construction, and deployment to customers or users, culminating in ongoing support for the resulting system [11]. This method is often called waterfall because each stage must be completed before proceeding to the next stage [13]. The stages of this research can be described as in Figure 1. The description of the research stages based on Figure 1 can be explained as follows [14]: (a) Communication, the purpose of which is to understand the goals of the project to be built and to determine the features and functions of the product. The communication phase begins with project initiation. In this stage, the author conducts interviews with potential users to establish how the product or system model will be created, resulting in a beneficial product or system; (b) Planning, which is executed to understand all aspects of creating this information system. In this process, the author designs all aspects of the application's creation: for example, gathering the data obtained from interviews with sources, identifying all risks in the system creation process, and designing a schedule so that the author stays focused and on track during system creation; (c) Modeling, which involves creating design models of the software to be developed. This stage gives the system creator more confidence in building the system. In this phase, the author designs the system model using UML (Unified Modeling Language) and interface design. This includes designing use case diagrams, sequence diagrams, activity diagrams, class diagrams, and the database; (d) Construction, which involves the process of creating code (code generation). Code is a programming language understood by the machine or computer. The coder, or programmer, creates code that will be understood by the computer, resulting in a system used by users. This stage is the actual work of creating the system. After program development is done, the system is tested using black-box testing. Black-box testing is a way of testing whether the software meets the expected criteria, focusing on the functionality of the system itself (a minimal sketch of such a test is given after this list). In this stage, the author creates the program using the PHP programming language and MySQL as the database; and (e) Deployment to Customers/Users, a process considered complete when the software or system being developed is ready for use by users. After development and testing, the system can be distributed to users, and periodic or scheduled maintenance is performed. In this stage, the author provides the application to the school.
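The black-box stage tests the system purely through its inputs and visible outputs. The sketch below illustrates the idea for the login function; the URL, endpoint, and field names are hypothetical, since the actual system's routes are not specified here.

import requests

BASE_URL = "http://localhost/payment-system"  # hypothetical deployment URL

def test_login(username, password, should_succeed):
    """Black-box check: submit credentials and inspect only the visible outcome."""
    resp = requests.post(f"{BASE_URL}/login.php",  # hypothetical endpoint name
                         data={"username": username, "password": password})
    ok = resp.status_code == 200 and "dashboard" in resp.text.lower()
    status = "PASS" if ok == should_succeed else "FAIL"
    print(f"{status}: login({username!r}) expected success={should_succeed}")

# Valid and invalid credential cases, mirroring a typical black-box test table
test_login("teacher01", "correct-password", should_succeed=True)
test_login("teacher01", "wrong-password", should_succeed=False)
test_login("", "", should_succeed=False)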
Results and Discussion This research yielded a product in the form of an information system for collecting contributions from students' parents at SMAN 1 2X11 Enam Lingkung, created with the PHP programming language and a MySQL database, and made available online. This system can be used by the school to keep track of parental contribution payments, and it is expected to help the school track those payments more efficiently and effectively. System development followed the waterfall stages described below. Communication In this communication stage, the author needs to understand what is required to solve the issues related to the information system for collecting contributions from students' parents. This stage involves several needs, including the following. Project initiation Before developing the system, the authors interviewed the financial administrators at SMAN 1 2X11 Enam Lingkung, requesting data about parental contributions. Requirement gathering This stage consists of two kinds of requirements, namely functional requirements and non-functional requirements. Functional requirements encompass the various processes and information created by the system. Some of these requirements include: (a) the system should provide information to assist and facilitate teachers in viewing data on students who have made parental contribution payments; (b) user administration management, used as a data repository detailing parental contribution payment information. Non-functional requirements consist of properties exhibited by the system. For the successful implementation of the generated system, two components of information technology are required: hardware and software. Estimating In the first stage, the author gathers the data necessary to build this system, including student information, teacher information, and the procedure for parental contribution payments. In the second stage, the author designs the required system logic, including input and output logic. In the third stage, this logic is translated into the PHP programming language using a framework. The fourth stage involves creating a system interface that aligns with the expected requirements. In the fifth stage, testing is conducted. The sixth stage encompasses the process of rectifying errors that may not have been detected during system development. Scheduling In creating a system, the author needs to design a schedule to ensure that the system is delivered on time. To achieve this, the author conducted initial interviews, analyses, and field surveys in early October 2022. Subsequently, in the third week of October 2022, the author interviewed the school personnel, gathering data and information about the application to be developed. From November 2022 to December 2022, the author undertook system design, database design, system architecture, and system development. In January 2023, the author conducted system testing, operational procedures, and system maintenance. Upon completing the design and development phases, the system was handed over to the school.
Tracking The process of building this information system proceeded as follows: the system was built using the Bootstrap CSS (front-end) framework and Sublime Text, then hosted so that it is online, accessible to many people, and usable from mobile applications. Analysis The first stage in building an information system for paying parental contributions is analysis. At this stage, problems encountered throughout the payment process are identified, and solutions are developed. General design Use case diagram. This diagram is useful for managing and modeling the required system behavior as experienced by the user. A use case represents a relationship between an actor and the system [15]. Figure 2 is a use case diagram for this research. Activity diagram. This diagram explains all activities in the system being designed. Figure 3 and Figure 4 are activity diagrams for this research. Sequence diagram. Sequence diagrams are a subset of interaction diagrams. They show the objects involved in the use case as well as the messages that are sent between them over time [15]. Figure 5 is a sequence diagram for this research. Display design The main page, together with the login page, appears first. The user can input a username and password on this screen to access the application or dashboard page. Figure 6 is the login page. After entering the username and password, the admin can add users (teachers) and add student names. Figure 7 is the admin dashboard. In this system, users can download reports/recaps as required, for example reports on who has not paid for a certain month, or class reports. Figure 9 is the download page. Construction This stage contains the code used in making the system. At this stage, the author checks whether the system is working properly. When the program is completed and data has been entered so that the system runs effectively, the program is deployed to the field. The system was then tested using the black-box method; Table 5 shows the result of black-box testing. Delivery The installation of the system at the school will be done through social media, specifically WhatsApp, in the form of a RAR file that will be installed directly by the SMAN 1 2X11 Enam Lingkung school. Support This system provides benefits to SMAN 1 2X11 Enam Lingkung in the process of collecting contributions from students' parents. The system is user-friendly, efficient, and accessible online. Feedback The author has updated the system and addressed various shortcomings through multiple stages of testing and system evaluation. Conclusion The system developed is an information system for collecting contributions from students' parents that employs PHP as the programming language and MySQL as the database. It is used at SMAN 1 2X11 Enam Lingkung to help with the recording of parental contribution payments. The system is online and can be accessed via desktops or Android smartphones. The application offered to the school greatly aids in the recording of reports and data linked to parental contribution payments, removing the need for manual methods or books. Figure 1. Research Stage. Figure 2. Design of Use Case Diagram. Table design The table design describes the tables in this research database; this structure is used to facilitate building the application. Table 1 is the user table, and Table 4 is the class table.
3,070.2
2023-06-30T00:00:00.000
[ "Computer Science" ]
Exploring the Potential of Chemical Constituents of Datura metel in Breast Cancer from Molecular Docking Studies. Breast cancer remains a pervasive health challenge worldwide, prompting the exploration of novel therapeutic prospects. Datura metel has long been recognized for its pharmacological properties, particularly as it contains various bioactive compounds such as alkaloids, flavonoids, and terpenoids. This review focuses on the potential of chemical constituents sourced from Datura metel, a traditional medicinal plant, in combating breast cancer, primarily through molecular docking studies. The review scrutinizes the chemical composition of Datura metel, emphasizing the identified compounds known for their therapeutic attributes. Through an extensive analysis of molecular docking studies, the interactions between these Datura metel constituents and crucial molecular targets associated with breast cancer are elucidated. The phytoconstituents (compounds 1-13) were found to be more potent than Tamoxifen citrate, the standard anticancer drug. The findings presented herein invite further exploration, highlighting a promising avenue in the pursuit of effective and targeted treatments for breast cancer. In conclusion, this review emphasizes the synergistic integration of computational approaches with traditional knowledge, accelerating the discovery and development of innovative breast cancer therapies. Introduction Breast cancer is the most common cancer in women, with approximately 2.3 million cases reported each year. Over the past three decades, there has been a notable increase in its occurrence and mortality rates. This rise can be attributed to advancements in cancer detection, better recording of cancer cases, and changes in risk factor profiles. A combination of modifiable and non-modifiable risk factors significantly contributes to the overall array of risk factors associated with breast cancer [1]. The WHO has released a framework to achieve the goal of saving 2.5 million lives from cancer by the year 2040 [2]. The breast contains glands responsible for producing milk, situated in front of the chest wall. Positioned on the pectoralis major muscle, the breast is supported and connected to the chest wall by ligaments. It comprises 15 to 20 lobes arranged in a circular pattern, and the size and shape of the breast are governed by the surrounding fat. Each lobe is composed of lobules housing glands that respond to hormone stimulation by producing milk. Breast cancer typically progresses silently, with most individuals discovering the condition through routine screenings. Alternatively, some may notice a breast lump incidentally, experience changes in breast size or shape, or observe nipple discharge [3]. Medicinal plants have been consistently utilized as remedies for diverse illnesses. Most people residing in developing nations are reported to rely on traditional medicine practices [4]. The global increase in the use of medicinal plants can be attributed to their demonstrated efficacy in treating specific diseases and assertions about their safety for consumption [5]. Promising outcomes have been observed with natural plant-derived products serving as effective agents against tumors and cancer. These plant products not only demonstrate efficacy but also exhibit reduced toxicity in their application [6]. Datura metel L.
is a perennial herbaceous plant of the Solanaceae family that can grow up to a height of 1.5 meters. Its leaves are simple, alternate, dark green, broadly ovate, shallowly lobed, and smooth. The large, solitary, trumpet-shaped flowers emit a sweet fragrance and display a diverse range of colors, from white to yellow and light to dark purple. These flowers are pollinated by insects, and the resulting fruits take the form of capsules covered with spines. D. metel is rich in various phytochemicals, including alkaloids, phenols, tannins, saponins, and sterols [7]. The plant is employed for addressing conditions like diarrhea and skin diseases. It is also used in managing epilepsy, insanity, hysteria, rheumatic pain, hemorrhoids, painful menstruation, skin ulcers, and wound healing; it is used for burns and aids in soothing coughs [8]. Molecular modeling with docking simulations involves examining the interaction between a ligand and macromolecular targets. Analyzing the preferred alignment can then be utilized to predict the strength of association, or binding affinity, between two molecules, employing scoring functions [9]. Computational chemistry has facilitated the identification of novel, effective medications with reduced adverse effects through in silico analysis [10]. The objective of this computational study was to investigate the potential of D. metel in breast cancer by in silico molecular studies. Analysis of in silico Molecular Docking The molecular docking studies were conducted on different phytoconstituents of D. metel to investigate the intermolecular interactions. Tamoxifen citrate was used as a standard drug for comparing molecular interactions. Molecular docking was carried out with the help of AutoDock Tools 1.5.6. The phytoconstituents of the plant were docked against the target protein with PDB id 7ldg. The docking score was calculated using the command prompt. To obtain the docking results, receptor-ligand interactions were analyzed using a tool named Discovery Studio 2021. The molecular structures of the chemical constituents of D. metel, namely 6-Hydroxyhyoscyamine, Apohyoscine, Atropine, Beta-sitosterol, Citrostadienol, Datumetine, Daturadiol, Daturametelin, Daturaolone, Hyoscyamine, Scopolamine, Withametelin, and Withanolide, were obtained from https://www.pubchem.org and the 2D structures were drawn using ChemDraw Professional 15.0. The structures were saved with the extension (.cdx) as file type (ChemDraw.cdx). The energy of these molecular structures was minimized using Chem3D 15.0, and the results were saved with the extension (.pdb) as file type {Protein Data Bank (.pdb)}. Then, with the use of AutoDock Tools 1.5.6, the PDB files of these chemical constituents were converted to the PDBQT file format [11].
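The batch conversion to PDBQT can also be scripted. The sketch below uses Open Babel as a stand-in for the AutoDock Tools workflow described above; the obabel executable and the ligand file names are assumptions, not taken from the study.

import subprocess

# Illustrative ligand files exported from Chem3D as .pdb (names are hypothetical)
ligands = ["atropine.pdb", "scopolamine.pdb", "withametelin.pdb"]

for lig in ligands:
    out = lig.replace(".pdb", ".pdbqt")
    # Open Babel converts PDB to PDBQT (partial charges are assigned as part of
    # writing this format), mirroring the ligand preparation step in the text.
    subprocess.run(["obabel", lig, "-O", out], check=True)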
Preparation of Target Protein 7ldg was selected as the target protein and was downloaded from the RCSB Protein Data Bank [12]. Water molecules were removed and, to prepare the protein, Kollman charges and polar hydrogens were added to stabilize the structure. Undesired bonds and ligands were eliminated from the target protein structure [13]. The crystal structure has a resolution of 2.56 Å and contains 253 amino acids. Finally, using AutoDock Tools 1.5.6, the selected PDB file was converted into the PDBQT file format. Its image was prepared using Biovia Discovery Studio 2021, as shown in Fig. 1. Virtual Screening and Molecular Docking The grid box for MEILB2-BRCA2 was created using AutoDock Tools 1.5.6 (X = 67.587, Y = 142.726, Z = 108.729), with dimensions of 40.00 × 40.00 × 40.00 Å. Via the command prompt, the log file, out_ligand_.pdbqt with binding affinity values in kcal/mol, and output.pdbqt were generated. The same was done for all the chemical constituents. Here, the conformations of the protein-ligand interactions were assessed such that the configurations with the lowest binding energies, characterized by the most negative values, indicated the strongest binding affinities. The different binding modes of the ligands were acquired from the results of the docking investigations, and these files were analyzed using Discovery Studio 2021 to identify the binding mode within the binding-site cavity. The protein-ligand interactions, encompassing hydrogen bonding, carbon-hydrogen bonding, van der Waals interactions, alkyl interactions, pi-alkyl interactions, and pi-pi interactions, were examined. Results In a computational molecular study, interactions with the designated target macromolecule were explored to assess the binding of the ligands. With the help of AutoDock Vina, the structure of MEILB2-BRCA2 was docked with the standard drug Tamoxifen citrate, giving a binding energy of -4.5 kcal/mol, as given in Table 1 [15,16]. The docking scores between the MEILB2-BRCA2 receptor and the ligands 6-Hydroxyhyoscyamine, Apohyoscine, Atropine, Beta-sitosterol, Citrostadienol, Datumetine, Daturadiol, Daturametelin, Daturaolone, Hyoscyamine, Scopolamine, Withametelin, and Withanolide, computed with AutoDock Vina, were -6.9, -7.3, -6.9, -7.6, -7.7, -7.0, -7.9, -7.5, -8.4, -6.5, -6.7, -9.0, and -8.1 kcal/mol, respectively, as given in Table 1. The diverse interactions observed suggest that Citrostadienol, Datumetine, and Hyoscyamine bind closely to the active site, indicating their potency and efficiency in inhibiting the enzyme. The 2D interaction diagrams illustrate the docking orientations and depict the ligand interactions. Across the thirteen bioactive phytoconstituents, a total of 86 interactions, encompassing conventional H-bond, C-H bond, alkyl, pi-alkyl, pi-sigma, pi-pi T-shaped, and various van der Waals interactions, were identified. These findings, when compared to the standard medication, highlight the robust potential of Datura metel L. as a promising treatment for breast cancer. Fig. 1. MEILB2-BRCA2 receptor-binding protein of breast cancer. The interaction profile shown in Fig. 3F includes TRP C:129; three alkyl interactions, LEU C:232, VAL C:231, and LYS C:228; and two pi-alkyl interactions, PHE C:184 and TRP C:132.
The interaction between Daturametelin and MEILB2-BRCA2 consists of two conventional H-bonds, ALA A:143 at a distance of 3.29 Å and GLY C:133 at a distance of 3.60 Å; six van der Waals interactions, VAL C:134, SER C:136, GLU A:139, LEU C:130, GLY A:146, and THR C:129; one pi-sigma bond, TRP C:129; and one alkyl interaction, ALA A:143, as shown in Fig. 3G. The molecular interaction between Apohyoscine and MEILB2-BRCA2 consists of one carbon H-bond, GLU C:287, with a bonding distance of 3.53 Å. It also contains five van der Waals interactions, VAL C:282, LEU C:260, VAL C:283, LEU C:300, and SER C:299, along with two pi-alkyl interactions, ILE C:257 and VAL C:279, and a pi-pi T-shaped interaction, TRP C:261, as shown in Fig. 3H. The molecular interaction of Datumetine with MEILB2-BRCA2 consists of one conventional H-bond, THR C:329, at a distance of 2.57 Å. It has three carbon H-bonds: GLU C:285 at a distance of 3.43 Å, VAL C:282 at a distance of 3.47 Å, and SER C:298 at a distance of 3.59 Å. Four van der Waals interactions are present: ASP C:326, LEU C:327, SER C:299, and VAL C:279, as shown in Fig. 3I. The molecular interaction between the ligand 6-Hydroxyhyoscyamine and MEILB2-BRCA2 consists of one carbon H-bond, VAL C:283, with a bonding distance of 3.30 Å. It also contains two van der Waals interactions, VAL C:282 and LEU C:257; four alkyl interactions, VAL C:283, VAL C:279, LEU C:300, and ILE C:257; and two pi-alkyl interactions, VAL C:283 and TRP C:261, as shown in Fig. 3J. Atropine's interaction with MEILB2-BRCA2 consists of one conventional H-bond, GLU C:285, at a distance of 2.48 Å, and one carbon H-bond, VAL C:283, at a distance of 3.55 Å. Three van der Waals interactions were present: LEU C:300, VAL C:282, and LEU C:327. In addition, two alkyl interactions were present, ILE C:257 and VAL C:279, along with two pi-alkyl interactions, VAL C:283 and TRP C:261, as shown in Fig. 3K. Scopolamine's interaction with MEILB2-BRCA2 consists of one conventional H-bond, GLU C:285, at a distance of 3.04 Å, and one carbon H-bond, SER C:298, at a distance of 3.72 Å. Two van der Waals interactions, VAL C:282 and LEU C:327, and one pi-alkyl interaction, VAL C:283, were present, as shown in Fig. 3L. Hyoscyamine's interaction with MEILB2-BRCA2 consists of one conventional H-bond, GLU C:285, at a distance of 2.63 Å, and one carbon H-bond, VAL C:283, at a distance of 3.23 Å. Two van der Waals interactions, VAL C:282 and LEU C:327, were present, along with two pi-alkyl bonds, TRP C:261 and VAL C:283, and four alkyl interactions, ILE C:257, LEU C:300, VAL C:279, and VAL C:283, as shown in Fig. 3M. Table 1. [17,18,19]. The molecular interaction between the standard drug Tamoxifen citrate and MEILB2-BRCA2 consists of three conventional H-bonds: PRO C:254 with a bonding distance of 1.94 Å, SER C:253 with a bonding distance of 2.76 Å, and ASP C:212 with a bonding distance of 2.17 Å. Along with the conventional H-bonds, it contains van der Waals interactions, GLN C:216, PHE C:256, and LEU C:215, as shown in Fig. 2. The interaction between Withametelin and MEILB2-BRCA2 consists of one conventional H-bond, SER C:253, at a distance of 3.26 Å, and one carbon H-bond, PRO C:254, at a distance of 3.21 Å.
One van der Waals interaction, GLY C:255, was present; the alkyl and pi-alkyl interactions were LEU C:259 and TRP C:262, respectively, as shown in Fig. 3A. [...] A:149 and ILE A:144; one pi-alkyl interaction, TRP C:132; and one unfavourable acceptor-acceptor contact, ALA A:143, are also present, as shown in Fig. 3E. Beta-sitosterol's interaction with MEILB2-BRCA2 consists of one conventional H-bond, GLU C:270, at a distance of 2.24 Å; one van der Waals interaction, THR C:129; and one pi-sigma interaction.
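The per-ligand docking runs reported above can be scripted around the standard AutoDock Vina command-line flags. In this sketch, only the grid-box values are taken from the study; the vina executable on the PATH and the file names are assumptions.

import subprocess

# Grid box centre and size reported for MEILB2-BRCA2 (PDB id 7ldg)
CENTER = (67.587, 142.726, 108.729)
SIZE = (40.0, 40.0, 40.0)

ligands = ["withametelin.pdbqt", "daturaolone.pdbqt", "atropine.pdbqt"]  # illustrative names

for lig in ligands:
    cmd = [
        "vina",
        "--receptor", "7ldg.pdbqt",
        "--ligand", lig,
        "--center_x", str(CENTER[0]), "--center_y", str(CENTER[1]), "--center_z", str(CENTER[2]),
        "--size_x", str(SIZE[0]), "--size_y", str(SIZE[1]), "--size_z", str(SIZE[2]),
        "--out", f"out_{lig}",
    ]
    # Vina prints a table of binding modes; the top mode's affinity (kcal/mol) is the docking score
    subprocess.run(cmd, check=True)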
2,844.8
2024-01-01T00:00:00.000
[ "Medicine", "Chemistry" ]
Intelligence Architecture: An Exploratory Study Business intelligence (BI) architecture based on the service-oriented architecture (SOA) concept enables enterprises to deploy agile and reliable BI applications. However, the key factors for implementing a SOA-based BI architecture from a technical perspective have not yet been systematically investigated. Most of the prior studies focus on organisational and managerial perspectives rather than technical factors. Therefore, this study explores the key technical factors that are most likely to have an impact on the implementation of a SOA-based BI architecture. This paper presents a conceptual model of BI architecture built on the SOA concept. Drawing on academic and practitioner literature related to SOA and software architectural design, we propose fourteen key factors that may influence the implementation of a SOA-based BI architecture. This study bridges the gap between academics and practitioners. Introduction In recent years many enterprises have turned their efforts towards implementing Business Intelligence (BI) systems to improve the decision-making process (Gartner, 2011). A typical BI system includes functions such as reporting, querying, multi-dimensional analysis, online analytical processing, forecasting, and data mining (Sharda et al., 2010; Turban et al., 2011; Becerra-Fernandez et al., 2010; Larson, 2009). Often, a typical enterprise-level BI architecture constitutes a data warehouse as a common back-end layer. According to Inmon (1992), a data warehouse with its analytics capabilities is a non-volatile, integrated, theme-oriented repository to support the long-term decision-making process. More significantly, a distributed multi-tier architecture has been widely adopted as a modern enterprise-scale BI architecture (Davenport et al., 2007). In the traditional approach, client-server architectures are tightly coupled. Any changes made to the client side might require relevant changes to be implemented on the server side as well, and vice versa (Dennis et al., 2010; Ashrafi et al., 2009; Cognos Corporation, 2008). However, in ever-evolving business environments, enterprises require an architecture that can easily include, remove, or integrate additional services 'on the fly'. In view of this, architectures built on SOA principles can separate implementation components from the underlying infrastructure. The separation enables the architecture to support the flexibility, interchangeability, resiliency, security, and availability needed in a modern enterprise-level BI architecture (Dennis et al., 2010; Ashrafi et al., 2009; IBM Global Technology Services, 2007). A few academic researchers have proposed business intelligence success frameworks from a managerial perspective (Yeoh and Koronios, 2010; Arnott, 2008; Dinter et al., 2009). However, none of the past studies was carried out in the context of business intelligence architecture from a technical perspective, nor from the SOA concept. The effectiveness of a business intelligence architecture might be influenced by internal and external factors. Therefore, this study aims to investigate the key technical factors that influence the implementation of an enterprise-scale SOA-based BI architecture.
In the remainder of this paper, we first outline the typical BI architectures from major practitioners and propose a conceptual model of SOA-driven business intelligence architecture. Section three provides a matrix of factors studied in prior research and presents the analysis of factors derived from both academic and practitioner literature. We then propose and discuss a new set of key technical factors for the implementation of a SOA-based BI architecture in section four. The final section provides the conclusion and indicates future research on this subject. A Conceptual Model of SOA-Driven BI Architecture Erl (2005) defines service-oriented architecture (SOA) as a group of well-defined services which can be merged, reused, and communicated with each other over networks. The introduction of service-oriented architecture changes many business operations into reusable services which are accessible over a network on demand (Erl, 2005; IBM, 2007). Within a SOA environment, one of the advantages is that individual services can be accessed without knowing the underlying system platform (Kodali, 2005). Nickull (2005) argues that reusability and repurposing are the main reasons for adopting SOA in implementing enterprise-scale BI systems. SOA enables low-cost system development with good system quality because it is able to provide a flexible and standardised architecture that supports data sharing and the integration of various systems. Other advantages of SOA include flexibility, responsiveness, reusability, ease of connection, development cost reduction, and agility (Erl, 2005; Dinter, 2009; IBM Global Technology Services, 2007; García-González, 2009). One of the main reasons for BI project failure is the selection of inappropriate BI tools that fail to meet the specified business requirements. Therefore, Ponniah (2001) recommends that enterprises should design the architecture first, and only then select the tools to match the functions and services stipulated for the architectural components. According to Friedman et al. (2004), there is limited awareness of the value of architecture in the BI context. This creates challenges in data quality for many enterprises, and the "hidden" aspects of BI, such as reliability, scalability, and flexibility, are not being emphasized. Davenport et al. (2007) assert that developing a robust BI system is more than just gathering and storing huge amounts of data, since it involves facets such as data quality, business processes, incentives, skills, organisational cultures, and sponsorships. Therefore, a BI architecture should be considered first when scoping a BI solution, so that a more appropriate BI solution can be developed to meet the actual needs of an enterprise. This finding is supported by Friedman et al. (2004), who state that architecture is the foundation of BI and that paying attention to the architecture will ensure success in BI implementation. Watson et al. (2005) also emphasize that architecture has a significant impact when users can easily access the business data, which in turn leads to improved decision-making capabilities. In order to meet the requirements of an emerging business environment, a BI architecture should be designed in a flexible and adaptable manner (Davenport et al., 2007). Invariably, an agile BI architecture design will better serve the needs of an ever-changing business environment.
According to Ponniah (2001), architecture is the combination of different unique components which provide a means to store data and deliver information to users. Architecture is a template that contains the rules and functions used for serving business requirements. The various elements in an architecture, such as the standards, measurements, designs, and other supporting techniques, aim to enable the smooth flow of data from the source to the destination within a framework (Ponniah, 2001). A BI architecture that comprises different systems, applications, and processes in an enterprise enables decision makers to access valuable information for complex analytical processes. The BI architecture should be able to provide the correct information promptly in support of the decision-making process. Hence, the information must be distributed via multiple channels including emails, page alerts, spreadsheets, analytic queries, scorecards, and dashboards (Davenport et al., 2007). In other words, BI architecture is anything that enables the transformation of data into useful information in the decision-making process, so as to acquire evolving business advantages. Howson (2007) asserts that a fundamental architecture is the essential element for all BI system deployments. A lower-level BI architecture can be formed by transactional systems and front-end tools. On the other hand, a higher-level BI architecture deployment may consist of data marts, data warehouses, ETL (extract, transform and load) tools, and BI front-end tools. A study was separately performed on the BI architectures of two major BI vendors, namely IBM and Microsoft. We find that both of the proposed architectures consist of five similar layers: a data access/presentation layer, a data analysis layer, a data repository/data storage layer, a data integration layer, and a data source layer (IBM, 2004 and Microsoft, 2006). Moreover, Cognos (2008) provided a business intelligence architecture example which is built on the service-oriented concept. However, Cognos's BI architecture deploys three distinct tiers to deliver BI capabilities, namely a presentation tier that handles interaction with users over the network; an application tier that handles all BI processing with purpose-built services; and a data tier that provides access to various data sources (Cognos Corporation, 2008). Data Integration Layer: BI often involves the analysis of aggregated and integrated data from various operational systems. Data is extracted, cleaned, profiled, staged, and filtered from the operational systems before it is loaded into a data warehouse. A data integration layer primarily consists of the ETL (extracting, transforming, and loading) process (Howson, 2007); a minimal sketch of such a step is given below. Data Source Layer: Data are collected from various resources at the data source layer, which spans different sources, different platforms, and different operating systems. This could be operational data generated by internal departments, unstructured data, informational data, or external data. Operational data can be generated by internal operational systems as well as external enterprise systems such as enterprise resource planning, customer relationship management, and supply chain systems. Unstructured data may also come from external sources in various data formats, such as from suppliers, customers, or any other information on the Internet.
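To make the ETL responsibility of the data integration layer concrete, the following minimal sketch extracts rows from a synthetic, illustrative operational store, cleans and aggregates them, and loads the result into a warehouse table. The schemas are assumptions for illustration; real deployments would use dedicated ETL tools for these steps.

import sqlite3

# Minimal ETL sketch for the data integration layer.
# Both databases are in-memory and the schemas are illustrative, not from the paper.
src = sqlite3.connect(":memory:")   # stands in for an operational system
dwh = sqlite3.connect(":memory:")   # stands in for the data warehouse

src.execute("CREATE TABLE orders (region TEXT, amount REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [("north", 120.0), ("north", 80.0), ("south", 200.0), (None, 50.0)])
dwh.execute("CREATE TABLE sales_fact (region TEXT, total REAL)")

# Extract: pull raw transactional rows from the operational source
rows = src.execute("SELECT region, amount FROM orders").fetchall()

# Transform: profile/clean (drop incomplete records) and aggregate by region
totals = {}
for region, amount in rows:
    if region is None or amount is None:
        continue  # cleaning/filtering step before the warehouse load
    totals[region] = totals.get(region, 0.0) + amount

# Load: write the aggregated facts into the warehouse table
dwh.executemany("INSERT INTO sales_fact VALUES (?, ?)", totals.items())
dwh.commit()
print(dwh.execute("SELECT * FROM sales_fact ORDER BY region").fetchall())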
Literature Review of Factors for SOA and Software Architectural Design There is limited academic research into the technical factors that affect SOA-based BI architecture implementation. As depicted in the factor matrix, the factors studied in prior research include the following. Agility: the extent to which the underlying architecture must be able to adapt to changes in enterprise business strategy (Grigoriu, 2006; Lewis, 2009). Manageability: the extent to which an administrator is able to manage the system effectively by identifying potential issues proactively, before they become problems, and maintaining optimal system performance. Security: the capability to ensure the system is protected from loss and unauthorized access to information, as well as from the possibility of a malicious code attack. Response Time: the extent to which the system must perform its functions within a specific duration. Availability: the extent to which a BI system has to operate on a 24x7 basis, with the exception of scheduled maintenance. Scalability: the extent to which, as a BI system grows in the fulfillment of its functions and sophistication, the application must remain responsive. Operational Dimension Operational requirements specify the operating environment in which a BI system must perform over a period of time (Dennis et al., 2010; Ashrafi et al., 2009; Bennett et al., 2006). Referring to the past studies shown in the matrix, interoperability is one of the major benefits of SOA because it enables services to interoperate seamlessly on different platforms. O'Brien et al. (2007) emphasize that a SOA architecture should be designed and evaluated with care in order to avoid any drawbacks in performance, since interoperability may cause performance overhead. Besides that, location transparency in SOA is able to improve system performance via the availability of a system when services are available in multiple places and transparent to the users. Usability defines how well the application meets user requirements. O'Brien et al. (2007) indicate that the distributed-computing nature of SOA has a significant impact on usability when a user request to remote service providers is processed. According to Clements et al. (2002), modifiability is the ability of a system to make changes promptly and cost-effectively. This factor is semantically similar to agility. Both O'Brien and Meier note that this is an important factor so that a software system can continually be sustained in an ever-changing business environment. Manageability is the extent of how easy it is to manage, monitor, and debug an application (Meier, 2009). Manageability is a run-time quality of the system. Meier (2009) identifies three key issues for manageability: lack of information on monitoring, tracing, and diagnostics; lack of runtime configurability; and lack of troubleshooting tools. O'Brien et al. (2007) further assert that security is a major concern for SOA and Web Services; architects should pay attention to factors that directly impact security. Dong et al. (2008) mention that a service-oriented system is able to support fault detection and authentication capabilities, as well as the automation of trust negotiation. Security is associated with three core areas, namely confidentiality, integrity, and authentication, which can be addressed by a strong security protocol such as Secure Sockets Layer (SSL). However, distributed components in SOA can enhance scalability and manageability while cutting costs.
Performance Dimension Performance requirements specify the response time and the input-output volume that the system can handle within a particular timeframe (Dennis et al., 2010; Ashrafi et al., 2009; Bennett et al., 2006). O'Brien et al. (2007) highlight that, prior to a system implementation, the SOA architecture should be designed and evaluated, because performance has a negative impact on SOA in most cases. However, the key factors that contribute to performance issues mentioned by O'Brien et al. are more related to the technical perspective, such as the use of an interaction protocol, the use of the Simple Object Access Protocol (SOAP), and the use of a standard messaging format via the Extensible Markup Language (XML). In addition, availability, reliability, and scalability are common factors in both SOA and software architecture design leading to the success of a SOA-based system. We propose that response time, throughput, availability, reliability, and scalability be mapped into the performance dimension. O'Brien et al. (2007) point out that availability is a concern for the success of a SOA from the user's and provider's perspectives. According to Meier (2009), failure of a physical tier such as the database server or application server may lead to failure of the entire system. Availability defines the proportion of time that the system is functional, measured as the percentage of total system downtime over a predefined period. Liu et al. (2007) assert that availability is the quality attribute of whether the web service in SOA is ready for use. Availability may be affected by system and infrastructure faults, malicious code attacks, and system load (Meier, 2009). The implementation of distributed transactions involves services that may be implemented in different languages and platforms; thus it is critical for a SOA-based system to operate correctly without failure (O'Brien et al., 2007). Meier (2009) defines reliability as the capability of a system to remain operational without failure over a specified period. Availability can be affected by system errors, infrastructure problems, malicious code attacks, and system workload. O'Brien et al. (2007) emphasize that, since Web services technology does not offer any inherent scalability feature, an increasing number of users or growing system size and volume without performance degradation can be a major issue in SOA. Meier (2009) also addresses scalability as a critical issue, because the system is supposed to be able to function well when there are changes to the load or demand. Response Time: the extent to which the system must perform its functions within a specific duration (Fraiß et al., 2009; Ashrafi et al., 2009; Dennis et al., 2010; Mulik et al., 2009). Throughput: the rate at which incoming requests are completed, which may be measured as the number of transactions per second (Dennis et al., 2010; Mulik et al., 2009). Scalability: the extent to which a BI system must support tens of thousands of users and terabytes of data across a global organisation in a linear fashion (Ashrafi et al., 2009; Dennis et al., 2010; Fraiß et al., 2009; O'Brien et al., 2007).
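Response time and throughput, as defined above, are directly measurable quantities. The sketch below shows how they are typically computed; the request handler is a hypothetical stand-in, since no concrete service is specified in this study.

import time

def handle_request(x):
    """Stand-in for a BI service call; a real system would invoke a remote service."""
    return sum(i * i for i in range(10_000)) + x

N = 200
latencies = []
start = time.perf_counter()
for i in range(N):
    t0 = time.perf_counter()
    handle_request(i)
    latencies.append(time.perf_counter() - t0)  # per-request response time
elapsed = time.perf_counter() - start

print(f"mean response time: {1000 * sum(latencies) / N:.2f} ms")
print(f"throughput: {N / elapsed:.0f} requests/s")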
This paper has presented a conceptual model of business intelligence architecture built on the SOA concept. Drawing on academic and practitioner literature related to SOA and software architectural design, a set of key technical factors that may influence the implementation of a SOA-based BI architecture has been proposed and discussed. Commonality, openness, and single-open-API factors have also been suggested in this paper, subject to further empirical study. Amongst the findings, the research indicates that the technical factors generally fall into two major dimensions: an operational dimension and a performance dimension. Although industry has placed more focus on the implementation of SOA-based BI architecture, the contemporary approach to offering flexible BI solutions, the literature review indicates little academic research on the key technical factors involved in successful SOA-based BI architecture implementation. Therefore, this research provides a fundamental reference point for further empirical research to shed light on where and how the technical factors influence the implementation of a SOA-based BI architecture. Figure 1. Conceptual SOA-Based BI Architecture. Presentation Layer: The top layer is the data presentation or data access layer, which provides access to the users via various means such as host computers, portals, web services, or even handheld mobile devices. Furthermore, users are able to view reports in various formats such as charts, tables, reports, dashboards, scorecards, and aggregated metrics. The reports are equipped with navigation capabilities like drilling, pivoting, and drag-and-drop so that users can seamlessly analyse the data in any report.
3,571.4
2012-01-01T00:00:00.000
[ "Business", "Computer Science" ]
Transforming community prevention systems for sustained impact: embedding active implementation and scaling functions Traditional efforts to translate evidence-based prevention strategies to communities, at scale, have not often produced socially significant outcomes or the local capacity needed to sustain them. A key gap in many efforts is the transformation of community prevention systems to support and sustain local infrastructure for the active implementation, scaling, and continuous improvement of effective prevention strategies. In this paper, we discuss (1) the emergence of applied implementation science as an important type 3-5 translational extension of traditional type 2 translational prevention science, (2) active implementation and scaling functions to support the full and effective use of evidence-based prevention strategies in practice, (3) the organization and alignment of local infrastructure to embed active implementation and scaling functions within community prevention systems, and (4) policy and practice implications for greater social impact and sustainable use of effective prevention strategies. INTRODUCTION The translational model of prevention science introduced in this special issue offers a novel and important step forward by incorporating the implementation and scaling of rigorously tested prevention strategies within local, state, national, and global prevention systems (see types 3-5 translation, Table 1). The ultimate success of a translational process cannot be measured by the effectiveness of prevention programs, practices, and policies alone but must also include the ability to bring the full experience of effective prevention strategies to children, families, and communities and achieve intended well-being outcomes at scale. What makes the later stages of the translational process complex is that community prevention systems are often sizable, loosely structured configurations of community service organizations and public service agencies. These organizations and agencies employ a workforce with diverse types and levels of training; mix public and private, for-profit and not-for-profit organizational structures with diverse funding sources; and are regulated by an array of different local, state, and federal policies. In many communities, the prevention system may lack formal recognition or visibility. The effective implementation and scaling of prevention science in our communities and the realization of intended benefits have generally been elusive outside of pockets of excellence with unique conditions that are often difficult to replicate (e.g., unusual financial, service system, and human resource factors). Prevention science is not alone in this difficulty. The literature on adopting evidence-based human services interventions has shown that outcomes are not routinely reproduced in real-world service settings [1-4]. Where effective innovations are adopted, disparate outcomes between research trials and real-world use are often due to implementation concerns: difficulties supporting delivery of the innovation(s) as intended or at scale within naturally occurring service environments [5-11].
Implications
Practice: To achieve reliable delivery of evidence-based prevention strategies and intended impact on well-being, community prevention systems need to organize and align infrastructure to ensure the presence and quality of active implementation and scaling functions.
Policy: Funders and policymakers need to allocate sufficient time and resources to sustainably organize and align team-based implementation capacity within community prevention systems.
Research: When interpreting the effectiveness of evidence-based prevention strategies in community prevention systems, researchers need to consider the presence and quality of active implementation and scaling functions.
Effective implementation has been identified as an essential yet often neglected component of the science-to-service translation process [6,[12][13][14]. Far too often, science-to-service translation has relied only on the processes of diffusion (i.e., the passive spread of intervention knowledge) or dissemination (i.e., the persuasion of a group to adopt an intervention with the complement of basic training programs and materials) [15,16]. These processes are often insufficient to bring about sustainable support for innovative strategies within long-standing, complex service systems. Furthermore, they place accountability for the consistent, intended delivery of innovative practices largely on practitioners with only the limited knowledge, skills, and abilities they may acquire from the diffusion or dissemination process. In contrast, creating meaningful and lasting change involves dynamic strategies to transform complex service systems into suitable host environments for effective prevention strategies [15,[17][18][19]. Successful and sustainable implementation of effective prevention strategies, and the achievement of expected well-being outcomes at scale, require the alignment of a supportive network of community prevention organizations that transfer accountability from practitioners to the community prevention system as a whole. APPLIED IMPLEMENTATION SCIENCE Given the lack of success with traditional diffusion and dissemination processes, over the past decade, applied implementation science has started to coalesce around core active implementation processes [12,14,15,20,21]. This focus has largely emerged from the identification of processes associated with more effective implementation and greater realization of expected outcomes [6,13]. Synthesizing the extant literature in applied implementation science and the experiences of a broad range of stakeholders, the National Implementation Research Network developed five Active Implementation Frameworks to describe and guide the process of actively implementing evidence-based interventions at scale in typical human service settings. The frameworks-Usable Interventions, Implementation Teams, Implementation Drivers, Implementation Stages, and Improvement Cycles-are well detailed by Fixsen and colleagues [6,13,15] and Metz and Bartley [20]. The Active Implementation Frameworks offer essential concepts and strategies for effectively moving type 2 translational prevention science into practice within real-world community prevention systems. ACTIVE IMPLEMENTATION AND SCALING FUNCTIONS Applying the science of active implementation throughout complex systems environments can be challenging.
However, when successful, our experience is that the application of the concepts and strategies within the Active Implementation Frameworks results in an array of coordinated system functions being embedded in community prevention systems to actively support implementation and scaling. As illustrated in Fig. 1, these active implementation and scaling functions, organized within three sets and detailed in Table 2, nest within and rely on each other. Together, they create a nurturing environment for the consistent delivery of effective prevention strategies to achieve social impact. Furthermore, early consideration of these functions allows community leaders and partners, policymakers, and funders a way to consider function before committing to form in transforming their community prevention system to successfully host effective prevention strategies-whether individually or in multicomponent packages. For example, implementation teams with clear roles and responsibilities can be created or repurposed around the functions while attending to existing features of the community prevention system, such as size, history, resources, culture, and political and social complexities. As they better understand these functions and their integration, community leaders and partners are also able to more effectively organize and align cross-system implementation infrastructure, such as recruitment and selection, training, coaching, fidelity assessment, and data collection and monitoring systems. To facilitate understanding of how the active implementation and scaling functions emerge from application of the Active Implementation Frameworks, a crosswalk is provided in Fig. 2.
Table 1 | Types of translation
Type 1 translation (T1): The fundamental process of translating findings and discoveries from social, behavioral and biomedical sciences into research applied to prevention intervention. Moving from bench to bedside.
Type 2 translation (T2): Translation of applied theory to methods and program development. Moving from bedside to practice; involves translation of program development to implementation.
Type 3 translation (T3): Determining whether efficacy and effectiveness trial outcomes can be replicated under real world settings.
Type 4 translation (T4): Wide-scale implementation, adoption and institutionalization of new guidelines, practices, and policies.
Type 5 translation (T5): Translation to global communities. Involves fundamental, universal change in attitudes, policies, and social systems.
Prevention system leadership and coordination
The first set of active implementation and scaling functions, collectively organized under "prevention system leadership and coordination" in Table 2, incorporates many elements of a public health approach to prevention and social inequity reduction, such as operating under shared vision and principles, active community involvement, prioritizing and advancing chosen prevention strategies through diverse access points, and the utilization of ongoing assessment and data-driven decision-making [17,22]. Given the size and often loose structure of community prevention systems, the successful implementation and scaling of effective prevention strategies require the alignment of collaborative organizations with coordinated leadership and accountability. The scope of community-wide prevention also requires considerable support and management on a day-to-day basis. Thus, within prevention system leadership and coordination, there are three classes of functions: executive, cross-system, and day-to-day.
Table 2 | Active implementation and scaling functions within a community prevention delivery system
Prevention system leadership & coordination
Executive: 1. Demonstrate ongoing commitment to the implementation and scaling of community prevention strategies to achieve intended outcomes for community youth and families. 2. Demonstrate ongoing commitment to community partnerships to ensure that multicultural values and experiences are incorporated into practice and system changes. 3. Create appropriate opportunities for change within the community prevention delivery system. 4. Nurture systems change once it is underway.
Cross-system: 1. Select community prevention strategies to respond to identified community needs. 2. Align community prevention strategies under a common approach to implementation. 3. Select and align community prevention delivery agencies to attain community-wide reach. 4. Review and recommend solutions to shared implementation barriers and system needs, incorporating the perspectives of key prevention system and community partners. 5. Facilitate and normalize communication about systems changes and successes among and across all stakeholders and community members.
Day-to-day: 1. Ensure that community prevention strategies are teachable, learnable, doable, and assessable in practice. 2. Assess and create ongoing "buy-in" and readiness across the community prevention delivery system. 3. Install, ensure the aligned operation of, and sustain implementation infrastructure and best practices. 4. Develop and implement action plans to manage stage-based work. 5. Ensure the use of data, including fidelity and outcome data, across the community prevention system for continuous improvement. 6. Involve key prevention system and community partners, including youth and families, in implementation activities and decision-making for system improvement. 7. Organize and direct the day-to-day flow of information to support implementation. 8. Identify and address implementation barriers and ensure the spread of solutions to support successful implementation.
Prevention strategy delivery support
Practitioner competency and confidence: 1. Select practitioners who demonstrate alignment with the philosophy, values, and principles of chosen community prevention strategies. 2. Develop practitioners' initial knowledge, skills, and abilities to deliver chosen community prevention strategies as intended. 3. Improve practitioners' ongoing ability to effectively deliver community prevention strategies across diverse families and contexts.
Quality and outcome monitoring for system improvement: 1. Assess whether the core components of the community prevention strategies are consistently being delivered as intended. 2. Gather, manage, and report data about community prevention strategies and their implementation to inform ongoing decision-making and continuous quality improvement.
System-wide
Ongoing learning: 1. Prioritize learning for continuous improvement. 2. Value community youth and families' preferences and experiences. 3. Use data to make decisions. 4. Take time to identify and build readiness for the next right steps.
Active problem solving: 1. Identify local administrative and service delivery needs and respond with facilitative solutions. 2. Identify prevention system needs and advocate for appropriate solutions with system partners. 3. Use appropriate technical and adaptive strategies to respond to prevention system and service delivery challenges. 4. Communicate purposefully and regularly to nurture engagement across the community prevention system.
Executive leadership and coordination-The active implementation and scaling functions for executive leaders within the community's prevention system ensure an ongoing commitment that chosen prevention strategies will be effectively and sustainably supported. Those with executive leadership often have the power to re-direct resources, modify staff position descriptions, change the way community organizations work together, raise awareness and support for transformational change within a community, and create an active, involved partnership with community youth and families. By demonstrating a sustained commitment to the implementation and scaling of chosen prevention strategies, incorporating diverse cultural values and experiences from the community, and creating and nurturing the systems changes needed to support chosen prevention strategies, executive leaders across community sectors create a hospitable climate for innovation [15,23,24]. The climate that is fostered by these leaders will go a long way toward either ensuring the success and sustainability of chosen community prevention strategies or allowing support for chosen strategies to drift or collapse over time [25][26][27].
Cross-system leadership and coordination-Cross-system active implementation and scaling functions ensure the alignment of prevention system activities, stakeholders, and community members. Prevention strategies must be chosen to respond to identified community well-being needs and to work in harmony under often limited resources. Prevention delivery agencies need to be recruited into community prevention coalitions under a common vision and strategic plan for the implementation and scaling of chosen prevention strategies and to create optimal reach into the community via diverse access points (e.g., education, mental health, primary care, and media). These agencies can benefit from shared implementation infrastructure for chosen prevention strategies, particularly related to the training and coaching of practitioners, fidelity assessment, ongoing data collection and analysis, media, and system leadership and technical support. By sharing implementation infrastructure, common implementation barriers and system needs can be identified, leading to greater system efficiencies as solutions spread across partner agencies [15]. Finally, those ensuring cross-system implementation functions are tasked with facilitating and normalizing communication about prevention system practice and policy changes to create awareness for the next steps ahead and share successes [15]. Cross-system leadership and coordination functions have been captured within structures such as Communities That Care community prevention boards [28] and Community Development Teams, originally developed by the California Institute of Mental Health and later used to support the implementation of Multidimensional Treatment Foster Care in California [29,30].
Day-to-day leadership and management-Often, the most intensive work occurs in attending to day-to-day active implementation and scaling functions. These functions require close, ongoing leadership and management from dedicated implementation team members serving the community prevention system.
These teams are tasked with ensuring that chosen prevention strategies are usable [31]; creating readiness for new behaviors among system stakeholders; actively involving community partners; and installing, ensuring the aligned operation of, and sustaining implementation infrastructure to support chosen strategies [15]. This work entails strong organizational and system development skills, including using data for setting priorities and continuous quality improvement, engaging in active problem solving, and developing trusted relationships with diverse stakeholders with sometimes competing agendas [24,32,33]. Moreover, managing these functions requires commitment to a shared community vision, a tolerance for ambiguity and slow pace, risk-taking, and the ability to create and manage action plans in accordance with identified stages of implementation [15,20,[32][33][34]. In North Carolina, several counties scaling the Triple P-Positive Parenting Program-system of interventions are attending to these day-to-day functions using teams of county public health managers with the collective skills and expertise to actively drive implementation across coalitions of public agencies, pediatric clinics, schools, and other community organizations.
Prevention strategy delivery support
Represented in Fig. 1, the prevention system leadership and coordination functions provide a nurturing environment for the second set of active implementation and scaling functions, collectively organized under "prevention strategy delivery support" in Table 2. This second set of system functions, grouped into two classes, directly supports the consistent and effective delivery of prevention strategies to achieve social impact.
Practitioner competency and confidence-The first class of system functions ensures the competency and confidence of practitioners to deliver chosen prevention strategies as intended and with expected benefits to community youth and families. To start, practitioners with sufficient capacity must be recruited or selected to deliver chosen prevention strategies to create community-wide reach. Having a priori selection criteria that go beyond general qualifications and experiences to evaluate the fit of candidates with the philosophy, values, and principles of chosen prevention strategies may greatly facilitate practitioner effectiveness with chosen strategies, reduce resistance to change in the community prevention system, and prevent practitioner burnout and turnover. Although individual community agencies often play a strong role in the selection of their practitioners to participate in new programs, community prevention system leaders can successfully advocate for the cross-system use of selection best practices and selection criteria that are aligned with adopted prevention strategies. Next, selected practitioners need effective training, using adult learning best practices [35], to deliver chosen prevention strategies.
Trainings can be well done when outsourced to a qualified prevention strategy developer or purveyor, although local plans are often needed to align resources and timelines for training large numbers of practitioners across multiple agencies and prevention strategies. On the other hand, some prevention strategy developers and purveyors afford the opportunity to develop local training capacity through trainer certification processes. These opportunities may have long-term benefits but also require additional considerations, such as the selection, training, and ongoing quality assurance of locally certified trainers. After training, in-service coaching of practitioners is essential to increase practitioners' use of new strategies [36] and for improving their effectiveness across diverse youth and families [37][38][39]. Ongoing coaching of practitioners may be accomplished through crosssystem coaching sessions, integrating coaching for new prevention strategies within existing agency supervision practices, or some combination. Regardless, coaching service delivery plans can be developed and implemented to ensure that practitioners across the community prevention system receive regular and effective coaching. Quality and outcome monitoring for system improvement-The second class of system functions that directly supports the delivery of prevention strategies involves quality and outcome monitoring for continuous improvement. To ensure the intended delivery and optimize the benefits of chosen prevention strategies within dynamic community and service system environments, implementation processes and prevention strategy outcomes must be continuously monitored and improved [24]. This starts with ongoing fidelity assessment to ensure that the core components of chosen prevention strategies [31] are consistently being delivered across community youth and families. Aarons and colleagues [40,41] demonstrated that fidelity monitoring of evidence-based practices in the context of supportive coaching may not increase burden or burnout among practitioners and may actually increase practitioner retention over time. Furthermore, if program developers have established a link between core intervention components and outcomes, fidelity data allows outcomes to be appropriately interpreted: Favorable outcomes may be attributed to prevention strategy effectiveness (in the known presence of core intervention components) and unfavorable outcomes may be correctly attributed to either a problem with the strategy's effectiveness (in the known presence of core intervention components) or an implementation problem (in the known absence of core intervention components) [31]. Across the community prevention system, practical and efficient fidelity assessments must be developed or adopted for each strategy. Additionally, system-wide policies for reporting fidelity data are needed to inform quality improvement efforts. Additional implementation data, such as the effectiveness of practitioner training and coaching practices, the quality of the implementation climate set by system leadership, and the effectiveness of implementation support teams, can also be used to ensure highquality and sustainable implementation. Youth and family outcome data must be shared across the community prevention system to ensure that socially meaningful benefits are being realized. This may require data sharing agreements that protect privacy and comply with federal and state guidelines. 
Community prevention systems also need to secure the human resources and information technology needed to support the ongoing collection, analysis, and use of data for local decision-making and system improvement.
System-wide
All staff within a community prevention system, from front-line practitioners to executive leaders, share a responsibility to create and nurture a culture that supports the successful implementation and scaling of community prevention strategies. Such a culture is enabled by ongoing attention to several functions that are shared at all levels of the community prevention system. This last set of system functions, collectively organized under "system-wide" in Table 2 and depicted as the broader sphere of influence in Fig. 1, is also grouped into two classes.
Ongoing learning-First, all community prevention system staff must prioritize their own learning to continually improve those aspects of the system over which they have direct control or influence. Implementing innovations requires an element of risk-taking that can be mitigated by systematic learning and improvement [24], whether at the practitioner level when delivering a new prevention strategy to a community youth or family, at the managerial level when supporting new practice routines, or at the executive level when adjusting agency policies and resources. Across all staff, valuing community youth and family preferences and experiences and using available data must support ongoing learning and decision-making. Listening to diverse community voices and engaging as partners in implementation is necessary in order to collectively move forward with change. Furthermore, timely data can be powerful in making prevention system functioning more objective and transparent to enable continuous quality improvement. Finally, change is not incumbent in human systems [19]. Care must be taken to assess and prepare colleagues and partners for next steps in the implementation and delivery of new prevention strategies.
Active problem solving-All staff within a community prevention system must also share responsibility for local problem solving and regular, purposeful system communication. Everyone within a community prevention system has local responsibilities and activities over which they have some control, such as practitioners' scheduling of client services, and organizational or systems practices over which they have less influence, such as practitioners' compliance with mandated service records. To support the successful implementation and scaling of innovative prevention practices, all system participants must be willing to develop and implement supportive solutions to local challenges over which they have direct control and communicate and advocate for supportive solutions to system needs over which they have little or no influence. These activities require staff to be leaders within their own corner of the community prevention system, including diagnosing the adaptive (i.e., unknown or complex) and technical (i.e., known or routine) elements of problems and responding with appropriate strategies [19]. When active problem solving and communication functions are carried out system-wide, they promote practice-policy communication cycles [15,20] by which common implementation barriers and system needs are identified and the spread of solutions to support the successful and sustainable implementation of new prevention strategies is enabled.
ORGANIZING AND ALIGNING INFRASTRUCTURE TO ENSURE ACTIVE IMPLEMENTATION AND SCALING FUNCTIONS Given the diversity of community prevention agencies and contexts, finding the right individuals, teams, and processes to support the full array of active implementation and scaling functions can be an early and evolving challenge. However, nurturing a collaborative network of community organizations to deliver the chosen prevention strategies and share accountability for all system functions is essential to achieving large-scale impact. Clearly understanding the scope of the implementation and scaling initiative and the breadth and complexity of the community prevention system involved will help define the challenge and inform the work to be done. An early task in organizing and aligning system infrastructure may be to leverage agencies ready for the tough work ahead while fostering broader system readiness. Agency readiness may include commitment to the shared framework for community prevention, a pledge of staff time and agency resources, the willingness to adjust agency practices and policies to support collective needs and goals, and the commitment of executive leaders to nurture strong working relationships across the community prevention system. Before engaging in community-wide initiatives to implement and scale innovative prevention strategies, some community agencies may benefit from strengthening general organizational skills and abilities [42]. Based on their prior studies, Prochaska, Prochaska, and Levesque [43] suggested that only about 20 % of individuals within organizations may be ready to change. Thus, building readiness for change is likely to be an important and ongoing effort in any community prevention initiative. The co-creation process Among those involved in the implementation and scaling of community prevention strategies, the organization and alignment of implementation infrastructure to successfully support community prevention strategies are processes of co-creation. As described by Metz and Albers [32], funders and policymakers; program developers and purveyors; and local agency leaders, stakeholders, and community members play collaborative roles in the creation of visible implementation infrastructure. While local agency leaders, staff, and community members may have the most intensive roles and ultimate ownership of community implementation infrastructure, strong program purveyors or intermediaries [6] and the active commitments of federal, state, and/or local funders and policymakers to support and sustain community prevention strategies are essential. Purveyors and intermediaries facilitate the local usability of prevention strategies while supporting integrity in the delivery of core intervention components that are associated with expected outcomes. Funders and policymakers actively partner with communities to address system barriers and ensure sustainable resources. In addition to these partners, communities may benefit from external active implementation technical assistance as they shift from traditional methods of diffusion and dissemination to more purposeful and effective implementation strategies [15]. Each of these co-creation partners must sustain their involvement through full implementation of the chosen prevention strategies, although their roles may change in form or intensity over time. 
Furthermore, as a part of the community prevention system, each co-creation partner, and the community as a whole, benefits from ongoing learning and active problem solving processes. Thus, they share responsibility for the system-wide functions described in Table 2.
A robust exploration process
The exploration stage of implementation includes assessing the requirements for implementation and scaling, including system barriers that must be addressed [20]. During a robust exploration process, several questions guide the organization and alignment of implementation infrastructure. First, with the full array of active implementation and scaling functions in mind, partners in the co-creation process must ask and answer questions related to who and where. That is, who will be involved in managing and completing activities related to each group of system functions, and where will accountability for each group of system functions live within the community prevention system? If a centralized team in the community prevention system ensures many of the functions, that team must have the capacity to be responsible for those functions across all agencies in the prevention system. If agencies ensure some functions locally, partners must ensure that agency teams supporting such functions remain aligned with and accountable to the overall prevention system. It is also not uncommon for activities related to some functions to be shared across multiple levels of a community prevention system. For example, a centralized implementation team may coordinate and schedule training events for new community prevention strategies, a program purveyor may deliver the training, and service agencies may allocate time and reimburse transportation for practitioners to attend trainings. Building a common understanding of and responsibility for these roles may challenge established ways of work, but linking multiple levels of support can ensure that the right partners are available to address emergent barriers in a timely way. Leadership and implementation team structures [15,20,44] with clear articles of organization and purpose, such as terms of reference or memorandums of agreement, and cross-system communication protocols that outline expectations about the frequency and purpose of communication may facilitate such understanding [15]. In addition to formalizing answers to questions about who, where, and how to share, partners must also define plans to carry out activities related to the prevention strategy delivery support functions. Plans for ensuring the recruitment and selection, training, and coaching of practitioners will need to be developed and documented, as well as plans for the collection, sharing, reporting, and use of data to monitor and continuously improve implementation processes and youth and family well-being outcomes. Together with articles of organization for system leadership and implementation teams, these plans will help answer questions about the resources that will be needed to support the implementation, scaling, and delivery of chosen prevention strategies. In addition, during the development of such articles and plans, it is common for complex, adaptive challenges to be identified, such as how to remove practitioners from the front lines to attend training and coaching meetings or how agencies may participate in shared data collection and reporting.
Such challenges have no easy solution [19,45], and strategies to manage each challenge must be developed and implemented. Finally, during the exploration stage of implementation, the early and active involvement of community members, including youth and families who will be served, and front-line agency staff is essential. Community members and agency practitioners are positioned at the nexus of the implementation process, and they often have unique perspectives on system needs and gaps that may prevent the high-quality delivery and sustainability of community prevention strategies [24]. They may also be able to identify complex issues that are more difficult for system or agency leaders to see, such as how families may be unintentionally impacted by certain community prevention strategies. When community members and practitioners are involved early, they can become advocates for chosen community prevention strategies and for effectively organizing and aligning infrastructure to transform the community prevention system.
Getting started with leadership and an implementation team
Given their extent and scope, many stakeholders struggle to get started with the process of organizing and aligning infrastructure to support the active implementation and scaling functions. The identification of a first-generation community implementation team [15,20,29,44] is often an effective place to start; such a team takes accountability for ensuring the active implementation and scaling functions in Table 2 [15,20]. Community implementation teams provide the mechanism by which accountability for high-quality evidence-based prevention services is intentionally transferred from practitioners alone to the community prevention system itself. Moreover, emerging evidence suggests that implementation teams may be able to reduce the time for evidence-based strategies to fully and effectively reach youth and families [29,46] and increase sustainability of such strategies over time [46].
CONCLUSIONS AND RECOMMENDATIONS
The gap between the processes needed to successfully and sustainably implement and scale effective prevention strategies within community prevention systems and the reality of what is occurring in many local communities poses a challenge for system stakeholders and the youth and families who can benefit from effective prevention. With a focus on closing that gap, applied implementation science and the Active Implementation Frameworks are an important type 3-5 translational extension of traditional type 2 translational prevention science. The active implementation and scaling functions found in Table 2, emerging from experiences applying the concepts and strategies within the Active Implementation Frameworks, acknowledge that many participants at every level of a community prevention system need to be highly engaged, committed, and accountable to make effective use of innovative prevention strategies. This challenge requires the collaborative support of funders and policymakers; program developers and purveyors; technical assistance providers; and local agency leaders, stakeholders, and community members to co-create visible implementation infrastructure for success and sustainability. Policymakers at federal, state, and local levels and other funding entities can respond to the challenge of effective implementation and scaling through the use of funding requirements and program expectations.
For example, a portion of allocated funding could be set aside for the development of local implementation infrastructure, such as leadership and implementation teams, as well as the development of local implementation processes, such as practitioner training and coaching protocols and fidelity assessment and continuous quality improvement systems. Additionally, policymakers and funders may employ funding strategies that allow for stage-based implementation activities (e.g., a planning year) and incorporate realistic periods to achieve full implementation and expected outcomes [34,[46][47][48][49][50]. Also important and challenging for states and communities is how they work together to support prevention. There is growing understanding that state-community partnerships are essential for implementing and scaling innovative prevention strategies. While cascading implementation teams and practice-policy communications cycles from the state capital through to community service agencies are essential elements of these partnerships [15,20], additional research will be helpful in this area. Other recommendations for research include the need for program developers to identify in published articles the core components of their prevention strategies so that community practitioners are clear about which strategy components directly affect intended outcomes and which ones might be flexibly adapted to local community context. Likewise, when reporting the outcomes of implementation trials, researchers would be wise to report fidelity data and the quality of active implementation and scaling infrastructure to support using prevention strategies as intended, as these may greatly influence strategy effectiveness and sustainability. Finally, through the establishment of partnerships reflecting trust and mutual self-interest [33,51], prevention scientists and community stakeholders could increasingly come together to co-create implementation research questions from the lived experience of community preventionists. An area of study, for example, could be the co-creation process and how the expertise of each partner is incorporated and used effectively. Early indications may suggest that the quality of the interactions between co-creation partners is essential for a productive partnership [51]. Compliance with ethical standards: All authors declare that they have no conflicts of interest to report for this manuscript and that they have adhered to appropriate ethical standards. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/ licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
7,635.6
2016-02-03T00:00:00.000
[ "Computer Science" ]
Coarse-Grained Many-Body Potentials of Ligand-Stabilized Nanoparticles from Machine-Learned Mean Forces Colloidal nanoparticles self-assemble into a variety of superstructures with distinctive optical, structural, and electronic properties. These nanoparticles are usually stabilized by a capping layer of organic ligands to prevent aggregation in the solvent. When the ligands are sufficiently long compared to the dimensions of the nanocrystal cores, the effective coarse-grained forces between pairs of nanoparticles are largely affected by the presence of neighboring particles. In order to efficiently investigate the self-assembly behavior of these complex colloidal systems, we propose a machine-learning approach to construct effective coarse-grained many-body interaction potentials. The multiscale methodology presented in this work constitutes a general bottom-up coarse-graining strategy where the coarse-grained forces acting on coarse-grained sites are extracted from measuring the vectorial mean forces on these sites in reference fine-grained simulations. These effective coarse-grained forces, i.e., gradients of the potential of mean force or of the free-energy surface, are represented by a simple linear model in terms of gradients of structural descriptors, which are scalar functions that are rotationally invariant. In this way, we also directly obtain the free-energy surface of the coarse-grained model as a function of all coarse-grained coordinates. We expect that this simple yet accurate coarse-graining framework for the many-body potential of mean force will enable the characterization, understanding, and prediction of the structure and phase behavior of relevant soft-matter systems by direct simulations. The key advantage of this method is its generality, which allows it to be applicable to a broad range of systems. To demonstrate the generality of our method, we also apply it to a colloid-polymer model system, where coarse-grained many-body interactions are pronounced.
INTRODUCTION
Such nanostructured materials present an incredibly large surface-to-volume ratio, which makes them perfectly suited not only for optoelectronic, plasmonic, and photonic applications but also for catalysis, electrodes, and batteries. To prevent aggregation, NCs, such as PbSe, CdS, silver, and gold nanoparticles, are often protected with organic capping layers. For instance, in the case of gold nanoparticles, such molecules are often alkyl thiols.7,11 These ligand monolayers can undergo a temperature-dependent order-disorder transition in solvents like n-hexane or n-hexadecane, which switches the effective coarse-grained (CG) nanoparticle interactions from repulsive to attractive upon lowering the temperature.11,12 The self-assembly of nanoparticles is thus strongly dependent on the ligand type, the molecular solvent, and the temperature. In particular, the effect of the ligand shell on the mechanical and thermodynamic stability of such nanoparticles is not fully understood.13 A better theoretical understanding of the interactions between nanoparticles is thus of paramount importance for predicting the self-assembly process and the resulting structures. Atomistic simulations of ligand-coated nanocrystals may provide more insight but are severely limited by the length- and time-scales that can be achieved with present-day computers. Hence, computational studies of their phase and self-assembling behavior rely heavily on the use of simplified CG models based on effective CG interactions.
A common method employed to obtain information on the effective CG interactions of these complex nanoparticles is to perform potential of mean force (PMF) calculations using computer simulations. In recent years, a substantial number of molecular simulation studies (including both Monte Carlo (MC) and molecular dynamics (MD) methods) on the effective CG pair interactions between alkanethiol-capped nanocrystals in a vacuum and in solvent have been reported.3,7,14,15 Nevertheless, while the effective CG pair interactions are the central focus and typical input of any many-body theory, much less is known about the effective CG triplet and higher-body interactions. Coarse-grained pair potentials are valid for large particle separations, but they break down for very high concentrations and short separation distances. The range of this effective CG pair potential involves a certain length scale, which may be, for instance, the decay length of the attractive van der Waals interactions, the Debye-Hückel screening length for charged colloids, or twice the radius of gyration for star polymers.16-19 Russ et al. have used nonlinear Poisson-Boltzmann theory to calculate the effective CG three-body interactions between charged colloids in a collinear, midplane, and equilateral triangle geometry.20 They found repulsive CG pair interactions and attractive CG triplet interactions. In the work of Von Ferber et al.,16 the authors calculated, by theory and computer simulations, effective CG triplet interactions between star polymer centers in a good solvent positioned at the corners of an equilateral triangle. In both theory and simulations, they observed that the triplet contribution is weakly attractive even at short distances where the triplet overlap is substantial. However, while their theory can be extended to any triplet configuration, the investigation of arbitrary triplet configurations is computationally expensive. A similar approach was employed in the work of Schapotschnikow and Vlugt,18 where the authors performed MD simulations of interacting gold NCs in a vacuum for different sizes and varying ligand lengths (C4 to C12). By computing PMFs for pairs and triplets of NCs, they observed a strong attractive pair interaction due to the van der Waals contributions; the latter case resulted in a positive potential-energy correction of 20% and 40% in triplets consisting of short and long ligands, respectively. Interestingly, they gave estimates of the free energy for different configurations, triangles, and chains and found, in agreement with previous experimental studies, that for long capping molecules, the linear arrangement is energetically preferred over the triangle. The three-body contribution to the free energy has also been estimated for polymer-grafted nanoparticles (NPs) in a polymer matrix.21
To this end, Tang and Arya21 employed the so-called "blue moon ensemble" method. In such a method, constrained MD simulations are used for a test NP interacting with two NPs along a set of reaction coordinates differing in their orientation with respect to the NP-dimer axis. The three-body contribution was found to be repulsive and anisotropic, with the degree of repulsion increasing with the angular deviation from the NP-dimer axis. In a recent work, Liepold and coauthors19 proposed a method to study a pseudoatom model of dodecanethiol-ligated gold core nanoparticles in a vacuum, arranged on a square array with periodic boundary conditions, to extract both the effective CG three- and four-body contributions. They found that the effective CG pair potential of mean force in such a configuration is different from the isolated one. In particular, they observed that the combined three- and four-body contributions present an attractive well, implying that these many-body contributions are of comparable magnitude and opposite sign. The computational works described above represent successful examples of approaches that have been used to obtain information on the effective CG many-body interactions between nanoparticles. However, accurately determining the scalar effective CG two- and three-body interactions of ligand-stabilized nanoparticles remains a formidable task, as it requires the identification of suitable reaction coordinates for effectively integrating the gradients. In the case of a two-body potential of mean force, this can be accomplished by measuring the gradients of the potential of mean force (mean forces) on the nanoparticles at different distances r and integrating these forces along the reaction coordinate r. In the case of three-body potentials of mean force, one is often limited to specific configurations, such as an equilateral arrangement or a linear arrangement of the three particles. Furthermore, the functional dependence of the PMF on the internal coordinates can be quite complex to represent by semiempirical functions, thus limiting their practical use in computer simulations.22 Despite extensive research in the field, full expressions of the effective CG three- or many-body interactions as a function of the coordinates of all nanoparticles have not yet been achieved. In recent years, machine-learning (ML) approaches have been exploited for the construction of effective CG interaction potentials as a function of the local structure.23,24 The majority of these techniques have been developed to speed up ab initio-based MD simulations, where the energy and forces are no longer directly evaluated at every step via costly electronic structure calculations but are instead represented by (generally nonlinear) functions of descriptors of local atomic environments. More recently, such methods have been successfully employed to represent effective CG many-body interactions in a variety of soft-matter systems, such as spherical microgel particles in two dimensions,25 mixtures of colloidal hard spheres and rods with a nonadsorbing polymer,26,27 as well as two-body PMFs of ligand-coated rod-like particles and rod-like microgel particles.27 As demonstrated in these works, ML techniques are a powerful tool for speeding up simulations that consider CG many-body effects. It is important to stress that in the aforementioned systems, the local particle environments have been correlated to the CG interaction potential, which was in those cases a well-defined and accessible scalar function.
In 2017, Botu and co-workers28 introduced an ML approach based on a nonlinear association between atomic configurations and quantum-mechanical forces to construct ML force fields for elemental bulk solids with high chemical accuracy. CG many-body interactions of molecules, based on the relationship between atomic positions and mean forces, have also been developed by John and Csányi using Gaussian process regression.29 More recently, Gautham and Patra proposed a deep learning framework to learn the interactions between a pair of single-chain grafted spherical nanoparticles from their molecular dynamics trajectory,30 and Koller et al. have proposed a bottom-up coarse-graining method that combines classical force-matching with deep generative modeling, which has been shown to produce computationally efficient CG models that can capture the folding and unfolding transitions of small proteins.31 Here, we introduce a thermodynamically consistent coarse-graining approach, where we match the effective CG forces acting on CG sites with the vectorial mean forces extracted from high-resolution simulations. In the case of typical truly force-based force fields, computing the (potential/free) energy, required for Monte Carlo simulations, necessarily relies on the definition of a pathway connecting the different configurations in phase space (either in time or along a reaction coordinate) in order to accurately carry out a force integration. This is a limitation of using a truly force-based force field, wherein one cannot predict (potential/free) energies by simply choosing two arbitrary points in phase space.28 Our proposed ML strategy overcomes such a limitation, as one can directly access the scalar effective CG many-body PMF without resorting to force integration or thermodynamic integration. To this end, we represent the effective CG forces, i.e., gradients of the potential of mean force or of the free-energy surface, by a simple linear model in terms of gradients of structural descriptors, which are scalar functions that are rotationally invariant. Due to the linearity of our model and the way we train it, we are able to directly access the analytical scalar CG many-body potential as a function of all nanoparticle coordinates without the need to introduce or identify suitable reaction coordinates to measure the gradients and to subsequently integrate them to obtain a scalar many-body potential of mean force. This enables us to construct effective CG many-body potentials for complex colloidal nanoparticles in a force-matching fashion. These effective CG many-body potentials can subsequently be employed in Monte Carlo simulations. We apply our approach to systems of ligand-stabilized nanoparticles in solvents of varying quality and demonstrate that the intricate effective CG potential of mean force or free-energy landscape can be accurately represented by a simple linear ML model. Using the effective CG potential of mean force in Monte Carlo simulations, we demonstrate that the phase behavior and structure, as predicted by the ML model, are consistent with those exhibited by extensive molecular dynamics simulations of the fine-grained model.
Hence, the key result of our paper is that we obtain the effective CG many-body potential, which can directly be used in Monte Carlo simulations, by machine learning the mean forces in fine-grained simulations without relying on any thermodynamic integration method. The simple coarse-graining ML framework that we present is generic and extensible to other systems, and we expect that this approach will enable and accelerate simulation studies on the phase and self-assembling behavior of complex colloidal systems by overcoming the spatiotemporal limitations of fine-grained models.
RESULTS AND DISCUSSION
Fine-Grained Model of Ligand-Stabilized Nanoparticles. We start by introducing the high-resolution model of ligand-stabilized nanoparticles (NPs), which we will refer to as the "fine-grained" (FG) model. Here, we adopt a detailed representation of the core-corona NPs based on the MARTINI force field, suited for molecular dynamics simulations of macromolecules such as polymers, copolymers, surfactant molecules, sugars, and a variety of nanoparticles.32,33 In particular, the ligands, covalently bonded to the cores, are represented as chains of 5 "C1"-type MARTINI beads, approximately corresponding to alkyl ligands of 18 carbon atoms and a headgroup (e.g., thiol or amine). NP cores are modeled as rigid bodies with a spherical shape of diameter σc = 4.2 nm and consist of nc = 275 core beads, depicted in pink in Figure 1. For an isolated NP, the effective thickness λs of the floppy capping layer of ligands depends on the solvent quality and can be estimated from a radial density profile with its origin in the nanocrystal core. Thus, the fully capped NPs can be characterized by an incompressible hard core of diameter σc and an effective, partially compressible soft diameter of σNP = σc + 2λs (see Figure 1). A similar representation has recently been employed to model ligand-stabilized nanorods.27 The surface coverage, calculated as the number of ligands per surface area, is 5 nm−2.13 Hence, each NP is covered by 275 ligands, corresponding to nl = 1375 ligand beads per NP, and thus each NP in the FG model consists of nb = nc + nl = 1650 beads in total. To mimic static effects of a solvent in an implicit fashion, we follow the approach by Fan et al.,34 where pair interactions between nonbonded beads are modeled through a modified Lennard-Jones (LJ) potential by the introduction of a "weight" parameter s that controls the strength of the pair interactions relative to the original MARTINI value (see Methods). In particular, we use two values of s here, namely s = 0.1 and s = 0.3, in order to model "good" and "bad" solvent conditions, respectively. We note that the limiting case of s = 1 recovers the original LJ potential, which mimics an extremely bad solvent (or vacuum) for the NPs, while a value of s = 0 corresponds to a fully repulsive Weeks-Chandler-Andersen pair interaction, leading to purely steric interactions between NPs. The adopted implicit solvent representation may omit some features of the effective interactions between NPs. However, Fan et al.34 demonstrated in their study that the potentials of mean force obtained from simulations using the implicit solvent representation compared well with those obtained from an explicit solvent model, thereby justifying the implicit solvent approach. In addition, we fixed the temperature of the nanocrystal suspension at T = 300 K.
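As a rough illustration of how such a solvent-quality parameter can interpolate between the two stated limits, the following Python sketch scales the attractive branch of a Weeks-Chandler-Andersen-split Lennard-Jones potential by s, so that s = 0 gives the purely repulsive WCA interaction and s = 1 recovers the full LJ potential. The exact functional form used by Fan et al. may differ, and the epsilon and sigma values below are placeholders, not MARTINI parameters.

```python
# Hedged sketch of a solvent-quality-scaled pair potential consistent with the stated limits:
# s = 0 gives the purely repulsive WCA potential and s = 1 recovers the full Lennard-Jones
# potential. The exact modified MARTINI/LJ form used by Fan et al. may differ; eps and sigma
# below are placeholder values in reduced units.

def lj(r: float, eps: float, sigma: float) -> float:
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def scaled_lj(r: float, s: float, eps: float = 1.0, sigma: float = 0.47) -> float:
    """WCA repulsive part plus s times the attractive part (Weeks-Chandler-Andersen split)."""
    r_min = 2.0 ** (1.0 / 6.0) * sigma
    if r < r_min:
        # Inside the minimum: full repulsion; the attractive branch contributes a constant -eps.
        return lj(r, eps, sigma) + eps - s * eps
    # Beyond the minimum: the whole LJ tail is attractive and is scaled by s.
    return s * lj(r, eps, sigma)

for s in (0.0, 0.1, 0.3, 1.0):
    print(f"s = {s:.1f}: U(r = 0.6) = {scaled_lj(0.6, s):+.3f} (reduced units)")
```

With this split, lowering s weakens only the attraction, mimicking a better solvent while leaving the steric repulsion between beads untouched.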
Thermodynamically Consistent Coarse-Grained Model. Following the description of the FG model, a given configuration of a collection of N nanoparticles, each comprising nb beads, is represented by the positions of n = nbN beads, r^n = {r_1, ..., r_n} with r_i ∈ R^3. The probability density of finding a certain configuration r^n in the canonical ensemble reads p_FG(r^n) ∝ exp[−βϕ(r^n)], where ϕ(r^n) denotes the potential energy of configuration r^n in the FG system, β = 1/k_BT the inverse temperature, and k_B the Boltzmann constant. In contrast, the CG representation that we aim at obtaining here consists only of N sites with positions R^N = {R_1, ..., R_N} with R_I ∈ R^3. Hence, we are dealing with a dimensionality-reduction problem, R^{3n} → R^{3N}, which projects FG states r^n onto a lower-dimensional representation R^N. This conversion is achieved by a mapping R^N = M(r^n) that defines the CG coordinates in terms of the FG model configuration. The mapping function M can be an arbitrarily complex function of the coordinates of the n beads but is often defined as a simple linear transformation R^N = Mr^n, with M the matrix of mapping coefficients. In the case that the mapping coefficients are only ones and zeroes, the CG coordinates correspond to the centers of mass of the respective groups of beads. In this paper, we have taken the N CG sites to correspond to the centers of mass of the N nanocrystal cores by mapping the nc core beads of a NP in the FG model onto the corresponding CG site in the CG representation. The probability distribution in the CG space depends on the effective CG interaction potential Φ(R^N) and reads P_CG(R^N) ∝ exp[−βΦ(R^N)]. Using the probability density of the FG system and the mapping as described above yields the following probability density for the CG variables in the FG system: p_CG(R^N) ∝ ∫ dr^n exp[−βϕ(r^n)] δ(M(r^n) − R^N). Equating P_CG(R^N) and p_CG(R^N), we find that the thermodynamically consistent coarse-grained potential reads Φ(R^N) = −k_BT ln z(R^N) + c, where z(R^N) = ∫ dr^n exp[−βϕ(r^n)] δ(M(r^n) − R^N) and c is a constant. By definition, the gradient of Φ(R^N) determines the effective CG force on CG site I, F_I(R^N) = −∂Φ(R^N)/∂R_I. Using this probability density and the center-of-mass linear mapping from the n FG beads to the N CG sites as described above, the effective CG force on CG site I can be related to the mean force on CG site I in the FG representation, F_I(R^N) = ⟨Σ_{i∈I} f_i⟩_{R^N}, where the angular brackets denote an ensemble average in which the centers of mass of the N CG sites are kept fixed and where f_i denotes the instantaneous force on the ith FG bead belonging to CG site I. In addition, the summation denotes a sum over all of the forces on the FG beads composing the CG site I. These mean forces can be efficiently sampled from constrained or restrained simulations of the FG model,38 thus allowing for a direct determination of the effective CG interaction potential Φ(R^N) by integration of the gradients of the PMF along a predefined reaction coordinate. This is a typical approach that has been widely followed to compute effective CG pair interactions of nanoparticles,7,18,39 where mean forces are collected for a series of pair distances and subsequently integrated over that internal variable to obtain a scalar function approximating the underlying effective CG two-body PMF.
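The two ingredients described in this passage, the center-of-mass mapping with constrained mean forces and the integration of mean forces along a pair distance to obtain a two-body PMF, can be sketched numerically as follows. This is an illustrative outline rather than the authors' code: the array shapes, the equal-mass assumption for the core beads, and the synthetic force data are assumptions made only for the example.

```python
# Illustrative sketch (not the authors' implementation) of two steps described above:
# (i) the center-of-mass mapping and the mean force on a CG site as the constrained
# ensemble average of the summed fine-grained bead forces, and (ii) integration of
# mean forces tabulated along the pair distance r to recover a two-body PMF.
import numpy as np

rng = np.random.default_rng(0)

# --- (i) mapping and mean forces --------------------------------------------------
n_np, n_core = 3, 275            # number of nanoparticles and core beads per NP (FG model values)

def map_to_cg(core_pos: np.ndarray) -> np.ndarray:
    """Center-of-mass mapping R^N = M r^n for equal-mass core beads; (N, n_core, 3) -> (N, 3)."""
    return core_pos.mean(axis=1)

def mean_forces(force_snaps: np.ndarray) -> np.ndarray:
    """Mean force per CG site: sum bead forces per site, then average over constrained snapshots."""
    return force_snaps.sum(axis=2).mean(axis=0)   # (n_snap, N, n_core, 3) -> (N, 3)

core_pos = rng.normal(size=(n_np, n_core, 3))               # fake constrained configuration
force_snaps = rng.normal(size=(500, n_np, n_core, 3))       # fake instantaneous bead forces
print("CG sites:\n", map_to_cg(core_pos))
print("mean forces on CG sites:\n", mean_forces(force_snaps))

# --- (ii) two-body PMF from mean forces along the pair distance -------------------
r = np.linspace(4.0, 12.0, 81)                                            # separations (nm), hypothetical grid
F_r = 8.0 * np.exp(-(r - 4.5)) - 2.0 * np.exp(-((r - 6.0) / 1.5) ** 2)    # fake radial mean force
W = np.zeros_like(r)
for i in range(len(r) - 2, -1, -1):                                       # integrate from r_max inward
    W[i] = W[i + 1] + 0.5 * (F_r[i] + F_r[i + 1]) * (r[i + 1] - r[i])     # convention W(r_max) = 0
print(f"two-body PMF at contact: W({r[0]:.1f} nm) = {W[0]:.2f} (arbitrary units)")
```

The trapezoidal accumulation simply realizes W(r) = ∫_r^{r_max} F(r') dr' on the tabulated grid, which is the pair-distance integration route mentioned at the end of this passage.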
Equations 3 and 4 formally define the effective CG interaction potential Φ(R^N) as a function of a constrained average over the FG model. However, it is clear that they simply set a direct relationship between the two representation levels but do not provide a closed mathematical form for the CG potential Φ(R^N) in terms of the CG coordinates that can be used without the need to resort to simulations of the FG model at each step. Thus, the practical challenge is to determine an explicit function Φ(R^N) of the CG coordinates R^N that represents the true CG many-body PMF. In this context, two practical methods have been developed to approach thermodynamic consistency while retaining tractable functional forms to serve as the CG potential: variational force-matching 40 and relative entropy minimization. 41 The latter approach and the closely related method known as iterative Boltzmann inversion (IBI) 42 are data-efficient, as they simply require structural sampling (for example, IBI sets up a model whose parameters are optimized so as to match distributions of the FG model); however, they require the CG model to be resimulated during the iterative training procedure, which can be extremely costly and can even lead to failure in convergence. 31 In contrast, force-matching is straightforward to implement but slightly less data-efficient, as it requires the vectorial forces on the CG particles mapped from FG sampling as in eq 4. Here, we follow such an approach and focus on learning the mean forces sampled from constrained simulations of the FG model using linear models that employ vectorial basis functions describing the local structure in the mapped CG representation. We decide to follow this "multiscale force-matching" route as it has already been shown in recent developments on atomic ML potentials that the quantum-mechanical vectorial force acting on a particular atom can be accurately learned and predicted directly from a configuration of atoms. 28,29 Among the different ML potential variants, those based on Behler and Parrinello symmetry functions (SFs) have been widely used for constructing ML potentials for atomistic systems and, more recently, also for colloidal and nanoparticle systems 26,27,43 (see Descriptors: Symmetry Functions and their Gradients). The main idea in such an approach is to represent the total CG (potential or free) energy Φ(R^N) of a system as a sum of per-atom (bead, particle, etc.)
contributions Φ_K, Φ(R^N) = ∑_{K=1}^{N} Φ_K, where each individual contribution to the potential is in turn a function of a set of N_s SFs, Φ_K = Φ_K({G_1(K), ..., G_{N_s}(K)}), which describe the local atomic environments. Hence, under such a construction, the αth component of the mean force F_{I,α} acting on CG site I with respect to coordinate R_{I,α}, with α = x, y, z, can be represented (by applying the chain rule) as F_{I,α} = −∂Φ/∂R_{I,α} = −∑_{K=1}^{N} ∑_{J=1}^{N_s} [∂Φ_K/∂G_J(K)] [∂G_J(K)/∂R_{I,α}], where G_J(K) is the Jth SF in the set of N_s symmetry functions describing the local environment of CG site K. Note that the term ∂Φ_K/∂G_J(K) is fully determined by the regression method employed to construct the relationship between the effective interaction potential and the structure of the system. Based on previous works, 25−27 we assume a simple linear relationship, so that ∂Φ_K/∂G_J(K) reduces to a constant weight ω_J for each SF. Strictly speaking, the many-body PMF is defined as a difference with respect to a reference state, and a numerical constant c would typically appear (see eq 2). However, when the reference point is defined as the state R^N at infinite dilution (essentially an ideal gas), such a constant is identical to zero. This means that the training set should contain enough low-density configurations where the mean forces vanish in the FG representation. Due to the construction of the PMF, this amounts to conceiving each per-particle potential contribution as a difference with respect to the case where the particles are isolated. By simply using the weights ω_J and the analytical derivatives of the SFs, ∂G_J(K)/∂R_{I,α}, the mean forces on the CG particles can be predicted and employed straightforwardly in MD simulations, where only forces are needed to propagate the system. An additional advantage of our approach is evident from eq 7, which establishes a simple and direct way to obtain an effective many-body CG interaction potential Φ(R^N) for a system of N nanoparticles by simply learning the mean forces (extracted, for example, from constrained simulations of the FG model). More specifically, given the linearity of the model, the effective many-body CG potential is readily obtained as Φ(R^N) = ∑_{K=1}^{N} ∑_{J=1}^{N_s} ω_J G_J(K). We mention here that the chosen fingerprint representation for describing the components of the particle forces in terms of derivatives of the SFs conforms with the required invariance operations, such as permutation, translation, and rotation of particles. For instance, consider a reference particle I and its neighboring atoms within a radial cutoff distance. Information pertaining to the neighboring particles of particle I is passed into the summands of the individual SFs as scalar pairwise distances or angles; therefore, permutation or rigid translation of particles does not alter G_K(I). In the case of rigid rotations, both the fingerprint ∇G_K(I) and the vectorial force components F_I transform in an identical manner governed by the rotation matrix. Based on the premises above, the first step for a consistent coarse-graining of a system of N nanoparticles is the generation of a set of configurations in the FG representation using simulations of the FG model. Ideally, the configurations should be diverse enough that a sufficient number of different local particle environments are considered. Each of the FG configurations is then used as the initial state of subsequent simulations, where the centers of mass of the nanoparticle cores are frozen in order to efficiently measure the mean forces in eq 4. The vectorial mean forces are then fitted by simple linear regression using eq 7 with the feature selection method proposed in ref 25 (see Fitting Procedure).
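As a concrete illustration of this linear force-matching step, the sketch below fits the weights ω_J by ordinary least squares given precomputed symmetry-function values and gradients, and then evaluates forces and the many-body potential. The array layout and helper names are hypothetical; only the algebra is taken from the text (forces as minus a linear combination of SF gradients, energy as the corresponding sum of SF values).

```python
import numpy as np

def assemble_force_design_matrix(sf_gradients):
    """Build the linear system F = X @ w from SF gradients.

    sf_gradients: array of shape (n_frames, n_sites, 3, n_sf) holding the
    derivative of the total descriptor sum with respect to each CG
    coordinate, i.e. sum_K dG_J(K)/dR_{I,alpha} (hypothetical precomputed input).
    """
    n_frames, n_sites, _, n_sf = sf_gradients.shape
    # Each Cartesian force component is one row of the regression problem.
    return -sf_gradients.reshape(n_frames * n_sites * 3, n_sf)

def fit_weights(sf_gradients, mean_forces):
    """Least-squares fit of the linear weights w_J."""
    X = assemble_force_design_matrix(sf_gradients)
    y = np.asarray(mean_forces).reshape(-1)   # target mean-force components
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_forces(sf_gradients, w):
    """Mean forces predicted from the fitted weights; shape (n_frames, n_sites, 3)."""
    return -(sf_gradients @ w)

def cg_potential(sf_values, w):
    """Effective CG many-body potential, Phi = sum_K sum_J w_J G_J(K).

    sf_values: array of shape (..., n_sites, n_sf) with the SF values G_J(K).
    """
    return np.einsum('...kj,j->...', sf_values, w)
```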
CG Two-Body Potential from ML Mean Forces. In order to test the proposed methodology, we first construct a simple linear model for the effective CG two-body forces, which are gradients of the true effective CG two-body PMF Φ^(2). For a direct test of the accuracy of the method, we first compute the effective CG two-body PMF Φ^(2) as a function of the separation distance R_IJ = |R_I − R_J| between two NPs using constrained MD simulations of the FG model. 38 For each PMF calculation, we perform simulations with the two nanocrystal cores frozen at 250 different distances R_IJ. For each of these simulations, the two-body mean force F_m(R_IJ) is calculated as the average force between the two nanoparticle cores in the direction of their center-of-mass distance vector R_IJ, 18 F_m(R_IJ) = ⟨(1/2)(∑_{i∈I} f_i − ∑_{j∈J} f_j) · R̂_IJ⟩, where ∑_{i∈I} f_i and ∑_{j∈J} f_j denote sums over all the instantaneous forces on the FG beads composing nanoparticle cores I and J, respectively, and R̂_IJ is the unit vector connecting the two nanoparticles along the reaction coordinate R_IJ. Angular brackets denote ensemble averages in the canonical ensemble at constrained separation distance R_IJ. The effective CG PMF Φ^(2)(R_IJ) is then obtained by integrating the mean force along the reaction coordinate.

We report the results of the effective CG two-body PMF Φ^(2)(R_IJ) as obtained from the constrained MD simulations of the FG model for a solvent of s = 0.1 and s = 0.3 as filled symbols in Figure 2, which clearly show that the effective CG interactions between the nanoparticles are strongly influenced by the affinity of the ligands with the solvent. More specifically, when the solvent parameter is s = 0.1 (good solvent conditions), the interactions are purely repulsive, while when the solvent has a lower quality, in the case of s = 0.3, the interactions become attractive at short distances. In particular, for s = 0.3 we find a minimum of −3.3 k_B T at R_IJ/σ_c = 1.88. At shorter distances, the interactions are repulsive in both cases, signaling the unfavorable compression of the ligand chains. Interestingly, we notice that while the distribution of the ligand beads around the cores of isolated NPs in solvents with s = 0.1 and 0.3 is rather similar, the underlying effective CG two-body interactions are significantly different. Using the same setup as the constrained MD simulations, we prepare training data sets containing solely the vectorial components of the mean forces acting on the centers of mass of the two individual nanoparticle cores I and J for each separation distance (a total of 750 force components per particle). We proceeded to learn the mean forces using the scheme described above (see eq 7 and Fitting Procedure). The fitting is performed using only gradients of two-body SFs (see Descriptors: Symmetry Functions and their Gradients). The number of descriptors is N_SF = 46 and N_SF = 38 for s = 0.1 and s = 0.3, respectively. We use a single cutoff value of R_c = 2.5σ_c. The ML models constructed with the gradients of the SFs present a correlation coefficient R^2 ≈ 0.999 for the two solvents and root mean square error (RMSE) values of 14.78 k_B T/σ_c and 12.80 k_B T/σ_c for the solvents s = 0.1 and 0.3, respectively. Using the optimized weights (linear coefficients) of the gradients of the SFs and following eq 8, we obtain the effective CG two-body ML potentials Φ_ML2^(2)(R_IJ) for 100 different configurations, where we place the CG nanoparticle cores at varying separation distances R_IJ. The resulting Φ_ML2^(2)(R_IJ) predictions are shown as dashed lines for the two different solvents in Figure 2.
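Before turning to that comparison, the integration step used above to convert the tabulated mean forces F_m(R_IJ) into the reference PMF Φ^(2)(R_IJ) can be sketched as follows (trapezoidal rule, with the PMF set to zero at the largest sampled separation; variable names are illustrative).

```python
import numpy as np

def pmf_from_mean_forces(r, f_mean):
    """Integrate tabulated mean forces to obtain the two-body PMF.

    Phi(r) = integral from r to r_max of F_m(r') dr', with Phi(r_max) = 0.
    `r` must be sorted in increasing order; `f_mean` holds F_m at the same points.
    """
    r = np.asarray(r, dtype=float)
    f = np.asarray(f_mean, dtype=float)
    # Cumulative trapezoidal integral from r[0] up to each r[k] ...
    seg = 0.5 * (f[1:] + f[:-1]) * np.diff(r)
    cum = np.concatenate(([0.0], np.cumsum(seg)))
    # ... then shift so that the PMF vanishes at the largest separation.
    return cum[-1] - cum

# Example with 250 constrained-MD separations (illustrative numbers only):
# r_ij = np.linspace(1.0, 3.0, 250)        # in units of sigma_c
# phi2 = pmf_from_mean_forces(r_ij, f_m)   # same energy units as f_m * r
```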
Considering that the generated particle configurations for evaluating Φ_ML2^(2)(R_IJ) are all different from those included in the original training data sets, the agreement with the curves obtained from the constrained MD simulations highlights the ability of the ML models to accurately interpolate between structures and smoothly predict the effective CG two-body potentials solely by learning the mean forces in FG simulations and not necessarily the scalar function itself.

CG Many-Body Potential from ML Mean Forces. As mentioned above, when the effective CG potential of ligand-stabilized nanoparticles and other colloidal systems is described, typically two-body approximations are employed. However, one can expect many-body effects to be relevant for systems of nanoparticles in which the length scale of the "soft-deformable" layer of ligands, 2λ_s, is of the order of the core size σ_c of the nanoparticles. More specifically, from simple geometrical arguments, one could expect three- and higher-body contributions to the effective CG potential Φ(R^N) in eq 2 to be nonvanishing for size ratios q ≡ 2λ_s/σ_c > 0.14 44,45 (see Figure 1). A common approach to deal with effective CG many-body potentials is to accurately determine the effective CG pair potential and treat the higher-body contributions as corrections. 7,27 In particular, for a given configuration at fixed R^N and in the absence of any external field, the αth component of the effective CG force on CG site I can be seen as the sum of an effective CG two-body (2) contribution (treated in a pairwise fashion) and effective CG three- and higher-body (3+) contributions, F_{I,α} = F_{I,α}^(2) + F_{I,α}^(3+). The partitioning of the effective CG forces into individual two- and higher-order contributions is a convenient construction that has typically been adopted to disentangle the role of many-body interactions (for many technical reasons, however, mainly three-body interactions). 7,18,39 A clear advantage of learning directly the individual mean forces, as in our proposed scheme, is that they already include such information, as they are, by definition, gradients of the true effective CG many-body PMF.
Therefore, in order to demonstrate the generality of our ML approach in constructing effective CG potentials that incorporate many-body effects, we extend the method to larger systems composed of 12 nanoparticles. In this case, any higher-order (3+) body interaction should be captured in the effective CG ML potential Φ_ML12, as the to-be-learned mean forces are directly the gradients of the underlying true effective many-body PMF. For each solvent quality, a training data set is composed of a collection of configurations of 12 nanoparticles and the mean vectorial force components associated with each particle. Such configurations are obtained by first confining the 12 nanoparticles in a spherical domain using a repulsive wall. Subsequently, we fix the centers of mass of the nanoparticle cores and remove the walls. By varying the diameter of the confining sphere, we obtain high- and low-density configurations. Thus, the data set effectively contains configurations from very low density states, where barely any two-body interactions are present, all the way to very high-density states, where the capping layers of different NPs simultaneously interact. After equilibration is reached in the MD simulations of the FG model, we select 100 samples at random and measure the mean forces on each nanoparticle core using constrained MD simulations. We again fit the vectorial mean forces of eq 4 by simple linear regression on the gradients of the radial and angular SFs (see Descriptors: Symmetry Functions and their Gradients). The number of descriptors is N_SF = 123 and N_SF = 138 for s = 0.1 and s = 0.3, respectively. The resulting ML models present correlation coefficients of R^2 ≈ 0.998 and R^2 ≈ 0.997 and RMSE values of 29.30 k_B T/σ_c and 21.19 k_B T/σ_c for a solvent with s = 0.1 and s = 0.3, respectively. Even with the simple linear model we use, the predicted effective CG forces obtained from our ML approach are in good agreement with those directly measured in MD simulations of the FG model, as shown in the parity plot in Figure 3, which compares the many-body forces predicted by the ML models with those calculated from the fine-grained model.

As a natural test of the effective CG many-body potential of mean force Φ_ML12 as obtained from eq 8 using the 12-NP system, we evaluate the effective CG interaction energy between only two particles, Φ_ML12^(2)(R_IJ), as a function of their separation distance R_IJ and compare it against the one obtained for the strictly two-body system, Φ_ML2^(2)(R_IJ), as described above, which accurately described the effective CG PMF directly obtained from constrained MD simulations of the FG model, as shown in Figure 2. The results are shown in the right panels of Figure 4, where we can appreciate the excellent match between the effective CG potentials for a solvent with s = 0.1 and s = 0.3. We emphasize that no shifting constants have been used to make the Φ_ML12^(2)(R_IJ) and Φ_ML2^(2)(R_IJ) curves coincide. The observed agreement demonstrates not only that the R_IJ-dependence of the effective CG two-body interaction potential is well captured but also that, since the model is constructed on the basis of individual particle contributions, it is able to accurately differentiate between local configurations. The values of the effective CG PMF Φ_ML12(R^12) for different clusters of 12 particles, as obtained by summing the two-body effective potential Φ_ML2(R_IJ) between pairs of particles and as predicted by Φ_ML12(R^12), are compared in the left panels of Figure 4.
We can appreciate that, while both descriptions strongly correlate, the Φ_ML12(R^12) potential tends to overestimate the strength of the interactions, as the values are systematically higher (more positive) than those predicted by the Φ_ML2(R_IJ) potential.

To better appreciate the role of many-body contributions in the effective CG potential, we focus on the effective three-body correction in a solvent with s = 0.3. To this end, we evaluate the effective CG PMF in systems containing 3 particles in triangular (T) and linear (L) configurations using the ML CG potential constructed with only two-body terms, Φ_ML2, and the one containing the many-body corrections, Φ_ML12, from the ML fits of the mean forces. To generate the configurations, we fix two particles at a separation distance R_JK/σ_c = 1.88 and place the third particle I at different positions h_I relative to the two fixed particles (see the pictorial sketches in Figure 5). The resulting curves are shown in Figure 5. As observed, the curves of Φ_ML2 and Φ_ML12 naturally take the value of Φ^(2)(1.88σ_c) ≈ −3.3 k_B T at large separation distances, as the effect of the third particle becomes negligible. For the L configurations, we notice that the Φ_ML2 and Φ_ML12 curves exactly overlap at all h_I distances. This is because particles I and K do not effectively interact at any h_I, thus ruling out any three-body interactions. However, one can immediately appreciate that, in the T configurations, the three-body contribution is nonvanishing, as the Φ_ML2 and Φ_ML12 curves slightly differ from each other. In particular, we find that the ML effective CG PMF based on two-body contributions, Φ_ML2, overestimates the effective CG attractive (cohesive) interactions, which is in agreement with the results above. Furthermore, the steric repulsion between a triplet of particles at short distances becomes slightly stronger if the three-body correction is considered. This positive (repulsive) contribution to the CG PMF incorporated in Φ^(3+) appears to be common to systems exhibiting an effective CG two-body attractive interaction (e.g., colloid−polymer mixtures 26). Finally, we also show in Figure 5 the effective CG PMF for a system of four particles, as illustrated in the inset SQ. As expected for such a configuration, the effective CG many-body effects are negligible, but it clearly shows that the model is able to reproduce the correct energies. This also holds for clusters with more particles, as we have discussed above.
Testing the ML Models: Phase Behavior and Structure. To investigate the effect of nonvanishing many-body corrections on the effective CG many-body PMF, we focus on the phase behavior of NPs in two different solvents. More precisely, we compute the equation of state (EOS) of 3D bulk systems consisting of NPs in both solvents by performing isothermal−isobaric (NPT) MC simulations using the effective CG two-body Φ_ML2 and the effective CG many-body Φ_ML12 potentials of mean force. We perform simulations on a system of N = 500 nanoparticles and obtain the equilibrium states by either compression from the low-density fluid phase or expansion from a high-density face-centered-cubic (FCC) phase. In Figure 6, we plot the equations of state, reduced pressure Pσ_c^3/k_B T as a function of reduced density ρσ_c^3, for both solvents. Interestingly, in a good solvent (s = 0.1), both the effective CG Φ_ML2 and Φ_ML12 potentials yield nearly identical curves. We notice that under such solvent conditions, the system exhibits a low-density fluid phase for ρσ_c^3 < 0.10 that eventually undergoes a first-order phase transition to an FCC crystal with a density ρσ_c^3 ≈ 0.11. The reasonable match between the two effective CG potentials, which was also observed in the parity plot of Figure 4, leads us to conclude that, for s = 0.1, the effective CG two-body approximation is indeed reasonable and that the many-body corrections do not alter the phase behavior of the system. For a solvent quality s = 0.3, we observe an FCC crystal at a density of ρσ_c^3 ≈ 0.21 using both the effective CG Φ_ML2 and Φ_ML12 potentials. This is due to the bad solvent conditions, which induce a strong effective attraction between the nanoparticles. Indeed, even if we start the simulations from very low-density configurations at very low pressures, we observe the formation of clusters that eventually nucleate into an FCC phase. We have attempted to obtain the density of the coexisting fluid phase by direct-coexistence simulations, where we start with an equilibrated FCC crystal slab in the center of an elongated box in contact with a vacuum; however, we do not detect any particles leaving the crystal to migrate to the gas phase in our NVT simulations. The total effective energy for a triplet of particles in a bad solvent (Figure 5) indicates that the linear configuration is energetically more favorable than the triangular one. In systems of NPs interacting with a short-range square-well potential grafted by up to 12 fully flexible chains consisting of up to 14 hard beads, 46 as well as in moderately polymer-grafted NPs in a polymer melt, 47 this type of effective interaction may lead to the formation of anisotropic structures like stripes and disks. Nevertheless, in our simulations with N = 500 particles in a bad solvent, the small clusters rapidly aggregate into the equilibrium 3D FCC phase, which indicates that the effective two-body interactions outweigh the repulsive contributions. We therefore conclude that the system exhibits a very broad phase coexistence between an infinitely dilute gas phase and an FCC phase with a coexisting density ρσ_c^3 ≈ 0.21 for a solvent quality s = 0.3. Additionally, we note that there are small deviations in the equations of state as obtained by using the effective CG Φ_ML2 and Φ_ML12 potentials, signaling the importance of the many-body effects on the effective CG potential. More specifically, we observe that the pressure is underestimated using the effective CG Φ_ML2 in comparison with the Φ_ML12 potential at high densities,
which is in agreement with the parity plot of Figure 4, showing that the repulsion is underestimated by the effective CG Φ_ML2 PMF.

Finally, to further validate the effective CG many-body potentials, we compute the radial distribution functions g(R_IJ) as a function of the distance R_IJ between the nanoparticle cores in the equilibrium phases of the NPs using MC simulations and compare them against results obtained from extensive unconstrained NVT MD simulations of the FG model. In particular, we focus on a system with solvent quality s = 0.1, as it exhibits a low-density fluid phase and a soft FCC crystal that, in our simulations, can be continuously compressed from ρσ_c^3 ≈ 0.11 all the way to ρσ_c^3 ≈ 0.18, preserving the same symmetry and just changing the lattice constant. The g(R_IJ)'s as measured from MC simulations using the machine-learned effective CG potentials Φ_ML2 and Φ_ML12, along with the g(R_IJ)'s obtained from MD simulations of the FG model, are shown in Figure 7 for varying densities ρσ_c^3 = 0.07, 0.12, and 0.18. We clearly observe that the structures of the phases obtained from both CG models, i.e., the effective CG Φ_ML2 and Φ_ML12 potentials of mean force, match very accurately those of the FG model. These results evidence the ability of the proposed ML method to construct effective CG many-body potentials by learning solely the mean forces sampled in the FG model.

CONCLUSIONS

In summary, we have introduced a machine-learning approach to construct effective coarse-grained many-body potentials for complex ligand-stabilized nanoparticles by representing the mean forces sampled from constrained simulations of a fine-grained model in terms of gradients of structural descriptors. For the specific model of NPs that we have studied in an implicit solvent of varying quality, the effective coarse-grained two-body contribution Φ^(2) to the PMF in the presence of a bad solvent is a nonmonotonic function of the separation distance between two particles and exhibits a deep minimum, corresponding to an effective coarse-grained attractive interaction. In contrast, in a good solvent, the effective coarse-grained two-body PMF is fully repulsive. As judged from the results of MC simulations using the effective coarse-grained two-body potential of mean force Φ_ML2 and the effective many-body potential of mean force Φ_ML12, we find that both CG models reproduce accurately the phase behavior and the structure of the fluid and crystal phases of the FG model, as they match the equation of state and the radial distribution functions at varying thermodynamic state points. Overall, our simulations indicate that, for nanoparticles stabilized by ligands with a commonly used chain length, the impact of many-body effects on phase behavior is minimal, except for high-density crystal phases. This is a significant and nontrivial finding.
The multiscale methodology presented in this work constitutes a general bottom-up coarse-graining strategy in which the mean forces acting on CG sites, extracted from reference FG simulations, are represented using a simple linear model in terms of gradients of Behler and Parrinello symmetry functions. The linearity of the model allows one to define a simple function representing the scalar effective CG many-body potential of mean force as a function of all CG-site coordinates. This allows us to bypass the prior identification of suitable reaction coordinates (collective variables) along which gradients are measured and subsequently integrated to obtain a scalar many-body potential of mean force, as is required in the methodology of ref 22 and in our previous work in ref 27. Finally, we stress that although we have illustrated the method by applying it to an FG model of ligand-stabilized NPs, the framework is generic and can be extended to other relevant systems. In Figure 8 and in the Supporting Information, we present an application to a system of colloid−polymer mixtures, where coarse-grained many-body interactions are pronounced. We employ the so-called pseudo Asakura−Oosawa model, where the colloids are represented by hard spheres, the colloid−polymer interactions are also treated by a pseudo-hard-sphere-like potential, and the polymers are ideal. The diameter σ_p of the polymer coils is set equal to the diameter of the colloids σ_c. We show in Figure 8 and in the Supporting Information that a coarse-grained description of such a system based purely on effective pairwise depletion interactions, ML2, leads to strong deviations from the FG model. In contrast, the FG model can be accurately represented by the effective CG potential ML108, which accounts for many-body contributions.

The current method, which relies on descriptors of local environments that are spherically symmetric, has demonstrated promising results for systems with isotropic interactions. In principle, our method could be extended to systems in which the interactions are anisotropic, such as faceted nanocrystal cores or particles with inhomogeneous distributions of ligands on their surfaces. However, this would require descriptors that can encode information about the orientational dependence of the anisotropic interactions, which we will explore in future work. Our simple yet accurate force-matching coarse-graining framework will enable accurate and fast simulations of effective many-body systems to characterize, understand, and predict the phase behavior and structure of relevant soft-matter systems such as suspensions of charged colloids or microgel particles.
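As background to the colloid−polymer application discussed above, it may help to recall the textbook two-body limit of such a system: the classic Asakura−Oosawa−Vrij depletion attraction between two hard-sphere colloids in an ideal polymer bath. The expression below is quoted only for context; it is not the machine-learned ML2 potential used in the comparison. With size ratio q = σ_p/σ_c (equal to 1 in the case considered here) and reservoir polymer packing fraction η_p^r, it reads:

```latex
\[
\beta U_{\mathrm{AO}}(r) =
\begin{cases}
\infty, & r < \sigma_c,\\[4pt]
-\eta_p^{r}\,\dfrac{(1+q)^3}{q^3}
\left[1-\dfrac{3r}{2(1+q)\sigma_c}+\dfrac{r^3}{2(1+q)^3\sigma_c^3}\right],
 & \sigma_c \le r \le \sigma_c(1+q),\\[8pt]
0, & r > \sigma_c(1+q).
\end{cases}
\]
```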
METHODS

Model and Simulation Details. Implicit Solvent. To mimic the effect of a solvent in an implicit fashion, we follow the approach by Fan et al., 34 where the pair interactions between nonbonded beads in the nanoparticles are modeled through a modified LJ potential built on the standard LJ potential, with σ_b = 0.47 nm and ϵ = 0.8365 kcal mol^−1 the length- and energy-scale parameters of the pair interactions, respectively, r the separation distance between pairs of beads, and r_c = 1.2 nm the cutoff radius of the interaction. The parameter s defines the strength of the attractive LJ interactions relative to the original MARTINI value. In order to mimic good- and poor-solvent conditions for the ligand chains, we use two values of s, namely, s = 0.1 and s = 0.3. Note that the limiting case of s = 1 recovers the original LJ potential and mimics an extremely bad solvent (or vacuum) for the NPs. Intramolecular interactions within the chains, acting on the centers of bonded beads, are described via a harmonic bond-stretching potential with bond force constant K_b = 149.3787 kcal mol^−1 nm^−2, where b and b_0 = 0.47 nm are the instantaneous and equilibrium bond distances, respectively. Similarly, the angle bending between three connected beads is modeled using a harmonic potential with angle force constant K_θ = 2.9876 kcal mol^−1, where θ and θ_0 = 180° are the instantaneous and equilibrium angle-bending values, respectively. Interactions between NC cores are neglected, as these forces, for small NCs, are typically much weaker than the ligand−ligand interactions. 48,49

Molecular Dynamics Simulations of the FG Model. All molecular dynamics (MD) simulations of systems with different numbers of NPs are performed using the software package LAMMPS. 50 The simulation box is typically a cubic cell of volume V, and periodic boundary conditions are applied in all directions. Prior to the MD simulations, overlaps between ligand beads in the initial configurations are removed by energy minimization using the steepest-descent algorithm. Simulations are then performed in the canonical (NVT) ensemble at a temperature T = 300 K, which is kept constant via a Nosé−Hoover thermostat. The equations of motion are integrated with a typical MARTINI time step of δt = 20 fs, and the NP cores can either "diffuse" as rigid bodies or have their centers of mass spatially constrained within the simulation box. Typically, constrained MD simulations for computing the mean forces are run for up to 5 × 10^7 steps, and statistics are collected over the last 3 × 10^6 steps. For validation of the CG potentials obtained by direct fitting of the mean forces, we perform extra MD simulations of the FG model at different densities using N = 108 particles. These simulations are run for up to 1 × 10^8 steps.

Thermodynamically Consistent Coarse-Graining. The effective coarse-grained potential of mean force Φ(R^N) described in the main text can alternatively be conceived formally in the context of the effective one-component Hamiltonian formalism. 45,51
To demonstrate this, we define the total Hamiltonian as a sum of interaction Hamiltonians describing the interactions between the N nanoparticle cores, H_cc(R^N), the interactions between the N nanoparticle cores and the n_l ligand beads, H_cl(R^N, r^{n_l}), and the interactions between the n_l ligand beads, H_ll(r^{n_l}), where R^N denotes the coordinates of the nanoparticle cores, which are treated as rigid bodies consisting of n_c beads, and r^{n_l} the coordinates of the ligand beads. In the canonical ensemble (N, n_l, V, T), the partition function, after carrying out the trivial integrations over the momenta (eq 16), involves the thermal de Broglie wavelengths Λ_α of each species α. By integrating out the degrees of freedom of the ligand beads, we map the fine-grained system of N nanoparticle cores and n_l ligand beads, described by the interaction Hamiltonian H(R^N, r^{n_l}), onto an effective coarse-grained system of nanoparticles that is described by an effective one-component Hamiltonian H_eff(R^N). Here, we have defined the Helmholtz free energy of the ligand beads in the external field of a fixed configuration of nanoparticle cores at coordinates R^N (eq 18). From these definitions, it follows that if an observable A(R^N) depends only on the coordinates of the centers of mass of the nanoparticles R^N, its ensemble average ⟨A(R^N)⟩ is identical in the full system and in the effective one-component system. Hence, the mean force acting on the center of mass of nanoparticle I at a fixed configuration R^N can be computed as the sum of two terms: the first term, −∂H_cc(R^N)/∂R_I, is the force due to the bare interactions between the nanoparticle cores, and the last term is the force of the ligand beads on nanoparticle I averaged over all possible configurations of the ligand beads.

Fitting Procedure. Training Data Set. As detailed in the main text, two different data sets are built: (i) one containing information about a pair of particles at different separation distances and (ii) one for configurations of 12 particles at varying density. In (i), the mean forces acting on the centers of mass of the nanoparticle cores are collected over a total of 250 separation distances. Hence, a total of 2 × 3 × 250 examples are used to construct the model. In case (ii), the mean forces on all the particles are measured in 100 constrained MD simulations, resulting in a total of 12 × 3 × 100 examples.
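The effective one-component Hamiltonian formalism summarized in the Thermodynamically Consistent Coarse-Graining paragraph above can be written compactly as follows. This display is a reconstruction from the definitions given there (the original equation numbers 16−18 are not reproduced), using H_cc, H_cl, and H_ll for the core−core, core−ligand, and ligand−ligand interaction Hamiltonians:

```latex
\begin{gather*}
H(\mathbf{R}^N,\mathbf{r}^{n_l})
  = H_{cc}(\mathbf{R}^N) + H_{cl}(\mathbf{R}^N,\mathbf{r}^{n_l})
    + H_{ll}(\mathbf{r}^{n_l}),\\[4pt]
Z = \frac{1}{N!\,\Lambda_c^{3N}\, n_l!\,\Lambda_l^{3n_l}}
    \int \mathrm{d}\mathbf{R}^N \mathrm{d}\mathbf{r}^{n_l}\,
    e^{-\beta H(\mathbf{R}^N,\mathbf{r}^{n_l})}
  = \frac{1}{N!\,\Lambda_c^{3N}}
    \int \mathrm{d}\mathbf{R}^N\, e^{-\beta H_{\mathrm{eff}}(\mathbf{R}^N)},\\[4pt]
H_{\mathrm{eff}}(\mathbf{R}^N) = H_{cc}(\mathbf{R}^N) + F_l(\mathbf{R}^N),\qquad
F_l(\mathbf{R}^N) = -k_B T \ln\!\left[
  \frac{1}{n_l!\,\Lambda_l^{3n_l}} \int \mathrm{d}\mathbf{r}^{n_l}\,
  e^{-\beta\,(H_{cl}+H_{ll})}\right],\\[4pt]
\mathbf{F}_I = -\frac{\partial H_{\mathrm{eff}}}{\partial \mathbf{R}_I}
  = -\frac{\partial H_{cc}}{\partial \mathbf{R}_I}
    - \left\langle \frac{\partial H_{cl}}{\partial \mathbf{R}_I}
      \right\rangle_{\mathbf{r}^{n_l}}.
\end{gather*}
```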
Descriptors: Symmetry Functions and their Gradients. To describe the local environment of a particle, we employ the SFs proposed by Behler and Parrinello 43 to represent high-dimensional potential-energy surfaces based on neural networks. More precisely, to characterize the local environment of a particle within a cutoff distance R_c, we employ the particle-centered, spherically symmetric radial SFs G^(2), defined as a sum of Gaussians whose parameters μ and R_s control the width and the position of the Gaussians with respect to particle I, respectively, and where f_c(R_IJ) is a cutoff function that decays monotonically and goes smoothly to 0 in both value and slope at the cutoff distance R_c. The second type of SF is the angular SF, G^(3), which contains information on the angular correlations (eq 23), where the indices J and K run over all the neighbors of particle I, and ξ and λ are two parameters that determine the shape of the function. The parameter λ can take the values +1 or −1 and determines the angle θ_IJK at which the angular part of the function has its maximum. The parameters ξ and μ_a control the angular and radial resolution, respectively.

The gradients of the radial SF, which we use directly to represent the vectorial forces, can be straightforwardly calculated by repeated use of the chain rule. The gradient takes a different form if it is taken with respect to the particle for which the SF is computed, ∇_I G^(2)(I), or with respect to a different neighboring particle, ∇_J G^(2)(I) with J ≠ I; both gradients are analytical and continuous. Note that we use the convention that R_IJ = R_I − R_J and R_IJ = |R_IJ|. For the angular SF, computation of the gradient is still a straightforward application of the chain rule, although it is more involved than for the radial SF; the resulting expressions are written in terms of the auxiliary functions introduced in ref 52.

Selection of Descriptors. The feature selection method employed in this work is described in detail in refs 25 and 26. Here, we present only a brief summary. We will use the acronym SFs to refer to the symmetry functions and their gradients interchangeably. The first step of the approach is to create a large but manageable pool of candidate descriptors (gradients of SFs). Such a procedure is accomplished by calculating, for every particle in the data set, M SFs with varying parameter values, which in turn are selected following simple heuristic rules with the goal of capturing most of the many-body correlations within a certain cutoff radius. In all the cases presented in this work, we have used a cutoff value of R_c = 2.5σ_c. The parameters for the radial SFs employed for the generation of the initial pool for constructing the models of systems containing two nanoparticles for solvents s = 0.1, 0.3 are μ/σ_c^−2 = {0.00001, 0.0001, 0.001, 1.715, 3.429, 5.144, 6.858, 8.572, 10.286, 12.000, 13.715, 15.429, 17.143, 18.857, 20.572, 22.286, 24.000, 26.000, 28.000, 30.000, 32.000, 34.000} and R_s/σ_c = {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4}. The ones for the radial and angular SFs used in the models for systems of 12 nanoparticles in both solvents are μ/σ_c^−2 = {0.001, 1.715, 3.429, 5.144, 6.858, 8.572, 10.286, 12.000, 13.715, 15.429}, R_s/σ_c = {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, μ_a/σ_c^−2 = {0.0001, 0.001, 0.01, 0.1, 1.0, 2.0, 4.0, 8.0}, λ = {1, −1}, and ξ = {1.0, 4.0, 8.0, 12.0}.
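To make the radial descriptor and its analytic gradient concrete, the sketch below implements a standard Behler−Parrinello-type radial SF of the Gaussian form described above. The exact cutoff function used in the article is not reproduced in this text, so the cosine cutoff below is the common textbook choice and should be read as an assumption; the Gaussian summand and the sign convention R_IJ = R_I − R_J follow the description given above.

```python
import numpy as np

def f_cut(r, r_c):
    """Cosine cutoff: decays smoothly to 0 in value and slope at r = r_c (assumed form)."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def df_cut(r, r_c):
    """Derivative of the cosine cutoff with respect to r."""
    return np.where(r < r_c, -0.5 * np.pi / r_c * np.sin(np.pi * r / r_c), 0.0)

def radial_sf_and_grad(positions, i, mu, r_s, r_c):
    """Radial SF G2(i) = sum_j exp(-mu (R_ij - R_s)^2) f_c(R_ij) and its gradients.

    Returns G2(i) and an array of shape (n_particles, 3) whose row k is
    dG2(i)/dR_k (nonzero only for particle i and its neighbors within r_c).
    """
    pos = np.asarray(positions, dtype=float)
    g, grad = 0.0, np.zeros_like(pos)
    for j in range(len(pos)):
        if j == i:
            continue
        rij_vec = pos[i] - pos[j]          # convention R_IJ = R_I - R_J
        rij = np.linalg.norm(rij_vec)
        if rij >= r_c:
            continue
        gauss = np.exp(-mu * (rij - r_s) ** 2)
        g += gauss * f_cut(rij, r_c)
        # d/dR_ij of the summand, then chain rule dR_ij/dR_i = rij_vec / rij
        dsum_drij = gauss * (df_cut(rij, r_c)
                             - 2.0 * mu * (rij - r_s) * f_cut(rij, r_c))
        grad[i] += dsum_drij * rij_vec / rij
        grad[j] -= dsum_drij * rij_vec / rij   # opposite sign for the neighbor
    return g, grad
```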
In a second step, an optimal subset of N_SF < M SFs is selected from the pool in a stepwise fashion. The optimal subset should capture the most relevant features of the local environment of a particle, as it is subsequently used as the basis of a regression scheme to approximate the mean force on a particle. The first selected SF corresponds to the one with the largest correlation with the target function, as quantified by the square of the Pearson correlation coefficient, c_k = ⟨(Ψ_k − Ψ̄_k)(F_{I,α} − F̄)⟩/[σ_SD(Ψ_k) σ_SD(F)], where Ψ_k represents the sum of the derivatives of the kth SF for all N particles with respect to particle I, as used in the definition of the model in eq 7, F_{I,α} denotes the target mean force component on particle I, Ψ̄_k and F̄ correspond to arithmetic means over all the configurations in the data set, and σ_SD(Ψ_k) and σ_SD(F) to their standard deviations. Note that summation over vectorial components is implied above. The second SF is then selected to be the one that maximizes the increase in the linear correlation between the set of selected SFs and the target data, as determined by the coefficient of multiple correlation, whose square is given by R^2 = c^T R^−1 c, where c^T = (c_1, c_2, ...) is the vector whose ith component is the Pearson correlation coefficient c_i between the ith SF (gradient) and the target data, and R is the correlation matrix of the current set of SFs, with elements R_ij representing the Pearson correlation between the ith and jth SFs. In the case of only one SF, R^2 reduces to c_i^2. R^2 can also be computed as the fraction of variance that is explained by a linear fit (including an intercept) of the target function in terms of the SFs in the set. The latter method of computing R^2 is slightly more expensive but numerically more stable. 25 By maximizing the increase in the linear correlation with the target variable, we guarantee that only SFs that add relevant information are selected, while SFs that correlate poorly with the particle forces or free energy are avoided, along with highly correlated SFs that add redundant information. 25 This process is repeated iteratively, and further SFs are selected until R^2 stops increasing appreciably. This indicates that the remaining SFs in the pool add negligible (irrelevant) information to the model. Such a procedure constitutes a simple rule to optimize the number of selected SFs, which are then used to approximate the target function via simple linear regression.
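A compact sketch of this stepwise selection is given below: the descriptor that most increases the squared coefficient of multiple correlation R^2 = c^T R^−1 c with the target forces is added greedily, and the loop stops when the gain becomes negligible. Array names and the stopping threshold are illustrative, and the sketch assumes no constant descriptor columns.

```python
import numpy as np

def greedy_sf_selection(psi, target, min_gain=1e-3):
    """Stepwise descriptor selection by maximizing R^2 = c^T R^-1 c.

    psi:    array (n_samples, M) of candidate descriptors (one column per
            candidate SF gradient, as used in the linear force model).
    target: array (n_samples,) of mean-force components.
    Returns the list of selected column indices.
    """
    n, m = psi.shape
    # Pearson correlations of each candidate with the target, and among candidates.
    z = (psi - psi.mean(0)) / psi.std(0)
    t = (target - target.mean()) / target.std()
    c_all = z.T @ t / n                       # correlation of each SF with the target
    corr = z.T @ z / n                        # correlation matrix among SFs

    selected, r2 = [], 0.0
    while len(selected) < m:
        best_j, best_r2 = None, r2
        for j in range(m):
            if j in selected:
                continue
            idx = selected + [j]
            c = c_all[idx]
            R = corr[np.ix_(idx, idx)]
            r2_trial = float(c @ np.linalg.solve(R, c))   # R^2 = c^T R^-1 c
            if r2_trial > best_r2:
                best_j, best_r2 = j, r2_trial
        if best_j is None or best_r2 - r2 < min_gain:
            break                             # remaining SFs add negligible information
        selected.append(best_j)
        r2 = best_r2
    return selected
```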
ASSOCIATED CONTENT

Supporting Information. The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsnano.3c04162. Additional details of the colloid−polymer mixture system (PDF).

Figure 1. Schematic representation of two nanoparticles with a hard core of diameter σ_c and capped with a soft-deformable shell of covalently anchored ligands with a thickness of approximately λ_s, such that the effective diameter of each nanoparticle is approximately σ_NP = σ_c + 2λ_s. The blue spheres correspond to MARTINI ligand beads, and the pink beads represent the nanocrystal core. Note that, in order to highlight the core−corona architecture of the particles, only ligand chains on a hemisphere of the NP cores are shown. In the fine-grained representation adopted here, a single surface-attached ligand (octadecanethiol) consists of 5 MARTINI beads of diameter σ_b, which represent 18 carbon atoms.

Figure 2. Coarse-grained two-body potential of mean force (PMF) Φ^(2) for a pair of nanoparticles with core diameter σ_c = 4.2 nm as a function of the center-of-mass distance of the two nanoparticle cores R_IJ for two different solvent qualities, s = 0.1 (good solvent) and 0.3 (bad solvent). Filled symbols correspond to the PMF values obtained from integrating the mean forces measured in constrained MD simulations of the FG model, whereas the dashed lines correspond to the values predicted by the potential constructed by directly learning the mean forces on the two particles (ML2).

Figure 3. Parity plot comparing the vectorial components of the many-body mean forces (in k_B T/σ_c) on individual NPs in an implicit solvent with s = 0.1 (left) and s = 0.3 (right) predicted by the ML models with those calculated from the fine-grained (FG) models in systems consisting of low- and high-density clusters of 12 particles (see text for details).

Figure 4. Left: Effective CG PMF values of systems composed of 12 NPs in an implicit solvent with s = 0.1 (top-left) and s = 0.3 (bottom-left) as predicted by the ML models Φ_ML2(R_IJ) and Φ_ML12(R^12) (see the text for details). Right: Effective interaction between two particles as a function of the interparticle separation distance R_IJ as obtained from the ML Φ_ML2(R_IJ) model (filled pentagons) and from the Φ_ML12(R_IJ) model (dashed lines) for NPs in solvents with s = 0.1 (top-right) and s = 0.3 (bottom-right).

Figure 5. Left: The effective CG potential of mean force Φ as a function of the position h_I of particle I relative to the position of the remaining fixed particles, as shown in the pictorial sketches. Values are shown for a triplet of particles in a bad solvent (s = 0.3) ordered in triangular (T) and linear (L) configurations, as well as for a quartet (SQ) (see right bottom panel). Filled symbols correspond to the potential values obtained from the two-body approximation (Φ_ML2), whereas the dashed lines represent those obtained from the ML potential Φ_ML12 constructed from the linear fit of the mean forces in systems containing clusters of 12 particles. Right: Difference between the effective CG many- and two-body PMFs in the considered configurations, Φ^(3+). Note that the curves of Φ on the left are obtained by keeping particles J and K fixed at a separation distance R_JK.

Figure 6. Equation of state, reduced pressure Pσ_c^3/k_B T as a function of reduced density ρσ_c^3, for a good solvent s = 0.1 and a bad solvent s = 0.3 as obtained by MC simulations using the effective CG two-body Φ_ML2 potential (filled symbols) and using the effective CG many-body Φ_ML12 potential (dashed lines).

Figure 7. Radial distribution functions g(R_IJ) as a function of the distance R_IJ between the nanoparticle cores for a solvent with s = 0.1 as obtained from MD simulations of the FG model (filled circles) and from MC simulations using the effective CG two-body Φ_ML2 potential (dashed line) and many-body Φ_ML12 potential (dotted line) at density ρσ_c^3: (a) 0.07, (b) 0.12, and (c) 0.18. The corresponding typical configurations obtained from MD simulations of the FG model and from MC simulations of the CG model are shown in the panels below.
Figure 8. Colloid−colloid pair correlation function measured in bulk systems at colloid packing fraction η_c = 0.45 and (a) reservoir polymer packing fraction η_p^r = 0.5 and (b) η_p^r = 2.0. Filled circles correspond to the distributions measured in the FG model, while the dashed pink and dotted blue lines represent the results obtained with the CG ML2 and ML108 potentials, respectively. (c) Typical configurations of the simulated systems. See Supporting Information for more details.
Impaired innate immune gene profiling in airway smooth muscle cells from chronic cough patients

Chronic cough is associated with airway inflammation and remodelling. Abnormal airway smooth muscle cell (ASMC) function may underlie mechanisms of chronic cough. Our objective was to examine the transcriptome and focused secretome of ASMCs from chronic cough patients and healthy non-cough volunteers. ASMC gene expression profiling was performed at baseline and/or after stimulation with polyinosinic:polycytidylic acid (poly(I:C)) to mimic viral infection. Supernatants were collected for multiplex analysis. Our results showed no significant differentially expressed genes (DEGs, false discovery rate (FDR) <0.05) between chronic cough and healthy non-cough ASMCs at baseline. Poly(I:C) stimulation resulted in 212 DEGs (>1.5 fold-change, FDR <0.05) in ASMCs from chronic cough patients compared with 1674 DEGs in healthy non-cough volunteers. The top up-regulated genes included chemokine (C–X–C motif) ligand (CXCL) 11 (CXCL11), CXCL10, chemokine (C–C motif) ligand (CCL) 5 (CCL5) and interferon-induced protein 44 like (IFI44L), corresponding with inflammation and innate immune response pathways. ASMCs from cough subjects had enhanced activation of viral response pathways in response to poly(I:C) compared with healthy non-cough subjects, reduced activation of pathways involved in chronic inflammation and equivalent activation of neuroregulatory genes. The poly(I:C)-induced release of inflammatory mediators, including CXCL8, interleukin (IL)-6 and CXCL1, from ASMCs from cough patients was significantly impaired compared with healthy non-cough subjects. Addition of fluticasone propionate (FP) to poly(I:C)-treated ASMCs resulted in greater gene expression changes in healthy non-cough ASMCs. FP had a differential effect on poly(I:C)-induced mediator release between chronic cough and healthy non-cough volunteers. In conclusion, altered innate immune and inflammatory gene profiles within ASMCs, rather than infiltrating cells or nerves, may drive the cough response following respiratory viral infection.

Introduction

Chronic cough, defined as cough lasting for more than 8 weeks, is a common clinical problem present in a proportion of the population [1,2]. A heightened cough reflex is a common abnormality observed in people with chronic cough, which can be disabling and impairs quality of life [3,4]. Cough variant asthma and eosinophilic bronchitis are two common causes of chronic cough associated with eosinophilia that respond to anti-inflammatory corticosteroids [5,6]. On the other hand, some forms of chronic cough in which no identifiable cause is found, termed idiopathic cough, can also be associated with inflammatory and airway wall remodelling features in the airway submucosa, including an increase in mast cells, goblet cell hyperplasia, increased blood vessels, and airway smooth muscle hypertrophy and hyperplasia [7]. Elevated levels of inflammatory mediators, including chemokine (C–X–C motif) ligand 8 (CXCL8), tumour necrosis factor α (TNFα) and myeloperoxidase, lipids such as prostaglandin (PG)D2 and PGE2, and leucotriene (LT)B4, as well as neutrophils, are present in the induced sputum of patients with persistent cough [8]. Increased numbers of mast cells and eosinophils as well as elevated histamine levels have been reported in the bronchoalveolar lavage fluid (BALF) of patients with chronic cough [9].
The mechanisms by which chronic cough occurs are unclear, but there is growing evidence that the interaction of inflammatory factors with epithelial nerves in the airways may be important [10]. An abnormality of airway smooth muscle cell (ASMC) function in chronic cough is suggested by the increased ASM mass seen in patients with chronic cough [7,11], which may result from an increased proliferative rate. In addition, because ASMCs are known to possess the ability to generate inflammatory mediators such as cytokines, chemokines, proteases and growth factors [12], it is possible that these cells may contribute to the inflammatory process that underlies potential neuroinflammatory mechanisms [13]. ASMCs from patients with asthma are in a hyperproliferative phase and overexpress the chemokines chemokine (C–C motif) ligand 11 (CCL11) and CXCL8, and, particularly in patients with severe asthma, there is a reduced effect of corticosteroids in inhibiting TNFα-induced release of CCL11 and CXCL8 [14,15]. In addition, in asthma there is an increased number of mast cells in ASM bundles, which could set the scene for the Th2 cytokines interleukin (IL)-4 and IL-13, released from mast cells, to interact with ASMCs [16]. Whether there are related inflammatory or innate immune abnormalities in the ASMCs from patients with chronic cough is not known. Such changes in ASMCs may lead to inflammatory changes around nerve terminals in the airways and the induction of neuroinflammatory interactions. In this pilot study, we isolated ASMCs from the airways and, after culturing, examined their mRNA and secretome profiles to determine whether ASMCs from patients with non-asthmatic chronic cough of no known cause were different from those from healthy non-coughing volunteers. Because respiratory viral infections such as the common cold are a common cause of acute cough, and because chronic idiopathic cough is very often initiated by the common cold, we analysed ASMCs under stimulation with polyinosinic:polycytidylic acid (poly(I:C)), a viral mimic; in addition, we examined the effect of the corticosteroid fluticasone propionate (FP), since this is often used as a treatment for chronic cough. Corticosteroids, however, while efficacious in controlling cough associated with asthma, cough variant asthma and eosinophilic bronchitis, are not efficacious in non-asthmatic chronic cough [1].

Patients. Patients with chronic cough were recruited from a cough clinic at the Royal Brompton and Harefield (RBH) NHS Trust Hospitals (Table 1). All subjects had reported a dry and persistent cough for at least 8 weeks and had been screened to exclude the diagnosis of other respiratory diseases including asthma. Healthy non-cough volunteers had no smoking history or history of respiratory or cardiac diseases. Chronic cough volunteers were all non-smokers. None of the subjects studied in either group had any evidence of small airway disease/obstruction. The present study was approved by the RBH Hospitals NHS Trust Ethics Committee (11/H0706/4 and 10/H0721/66), and all subjects gave their full informed written consent.

Fibreoptic bronchoscopy and endobronchial biopsies. Seventeen volunteers (6 cough patients and 11 healthy non-cough volunteers) underwent fibreoptic bronchoscopy. This procedure was performed under sedation according to standard clinical practice at the RBH NHS Trust Hospital.
Endobronchial biopsies were collected from the right middle lobe (RML) and were immediately transferred into culture medium containing FBS.

ASMC isolation and culture. ASMCs were isolated from biopsies and cultured in supplemented Dulbecco's modified Eagle's medium (DMEM) as previously described [17]. ASMCs showed a characteristic 'hill and valley' morphology and expressed smooth muscle α-actin in more than 95% of the cells. Cells between passages 4 and 6 were used and were FBS-deprived for 24 h prior to experiments. During this time they were incubated in Phenol Red-free DMEM supplemented with 4 mM l-glutamine, 20 U/l penicillin, 20 μg/ml streptomycin, 2.5 μg/ml amphotericin B, 1:100 non-essential amino acids and 0.1% BSA. ASMCs were stimulated with poly(I:C) (5 μg/ml) for 18 h in the presence or absence of FP (10^−8 M) added 2 h prior to stimulation.

Statistical analysis. Affymetrix arrays were normalized using the robust multi-array average (RMA) approach. Quality control (QC) was performed using the median absolute deviation (MAD) residual mean and the relative log expression (RLE) mean to exclude outliers. Arrays that passed QC were used for differential gene expression analysis between chronic cough and healthy non-cough subjects at baseline and between treatment groups using a general linear model. Other data are expressed as mean ± S.D. Statistical significance was assessed by ANOVA for repeated measures; if significant, post hoc tests were performed. Paired comparisons were made using a paired t test. Differences were considered significant when P<0.05. Statistical analysis was performed using Prism 5 for Windows, v. 5.03 (GraphPad Software, Inc., La Jolla, CA, U.S.A.).

DEGs and proteins in ASMCs at baseline. Although 150 genes were either up- or down-regulated >1.5-fold in cells from cough patients and healthy non-cough volunteers (Supplementary Table S1), these differences were not significant (P<0.05) when analysed by false discovery rate (FDR). Pathway analysis showed the up-regulation of pathways associated with post-translational modifications, including histone acetylation and arginine methylation, in chronic cough cells. There was a small but significant (P<0.05) reduction in the baseline release of a number of inflammatory and innate immune mediators, including TNFβ, IL-1α, IL-7, IL-10, IL-13, CCL4, CCL5, epidermal growth factor (EGF) and granulocyte-colony stimulating factor (G-CSF), in ASMCs from chronic cough patients compared with healthy non-cough volunteers (Figure 1).

DEGs in poly(I:C)-stimulated ASMCs. Poly(I:C) stimulation of ASMCs regulated similar genes in both chronic cough patients and healthy non-cough volunteers, but the response in healthy non-cough volunteers was greater (1674 DEGs: 897 up-regulated, 777 down-regulated) than that seen in ASMCs from chronic cough patients (221 genes up-regulated, 55 down-regulated) when compared with baseline (Supplementary Table S2). Poly(I:C), as expected, up-regulated genes associated with inflammation, innate immunity and viral responses, including CXCL11, CCL5, CXCL10 and interferon-induced protein 44 like (IFI44L), in ASMCs from both healthy non-cough and chronic cough volunteers. The pathway termed 'effect of alcohol exposure on the brain (alcoholism)' was up-regulated to a similar extent by poly(I:C) in cells from cough patients and healthy non-cough volunteers (Table 2).
This pathway contains many genes linked to sensory nerve activity or cough, including the dopamine D1 receptor, neuropeptide Y, brain-derived neurotrophic factor and NMDA-type subunit 1. Pathways associated with viral infection and immunity were up-regulated in ASMCs from chronic cough patients but not in ASMCs from healthy non-cough volunteers. In addition, ASMCs from chronic cough patients showed evidence of reduced inflammatory pathway stimulation in response to poly(I:C), such as the TNFα, JAK-STAT and chronic inflammation (rheumatoid arthritis and inflammatory bowel disease) pathways, compared with cells from healthy non-cough subjects. Pathway analysis also demonstrated reduced activation of metabolic pathways in response to poly(I:C) stimulation in cells from healthy non-cough volunteers compared with chronic cough patients (Table 2). Poly(I:C) had differential effects on inflammatory and innate immune mediator release in ASMCs from healthy non-cough volunteers and chronic cough patients (Figure 2, Supplementary Table S3). Poly(I:C) significantly induced the release of CX3CL1, CCL11 and EGF in ASMCs from healthy non-cough volunteers only. In contrast, the release of CCL3 and monocyte chemotactic protein 1 (MCP-1) was only induced by poly(I:C) in ASMCs from chronic cough patients. The poly(I:C)-induced release of CXCL8, IL-6, CXCL1, IL-1RA, IFNα2, IL-7, IL-1α, IL-13 and TNFβ was significantly impaired in ASMCs from chronic cough patients compared with that seen in healthy non-cough volunteers (Figure 2, Supplementary Table S3). Overall, there was a reduction in inflammatory gene and protein expression in cells from chronic cough patients compared with healthy non-cough subjects. The corticosteroid-responsive genes FKBP5 (FK506 binding protein 5) and DUSP1 (dual specificity phosphatase 1) were up-regulated by FP, and periostin (POSTN) and the glucocorticoid receptor (NR3C1) were down-regulated by FP, to a similar extent in both cough and healthy non-cough volunteers (Supplementary Table S4). Pathway analysis indicated a significant up-regulation of the systemic lupus erythematosus pathway by FP, reflecting Th1-type disease processes, in cells from chronic cough patients. No pathways were differentially suppressed in chronic cough patients when compared with healthy non-cough volunteers (Table 3), and FP had no effect on mediator release at baseline. Addition of FP to poly(I:C)-stimulated ASMCs resulted in greater gene expression changes in cells from healthy non-cough volunteers than in those from chronic cough patients (Supplementary Table S5). Pathway analysis highlighted up-regulation by FP of inflammatory, immune and mitochondrial/metabolic pathways in poly(I:C)-stimulated ASMCs from healthy non-cough volunteers compared with chronic cough patients (Table 4). In contrast, some inflammatory and innate immune/viral signalling pathways were down-regulated by FP in both groups (Table 4). Pathways relating to inflammation and innate immune responses were down-regulated only in ASMCs from healthy non-cough volunteers, with no effect seen in cells from chronic cough patients. In contrast, pathways such as viral carcinogenesis and systemic lupus erythematosus were preferentially down-regulated in cells from chronic cough patients.

Effect of FP on poly(I:C)-stimulated pathways in ASMCs from chronic cough patients and healthy non-cough volunteers. FP had a differential effect on poly(I:C)-induced mediator release from chronic cough and healthy non-cough volunteers (Figure 3, Supplementary Table S3). FP significantly inhibited poly(I:C)-induced CCL5, CX3CL1, IL-7 and CCL4 release from ASMCs from healthy non-cough volunteers (Figure 3, Supplementary Table S3).
Conversely, only poly(I:C)-induced MCP-1 and TNFβ release was inhibited by FP in ASMCs from patients with chronic cough (Supplementary Table S3). Overall, there is a reduced ability of FP to suppress inflammatory gene and protein expression in ASMCs from patients with chronic cough compared with healthy controls.

Discussion
We report that ASMCs from chronic cough patients have an enhanced infection response and a reduced innate immune and inflammatory response to poly(I:C) stimulation compared with cells from healthy non-cough volunteers. Pathway analysis also highlighted the differential effect of the inhaled glucocorticoid FP on inflammatory, innate immune/antiviral, oxidant stress and metabolic/mitochondrial function between cells from chronic cough patients and healthy volunteers. ASMCs from chronic cough patients also have a reduced response to FP compared with that seen in cells from healthy non-cough subjects. This is the first report that highlights differences in gene expression and mediator release in ASMCs from chronic cough patients compared with healthy non-cough controls. Poly(I:C) stimulation up-regulated a number of interferon-associated genes, such as CXCL11 (interferon-inducible protein 9), CXCL10 (interferon γ-induced protein 10) and RSAD2 (radical S-adenosyl methionine domain containing 2), as well as OAS2, a gene that encodes a member of the 2-5A synthetase family and is involved in the innate immune response to viral infection [19,20]. This response was greater in cells from chronic cough patients than in those from healthy non-cough volunteers. In contrast, pathway analysis revealed that inflammatory pathways such as the TNF signalling, rheumatoid arthritis and NF-κB signalling pathways were reduced after poly(I:C) stimulation in chronic cough ASMCs compared with healthy non-cough subjects. This supports the concept of an impaired inflammatory response of chronic cough ASMCs to poly(I:C) stimulation, as reflected in the inflammatory mediator release by these cells. Impaired innate immune responses to infection have been reported previously in asthma. Asthmatic bronchial epithelial cells have deficient IL-6 and CCL5 induction in response to rhinovirus-16 infection compared with cells from healthy volunteers [21]. Altered innate immune responses, particularly TNFα release in response to bacterial and viral ligands of TLR4 and TLR7, have been reported in neonatal monocytes [22]. How this impairment in the innate immune function of ASMCs links to chronic cough mechanisms is unclear, but it may allow viruses such as rhinoviruses to replicate more readily and to interact directly with epithelial nerve endings. Cross-talk featuring innate immune mediators between airway structural cells is becoming increasingly recognized as an important mechanism of disease [23]. On the other hand, the up-regulation of oxidative stress pathways, both at baseline and after poly(I:C) stimulation, in ASMCs from chronic cough patients is of interest. It has been previously reported that increased airway oxidative stress is associated with chronic cough [24]. Recent work has indicated that oxidative stress, in particular downstream of mitochondrial dysfunction, activates airway nociceptive sensory nerves that may contribute to an excessive cough reflex [25].
One of the mechanisms underlying this oxidative stress may be the increased expression of transforming growth factor β (TGFβ), whose levels are raised in BALF and in the bronchial mucosa, and in particular in the ASMCs and airway epithelium, of patients with chronic cough [26]. TGFβ causes an oxidant-antioxidant imbalance in ASMCs and plays a major role in redox-dependent pathways, as it regulates antioxidant responses in ASMCs [17,27]. Although corticosteroids are not effective in controlling chronic cough of non-asthmatic origin, it was interesting to note that FP treatment of ASMCs resulted in DEGs that have been previously linked to steroid responsiveness and inflammation, such as FKBP5 [28,29], DUSP1 [28,30], TSC22 domain family member 3 (TSC22D3) [28,29,31], period circadian clock 1 (PER1) [28,32,33] and Kruppel-like factor 15 (KLF15) [28,34]. FP caused fewer changes in gene expression in ASMCs from chronic cough patients than in those from healthy non-cough volunteers, which may be linked to the general lack of efficacy of corticosteroid treatment in chronic cough of non-asthmatic origin [1]. The differences in chemokine release that we have identified in the present study might affect infiltrating cell recruitment and have downstream effects on ASMC function. It is evident that the presence of infiltrating immune cells in close association with ASMCs modifies ASMC function and corticosteroid responses in asthma [16]. We have found that although there was an increase in mast cells in the airway submucosa of patients with chronic cough compared with subjects with no cough, there was no increase in mast cells in the ASM compartment [7,11]. Therefore, it is unlikely that there is an increase in the direct interaction of immune/inflammatory cells with ASMCs in chronic cough. However, the increased number of mast cells in the submucosa may result in an increased potential for interactions with epithelial nerves, which have increased transient receptor potential cation channel subfamily V member 1 (TRPV1) expression [35] that could underlie the increased sensitivity of the cough reflex found in chronic cough. There are some limitations to the present study. The use of poly(I:C) as a stimulus may not replicate all the effects of live virus. In mitigation, we see activation of key viral pathways with poly(I:C) in these cells at the gene and protein level. The comparison of the effects of rhinovirus across disease models is often difficult due, for example, to the use of different isolates obtained from patients rather than a GMP virus, and because the dose of virus often used to infect cells in vitro results in extensive cell death that is not observed in vivo. The mechanism underlying the altered response of ASMCs from cough patients has not been elucidated. The cells were cultured over 4-6 passages, and the persistence of these differences suggests that some degree of epigenetic reprogramming may exist. To our knowledge, differences in miRNA expression, DNA methylation status or histone marks have not been studied in patients with chronic cough. In addition, the use of a single 2 h pretreatment time point for FP, as well as a single time point to analyse gene expression profiles, may limit the information obtained about the effects observed. We have previously shown that 2 h pretreatment of ASMCs with dexamethasone significantly inhibited TNFα-induced cytokine release in ASMCs from healthy volunteers [15].
Further studies are needed to evaluate the time course of FP and stimulus effects on the transcriptome in ASMCs from chronic cough and healthy non-cough volunteers. Furthermore, a wider protein/mediator panel would aid the understanding of the impaired immune response seen in chronic cough ASMCs. We have shown for the first time that the baseline and stimulated innate immune response to infection in ASMCs is deficient in patients with chronic cough, while oxidative stress pathways are enhanced. Poly(I:C) induces the expression of neuropeptides and receptors implicated in sensory nerve activation or cough equally in cells from chronic cough and healthy non-cough subjects. In addition, the reduced inflammatory response in cough ASMCs, the lack of sensitivity to FP and the enhanced expression of signatures relating to neuropeptides and neuronal ligands suggest that ASMCs may be important in local neuroinflammatory responses. An altered innate immune response, particularly in response to poly(I:C), in ASMCs may have profound consequences on airway function and cough (Figure 4). The induction of the cough responses provoked by respiratory viruses, a common cause of cough, may be mediated by airway structural cells such as ASMCs as well as by infiltrating immune cells and nerves. Further interactions between the two systems are mediated by soluble mediators and their receptors. Stimulation of ASM cells by TGFβ, for example, enhances the expression of neuropeptides and neuropeptide receptors in these cells and increases oxidative stress. The central cough generator then establishes and co-ordinates the output to the muscles that cause cough and to ASM cells. ASMCs from cough patients are also less responsive to inhaled corticosteroids than those from healthy non-cough subjects.
Of Echoes and Utterances: A Brief Sketch of Arabic Hauntings in Hindi Cinema and Song

On a Saturday afternoon in January 2017, Shah Rukh Khan—Hindi cinema's reigning star of the last thirty years—arrived at Bollywood Parks Dubai to promote his latest film release, Raees (2017). To the crowd of adoring Arab and South Asian fans and journalists who had flocked to the theme park to catch a glimpse of the "Badshah of Bollywood" and brand ambassador of Dubai, SRK unveiled the much-anticipated Arabic version of "Zaalima" (Cruel One), the film's breakout hit song. Rendered in Darija by Moroccan pop artists Abdelfettah Grini and Jamila El Badaoui, this version of "Zaalima" proved an awkward copy of the original, its unwieldy Arabic lyrics molded to fit as tightly within the blueprint melody as possible. "Dīrī fiya al-thiqa, al-gharām hā howa" (Put your confidence in me, for love is here) did not have quite the same ring or seamlessness as the Hindi-Urdu "Main sau martaba dīwāna hua" (I fell in love a hundred times over), with the Arabic line painstakingly crafted to echo the "hua" ending of the Hindi-Urdu. A similar impulse animates "Arabic Kuthu" from the Tamil film Beast (2022), which became the first Indian song to trend on Spotify's Global Top 200 Chart within 48 hours of release, while its lyric video alone amassed over 470 million views on YouTube. 2 In a behind-the-scenes promo video, Beast soundtrack composer Anirudh Ravichander explains that the piece is an attempt to fuse Arabic rhythms with kuthu, a popular genre of Tamil folk music, into a "Pan World" melody. 3 When it comes to the lyrics, unlike Arabic versions of Bollywood songs, the Arabic used here is deliberately slangified, what Anirudh describes as "the gibberish we speak, what we call Arabic." 4 For instance, the refrain "halamathi habibo" (I dreamt of my lover) is an approximation of "ḥalamt bi ḥabibi," while the hook "malama pitha pithaadhe" is a nonsensical phrase only meant to sound like Arabic to the everyday, non-Arabic-speaking listener. It seems that for Bollywood and Kollywood, the pursuit of this overall "Arabic mood" in songs-by way of musicality, costume and set design, lyrics, and collaboration with foreign record producers and singers-has become the latest formula for achieving virality, arguably the industries' most important tool for local and global audience growth and retention (Fig. 1). Yet, while "Arabic Kuthu" nods at (South) Indian diasporas living in the Persian Gulf through reference to the Arabic "they" speak, Bollywood's Arabized songs are primarily addressed to Arab fans of Bollywood in an acknowledgement of their historic spectatorship of, and robust present-day engagement with, Hindi cinema and song. 5 Moreover, Bollywood's distinctly market-driven approach to Arabic transcends the production of song-and-dance numbers to encompass the ever-expanding universe of Hindi-language content, Bollywood-adjacent and otherwise. Arabic is no longer simply a language of reception but has been elevated to a language of production, one that reveals the changing geographic nature, interpersonal relationships, and priorities of a global film industry looking to root itself immediately West. Hindi films have been subtitled in classical Arabic (if rather inaccurately and irregularly) over the past sixty years at least, for audiences as far as Morocco and Egypt and as close as Oman.
Yet, as demonstrated by the launch of dubbing-focused, Persian Gulf-based subsidiaries of leading Indian entertainment channels like Zee TV, B4U, Star Plus, and others beginning in 2008, Arabic and its various dialects have become key to the transnationalization of Hindi-language content. 6 While these companies have necessarily relied on Dubai-based Arabic translators to localize films and television shows into (primarily) Egyptian and Levantine dialects, others-like Eros Now-are utilizing artificial intelligence (AI) technology such as Microsoft Azure's speech translation engine to speed up "a two week dubbing process that takes millions of dollars into zero cost and on the fly." 7 The race to churn out Arabic-dubbed Hindi entertainment content across various platforms, from over-the-top (OTT) streaming channels to social media, substantiates Monika Mehta's apt analysis that Bollywood distributors are expediently adopting "spreadability" as an industrial strategy. 8 Ethnomusicological scholarship on Indian music, including Hindi film compositions, has almost always privileged Indo-Persian confluences over other geographic and cultural connections when parsing questions of language, poetry, and instrumentation. And while the impact of Amir Khusrau (1253-1325)-the creator of the Sufi devotional song, qawwali, and the khyal and tarana musical styles, renowned for his innovations in Indian classical music genres, Hindi-Urdu popular songs, and Sufi music-cannot be overstated, the contemporary Bollywood-Arabic nexus beckons us to explore different sonic movements into and out of the subcontinent. 9 Recent analyses of recording history in the Indian Ocean reveal that Bombay was a significant center of Arab music production from the late nineteenth century onward, especially for Kuwaiti, Bahraini, Yemeni, and other artists from the Gulf littoral. A central node in maritime trade between India and the Persian Gulf, Bombay proved a more "familiar location" for Khaleeji artists than Aleppo or Baghdad; a place where "Arab musicians from different Arab regions resided and performed their music" in nightclubs, private gatherings, and even on the Delhi radio. 10 The popular genre of sawt (lit. sound) in the Gulf, founded according to most sources by Bombay-born Kuwaiti poet ʿAbdullah al-Faraj (1836-1901), is inflected with Indian musical traditions, "some of which were taken directly from Indian song." 11 Later generations of sawt singers, such as ʿAbdullatif al-Kuwaiti (1901/1904-75), produced the first commercial recordings of the genre for private labels that were pressed by the National Gramophone Record Company in Bombay. 12 What propelled these musicians towards India-the drive to reach wider audiences and access advanced technologies and infrastructures for the production and distribution of their work-is also what ostensibly motivates Bollywood and Hindi-language entertainment companies to establish outfits in Gulf media capitals today.
2 Soundarya Athimuthu, "How 'Arabic Kuthu' From Vijay's 'Beast' Went Viral With 100 Million Views," The Quint World, 28 February 2022, https://www.thequint.com/entertainment/indian-cinema/actor-vijay-arabic-kuthu-frombeast-hits-100-million-views-beats-dhanushs-rowdy-baby#read-more.
A more extensive study of the Indian Ocean aesthetic space, especially one connecting contemporary media industries to the rich history of musical encounters across this expanse (including maritime music such as sea songs), should give us more detail on how Arabic and other languages play into these push-and-pull factors of transoceanic sonic production. 13 Arabic as a "Muslim Language" Apart from song-and-dance, certain forms of Arabic have been, in one way or another, inherent to the development of Hindi film genres and tropes. Anjali Gera Roy has observed that not only do the "dialogues and lyrics of Hindi cinema…draw on Perso-Arabic loan words and imagery and metaphors of Persian, Arabic and Indian Urdu poetry," but the "Arabic qissa, Persian dāstān and the Urdu ghazal have been synthesised with Hindu mythological and religious concepts in almost all genres of Hindi cinema, except perhaps the mythological and devotional." 14 Writing against the dualistic Hindi-Urdu paradigm that characterizes most academic and popular linguistic analysis of Hindi cinema, Roy's multilingual, comparative approach charts how Arabic, as well as Persian and Urdu, mix with Sanskritized Hindi to create the rich syncretic language of Hindi films (one that the term "Hindi cinema" itself belies). 15 Many of Hindi cinema's prominent Urdu scriptwriters and lyricists-from Kaifi Azmi, Shakeel Badayuni, and Majrooh Sultanpuri of the "golden era" of Hindi film music to Irshad Kamil of more contemporary renown-studied Arabic as part of their literary education. Their dialogues and song lyrics are inflected with Arabic lexical borrowings, recently exhibited by Kamil's reference to the Qur'an in his Sufi anthem "Kun fa Yakun" (Be, and It Is) for the 2018 film Rockstar. 16 Although Arabic has materialized as a language of mysticism and ecstatic love in these Sufi strains of Bollywood music, it has also, much more viscerally, appeared in Hindi cinema's "majoritarian attempts at imposing linguistic stereotypes on Muslims." 17 Arabic, alongside Urdu, is the de facto screen language of "Islamic terrorism"-perhaps considered even more frightening than the latter for its incomprehensibility to the average Bollywood filmgoer in India. Since the 1980s, the depiction of Muslim film characters markedly transformed from earlier stereotypes of the indolent (but kindly) nawab to the violent, cunning jihadi anti-nationalist. Hindi cinema's "presentation of Pakistan, Islam and Kashmir in a vile nexus of anti-Indian conspirators," as well as its other preferred depiction of Muslims as thugs of the underworld and/or international terrorists, vilified Urdu and abstracted Arabic as its scarier sister language. 18 Arabic became the secret code that connected Indian Muslims to each other and the pan-Islamic world for which they reserved their true loyalty. Ananya Jahanara Kabir notes how in the Tamil-Hindi film Roja (Rose, 1992), even the simple Arabic definite article of "al-" became heavily value-laden when prefixed to the name of a bearded elderly man introduced to us as al-Sami, immediately connoting his pan-Islamic and terrorist associations. 19 Until only very recently, whenever Arabic appeared in films about Kashmir or Islamic terrorism, it was mostly "mouthed" or "accented" rather than voiced in intelligible speech. 
After greeting each other with "al-salāmu ʿalaykum" or other Islamic phrases recognizable to pan-Indian audiences, jihadi characters would devolve into an angry, hushed conversation presumed to be in Arabic (or in Kashmiri or Pashto, depending on the type and geography of the jihadism in question). This was a tried-and-true trope of earlier films like Roja and even more recent offerings like the Tamil-Hindi Vishwaroopam/Vishvaroop (2013). The obfuscation of Arabic and its collapsibility within a larger family of unfamiliar "Muslim languages" has only served to other the language further. In terms of film plots, contemporary Bollywood has not shied away from the genre of "Islamic terrorism." Instead, using new filming locations in Morocco, Lebanon, the United Arab Emirates (UAE), Jordan, and Turkey, the industry has expanded its geographical remit to perfect and make more "authentic" its exploration of the theme in the Middle East. Phantom (2015), a film that explores the 26/11 attacks in Mumbai against the backdrop of international terrorism, was partially shot in Beirut. Its lead actress, Katrina Kaif, was praised by Indian media outlets for attempting to study Arabic: "Katrina wants to know the language well so that her inflections are correct while dubbing her lines in the language. She will continue practising it to get the pronunciation right." 20 For Airlift (2016), centered on the evacuation of Kuwait-based Indians during the onset of the Gulf War, actor Akshay Kumar also learned Arabic to play the role of an Indian-Arab billionaire. The pursuit of realism has encouraged a level of professionalization in the industry that has, at least for this trend of Bollywood movies, created a space for Arabic to become somewhat more pronounced-less of a muddled whisper and more of a graspable piece of film dialogue. Bollywood's most accurate and realistic visual representation of Arabic to date belongs to Salman Khan's Tiger Zinda Hai (Tiger is Alive, 2017). Khan, best known as the industry's foremost working-class hero and action superstar, collaborated with Abu Dhabi's media free zone, twofour54, to produce the film. The Tiger Zinda Hai set at Khalifa Industrial Zone Abu Dhabi (KIZAD) doubles as Tikrit, Iraq (fictionalized in the film as "Ikrit"), and is the largest and most expensive Bollywood set to ever be constructed outside of Mumbai. Comprised of a dust-covered, sepia-textured cityscape with over 2500 windows and 1000 doors, the set's remarkable array of signage in classical and Iraqi Arabic depicts a marketplace full of bakeries, textile and garment shops, internet cafes, and even a municipal city council (Fig. 2). Undoubtedly buoyed by the expertise of twofour54's team of native Arabic speakers, the film puts to shame big-budget Hollywood projects like Arrival (2016), a film about language that not only used Arabic script to represent an Urdu headline, but did so erroneously by having it written disjointed and backwards. Still, no matter how good the Arabic has gotten, mainstream Indian imaginaries of the Middle East continue to traffic in tried-and-true Orientalist tropes of Arabs, even as they seek to excavate and entrench India's place in the modern history of the region. A Shared Soundscape As much as Indian music executives like to bill their YouTube-topping Arabic versions of Bollywood tunes as the first of their kind, to the listener even slightly versed in "world music" such statements ring resoundingly false. 
Arabic hauntings have reverberated throughout the Bollywood soundscape for decades. My own naïve realization of this well-known phenomenon began with my late discovery of Amr Diab's mega-hit "Tamally Maʿak" (Always with You, 2000), long after having already loved and repeatedly played its Bollywood copy "Kaho Na Kaho" (Whether You Speak or Not, 2004)-what to me was the "original." There has always been an informal exchange on the level of piracy and remix between India and the Arab world, especially as countless Hindi film songs-themselves hybrids of diverse musical traditions-have lifted melodies from Arab pop hits by singers like Diab, Ragheb Alama, Ehab Tawfik, and many others. The modern mediascapes of South Asia and the Middle East and North Africa have never existed in isolation, always borrowing from or referencing each other, albeit fleetingly or somewhat randomly when it comes to pop music. If the creation of Arabic versions of Bollywood's most marketable songs is becoming an entrenched, formalized strategy in the industry, the reverse is true for Arab pop artists as well. This is attested to by the popularity of musical traditions as varied as Hisham Abbas's iconic Egyptian-Carnatic fusion song of 2000, "Nari Narain" (variously translated as "Two Fires" and "I'm on Fire"), and the newer mash-up/remix cultures of Bollymizwid/Bollyraï. While it is difficult to isolate these sonic connections from their multidirectional, often unplanned paths, it is nearly impossible to determine "Arabic influences" in Hindi cinema and song when Urdu, Hindi, Persian, and their derivatives all draw from classical Arabic roots. Nevertheless, what I hope to have opened in this short survey is the space to discuss potential directions for a study of Arabic in Indian cultural production(s), particularly as contemporary migration, financial, and political flows bring India ever closer to the Gulf and broader Middle East. Acknowledgements. I would like to thank Nile Green for his insightful feedback and Anjali Gera Roy for her generosity in sharing several sources that supported this piece.
Sound Insulation of Corrugated-Core Sandwich Panels: Modeling, Optimization and Experiment

With the extension of the applications of sandwich panels with corrugated core, sound insulation performance has been a great concern for acoustic comfort design in many industrial fields. This paper presents a numerical and experimental study on the vibro-acoustic optimization of a finite-size sandwich panel with corrugated core for maximizing the sound transmission loss. The numerical model is established by using the wave-based method, which shows a great improvement in computational efficiency compared with the finite element method. Constrained by the fundamental frequency and total mass, the optimization is performed by using a genetic algorithm in three different frequency bands. According to the optimization results, the frequency-averaged sound transmission loss of the optimized models in the low-, middle-, and high-frequency ranges has increased, respectively, by 7.6 dB, 7.9 dB, and 11.7 dB compared with the baseline model. Benefiting from the vast number of evolution samples, the correlation between the structural design parameters and the sound transmission characteristics is analyzed by introducing the coefficient of determination, which gives the variation of the importance of each design parameter in different frequency ranges. Finally, a sound insulation test is conducted to validate the optimization results in the high-frequency range, which proves the feasibility of the optimization method in the practical engineering design of the sandwich panel.

Introduction
Because of their high stiffness-to-mass ratio and excellent impact resistance, lightweight sandwich panels are widely used in civil construction, high-speed vehicle, ship structure, and aerospace industries [1,2]. A typical sandwich panel usually consists of two face sheets and a core layer. According to the different types of core, sandwich panels can be broadly separated into two categories: homogeneous-core sandwich panels and non-homogeneous (structure-supported) core sandwich panels. Due to the lack of mechanical strength, the former type (e.g., the foam-core sandwich panel) is commonly used in applications under light load conditions. To overcome this drawback, the latter type, the non-homogeneous sandwich panel, uses structural stiffeners as the core layer. The core structure not only provides additional mechanical strength and bending stiffness but can also keep the lightweight characteristics of the sandwich panel. With the extension of the applications of sandwich panels, the vibro-acoustic properties, especially the sound insulation performance, have attracted great attention for acoustic comfort design in many industrial fields [3]. Due to the geometric diversity of the structural core, the mechanism of sound transmission through a structure-supported sandwich plate is complicated.
Model Configuration
A typical sandwich panel usually consists of two face sheets and a structural core layer. Among various types of core structure, the corrugated core has attracted growing attention thanks to its cost and manufacturing advantages [27]. The structure is a sandwich panel with trapezoidal corrugated core, as shown in Figure 1. Due to the existence of the periodical stiffening structure, the original rectangular cavity between the two face panels is divided into several trapezoidal sub-cavities. As shown in Figure 2, since the vibro-acoustic coupling effect between the internal acoustic cavities and the structure is taken into consideration, the classic mass-air-mass resonance in a simple double-leaf structure is turned into the complex vibro-acoustic coupling between each trapezoidal acoustic cavity and its surrounding structure. Thus, the incident sound wave can be transmitted to the top face sheet through the structural and acoustical borne paths and finally radiate to the semi-infinite domain.
Theoretical Formulation
Considering the periodical one-way stiffened feature of the corrugated core, the sandwich panel studied in this paper is assumed to have an infinite length in the reinforced direction [28]. As a result, the dynamic response of the sandwich panel can be simplified into a plane strain problem, and only the cross section of the sandwich structure is considered. In terms of structure, the sandwich panel can be treated as an assembly of several single panels, as shown in Figure 3. According to the Kirchhoff theory [29] and the Navier equations [30], the governing equations of the free transverse and longitudinal vibration of a thin panel can be expressed as the Kirchhoff thin-plate bending equation and the Navier equation for in-plane motion (Equation (1)), where w is the transverse displacement and u is the longitudinal displacement. In addition,

D = E h^3 (1 + iη) / [12 (1 − ν^2)],

where ν is the Poisson's ratio, and ρ and h are the material density and thickness of the panel, respectively. Associated with the elasticity modulus E, the damping factor η is added in order to account for material damping, leading to a complex elasticity modulus E(1 + iη), where i = √−1. With the assumption of infinite length, as shown in Figure 4, Equation (1) degenerates to

d^4 w(τ)/dτ^4 − k_b^4 w(τ) = 0,    d^2 u(τ)/dτ^2 + k_l^2 u(τ) = 0,

where τ is the local coordinate of the panel, and k_b and k_l are the bending and longitudinal wave numbers, respectively, with k_b^4 = ρ h ω^2 / D.
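The bending and longitudinal wave numbers can be evaluated directly from the panel properties. The short Python sketch below does this for an aluminium face sheet; the plate-type (plane-stress) form assumed for the longitudinal wave speed and the default material values are assumptions of this sketch rather than values quoted from the paper.

```python
import numpy as np

def panel_wavenumbers(freq_hz, E=70e9, rho=2700.0, nu=0.33, h=2e-3, eta=0.01):
    """Bending and longitudinal wave numbers of a thin panel.

    Relations used: D = E*h^3*(1 + i*eta)/(12*(1 - nu^2)) and
    k_b^4 = rho*h*omega^2/D (as in the text); the longitudinal wave speed
    c_l = sqrt(E_complex/(rho*(1 - nu^2))) is an assumed plate-type form.
    """
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    E_c = E * (1.0 + 1j * eta)                          # complex modulus with damping
    D = E_c * h**3 / (12.0 * (1.0 - nu**2))             # complex bending stiffness
    k_b = (rho * h * omega**2 / D) ** 0.25              # bending wave number
    k_l = omega * np.sqrt(rho * (1.0 - nu**2) / E_c)    # longitudinal wave number (assumed form)
    return k_b, k_l

kb, kl = panel_wavenumbers([100.0, 400.0, 800.0])
print(np.abs(kb), np.abs(kl))
```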
For the structural part, since the sandwich structure is an assembly of several single plates and both the transverse and longitudinal displacements are expressed by the summation of the corresponding wave functions, the governing equations of the whole structure can be assembled based on the displacement compatibility conditions and the force equilibrium conditions. A demonstration of the connection relations between five sub-plates is shown in Figure 5. According to the force-displacement relations, all the internal forces can be calculated from the wave-function representation of the sub-plate displacements. As for the acoustic cavity, the governing equation of the sound pressure inside is the Helmholtz equation [24]:

∇^2 p + k^2 p = 0,

where c is the sound speed inside the acoustic cavity and k = ω/c is the acoustic wave number. According to the basic concept of the WBM, both the structural displacement and the sound pressure can be approximated by a summation of specific wave functions [22]:

w(τ) = Σ_n ξ_n Ψ_b,n(τ) + w_a + w_F + w_q,    u(τ) = Σ_i ζ_i Ψ_l,i(τ),    p(x, y) = Σ_m ∈_m Φ_m(x, y) + p_q,

where Ψ_b,n and Ψ_l,i are the structural wave functions, which are also the exact solutions of Equation (1); ∈_m is the acoustic wave function contribution coefficient, and ξ_n and ζ_i are the structural wave function contribution coefficients. p_q is the particular solution of the nonhomogeneous Helmholtz equation. Since there is a direct coupling between the acoustic cavity and the structure, w_a represents the particular solution induced by the sound pressure inside the cavity. The additional terms, w_F and w_q, are the particular solutions caused by the existence of the external excitation and the acoustic source, respectively. For the acoustic cavity, Φ_m is the acoustic wave function, which satisfies the Helmholtz equation and is defined as

Φ_m^(1)(x, y) = cos(k_x,m x) e^(−j k_y,m y), with k_x,m = mπ/L_x and k_y,m = (k^2 − k_x,m^2)^(1/2),
Φ_m^(2)(x, y) = cos(k_y,m y) e^(−j k_x,m x), with k_y,m = mπ/L_y and k_x,m = (k^2 − k_y,m^2)^(1/2),   m = 0, 1, 2, …,

where L_x and L_y are the dimensions of the envelope rectangle. Due to the irregularity of the acoustic cavity, the wave functions are defined in the envelope rectangle shown in Figure 6.
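A minimal sketch of such an envelope-rectangle wave-function set is given below, assuming the standard wave-based-method definitions (two cosine/exponential families whose wave-number components satisfy k_x^2 + k_y^2 = k^2); the paper's exact Equation (10) may differ in indexing or normalization.

```python
import numpy as np

def acoustic_wavefunctions(x, y, k, Lx, Ly, n_terms):
    """Evaluate an assumed WBM wave-function set on the envelope rectangle.

    Two families (assumption based on the standard wave-based method):
    cos(kx_m*x)*exp(-1j*ky_m*y) with kx_m = m*pi/Lx, and
    cos(ky_m*y)*exp(-1j*kx_m*x) with ky_m = m*pi/Ly, where kx^2 + ky^2 = k^2.
    Each function satisfies the Helmholtz equation exactly.
    """
    funcs = []
    for m in range(n_terms):
        kxm = m * np.pi / Lx
        kym = np.sqrt(complex(k**2 - kxm**2))     # may be imaginary (evanescent term)
        funcs.append(np.cos(kxm * x) * np.exp(-1j * kym * y))
    for m in range(n_terms):
        kym = m * np.pi / Ly
        kxm = np.sqrt(complex(k**2 - kym**2))
        funcs.append(np.cos(kym * y) * np.exp(-1j * kxm * x))
    return np.array(funcs)

# sample the set at one point of a 0.5 m x 0.15 m envelope at 500 Hz
k = 2 * np.pi * 500.0 / 343.0
phi = acoustic_wavefunctions(0.1, 0.05, k, 0.5, 0.15, n_terms=8)
print(phi.shape)
```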
Corresponding to each acoustic wave function, the cavity-sound-pressure-induced transverse displacement of the plate, w_a,m(τ), is proportional to e^(−j k_x,m (y_0 + τ · cos α)). When the panel is subjected to a concentrated point force F_0 at τ = τ_F, the particular solution is given in [31]. In this paper, because the sandwich panel is subjected to acoustic plane wave excitation, as shown in Figure 7, the particular solution for a single flat panel can be expressed following [31]. Since there are direct couplings between the sandwich structure and the interior acoustic cavities, Figure 8 gives a simple demonstration of a panel-cavity coupled system which consists of two adjacent sub-domains Ω^(α) and Ω^(β). Taking sub-domain Ω^(α) as an example, a weighted residual formulation (Equation (17)) is used to enforce the boundary conditions. In this sub-domain, there are five kinds of boundary condition: Γ_v is the particle velocity boundary condition (including the acoustically rigid wall), Γ_p is the sound pressure boundary condition, Γ_Z is the acoustic impedance boundary condition, Γ_s is the vibro-acoustic coupling boundary condition, and Γ_I is the acoustic continuity boundary condition. On each kind of boundary, a residual function is defined as the difference between the approximated field quantity and the prescribed boundary value, for example R_v = ℘(p) − v̄_n on Γ_v. The linear differential operator ℘(·) is defined as

℘(·) = (j / (ρ_0 ω)) ∂(·)/∂n,

where ρ_0 is the density of the acoustic medium. According to the conventional Galerkin procedure, the weighting function can also be written as a summation of the same acoustic wave functions given in Equation (10). Substituting Equations (7) and (19) into the boundary weighted residual formulation (17) leads to the linear equations of the acoustic cavity, in which N_a and N_s are the total numbers of acoustic cavities and sub-plates, A is the boundary condition matrix, C_sp is the vibro-acoustic coupling matrix, and C_p is the coupling matrix between adjacent acoustic cavities; the details of those matrices are given in Appendix A.
Assuming the total number of acoustic wave functions is m_a, f is an m_a × 1 vector related to the predefined boundary conditions, and the detailed formulation can also be found in Appendix A. To balance computational efficiency and accuracy, the number of acoustic wave functions is determined by using the structural bending wave number. Associated with the structural boundary conditions, the dynamic equations of the structural part of the system can also be written in a simple matrix form, in which the sub-matrices M_SA, M_SB and M_SL are the vibro-acoustic coupling matrix, the structural bending coefficient matrix, and the structural in-plane coefficient matrix, respectively. Thus, the complete form of the governing equation of the coupled system can be assembled from the acoustic and structural sub-systems.

As the key indicator for evaluating the sound insulation performance of the sandwich panel, the sound transmission loss (STL) is used as the objective of the optimization procedure:

STL = 10 log10(W_inc / W_rad),

where W_inc and W_rad are the incident and radiated sound power, respectively. As shown in Figure 9, the external excitation is an incident plane sound wave with an angle of α. The radiated sound pressure can be calculated by using Rayleigh's integral [32]:

p(r) = (ρ_0 ω / 2) ∫_S v_n(r_s) H_0^(2)(k |r − r_s|) dS(r_s),

where ρ_0 is the density of the acoustic medium, H_0^(2) is the Hankel function of the second kind, R_re is the radius of the half-circle observation surface, v_n is the normal velocity of the radiating surface, and k is the wave number of the acoustic wave. Thus, the radiated sound power can be obtained as follows:

W_rad = ∫_0^π (|p(R_re, θ)|^2 / (2 ρ_0 c)) R_re dθ.

Since the sound transmission loss of the sandwich panel varies with the target frequency, in order to evaluate the sound insulation performance in a specific frequency range (ω_1 ~ ω_2), the frequency-averaged STL is used in the optimization process:

STL_avg = (1 / (ω_2 − ω_1)) ∫_{ω_1}^{ω_2} STL(ω) dω.
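The following sketch evaluates the radiated power and STL for a strip of sampled normal velocities using the 2-D Rayleigh-type integral with the H_0^(2) kernel described above. The discretization, the e^(jωt) sign convention, and the uniform surface sampling are assumptions of this sketch.

```python
import numpy as np
from scipy.special import hankel2

RHO0, C0 = 1.21, 343.0          # air density [kg/m^3] and sound speed [m/s] used in the paper

def radiated_power(v_n, x_src, freq, R=5.0, n_obs=181):
    """Radiated sound power of a baffled strip via a 2-D Rayleigh-type integral.

    v_n   : complex normal velocities sampled along the radiating face sheet
    x_src : x-coordinates of those samples (uniform spacing assumed)
    """
    omega = 2.0 * np.pi * freq
    k = omega / C0
    ds = x_src[1] - x_src[0]
    theta = np.linspace(0.0, np.pi, n_obs)                 # half-circle observation arc
    dx = R * np.cos(theta)[:, None] - x_src[None, :]
    dy = R * np.sin(theta)[:, None]
    dist = np.hypot(dx, dy)                                # observer-to-source distances
    p = (RHO0 * omega / 2.0) * (hankel2(0, k * dist) @ v_n) * ds
    intensity = np.abs(p) ** 2 / (2.0 * RHO0 * C0)
    return np.trapz(intensity, theta) * R                  # integrate intensity over the arc

def stl(w_inc, w_rad):
    """Sound transmission loss in dB."""
    return 10.0 * np.log10(w_inc / w_rad)
```

Averaging stl(...) over the frequencies evaluated in a band then gives the frequency-averaged STL defined above.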
Numerical Verification
Consider a unit cell of the sandwich plate with corrugated core, as shown in Figure 10; the total length of the structure is L = 0.5 m, the distance between the top face sheet and the bottom face sheet is H = 0.15 m, and the inclined angle of the core plate is α = 60°. The two face plates and the core sheet have the same thickness, h_t = h_b = h_s = 2 mm. This vibro-acoustic coupling system contains three trapezoidal acoustic cavities, which are filled with air. Each acoustic cavity is coupled with the plates occupying its boundaries. Both structural ends of the unit cell are clamped, and the left and right end boundaries of the acoustic cavity are considered as acoustically rigid walls. Assuming there is a uniformly distributed pressure p with a unit magnitude acting on the bottom sheet, both the conventional finite element method (ANSYS®) and the WBM are used to calculate the vibro-acoustic response of the system, in order to verify the reliability and computational efficiency of the wave-based method. Figure 11 presents the displacement response of two observation points on the top face sheet in the frequency range of 1-800 Hz. Accordingly, Figure 12 gives the sound pressure distribution of the acoustic cavities at the resonance frequencies marked in Figure 11. Both the structural displacement response and the in-cavity sound pressure distribution indicate that the result obtained by the WBM agrees very well with the conventional finite element method. Especially for the in-cavity sound pressure response, not only the pressure distribution but also the magnitude shows a great match with the commercial FEM software. The comparison between the conventional FEM and the present method proves the reliability of the WBM.
According to the truncation rules of the WBM, the total number of basic wave functions used to approximate the field variables is related to the plate bending wavelength, which means that the number of wave functions becomes larger as the target frequency increases. As shown in Figure 13, both the number of wave functions and the computational central processing unit (CPU) time are proportional to the frequency. Moreover, since the WBM is frequency dependent, the system equation has to be reconstructed at each frequency step, which costs most of the computational time. Nonetheless, at the highest frequency, 800 Hz, the total number of DOFs of the system is 127, which is still much smaller than that of the conventional FEM. Figure 14 presents the frequency-averaged CPU time of the present method and the FEM; the relative error between the two methods is represented by red square dots. For the FEM, the comparison result indicates that with the increase of the mesh number per wavelength, the system DOFs are strikingly enlarged, and the computational accuracy is significantly improved. From the perspective of relative error, because of the analytical nature of the WBM, the result of the FEM converges to the result of the WBM. When the relative error reduces to less than 5%, the average CPU time of the FEM is 5.4 s, which is almost seven times that of the present method.
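Although the exact truncation rule is not reproduced here, a common choice ties the highest retained wave-function order to the structural bending wave number; the sketch below illustrates how such a count grows with frequency (the truncation factor and the material values are assumptions of this sketch).

```python
import numpy as np

def n_wavefunctions(freq_hz, L=0.5, E=70e9, rho=2700.0, nu=0.33, h=2e-3, factor=2.0):
    """Estimate the wave-function count per family from the bending wave number.

    Assumed truncation rule: keep orders up to m_max with m_max*pi/L >= factor*k_b,
    i.e. m_max = ceil(factor*k_b*L/pi). The paper's exact rule may differ.
    """
    omega = 2.0 * np.pi * freq_hz
    D = E * h**3 / (12.0 * (1.0 - nu**2))          # real bending stiffness (damping ignored here)
    k_b = (rho * h * omega**2 / D) ** 0.25
    return int(np.ceil(factor * k_b * L / np.pi))

for f in (100, 400, 800):
    print(f, "Hz ->", n_wavefunctions(f), "terms per wave-function family")
```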
According to the validation results presented above, the WBM is reliable in solving the dynamic response of the vibro-acoustic coupling system and has much higher computational efficiency than the conventional FEM. Those advantages make the WBM a good choice for the vibro-acoustic optimization procedure, which usually requires enormous computation.

Structural-Acoustic Optimization
The geometry configuration of the target model is shown in Figure 15; the sandwich panel has a total length of L = 0.7 m, and the thickness of the core layer (the distance between the two face sheets) is H = 0.04 m. The corrugated core has five inclined stiffeners (the lateral sides of the trapezoids), and the base angle of the hypotenuse is φ. The thicknesses of the top panel, core sheet, and bottom panel are h_t, h_s, and h_b, respectively. The external excitation is an incident plane sound wave with an angle of α. In the x-direction, both ends of the sandwich panel are clamped and infinitely baffled.

The Baseline Model
Before performing the optimization, a baseline model should be proposed to appraise the optimization result. To ensure the rationality of the baseline model, a parametric study with respect to the inclined angle φ is applied. In the parametric study, the top face panel, the core sheet, and the bottom face panel are assumed to have the same material and thickness, thus h_t = h_b = h_s = 2 mm. The whole sandwich panel is made of aluminum, and the material properties are listed in Table 1 (acoustic medium: air, density ρ_0 = 1.21 kg/m^3, sound speed c = 343 m/s). In the frequency range 25-1200 Hz, the variation of the frequency-averaged sound transmission loss with respect to the core inclined angle is illustrated in Figure 16. As shown in the figure, when the core stiffener has an inclined angle of 48 degrees, the sandwich panel has the largest STL_avg of 30.5 dB, which can be treated as the single-parameter optimal geometrical configuration.
Table 2 gives the details of the baseline model, in which f_1 and m are the first resonance frequency and the total mass of the sandwich structure, respectively. The spectrum of the STL in the target frequency range is shown in Figure 17. According to the distribution of the resonance frequencies of the baseline model, the target frequency range 25-1200 Hz is separated into three frequency ranges: (1) the low-frequency range, 25-300 Hz, in which only the global bending modes are included; (2) the middle-frequency range, 300-800 Hz, in which both the global and local modes are included; and (3) the high-frequency range, 800-1200 Hz, in which almost all the local resonance modes are included.

Vibro-Acoustic Optimization
In order to achieve the best possible sound insulation performance, a multi-parameter optimization is implemented, and the thickness of the top face plate, the core sheet, and the bottom face plate, together with the inclined angle of the stiffener, are selected as the optimization variables:

x = {h_t, h_s, h_b, φ}.

The objective function is to maximize STL_avg in the target frequency range. As shown in Figure 18, to ensure that two adjacent inclined stiffeners do not interfere, the base angle of the hypotenuse φ is limited by a geometrical constraint determined by H, the distance between the two face sheets, and L, the length of the sandwich panel. Considering the basic topology and manufacturability of the optimization result, the optimization variables h_t, h_s, and h_b (the thicknesses of the top panel, core sheet, and bottom panel, respectively) and φ are further limited by upper and lower bounds.
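As a purely illustrative helper, the following feasibility check mirrors the kind of box and non-interference constraints described above; all numerical bounds and the specific non-interference rule used here are hypothetical placeholders, not the paper's actual limits.

```python
import numpy as np

def is_feasible(h_t, h_s, h_b, phi_deg, H=0.04, L=0.7, n_stiffeners=5,
                h_bounds=(1e-3, 4e-3), phi_bounds=(30.0, 85.0)):
    """Hypothetical feasibility check for one candidate design (illustrative only)."""
    thicknesses_ok = all(h_bounds[0] <= h <= h_bounds[1] for h in (h_t, h_s, h_b))
    angle_ok = phi_bounds[0] <= phi_deg <= phi_bounds[1]
    # assumed non-interference rule: the horizontal run of one inclined stiffener,
    # H/tan(phi), must fit within the cell width allotted to it
    cell_width = L / n_stiffeners
    no_interference = H / np.tan(np.radians(phi_deg)) < cell_width
    return thicknesses_ok and angle_ok and no_interference

print(is_feasible(2e-3, 2e-3, 2e-3, 48.0))
```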
Meanwhile, to ensure that the lightweight property and the structural stiffness are not lost, the total mass and the first resonance frequency of the optimization result are subject to constraints, where f0,1 and m0 are the first resonance frequency and total mass of the baseline model, respectively.

The problem in this paper is a multi-parameter optimization model. For such optimization problems, due to the limitation of their global searching ability, traditional gradient-based algorithms often fall into a local optimum during the optimization process and cannot converge toward the global optimum solution [17]. In order to overcome this shortcoming, the genetic algorithm, with its strong global search ability, is used in this paper. The genetic algorithm is a stochastic global optimization method developed by imitating the biological evolution mechanism in nature. The essence of the algorithm is to find the optimal solution to the problem through a large number of random probes. The excellent global searching ability of genetic algorithms reduces the possibility of falling into a local optimal solution, and the method is universal with a wide range of applications. The genetic algorithm does not rely on the gradient of the objective function with respect to the parameters during the optimization and does not require the objective function to be continuously differentiable, which makes it suitable for large-scale and discontinuous optimization models. Although the results of the conventional genetic algorithm are acceptable, its main drawback is that the overall run time can easily become unacceptable. The multi-threaded genetic algorithm used in this paper can alleviate this problem because it can be parallelized [33]. The optimization is performed on an Intel Core i7-11700 CPU with 2.5 GHz and 8 processor cores. MATLAB Version R2017b is used throughout the optimization. By using the parallel computing technique, each optimization case has 10 threads, and each thread contains 100 sample individuals. The selection rate is set to 0.5 and the mutation rate to 0.4. To ensure the reasonability of the optimization result, at least 80 generations of evolution are performed, and, considering the randomness of the genetic algorithm, the final result is accepted only when the relative error over five successive generations is less than 0.1%. The program flow chart of the genetic algorithm is illustrated in Figure 19.
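A compact sketch of this procedure is given below, assuming a single-threaded loop (the 10 parallel threads are omitted) and a placeholder objective in place of the WBM-based STLavg evaluation. The bounds, dummy fitness surface, and crossover/mutation operators are illustrative assumptions; the population size, selection rate, mutation rate, and stopping rule follow the values quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design vector x = [h_t, h_s, h_b (m), phi (deg)]; the bounds are placeholders,
# not the paper's actual constraint values.
LOWER = np.array([0.001, 0.001, 0.001, 30.0])
UPPER = np.array([0.004, 0.004, 0.004, 70.0])

def stl_avg_objective(x):
    """Placeholder for the WBM-based STL_avg evaluation (assumed interface)."""
    h_t, h_s, h_b, phi = x
    # Dummy smooth surface peaking near phi = 48 deg, for illustration only.
    return 30.0 + 5.0 * np.exp(-((phi - 48.0) / 10.0) ** 2) - 1e3 * (h_t - h_b) ** 2

def evolve(pop_size=100, min_gen=80, max_gen=300,
           selection_rate=0.5, mutation_rate=0.4):
    pop = rng.uniform(LOWER, UPPER, size=(pop_size, LOWER.size))
    best_hist = []
    for gen in range(max_gen):
        fitness = np.array([stl_avg_objective(ind) for ind in pop])
        order = np.argsort(fitness)[::-1]                 # maximize STL_avg
        elites = pop[order[: int(selection_rate * pop_size)]]
        children = []
        while len(children) < pop_size - len(elites):
            a, b = elites[rng.integers(len(elites), size=2)]
            child = np.where(rng.random(LOWER.size) < 0.5, a, b)  # uniform crossover
            if rng.random() < mutation_rate:                      # mutate one gene
                k = rng.integers(LOWER.size)
                child[k] = rng.uniform(LOWER[k], UPPER[k])
            children.append(child)
        pop = np.vstack([elites, np.array(children)])
        best_hist.append(fitness[order[0]])
        # Accept only after at least 80 generations and when the best value
        # changed by less than 0.1% over five successive generations.
        if gen + 1 >= min_gen and len(best_hist) >= 5:
            window = best_hist[-5:]
            if (max(window) - min(window)) / abs(window[-1]) < 1e-3:
                break
    return pop[0], best_hist[-1]

best_x, best_stl = evolve()
print("best design:", best_x, "STL_avg:", round(best_stl, 2))
```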
Low-Frequency Optimization

Table 3 gives the details of the optimization result in the low-frequency range. According to the results listed in Table 3, the optimized model has a 7.6 dB higher STLavg than the baseline model. In addition, after the optimization, the optimized model shows a 28.8 Hz increase in the first resonance frequency compared to the baseline model. Figure 20 gives the STL spectrum of the optimized model in the low-frequency range. There are two main regions in which the STL after optimization is better than the baseline, which are marked as shadow areas. For both zone 1 and zone 2, the redistribution of the structural resonance frequencies causes the increase of the sound transmission loss. However, the total number of the resonance frequencies remains unchanged, which indicates that, under the current constraints (topology and total mass), the global resonances of the sandwich structure can hardly be eliminated by the optimization.

Figure 21 shows the optimization history distribution of the STLavg with respect to the three design parameters (thickness of the top face plate, inclined angle of the stiffeners, and thickness of the stiffener plates). The normal samples are the samples not retained in the optimization history; the elite samples are the best retained samples; the trend curve is the trend of the elite samples; and the optimal sample is the final solution of the optimization. According to the optimization sample distribution, the elite descendants of both the top panel thickness and the angle of the stiffener accumulate on the lower limit. This phenomenon indicates that, in the low-frequency range, increasing the thickness of the face panel on the incident side and reducing the angle of the core stiffener can be beneficial to improving the sound insulation performance of the sandwich panel.

To determine the proportion of the variance in the sound transmission loss that is predictable from the design parameters, the coefficient of determination (COD, R²) is calculated based on the optimization history, defined as in [34], where yi(xi) is the elite sample point in the optimization history, fi(xi) is the corresponding second-order trend function value, and ȳ is the expectation of the sample points. The value range of the determination coefficient R² is 0-1; the higher the determination coefficient, the higher the influence of a change in the design parameter on the sound transmission loss. Figure 22 gives the coefficient of determination of each design parameter with respect to STLavg. The sound insulation performance of the sandwich structure mainly depends on the overall stiffness of the structure, which can be significantly influenced by the geometrical configuration. All four design parameters have a high COD with respect to the STLavg, and among those, the inclined angle of the stiffener has the greatest impact on the sound transmission loss of the sandwich panel.
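A small sketch of this calculation is given below. It uses the standard coefficient of determination, R² = 1 − SS_res/SS_tot, against a least-squares quadratic trend fitted to the elite samples; reading the "second order trend function" as a fitted quadratic, and the sample numbers themselves, are assumptions for illustration.

```python
import numpy as np

def coefficient_of_determination(x_elite, y_elite):
    """R^2 of elite samples against a fitted second-order (quadratic) trend."""
    x = np.asarray(x_elite, dtype=float)
    y = np.asarray(y_elite, dtype=float)
    coeffs = np.polyfit(x, y, deg=2)        # second-order trend f_i(x_i)
    f = np.polyval(coeffs, x)
    ss_res = np.sum((y - f) ** 2)           # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares about the mean
    return 1.0 - ss_res / ss_tot

# Toy usage: STL_avg of elite samples versus stiffener angle (synthetic numbers).
phi = np.array([35.0, 38.0, 40.0, 44.0, 48.0, 52.0, 56.0])
stl = np.array([27.1, 28.0, 28.9, 29.8, 30.5, 29.9, 28.7])
print(f"R^2(phi -> STL_avg) = {coefficient_of_determination(phi, stl):.3f}")
```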
Middle-Frequency Optimization

Table 4 presents the results of the middle-frequency optimization. Compared to the baseline model, the optimized model has an STLavg increase of 7.9 dB, and the first resonance frequency increases by 5.1 Hz. As shown in Figure 23, for the optimized model, despite there being an obvious increase of the STLavg in the targeting frequency range, the sound insulation performance in the low-frequency range is much worse. This result indicates that, under the constraints of a pre-defined topology and total mass, it is hard to achieve a broadband sound insulation improvement by optimizing the geometrical parameters alone.

Figure 24 shows the optimization history distribution of the STLavg with respect to the three design parameters in the middle-frequency range. In contrast to the low-frequency optimization, only the elite samples of the inclined angle show a lower-limit accumulation, which indicates that a less vertical stiffener is still preferred to improve the sound transmission loss of the sandwich panel. Figure 25 shows the coefficient of determination of each design parameter with respect to STLavg in the middle-frequency optimization.
According to the COD result, the most important design parameters that influence the STL of the sandwich panel are still the thickness of the face sheet and the inclined angle of the stiffener. However, compared to the low-frequency optimization, with the increase of the targeting frequency range, a small rise of the COD of the face sheet thickness can be observed, and at the same time, the importance of the inclined angle of the stiffener is diminished.

High-Frequency Optimization

Table 5 gives the optimization results in the high-frequency range. As shown in Table 5, the high-frequency optimization improves the STLavg of the corresponding frequency range by 11.7 dB, which is much more significant than in the low- and middle-frequency ranges. From the perspective of the STL spectrum illustrated in Figure 26, in addition to the redistribution of the modes, the resonance dips in the targeting frequency range are strongly diminished by the optimization.

The optimization sample distribution is illustrated in Figure 27. Compared to the sample distributions of the previous two frequency ranges, the high-frequency optimization has a more even distribution of the elite descendants, especially for the thickness of the face panel. Nevertheless, a lower-limit accumulation can still be observed in the elite sample distribution of the angle of the core sheet.
The COD of the design parameters in the high-frequency optimization is shown in Figure 28. Despite the thickness of the face panel and the angle of the core stiffener still holding the dominating positions, with the increase of the targeting frequency, the COD of the former continues to rise while that of the latter goes the opposite way.

In the above three optimizations, the acoustic cavities in the core layer of the sandwich panel are filled with a light acoustic medium (air). According to the optimization results, in addition to the thickness of the core panel (stiffener), the other three design parameters are also important to the sound transmission of the sandwich panel. Meanwhile, the correlation between the design parameters and the STL varies with the targeting frequency range.

Considering a heavy acoustic medium, Figure 29 presents the optimization result of the sandwich panel with water-filled acoustic cavities in the same high-frequency range, and the results of the optimized model are listed in Table 6. As listed in Table 6, under the same constraints, the STLavg of the water cavity model only increases by 3.8 dB after the structural optimization, which is far less than in the air cavity case. This result indicates that when the equivalent stiffness of the acoustic cavity is comparable to the structural stiffness, optimizing only the structural parameters has little impact on the sound transmission characteristics of the sandwich panel. As shown in Figure 29, the filled heavy acoustic medium exhibits a strong equivalent mass effect, and the first resonance frequency of the coupled system drops from 69 Hz to 35 Hz, a drop of nearly 50%. It is precisely because of this strong equivalent mass effect that the sound insulation of the water cavity model is much higher than that of the air cavity model.

Summing up the four optimization cases, the coefficients of determination of the design parameters are given in Figure 30. As shown, firstly, with the increase of the targeting frequency, the thickness of the face panel shows an increased influence on the sound transmission loss, while the parameters of the core sheet (including the thickness and the inclined angle) seem to have reduced importance for the sound insulation property of the sandwich panel.
Secondly, as mentioned above, the rapid drop of the COD of the design parameters in the water cavity case also indicates that when the acoustic cavities are filled with a heavy acoustic medium, the medium dominates the sound transmission characteristics of the whole system, and changing the model configuration is not so important for the sound insulation performance.

Experimental Validation

To validate the result of the high-frequency optimization, a sound insulation test is conducted by using the method of a reverberation room and an anechoic room [35]. The specimen configuration is shown in Figure 31. Since the optimization is based on the two-dimensional cross section of the real sandwich panel, the test specimen can be treated as the stretching of the 2D cross section, and the stretching length is W. For comparison purposes, there are three test specimens in total, whose parameters are listed in Table 7. Among the three test specimens, specimen 1 refers to the baseline model, and specimen 2 has a different inclined angle from specimen 1. Specimen 3 is the one with the optimized cross section (due to manufacturing restrictions, the cross section of specimen 3 is not exactly the same as the calculation results), and the other two can be considered as the control group.

The layout of the test facility is illustrated in Figure 32; the anechoic chamber and the reverberation chamber are connected by a passage with no end reflection, and the test specimen is bolted on the test window on the reverberation chamber side. The size of the anechoic chamber is about 16 m × 11.4 m × 6.6 m, and its lower cut-off frequency is about 60 Hz. The volume of the reverberation chamber is about 268 m³. When the frequency is below 400 Hz, the deviation of sound pressure uniformity is less than 3 dB; when the frequency is above 400 Hz, it is less than 1.5 dB.

In the reverberation chamber, multiple measuring positions are selected randomly to place the microphone. Subject to the experimental conditions, for the same specimen, only one microphone is used in a single test run; after the test, the microphone is moved to another measuring position and the test is repeated under the same operating conditions. For each specimen, the test procedure is repeated 5 times, and the final sound pressure is calculated by averaging the 5 test results. On the anechoic chamber side, the same test strategy is used, and the only difference is that the 5 test positions are located on the same cross section (measuring surface) of the passage. A photo of the test site is given in Figure 33.
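The averaging step of this procedure can be sketched as follows. The sketch uses an energetic (pressure-squared) average of the five repeated positions and reports only the raw level difference between the two sides; the full transmission-loss evaluation would add the room/area correction terms of the standard used in [35], which are not stated here, and the numbers below are made up.

```python
import numpy as np

def energy_average_spl(levels_db):
    """Energetic (pressure-squared) average of repeated SPL measurements, in dB."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

# Five microphone positions per side, per the test procedure (synthetic values).
reverberant_side = [92.3, 91.8, 92.9, 92.1, 92.6]   # source room, dB
anechoic_side = [61.4, 60.9, 61.8, 61.1, 61.5]      # receiving side, dB

L_source = energy_average_spl(reverberant_side)
L_receive = energy_average_spl(anechoic_side)

# Raw noise reduction only; corrections required by the test standard are omitted.
print(f"Noise reduction = {L_source - L_receive:.1f} dB")
```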
The experimental control room is located on the other side of the wall of the anechoic room and the reverberation room. All data processing and control equipment are placed in it, including the sound source controller (B&K 2260), power amplifier (B&K 2706), data acquisition device (LMS), microphone (B&K 4189), and data acquisition computer (Lenovo). In the experiment, a white noise signal is used as the incident sound source, and the transmitted power is controlled by the sound source controller. Figure 34 shows the testing equipment of the STL experiment.

The experimental results are shown in Figure 35.
Comparing the STL of specimens 1 and 2, since specimen 1 has a more inclined (core) stiffener, its sound transmission loss is correspondingly higher than that of specimen 2, especially in the low-frequency range, which agrees with the numerical prediction. According to the given curves, the optimized model (test specimen 3) has an obviously higher sound transmission loss than the other two specimens in the targeting frequency range (800-1200 Hz). This result proves the reliability of the optimization and indicates that the optimization of the 2-dimensional cross section is effective and can be applied in the practical engineering design of the sandwich panel.

Conclusions

In this paper, considering the vibro-acoustic coupling effect, a numerical model of a finite-size sandwich plate with a corrugated core is established by using the wave-based method. Through the numerical validation, the comparison between the WBM and the conventional finite element method indicates that, under the same accuracy condition, the present method can significantly reduce the system DOFs and save nearly 80% of the computational time. These advantages make the present method very suitable for calculating the objective function in an optimization problem. (1) Together with Rayleigh's integral, a vibro-acoustic optimization is performed by using the genetic algorithm to maximize the frequency-averaged sound transmission loss (STLavg) of the sandwich panel in three different frequency bands. (2) A sound insulation test is conducted by using the methods of reverberation room and anechoic room to validate the optimization results in the high-frequency range. The test data show that the optimized model has an obviously higher sound transmission loss than the control group. Finally, the optimal results cannot be considered conclusive for all sandwich panels since they do not consider the effects of arbitrary incidence angles with respect to different frequency bandwidths, among other factors. This aspect, on which our future work will focus, cannot be neglected since, for example, the STL can be strongly dependent on the incidence angle due to coincidence effects. In addition, the method presented in this paper can be applied in the modeling of other corrugated core shapes of sandwich panels, such as a rectangular corrugated core or a triangular corrugated core.
13,715.8
2021-12-01T00:00:00.000
[ "Physics", "Engineering" ]
Note on Bertrand Competition under Quadratic Cost Functions The authors focus on a model of competition known in economics as Bertrand competition. They investigate how the outcome of Bertrand competition changes when linear cost functions are replaced by quadratic functions in the model. Named after French mathematician Joseph Louis Francois Bertrand (1822-1900), Bertrand competition describes interactions among firms and a market situation in which firms make their output and pricing decisions based on the assumption that their competitors will not change their own prices. Prokop, Ramsza and Wiśnicki show that the introduction of quadratic cost functions in the model leads to a qualitative change in competition between firms. In this case, the standard assumption that a firm with a lower price is interested in taking over the entire market is not only unrealistic (even in the absence of capacity constraints), but also irrational from the profit-maximization viewpoint, the authors say. Therefore the results of previous studies are misleading, according to Prokop, Ramsza and Wiśnicki. They argue that the so-called Bertrand duopoly model should be adjusted to better capture the behavior of firms under quadratic cost functions. The authors relax the assumption that the output of firms is determined by existing demand. They offer a modified Bertrand duopoly model in which firms competing in prices are free to choose their level of production depending on the market demand for their products at specific prices. "Under the modified assumptions, the static price competition game of identical duopolists has no symmetric equilibrium in pure strategies," the authors conclude. Their research shows that, under quadratic cost functions, it is possible to expect price fluctuations on oligopolistic markets rather than a stable equilibrium situation described by the standard model of price competition.

* Warsaw School of Economics, Department of Economics II; e-mail<EMAIL_ADDRESS>
** Warsaw School of Economics, Department of Mathematics and Mathematical Economics; e-mail<EMAIL_ADDRESS>
*** Warsaw School of Economics, M. A. student; e-mail<EMAIL_ADDRESS>

Introduction

It has been widely known that static Bertrand competition in prices among identical firms in an oligopoly leads to an outcome similar to that of a perfectly competitive market. This result was based on the assumption of constant marginal costs of supply. Clearly, it would be interesting to see how robust this conclusion is to changes in the cost functions of firms. The objective of this note is to show the key differences in the outcome of Bertrand competition under different cost functions of firms. In particular, we consider homogeneous duopoly competition when the cost functions are quadratic, and compare it with the results of the standard Bertrand models with demand-determined output discussed in the previous literature, summarized by, for example, Dastidar [1995] or Satoh and Tanaka [2013].
We show that the introduction of quadratic cost functions leads to a qualitative change in the competition between firms. In this case, the assumption that the firm with the lower price is interested in taking over the entire market is not only unrealistic (even in the absence of capacity constraints), but simply irrational from the profit-maximization viewpoint. Therefore, the results of the analysis provided by the preceding literature are misleading. More specifically, we show that the Bertrand duopoly model should be adjusted to better capture the firms' behavior under quadratic cost functions. We prove that the modified price competition game of duopolists characterized by identical quadratic cost functions has no symmetric equilibrium in pure strategies.

In the next section, we briefly review the standard Bertrand competition of firms with identical and constant marginal costs. The following section focuses on the model with quadratic cost functions of firms. In that section, we consider two different specifications of firms' behavior when their prices differ: the standard approach found in the existing literature, and the modified framework suggested in our paper. Conclusions are formulated in the last section.

The standard Bertrand duopoly

More than a hundred years ago, Bertrand [1883] pointed out that firms compete primarily in prices. In the simplest static model of price competition, two companies (duopoly) produce a homogeneous good at identical and constant marginal costs c. Moreover, it is assumed that there are no capacity constraints, so that each firm would be able to satisfy the entire market demand.

The companies set the prices of their products simultaneously and independently. Consumers buy the goods at the lowest quoted price. In the case of identical prices set by the suppliers, the demand is split equally between the two firms. Thus the demand for the products of firm i is given by the standard sharing rule: firm i serves the whole market demand if its price is lower, half of it if the prices are equal, and nothing otherwise. Each firm maximizes its own one-period profit.

In the above game, the Nash equilibrium is defined as a pair of strategies p_1* and p_2* such that each firm's price is a best response to the other's. It means that, in the Nash equilibrium, none of the firms has an incentive to change the price of its product, given the price of its competitor. Firm i would like to set its price slightly below the competitor's price, as that would allow firm i to take over the entire market demand. Only when the rival's price is set at a level equal to marginal cost will firm i have no incentive to further undercut prices. Thus, it is easy to check that the only Nash equilibrium entails the pair of strategies p_1* = p_2* = c. In this equilibrium, each firm earns zero profit. This result is called the "Bertrand paradox"¹: price competition between just two firms is enough to generate the outcome of a perfectly competitive market.
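The undercutting logic can be illustrated numerically. The sketch below is an illustration only: it assumes a linear demand a − p with a = 1 and marginal cost c = 0.2 (made-up values) and a discrete price grid; iterated best responses drive the price down to (one grid step above) marginal cost.

```python
import numpy as np

a, c = 1.0, 0.2                      # illustrative demand intercept and marginal cost
prices = np.linspace(0.0, a, 101)    # discrete price grid (step 0.01)

def profit(p_i, p_j):
    """Profit of firm i under the standard sharing rule: lower price takes all, ties split."""
    d = max(a - p_i, 0.0)
    if p_i < p_j:
        q = d
    elif p_i == p_j:
        q = d / 2.0
    else:
        q = 0.0
    return (p_i - c) * q

def best_response(p_j):
    return prices[int(np.argmax([profit(p, p_j) for p in prices]))]

p = prices[80]                        # start from a high rival price (0.8)
for _ in range(100):                  # iterated undercutting
    p = best_response(p)
print(f"price after iterated best responses: {p:.2f} (marginal cost c = {c})")
```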
The Bertrand paradox disappears when we relax some of the assumptions of the basic model. There are three key directions of modification. First, the suppliers may have capacity constraints, which will not allow them to cover the entire market demand when they undercut prices. Second, by considering an infinite horizon for the competition between firms, the incentives for the firms to undercut in the short run could be eliminated. Finally, when the products offered by the firms are differentiated, then even sharp price competition may generate positive profits for both suppliers. In each of the above modifications, a symmetric Nash equilibrium in pure strategies exists and constitutes a good basis for the prediction of the firms' behavior in oligopoly.²

¹ It should not be confused with the Bertrand paradox known in probability theory; see Bertrand [1889].

As mentioned at the beginning of this note, we will focus on the modification of the assumption about the cost functions of the suppliers. Instead of the linear form, we consider a quadratic cost function, i.e., the marginal costs will not be constant any more.³

³ This modification is related to the assumption of capacity constraints; compare, e.g., Tirole [1988, pp. 212-213].

The model with quadratic cost functions

The standard approach

Again consider an industry comprising two firms: 1 and 2. Each of them produces an identical good at quantities q_1 and q_2, respectively. Market demand for the product is given as a linear function of the price, where p (0 ≤ p ≤ a) denotes the market price and a (a > 0) is a given parameter reflecting the market size. Each firm is characterized by a quadratic cost function.

In the standard model of Bertrand competition it is assumed that the firms set prices simultaneously and independently, and the output is determined by the market demand; the profits are computed accordingly. In the symmetric Nash equilibrium (if it exists), both firms will choose an identical price level; denote it by p*. Then, the profit of each firm will amount to the value given by (8). If one of the firms, say firm 1, reduced its price slightly below p*, it would capture the entire market demand and would earn the profit given by (9). Firm 1 has no incentive to undercut below p* if and only if the profit given by (8) is not smaller than the profit given by (9).

Since p* < a, it follows from (11) that none of the firms has any incentive to undercut for p* ≤ 3a/5. Moreover, in a Nash equilibrium, the profits of the firms should not be negative, which requires p* ≥ a/3. Thus, if p* is a price at a Nash equilibrium of the above game, then it must belong to the interval [a/3, 3a/5]. Observe that when both firms set the price equal to p*, neither firm individually has any incentive to raise its price, because by doing so, according to (7), the firm will earn zero profit. Therefore, any price satisfying (12) and (15) is indeed a Nash equilibrium of the Bertrand game.⁴

However, it should be noticed that the computation of profit according to (7) implicitly assumes that the firm with the lower price will supply the quantity equal to the entire market demand at this price. It would mean that the firm is forced to produce even when its marginal costs are above the price (which happens in any of the above Nash equilibria). Therefore, this assumption is rather unrealistic. The price-undercutting firm should be allowed to supply less than the entire market demand, if that is more profitable for that firm.
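Before turning to the modified model, the equilibrium price interval derived above can be cross-checked numerically. The sketch assumes the linear demand a − p and the quadratic cost q² (with a normalized to 1), a reading consistent with the bounds and prices reported in the text; the grid search simply tests, for each common price, that no price cut pays and that profits are non-negative.

```python
import numpy as np

a = 1.0                                   # normalized market size
p_grid = np.linspace(0.0, a, 2001)        # candidate common prices

def profit_split(p):
    """Profit when both firms charge p and split demand: q = (a - p)/2, cost q^2."""
    q = (a - p) / 2.0
    return p * q - q ** 2

def profit_whole_market(p):
    """Profit from serving the whole demand a - p at price (just below) p."""
    q = a - p
    return p * q - q ** 2

def best_undercut_profit(p):
    """Best profit attainable by cutting the price to any p' < p on the grid."""
    lower = p_grid[p_grid < p]
    if lower.size == 0:
        return 0.0
    q = a - lower
    return float(np.max(lower * q - q ** 2))

eq = [p for p in p_grid
      if profit_split(p) >= profit_whole_market(p)     # a marginal undercut does not pay
      and profit_split(p) >= best_undercut_profit(p)   # nor does any deeper price cut
      and profit_split(p) >= 0.0]                      # and profits are non-negative
print(f"equilibrium prices ≈ [{min(eq):.3f}, {max(eq):.3f}]; "
      f"theory: [a/3, 3a/5] = [{a/3:.3f}, {3*a/5:.3f}]")
```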
Thus, the Bertrand model in its standard version is not appropriate for the analysis of oligopolistic price competition in the case of quadratic cost functions. By offering the lowest price in the market, a firm should have the option to sell less than the market demand for its product. This option was not important in the case of linear costs, because marginal costs stayed constant no matter the production size. The situation is significantly different when the cost functions are quadratic.

⁴ Observe that the equilibrium p* = 3a/5 Pareto dominates any of the other equilibria.

The modified model

Let us consider a modified Bertrand duopoly game, in which the firms competing in prices are free to choose their levels of production up to the size of the market demand for their products at the quoted prices. By setting its price at the level p_i, a profit-maximizing firm i would be interested in producing the output at the level that equalizes the marginal cost (2q_i) and the quoted price (p_i), as long as there is enough market demand for its product at the prices set by the firms. Thus firm i will produce q_i = p_i/2, as long as the market demand for the product of firm i is not smaller than p_i/2.

Formally, we can describe the production decision of firm i as a function of prices as follows. When firm i quotes the lowest price in the market (p_i < p_j), it will produce and sell the smaller of its desired supply p_i/2 and the market demand at its price. In the case of identical prices set by the suppliers (p_i = p_j), the output sold by firm i equals the smaller of p_i/2 and half of the market demand. Hence the quantity of the product sold by firm i as a function of prices can be summarized as in (16).⁵ Given the above function of firm i's output level, the profit of firm i is computed as in (17).

⁵ In this case, we assume the most natural efficient-rationing rule, i.e., the customers with the highest valuation of the product buy first from the cheaper producer. For a discussion of rationing rules see, for example, Tirole [1988, pp. 212-214].

Now, we will focus on the possibility of the existence of a symmetric Nash equilibrium of our modified Bertrand game. In a symmetric equilibrium (if it exists), each firm i would be interested in setting the price p_i to maximize its profit (18), where q_i(p_i, p_i) is given by (16). The profit-maximizing price p_i, denoted by p*, equals a/2. At this price, each firm i would produce and sell the output a/4. Summing up the above considerations, the price p* = a/2 is the only symmetric strategy profile that maximizes the firms' profits and clears the market. Thus, it is the only candidate for a symmetric Nash equilibrium of our modified game.
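A few lines of arithmetic make the candidate price, and the upward-deviation incentive examined in the next paragraphs, concrete. The sketch normalizes a = 1 and assumes the linear demand a − p and cost q² consistent with the reported values (p* = a/2, output a/4, profit a²/16); the specific deviation prices are arbitrary illustrations.

```python
a = 1.0                                  # normalized market size

# Candidate symmetric price: each firm's unconstrained supply p/2 equals its
# demand share (a - p)/2 exactly when p = a/2 (market clearing).
p_star = a / 2.0
q_star = p_star / 2.0                    # = a/4
pi_star = p_star * q_star - q_star ** 2  # = a^2/16
print(f"candidate: p* = {p_star}, q* = {q_star}, profit = {pi_star}")

# Upward deviation: with the rival at p* = a/2 supplying a/4, a firm charging a
# higher price p faces residual demand 3a/4 - p and (just above a/2) sells it all.
for p in (0.52, 0.55, 0.60):
    q = min(p / 2.0, 3.0 * a / 4.0 - p)
    pi = p * q - q ** 2
    print(f"p_i = {p:.2f}: profit = {pi:.4f}  (> {pi_star:.4f}, so raising the price pays)")
```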
Observe that when both firms charge the same price p* = a/2 and produce the output a/4, neither of them has any incentive to undercut. The argument is as follows. When firm j quotes the price p_j = p* = a/2, it follows from (16) that by charging a price p_i below p_j, firm i produces and sells the output q_i = p_i/2. From (18) it then follows that the profit of firm i would be lower than π_i* = a²/16. Thus, none of the firms will be interested in reducing its price below p* = a/2.

Now, we should consider the possibility for any of the firms to individually increase its price. When firm j quotes the price p_j = p* = a/2, it follows from (16) that by charging a price p_i above p_j, firm i produces and sells the residual demand left over after firm j's sales. By considering only a small increase of the price p_i above p_j = p* = a/2, we may limit ourselves to the case when a/2 < p_i ≤ 3a/4. Then, from (18) it follows that the profit of firm i is determined by this residual demand, and from (24) the derivative of this profit function with respect to p_i is strictly positive at p_i = a/2, so a small price increase raises firm i's profit above π_i*. Therefore, it follows from the above considerations that our modified Bertrand competition game has no symmetric Nash equilibrium in pure strategies.

Conclusions

In this note, we presented the complication that arises in the simple analysis of static price competition when the linear cost functions are replaced by quadratic costs. After adjusting the standard model of Bertrand competition to the new cost environment, we obtain a game with no symmetric equilibrium in pure strategies. Therefore, a solution to the modified static model must involve either asymmetric behavior or randomization. Since we expect identical firms to behave in a similar way, a symmetric equilibrium in mixed strategies should be considered. This conclusion has important consequences for the behavior of firms operating under quadratic cost functions.

Following the results of our analysis, when the costs of production are described by a quadratic function, we should expect frequent price changes by oligopolistic firms, rather than the stable market situation described by the standard Bertrand model. In order to capture the price fluctuations in an industry characterized by quadratic cost functions, mixed strategies must be considered, and the dynamic framework of analysis becomes even more important.
3,376.8
2015-04-30T00:00:00.000
[ "Economics", "Mathematics" ]
Field study of system efficiency of detached house with solar energy system and fuel cell cogeneration system in Japan. In this study, the energy flow in detached houses in Sendai City in the Tohoku region of Japan, where a photovoltaic power generation system, a solar thermal collection system, and a fuel cell cogeneration system were installed, was analyzed based on measured data. Hot water obtained by solar heat collected in the house is used to supply hot water to the kitchen throughout the year, to supply hot water to floor heating and to preheat the air supply of the HVAC system in winter, and to regenerate the desiccant wheel of the HVAC system in summer. A hybrid hot water supply system using city gas and a heat pump is used as an auxiliary heat source for the solar thermal system. As a result, the monthly energy self-sufficiency rate, energy efficiency and heat loss rate of the house for three years were calculated, and the necessity of optimizing the operation method of the solar thermal system was found. In addition, daily analyses of the solar thermal system in winter and summer suggested the importance of operation corresponding to the weather.

Introduction

The energy self-sufficiency rate of Japan in 2019 was 12.1% [1]. As described above, the energy self-sufficiency rate of Japan is very low, and so active use of renewable energy in houses is required. In particular, the use of solar energy is suitable for the energy consumption structure of houses because it can supply electricity and heat by photovoltaic power generation and solar thermal collection. Fuel cell cogeneration systems, which generate electricity and heat by chemically reacting hydrogen from city gas with oxygen in the air, are also suitable for houses. In this study, the energy flow in a detached house in Sendai in the Tohoku region of Japan, where a photovoltaic power generation system, a solar thermal system, and a fuel cell cogeneration system were installed, was analyzed based on measured data, and the operation method to improve the system efficiency was discussed.

Outline of house

The system was introduced in a two-story wooden house that was completed in April 2015. The total floor area is 163 m². In terms of heat loss factors, the UA value is 0.52 W/m²K and the Q value is 1.69 W/m²K. The measurement of air-tightness carried out in September 2020 showed that the equivalent leakage area per floor area was 2.51 cm²/m². Fig. 1 shows the flow of energy source, generation, load, and waste in the surveyed house. The house produces and uses heat and electricity from sources such as city gas and commercial electricity supplied from outside and solar energy obtained from two kinds of panels on the roof. Heat is produced by the solar thermal system and the fuel cell cogeneration system. The solar system provides hot water for the kitchen, floor heating on the second floor, and the coils in the HVAC system, while the fuel cell cogeneration system provides hot water for floor heating on the first floor, the bathroom, and other uses. As for electricity, each system and the home appliances are operated using commercial power and electricity produced by the fuel cell cogeneration system and solar power generation, and surplus electricity is sold. Fig. 2 shows a schematic of the solar thermal system in summer. The system is based on a system that the authors have researched and developed [2].
The solar thermal collector has an area of 12 m² and the thermal storage tank has a volume of 300 L, and the auxiliary heat source is a hybrid system of a high-efficiency gas water heater and a heat pump. Solar heat collected by the collector is stored as hot water in the storage tank for use in hot water supply and heating. In winter, solar heat is used to preheat the supply air of the HVAC system and for floor heating; in summer, it is used to regenerate the desiccant wheel, which uses titanate nanowire (TiO2) as a dehumidifier in the HVAC unit. The desiccant wheel is able to regenerate at low temperatures of about 40-60 degrees centigrade. If sufficient solar heat is not available, the auxiliary heat source operates automatically. The HVAC system has five operating modes: air blowing, dehumidification, dehumidification and cooling, heating, and heating and humidification. A non-CFC indirect evaporative cooler is used to cool the introduced outside air in the dehumidified cooling mode in summer. On the other hand, in the heating and humidifying mode in winter, the humidified air from the evaporative cooler is mixed with the introduced outside air.

Outline of measurements

Long-term measurements of temperature, humidity, amount of heat and amount of water in the systems were carried out. Power consumption and power generation were measured by a home energy management system. In this study, using the energy flow diagram shown in Fig. 1, the monthly system efficiency was analyzed based on measured data from 2016 to 2018 [3]. In addition, the daily system efficiency of the solar thermal system in winter and summer was analyzed using the measured data in January and July 2018.

3 Results of monthly total energy flow analysis for each season

Results of energy self-sufficiency rate

The energy self-sufficiency rate is the ratio of solar energy to total energy input; according to Fig. 1, it is the sum of S3 and S4 divided by the sum of S1 to S5 minus L13. Fig. 3 shows the results. The energy self-sufficiency rate tended to decrease in summer and winter and increase in the middle seasons. This is because solar power generation decreases and energy consumption for heating and cooling increases in summer and winter. One method to increase the energy self-sufficiency rate in the surveyed house is to store solar-generated electricity during the day and use it at night when energy demand is high.

Results of total energy efficiency

Total energy efficiency is the ratio of energy consumption to total energy input; according to Fig. 1, it is the sum of L1 to L12 divided by the sum of S1 to S5 minus L13. Fig. 4 shows the results. There were 12 months in 3 years when total energy efficiency exceeded 100%. The reason for this is thought to be the addition of the gain associated with the increase in power generation efficiency of the fuel cell cogeneration system as evaluated by primary energy. In addition, since the coefficient of performance of the hybrid water heater is 5.0, it is considered that heat equal to five times the power consumption was produced in the hybrid water heater, and the heat produced minus the power consumption was added as the gain. Correlation coefficients between each item of the energy flow diagram and total energy efficiency are shown in Table 1. Total energy efficiency tended to increase with the increase in heat production of the fuel cell cogeneration system and solar thermal collection and with the increase in surplus electricity due to the increase in power generation.
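The monthly ratios defined above can be computed directly from the flows of Fig. 1; a minimal sketch is given below. The flow labels follow Fig. 1, but the numbers are made up for illustration and are not the measured data.

```python
def monthly_metrics(S, L, W):
    """Monthly system metrics following the flow labels of Fig. 1.

    S: input energies S1..S5 (S3, S4 = solar thermal / solar power),
    L: consumed energies L1..L13 (L13 is subtracted from the input total),
    W: waste heats W1..W3.
    """
    total_input = sum(S[k] for k in ("S1", "S2", "S3", "S4", "S5")) - L["L13"]
    self_sufficiency = (S["S3"] + S["S4"]) / total_input
    total_efficiency = sum(L[f"L{i}"] for i in range(1, 13)) / total_input
    heat_loss_rate = (W["W1"] + W["W2"] + W["W3"]) / total_input
    return self_sufficiency, total_efficiency, heat_loss_rate

# Toy example (MJ per month, illustrative values only).
S = {"S1": 4000, "S2": 2500, "S3": 900, "S4": 1100, "S5": 300}
L = {f"L{i}": v for i, v in enumerate(
    [1200, 900, 800, 500, 400, 350, 300, 250, 200, 150, 100, 80, 200], start=1)}
W = {"W1": 300, "W2": 250, "W3": 400}
ssr, eff, loss = monthly_metrics(S, L, W)
print(f"self-sufficiency {ssr:.1%}, efficiency {eff:.1%}, heat-loss {loss:.1%}")
```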
In addition, because total energy efficiency was negatively correlated with the items related to the solar thermal system, especially the hybrid water heater, the way the solar thermal system is operated needs to be analyzed in detail and optimized.

Results of heat loss rate

The heat loss rate is the ratio of the waste heat of the systems to the total energy input; according to Fig. 1, it is the sum of W1 to W3 divided by the sum of S1 to S5 minus L13. Fig. 5 shows the results. The heat loss rate increased in winter, when the use of hot water increased. According to the correlation coefficients between each item of the energy flow diagram and the heat loss rate shown in Table 2, the exhaust heat from the solar thermal collection system (W3) had the strongest positive correlation among the exhaust heats from each system (W1 to W3). Based on the above, it is considered that heat loss from the piping connecting the hot water coil used to preheat the outside air in the HVAC unit, the floor heating, and the thermal storage tank was high, and that heat loss from the storage tank itself was also high.

Results of daily energy flow analysis of solar thermal system

4.1 Results of daily analysis in January 2018

The solar thermal energy self-sufficiency rate is the ratio of solar thermal energy to total input energy; according to Fig. 1, it is the value of S3 divided by the sum of S1 to S5. Fig. 6 shows the results. The amount of heat collected depended on the weather, and rain or snow was observed on the days when the amount of heat collected was 0 MJ. The solar thermal energy self-sufficiency rate tended to increase with increasing solar thermal collection. According to Fig. 1, the energy efficiency of the solar thermal system is the sum of L4 and L5 divided by the sum of S2gas and S3, and the solar thermal energy contribution rate is the value of S3 divided by the sum of S2gas and S3. Fig. 7 and Fig. 8 show the results, respectively. On days when the amount of heat collected was 0 MJ, the hybrid water heater operated and the energy efficiency was relatively high at 72.6 to 78.0%. When the HVAC system was operated all day with solar radiation, the energy efficiency was high at 75.6 to 76.6% due to efficient heat production by solar thermal energy and city gas. On the 4th, however, although the HVAC system was also operated all day, the heat loss of the solar thermal collection system (W3) shown in Fig. 9 was high and the energy efficiency was low at 70.3%. One contributing factor is that less of the surplus heat (G3) produced by the hybrid water heater was returned through the HVAC system than on other days. The solar thermal energy contribution rate on days when the HVAC system was operated all day increased as the solar thermal collection increased, but the contribution rate was higher on the 12th and 18th, when the HVAC system was operated for shorter periods. On the other hand, when the HVAC system was operated only in the morning and evening, or only in the evening (except for the 2nd, when G3 was lower than on other days), energy efficiency tended to decrease due to increased heat loss in the thermal storage tank as the solar thermal collection increased.

Results of daily analysis in July 2018

The solar thermal energy self-sufficiency rate shown in Fig. 10 increased with increasing heat collection and reached a maximum of 19.0%. There were many days when the HVAC system was operated only during the daytime, and the energy efficiency of the solar thermal system shown in Fig. 11
was 86.1 to 105.4% when the amount of heat collected was 47 MJ or more per day, except for the 13th and 31st. The operating time of the HVAC system was short on the 13th, and the hybrid water heater produced more heat because the HVAC system was operated all day on the 31st. On the other hand, the energy efficiency was less than 80% when the system was operated in the evening, despite the low heat collection (3rd, 5th, 29th). The energy efficiency on the 2nd and 15th was very high because the HVAC system was not operated, the thermal storage tank was full from the previous day, and the hybrid water heater did not operate much on those days. The contribution rate was also high on the 2nd, 3rd, and 15th, even though the amount of heat collected was small; the reason is mentioned above. The contribution rate was as low as the energy efficiency on the 31st because the HVAC system was operated all day.

Conclusion

The energy flow in a detached house with a solar energy system and a fuel cell cogeneration system in Sendai, Japan, was analyzed based on measured data. As a result of the monthly analysis, it was found that the heat production and power generation of the fuel cell cogeneration system, solar thermal collection, and solar power generation were factors in the increase of total energy efficiency, while the heat loss of the solar thermal system in winter was high. When the HVAC system was operated all day with solar radiation in winter, the energy efficiency was high due to efficient heat production by solar thermal energy and city gas. In order to increase the solar thermal energy contribution rate when the HVAC system is operated all day, it is necessary to reduce heat loss in the thermal storage tank. When there was some heat collection in summer, the energy efficiency and the solar energy contribution rate were higher for daytime-only operation of the HVAC system. In order to operate the solar thermal system efficiently until the evening, it is necessary to devise the flow control of the hot water so that hot water at the minimum required temperature is sent to the coil.
2,975
2023-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Option Pricing with Markov Switching in Uncertainty Markets In this paper, we present a stock model with Markov switching in uncertainty markets, where the drift and volatility parameters change according to the states of a Markov process. To price the option, we first establish a risk-neutral probability based on the uncertain measure given by Liu. Then a closed form of the European option pricing formula is obtained by applying Laplace transforms and inverse Laplace transforms.

Introduction

The problem of option pricing is one of the most foundational problems in the financial world. In 1900, Brownian motion was first introduced to finance by Bachelier [1]. Samuelson [2] proposed that stock prices follow geometric Brownian motion. Following that, Black and Scholes [3] created the famous Black-Scholes model and gave an option pricing formula. Nowadays, it has become an indispensable tool in financial markets. In previous option pricing theory, the problems of option pricing were handled under stochastic theory.

In order to study uncertain phenomena in human systems, Liu [4] founded an uncertainty theory and refined it based on normality, monotonicity, self-duality and countable subadditivity. In 2008, Liu [5] proposed the concept of an uncertain process and defined uncertain differential equations. In 2009, Liu [6] designed a canonical process and invented uncertain calculus. By means of uncertain differential equations, Liu [6] proposed a stock model for uncertain markets, which are essentially a kind of market consistent with the uncertain measure. Following that, Chen [7] derived an American option pricing formula.

In reality, some important information may greatly impact the volatility of stock returns, such as a change from a "bull market" to a "bear market". Hamilton [8] first studied regime changes and business cycles by using the Markov switching model. Di Masi, Kabanov and Runggaldier [9] considered the problem of hedging a European call option for a diffusion model where drift and volatility are functions of a two-state Markov process. Guo [10] provided a closed-form formula for the arbitrage-free price of the European call option by using a Markov switching model. Cheng et al. [11] carried out further research on Guo's result. Mamon and Rodrigo [12] obtained an explicit solution for European options in a regime-switching economy.

Inspired by the empirical phenomena of stock fluctuations related to the business cycle and by the evidence from the literature that validates the Markov switching model in the investigation of option pricing, in this paper we deal with the pricing of options under a Markov switching model in uncertainty markets. Specifically, we assume that stock prices are generated by a geometric uncertain process, and that the drift and volatility parameters take different values depending on the state of a Markov process. We finally provide an explicit formula for the price when the Markov process has two states.

Preliminaries

Uncertainty theory is a branch of axiomatic mathematics. It is an evolving system founded by Liu to deal with human uncertainty. The book Uncertainty Theory has now been updated to its fifth edition [13]. The canonical process C_t is a normal uncertain variable with expected value 0 and variance t², whose uncertainty distribution is given accordingly. If C_t is a canonical process, then the corresponding exponential uncertain process is called a geometric canonical process, where e is called the log-drift and σ is called the log-diffusion. Consider the uncertain stock model (2.4), where r is the riskless interest rate, e is the log-drift, σ is the log-diffusion, and C_t is a canonical Liu process.
Note that the stock price is Y_t = Y_0 exp(et + σC_t), whose uncertainty distribution follows from that of the canonical process. Definition 5. Assume a European call option has a strike price K and an expiration time s. Then the European call option price is the expected discounted payoff, f_c = e^{-rs} E[(Y_s − K)^+]. Definition 6. Assume a European put option has a strike price K and an expiration time s. Then the European put option price is f_p = e^{-rs} E[(K − Y_s)^+]. Uncertain Stock Model with Markov Switching Consider the following uncertain stock model, which incorporates the different states of the stock market: the drift and volatility of the stock equation take the values associated with the current market state ε(t), with ε(t) = 0 in the ordinary state. Similarly, let ε(t) = 1 when some significant piece of information has just appeared and causes turbulence in the stock market. Suppose, further, that each piece of information flow is a random process, and that z_1, z_2, ..., z_n are independent identically distributed processes; then their superimposed process is Poisson. Consider the transitions among the different states: let λ_i be the rate of leaving state i, and let τ_i be the time interval remaining in state i. Then the cumulative distribution function of τ_i is F_i(t) = 1 − e^{−λ_i t}, t ≥ 0, i = 0, 1. The volatility of the stock price is thus driven by the canonical process C_t and the Markov process ε(t). Risk-Neutral Option Pricing Based on the Uncertain Stock Model with Markov Switching We aim to value the European call option based on risk-neutral pricing theory, but it is easy to verify that the model does not accord with the no-arbitrage hypothesis. As we know, in a risk-neutral framework the option put-call parity relation is C − P = Y_0 − K e^{−rT}, where C is the European call option price, P is the European put option price, K is the common strike price and Y_0 is the initial stock price. But from Definition 5 and Definition 6 we can see that no put-call parity relation holds between f_c and f_p: they were not priced under a risk-neutral measure. So we need to find the risk-neutral measure. Proof: As we know, under the risk-neutral measure the expected stock return e is equal to the riskless interest rate r. Computing the expected value of Y_t under the original measure by means of (4.5), and comparing it with the expected value of Y_t under the risk-neutral measure, yields the risk-neutral uncertainty distribution, which is thus verified. We then present the following theorem for the two-state Markov switching model. Let T_i be the occupation time of state 0 when the chain starts from state i, that is, the total amount of time between 0 and T during which ε(t) = 0, starting from state i, for i = 0, 1. The option price is then expressed through the derivative of the risk-neutral uncertainty distribution and the density of T_i, which involves the modified Bessel functions J_0 and J_1 (a = 0, 1) and the unit impulse function δ(t).
Proof: Since the arbitrage price of the European option is the discounted expected value of Y_T under the risk-neutral uncertainty measure, we consider T_i and its distribution function g_i(t, T); then, according to the smoothing property of conditional expectation, the price can be written as an integral against the derivative of the risk-neutral uncertainty distribution and the density of T_i. Next we deduce the form of this density through its Laplace transform: taking Laplace transforms with respect to T on both sides and using the convolution formula, we obtain an expression involving the unit step function θ, which can be inverted explicitly when v ≥ w. Conclusion In this paper, a stock model with Markov switching in uncertainty markets is proposed to capture the fluctuations related to the business cycle. Then the risk-neutral probability based on the uncertain measure is established for European call option pricing. Finally, an analytical formula for the option price is given by virtue of risk-neutral pricing theory. The model presented in this paper is applicable not only to two-state Markov switching but also to a general model with a finite-state Markov process. Lemma 1. Consider the uncertain stock model (2.4). Theorem 1. Let g_i(t, T) be the uncertainty distribution function of T_i. Under the Markov switching uncertain stock model (3.1) and the risk-neutral uncertainty distribution (4.7), the arbitrage-free price of the European call option with expiration date T and strike price K is obtained by using the Laplace transform and the inverse Laplace transform. Assume that the time interval of state changes obeys the exponential distribution, as shown in (3.2); then, by considering the total probability of the relevant events, the Theorem is proved. The Markov process has a finite number of states; in this paper, we focus our discussion on the case of the two-state Markov switching model. Specifically, ε(t) is a stochastic process representing the state of the market, and the model parameters take different values depending on the state ε(t).
1,936.2
2015-05-12T00:00:00.000
[ "Mathematics" ]
Camera-Based Smart Parking System Using Perspective Transformation : The concept of the "smart city" has emerged with the advancement of technology, but some facilities are not yet sufficiently intelligent, such as parking lots. Hence, this paper proposes an inexpensive and plug-and-play camera-based smart parking system for airports. The system utilizes inverse perspective mapping (IPM) to provide an aerial view image of the parking lot, which is then processed to extract parking space information. The system also includes a guidance system to assist drivers in finding available parking spaces. The system is simulated on a 3D scene based on the parking lot of Macao International Airport. In the experiment, our system achieved an accuracy rate of 97.03% and a mean distance error of 8.59 pixels. This research study shows the potential of enhancing parking lots using only cameras as data collectors, and the results show that the system is capable of providing accurate and useful information. It performs well in parking lots with open space, in particular. Moreover, it is an economical solution to implement. Introduction The rapid advancements of the third industrial revolution and computer technology have ushered in a new era of "smart life", where integration with smart devices, cities, airports, and more is becoming a reality. In this context, many urban designers and planners did not anticipate the pace of technological progress and its continued development. However, the city of Macao is actively promoting education, training, and awareness related to smart city development, intending to significantly enhance the overall quality of life. One area where smart technology can have a significant impact is the airport, a crucial piece of infrastructure that can benefit from innovations such as smart parking and automated baggage handling. While there have been successful implementations of smart technology in airports around the world, such as London Heathrow [1,2] and Singapore Changi Airport [3], most research studies focus on improving the "in-system" experience, with less attention being paid to the "out-system", such as the parking lot. This is where our contribution comes in. In this paper, we propose an innovative, economical, and plug-and-play smart parking system for Macao International Airport that integrates various methods such as IPM (inverse perspective mapping), object detection with deep learning, and a guidance system with a rig of multiple cameras. Our system is easy to install, making it an attractive solution for other airports looking to adopt smart parking technology. The possibilities of smart technology are endless, and our contribution is just the beginning of what can be achieved with the integration of smart technology in airport management. We use a 3D scene to simulate our system and its functions. To evaluate the accuracy of our proposed method, we conducted an experiment where 977 vehicles were placed in 100 randomly generated scenes, and the images of the six cameras and the coordinates of the placed vehicles were collected. Our proposed method successfully located 948 vehicles with an accuracy of 97.03% and a mean distance error of 8.59 pixels, which is acceptable given the average size of the vehicles. These results demonstrate the effectiveness of our proposed method in accurately detecting and locating vehicles in parking lots.
Overall, our proposed smart parking system can greatly benefit airport management by improving efficiency and reducing costs.Our contributions can be summarized as follows: • We proposed a simple yet effective smart parking system based on cameras that enhances the parking experience for users. • We designed basic functions for this system, including parking lot modeling, vehicle detection, parking space management, and guidance.• We adopted many mature algorithms used in the corresponding field, which provide many solutions to troubleshooting maintenance problems. This paper is organized as follows.The related works are reviewed in Section 2. Our system architecture and its components are presented in Section 3, while simulation results using a simulated 3D scene of Macao International Airport are discussed in Section 4. We discuss our system and its limitations in Section 5. Finally, we conclude our work and its limitations in Section 6. Related Works Smart parking systems have become increasingly popular as they help solve the problem of parking in urban cities, improve parking efficiency, and create smart cities. Smart parking is an important component of the larger trend toward digital technologies in all areas of customer service, and its implementation has been studied in various contexts.The benefits of smart parking include streamlining parking systems, reducing congestion, and improving the overall user experience. In [4] , the authors analyze the implementation of smart parking as a part of the smart airport concept.The authors argue that airports are increasingly interested in improving customer services, and parking is an important area of customer service that airports provide to their customers.Smart parking can help airports streamline their parking systems, reduce congestion, and improve the overall passenger experience.Similarly, in [5], the authors discuss the high acquisition costs associated with constructing a smart parking lot. Most research studies on smart parking aim to either upgrade existing parking systems or implement new ones, and there are two trends in terms of smart parking designs: IoTbased and visual-algorithm-based methods. • IoT-based methods: ESDTW [6] is an underground parking navigation system that estimates vehicle position using LTE signal strength and smartphone sensors.However, it has limitations in terms of its signal availability and performance in weak or noisy areas and may not be compatible with other wireless technologies.An IoT-based smart parking management system [7] uses Arduino components, Android apps, and infrared sensors to detect and reserve parking spaces, but requires many sensors and controllers for installation and maintenance.The Edge AI Parking Surveillance System [8] detects parking occupancy in real time using AI algorithms and edge computing, with excellent performance demonstrated in a real parking lot for three months.However, these approaches may all face security challenges such as data leakage, unauthorized access, and malicious attacks on IoT devices. 
• Visual-algorithm-based methods: The smart parking system in [9] uses image processing and AI to assign parking spaces and provide parking information and user recommendations.However, the paper lacks experimental results, comparative analysis, risk assessment, and ethical considerations.Aerial Image Parking Lot Analysis [10] uses elevation information to analyze and visualize parking lots from aerial images, including vehicle detection and simulation.However, it cannot be applied to smart parking in airports because it is limited to aerial images of airports.Image Processing Parking Space Detection [11] uses cameras and image processing to detect vacant parking spaces and display real-time data, but lacks analysis and comparison with other systems. Overall, IoT-based methods are reliable but may require many sensors and devices, leading to high acquisition and maintenance costs.Visual-algorithm-based methods, on the other hand, offer good performance in their respective scenarios, but may not be easily applicable to other types of smart parking systems, making their methods less persuasive. System Architecture In this section, we will describe the architecture of our proposed smart parking system based on surveillance videos.Figure 1 illustrates the various major components of the system.The operations in the preparation stage are only required to be performed once before the system starts running.During preparation, the system takes images from its cameras to generate coordinate relations and extract parking spaces.When the system is running, it uses images from its cameras to detect vehicles and guide users. Parking Lot Coordinates Mapping Perspective transformation is widely used in image restoration, which is the projection of an image onto a new view plane.For example, when two cameras take pictures of a plane at different angles, the coordinates of the same point in the plane are different in the two pictures.Perspective transformation achieves the function of mapping the same point in the two pictures.By the same token, when multiple cameras are taking pictures of a parking lot, the picture of each camera contains information about a part of the parking lot.Using this series of transformations can make it possible to obtain aerial view images without the need for special equipment, such as a UAV. Many methods can achieve coordinate mapping, such as stereo-based [12], adaptivebased [13], and 4-point-based [14] methods.The 4-point-based method is recommended as the preferred option in our system among these methods because it has fewer factors (parameters of algorithms, position or pose of cameras changed by installation mistakes, or extreme weather) involved.In addition, the 4-point-based method completely satisfies the requirements of this system with regard to robustness; therefore, we apply this method to our system. There are two steps to achieve parking lot coordinate mapping using a 4-point-based method: • Obtain the perspective transformation matrix and calculate the perspective transformation from the four pairs of corresponding points; • Apply a perspective transformation to images via the perspective matrix. 
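As a concrete illustration of these two steps, below is a minimal Python/OpenCV sketch of the 4-point perspective mapping. The mark coordinates, plane size, and image path are placeholders invented for illustration, not values from the paper, and this is only one way to realize the steps described here; the equations underlying the transform follow next.

# Minimal sketch of 4-point-based IPM with OpenCV (hypothetical coordinates).
import cv2
import numpy as np

# Step 1: solve the perspective transformation matrix from four point pairs.
# src: pixel coordinates of four marks in the camera frame (no three co-linear).
# dst: corresponding virtual (aerial-view) coordinates of the same marks.
src = np.float32([[412, 230], [1180, 265], [1240, 690], [300, 640]])   # placeholders
dst = np.float32([[0, 0], [600, 0], [600, 400], [0, 400]])             # placeholders
X = cv2.getPerspectiveTransform(src, dst)   # 3x3 perspective (homography) matrix

# Step 2a: warp the whole camera frame onto the aerial (virtual) plane.
frame = cv2.imread("camera1.png")                      # placeholder path
aerial = cv2.warpPerspective(frame, X, (600, 400))     # target plane size in pixels

# Step 2b: map individual points (e.g., detected vehicle centers) to the plane.
pts = np.float32([[[700, 450]]])                       # shape (N, 1, 2)
pts_aerial = cv2.perspectiveTransform(pts, X)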
The first step can be modeled by Equation (1):

T(x′) = X · S(x),   (1)

where S(x) is the source matrix, a matrix of the coordinates of four points on the source image, with the requirement that no three of the points are co-linear. Similarly, T(x′) is the target matrix, consisting of the coordinates of four points on the target plane, again with no three points co-linear and with each point corresponding to a point of S(x). X represents the perspective transformation matrix; x and x′ are the coordinates of the four points in the source image and target plane, respectively, and these points satisfy the no-co-linearity constraint in geometry. Next, we expand Equation (1) as follows:

[x′, y′, w]ᵀ = [[a1, a2, b1], [a3, a4, b2], [c1, c2, 1]] · [x, y, 1]ᵀ,   (2)

in which a1, a2, a3, a4 define transformations such as rotation and scaling; b1, b2 is the translation vector; and c1, c2 is the projection vector. LU decomposition can be used to solve this system, after which we obtain the perspective transformation matrix X. Then, we can apply this matrix to an image to obtain any transformed point in the target plane. This process can be formulated by the following equation:

x′ = (a1·x + a2·y + b1) / (c1·x + c2·y + 1),   y′ = (a3·x + a4·y + b2) / (c1·x + c2·y + 1).   (3)

After solving for X in Equation (1) and applying that value of X in Equation (3), we obtain the coordinate mapping between one camera and the aerial view using 4-point-based IPM. Figure 2 introduces the workflow of 4-point-based IPM. For simplicity, the intersecting lines in Figure 2a,d are used as an abstraction of the parking lot. At first, when obtaining the camera frame and initializing the system, it is necessary to know the pixel coordinates of at least four marks. The marks are usually small and clear, such as the end of a line or a line-line intersection. Secondly, the coordinates in the real world should be obtained from the aerial view (shown in Figure 2c); however, it is difficult to obtain the aerial view, since the airport strictly controls UAVs. The coordinates of the real world can instead be obtained by converting them from virtual coordinates (scaled in the computer). In the following work, virtual coordinates are used to represent the position of objects in the parking lot. Furthermore, by measuring the length and width of each parking space and the width of routes in the real world, we can assign a virtual coordinate to the corners of parking spaces, from which mark points are usually selected. Then, by applying 4-point-based IPM, the projected relationship between virtual coordinates and camera frames can be obtained. Then, as shown in Figure 2b,e, based on these two sets of virtual coordinates, the computer can calculate the IPM result of the view of each camera. Finally, fusing all of the IPM results yields a fused aerial view image which is extremely similar to the aerial view in terms of the relationship of coordinates (as shown in Figure 2c,f). In this step, the fused image is generated by calculating the nearest camera for each pixel and then copying its corresponding pixel. Parking Lot Modeling Before the system starts to detect vehicles, a precise model of the parking lot is necessary. This is useful in determining whether a vehicle is located in a parking space or not. Some routes in parking lots are designed to be one-way, which should be followed when guiding vehicles. The model can be labeled manually because this task only needs to be completed once before the system runs. In this paper, we try to propose a more automatic method.
In the model of the parking lot, information on every parking space and route should be provided.In this section, we will introduce how we extract these two types of information.Note that the locations and heights of cameras are manually provided as this information is difficult to analyze from surveillance images. Parking Space Extraction The shapes of parking spaces may vary greatly in different parking lots, but basically, all the parking spaces have two straight lines at two sides, which we consider the most useful characteristic in extracting parking spaces.Therefore, to detect these straight lines, surveillance images covering the whole parking lot are required.In addition, the parking lot should be empty, which ensures that no line will be blocked by vehicles. After mapping surveillance images to a fused aerial view image, the system can use it to extract parking spaces.Figure 3 shows the process of parking space extraction using the generated fused aerial view image.The whole process can be divided into two stages: target line extraction and bounding box extraction.Target Line Extraction: In this stage, the lines of the parking spaces are extracted for constructing bounding boxes.Firstly, the preprocessing stage contains a blur function.When transforming surveillance images to aerial view images, the pixels are mapped onto the fused image.The pixels far from the camera are expanded and distorted.As a result, the edges of transformed lines may be jagged, which affects the performance of the following steps.The blur function can alleviate this situation.Secondly, we use the Canny edge detector [15] to detect the edges in the fused image.The result of the edge detector is an image containing edges, where the image has the same size as the fused image.To further ensure the edges of the parking space can be extracted as lines, we perform a dilation operation on the edge image in case the edges are shattered.The third step is to extract lines from the edges.A line detection method based on Hough Transformation is provided in [16].We use this method to extract lines.The majority of the extracted lines are not from the edges of parking spaces; therefore, we need to filter the lines.Since the fused aerial view image is generated with respect to the real coordinates, the length of parking spaces in pixels can be calculated.Extracted lines with large length differences from the actual length are filtered.In addition, the direction of parking spaces can also be used to filter the lines.For example, if the parking spaces are perpendicular to the edge of the parking lot, the lines that are not perpendicular to the edge of the parking lot are filtered.If there are multiple alignment directions of parking spaces, the lines perpendicular to each direction can be reserved.After this step, the lines reserved should be those extracted from the edges of parking spaces and the length should be similar to the edges. 
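The target-line extraction pipeline just described can be sketched in a few OpenCV calls. The kernel sizes and thresholds below are placeholders for illustration (the values actually used in the simulation are reported in Section 4), and the direction filter is written only for the axis-aligned case.

# Sketch of target line extraction: blur -> Canny -> dilate -> Hough -> filter.
import cv2
import numpy as np

fused = cv2.imread("fused_aerial.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

blurred = cv2.GaussianBlur(fused, (5, 5), 0)          # soften jagged warped edges
edges = cv2.Canny(blurred, 150, 250)                  # edge image, same size as input
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # reconnect shattered edges

# Probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=5)

expected_len = 120.0   # parking-space length in pixels, known from real measurements
tol = 0.2              # relative tolerance on length

kept = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)
        horizontal = abs(y2 - y1) <= abs(x2 - x1)
        # keep lines whose length matches a parking-space edge and whose direction
        # matches one of the alignment directions (here: horizontal or vertical)
        if abs(length - expected_len) <= tol * expected_len:
            kept.append(((x1, y1), (x2, y2), "h" if horizontal else "v"))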
Bounding Box Extraction: To extract the boxes of parking spaces, we consider repeated boxes with the same size as the characteristic.Similar to filtering lines, the distance of lines, i.e., the width of the parking space, can be calculated.Figure 4 illustrates extracting the bounding box with a verification line.Firstly, when there are two lines with centers in one of the alignment directions and their distances meet the expected length, the box they form is considered as a candidate bounding box.Then, we follow the alignment direction and the opposing direction to check if there exists a verification line.The verification line should have the same direction and its distance to the candidate bounding box should meet the expected distance.If a verification line is found, we consider the candidate bounding box to be a box of a parking space and store it for the next step.If the parking spaces have multiple alignment directions, the lines can be divided into groups depending on their direction, and then we perform the extraction.Finally, sometimes more than one line is extracted from the edge of a parking space, which leads to redundant boxes being extracted.Therefore, in the last step of box extraction, we calculate the IOUs (intersection over union) of boxes to remove the redundant boxes. Route Modeling Route information is another important type of information in a parking lot.However, it is difficult to extract because there is no obvious and common characteristic of routes in different parking lots.Furthermore, some routes are designed to be one-way and the direction signs are variable, making it more difficult to extract information accurately.Compared with the parking spaces, routes are easier to label manually as the number of routes in a parking lot is much smaller than the number of parking spaces.Therefore, in our work, we label the routes and their directions manually from the fused aerial view image. A graph is a classical data structure using vertices (also known as nodes) and edges to express the relationships among vertices.An edge usually contains a weight, which refers to the consumption between two nodes.Because the routes are one-way in some parking lots, a directed graph can be used to model a parking lot.The turns in the parking lot are modeled by nodes in the graph.An edge in the graph represents the road between two turns and its direction follows the direction of the road.Meanwhile, the entrance and exit of the parking lot are modeled as nodes as well.A node in the graph records its coordinates and the weight of an edge is the distance between two vertices. Vehicle Detection Nowadays, vehicle detection techniques are increasingly relying on sensors to make them more intelligent.For our system, cameras are the best data collectors because a lot of useful information can be extracted from camera images (colors, locations, and so on).In our system, the camera frames go through the object detector, which returns a branch of the bounding box of detected vehicles in a short time.Then, the location information of the vehicles in the camera frame is sent to the guidance system for the next process. 
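One lightweight way to hold this route model in code is a weighted directed adjacency structure. The node identifiers and coordinates below are invented for illustration, and each edge weight is simply the Euclidean distance between the coordinates of its two vertices, as described above.

# Sketch of the directed route graph: nodes carry coordinates, edges carry distances.
import math

# Hypothetical node coordinates in aerial-view pixels (entrance V1, exit V8, turns V2..V7).
coords = {
    "V1": (50, 40), "V2": (300, 40), "V3": (550, 40),
    "V4": (300, 260), "V5": (550, 260),
    "V6": (300, 480), "V7": (550, 480), "V8": (800, 480),
}

def dist(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x2 - x1, y2 - y1)

# Directed edges follow the one-way signs of the parking lot (directions invented here).
graph = {u: {} for u in coords}
for u, v in [("V1", "V2"), ("V2", "V3"), ("V2", "V4"), ("V3", "V5"),
             ("V4", "V5"), ("V4", "V6"), ("V5", "V7"), ("V6", "V7"), ("V7", "V8")]:
    graph[u][v] = dist(u, v)   # weight = distance between the two vertices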
Object Detector Due to the demand for high speed and accuracy, YOLOv5 [17] is a good object detector for this system and has performed well in different areas [18,19]. YOLOv5 belongs to the family of one-stage algorithms, which makes it quick enough to detect an image, and it uses many techniques to achieve high performance, such as data augmentation and more advanced training strategies. As shown in Figure 5, it follows the structure of YOLOv3 and YOLOv4, which can be described as "Backbone-Neck-Head". With this design, YOLOv5 can detect objects with high accuracy and speed. Meanwhile, YOLOv5 also uses depth and width multipliers to control the scale of its models, so these factors come in handy to manage the depth of the neural network. Not only does it perform well, but YOLOv5 also supports PyTorch Hub, a model-sharing platform, which allows users to call or deploy a model in a short period of time with a few lines of code. Locating the Position of a Vehicle The detection results are a series of bounding boxes of vehicles. By applying IPM, the center of a bounding box can be transformed into virtual coordinates. As illustrated in Figure 6, when the position of the box center is transformed, the transformed location is not the actual location of the vehicle. The position of the transformed center needs to be adjusted to the actual location as closely as possible. We simply model the detected vehicle as a circle. h denotes the height of the camera, and r denotes half the height of the vehicle. The distance between the bottom (the ground point beneath the camera) and the actual location is denoted by l_a, and the distance between the bottom and the transformed center is denoted by l_c. When the heights of the camera and the vehicle are known, the actual distance l_a can be calculated from l_c (Equation (4)). In the aerial view, the locations are denoted as points: O denotes the location of the camera, A denotes the actual location, and C denotes the location of the transformed center. A can then be calculated from O, C, and the ratio of these distances (Equation (5)). In the implementation, the height of the camera is known and we use the average height of vehicles as r. After the calculation, the difference between the calculated location and the ground truth is reduced. However, because vehicles and their poses are variable, the result may still not be accurate when the distance between the camera and the detected vehicle is too long. Similar to the fused aerial view image, we can assign a responsible area to each camera, i.e., for each camera, only the results located in its area (where that camera is the nearest one to every pixel inside) are accepted. In the case of a vehicle located on the border between two areas, which may lead to both results not being accepted, we set a tolerance. In detail, each pixel accepts not only the result from the nearest camera, but also results from any camera whose distance to this pixel is less than the nearest camera distance plus a predefined value. Vehicle Tracking The detected bounding boxes are not directly associated with a particular vehicle. A tracking method should be used to track vehicles by their coordinates in a sequence of frames.
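Before turning to tracking, here is a sketch of the center-adjustment step described above. Since Equations (4) and (5) did not survive extraction, the proportionality used below is one plausible similar-triangles version written under the assumption that the transformed center C is the ground projection of a point at height r seen from a camera at height h; it is our assumption, not necessarily the paper's exact formula.

# Hedged sketch of adjusting a transformed box center toward the vehicle's true location.
import numpy as np

def adjust_location(O, C, h, r):
    """O: camera ground position, C: transformed box center (aerial-view pixels).
    h: camera height, r: half vehicle height, in consistent length units.
    Assumes l_a / l_c = (h - r) / h by similar triangles (our assumption)."""
    O, C = np.asarray(O, float), np.asarray(C, float)
    l_c = np.linalg.norm(C - O)            # distance from camera foot to transformed center
    l_a = l_c * (h - r) / h                # assumed relation between l_a and l_c
    direction = (C - O) / l_c if l_c > 0 else np.zeros(2)
    return O + l_a * direction             # point A on the segment from O toward C

# Example with the heights reported for the simulation (h = 5.75, r = 0.75):
A = adjust_location(O=(120.0, 80.0), C=(340.0, 260.0), h=5.75, r=0.75)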
An object tracking framework is provided in [20].It uses an object detector to find the bounding boxes of vehicles, then uses a Kalman filter to predict future bounding boxes and associate them with the vehicles.Our system uses this method to track the vehicles in the parking lot with some modifications.The bounding boxes were converted to coordinates in the previous section; therefore, the input and output of the Kalman filter are coordinates.In addition, due to the tolerance, a vehicle may be related to multiple coordinates.To associate these coordinates, when a new vehicle is detected by a camera (except for the vehicle located at the entrance), the system will associate it with the nearest coordinates. Parking Space Management and Guidance System When the parking lot is large and almost full, it is difficult for a user to find an available parking space, which is time consuming and frustrating.With the functions mentioned above, a parking space management and guidance system is proposed.This system applies those functions and guides the users to an available parking space. Parking Space Allocation In the previous sections, the coordinates of each parking space and vehicle were captured.Using this information, the system can determine the occupancy of a parking space.If a vehicle is located in the bounding box of a parking space, this parking space will be marked as occupied.The occupancy detection for parking spaces should be periodic in case some users do not use this system. When a new vehicle enters the parking lot and uses this system to find a parking space, the system will allocate an available parking space to it, referring to the occupancy of parking spaces.The allocation strategy may vary considering the different situations or requirements of the parking lot.For example, the system can allocate the nearest parking space to a short-term parking user and allocate a parking space in a particular area to a long-term parking user.With the allocated parking space, the user can follow the guidance function to find it, which will be introduced in the following section. Route Planning and Guiding In this section, the processes of route planning and guiding will be introduced.Each parking space has a route for entering the parking lot and a route for exiting.When planning a route, we first update the graph for the departure and destination.Next, we utilize an algorithm to find the shortest path to the parking space.Once these two steps are completed, the guidance system can provide accurate guidance to the users and lead them to their destination. Graph Updating: Before route planning, the origin and destination should be represented in the graph.Two temporal nodes are created upon a user requesting the guiding service: V p indicates the node for the target parking space, and V c indicates the node for the vehicle.V p is calculated by projecting the center of the allocated parking space to the nearest edge.V c is located at a predefined location where the user can access the system.Both of the two nodes are inserted into the target edge and connected to the vertices on that edge.The new edge follows the direction of the target edge. Route Planning: Route planning is the shortest path problem because the system should use the shortest possible distance to guide the user towards the location.There are a lot of choices for the shortest path problem, i.e., Dijkstra [21], Bellman-Ford [22], A* algorithms [23], etc. 
Dijkstra is a simple but powerful and popular algorithm with many good solutions [24,25].It is the most suitable algorithm for this system to carry out route planning.The process of applying the Dijkstra algorithm to route planning can be briefly described in the following steps: 1. Initialize the graph with the origin node and destination.Here, the directed graph created in Section 3.2.2can be used. 2. Create the Shortest Set and Unvisited Set.Assume S as the Shortest Set, and add the V 1 to this set.Meanwhile, add other nodes to the Unvisited Set (U) and set their distance to infinity. 3. Find the nodes that connect a node in S, then calculate their distance.Update the distance if it is smaller than before; otherwise, maintain the original.Among these nodes, add the node with the minimum distance to S and record the shortest path.4. Repeat step 2 and 3 until the V p is added to the S. The path recorded is the shortest. In our system, the guidance function is requested when a user enters or leaves the parking space.Therefore, for each parking space, the route from the entrance to the parking space and the route from the parking space to the exit can be calculated and stored in advance.When the guidance function is requested, the corresponding route can be called directly, which saves processing time. Workflow of Guidance System: When requesting guidance services, the user should access the guidance system through methods such as scanning the QR code or using a mobile application.If the user is at the entrance, the system will locate the vehicle at the entrance.If the user is in a parking space, the user can be located through a method such as inputting the identification number of the parking space, scanning the QR code corresponding to the parking space, or using the guidance information when entering the parking lot.The path consists of a sequence of nodes that the user is expected to follow.At first, the user is considered to be located at the first edge, which is represented by the first two nodes in the path.The system tracks the position of the user to provide further direction by repeating the following steps: 1. The system calculates the distance of the vehicle and the current edge to ensure it is following the guidance.2. The system calculates the projection of the vehicle coordinates on the edge to obtain the position on the graph. 3. According to the path, the direction to the next node is returned to the user. This process ends when the user reaches the target position or does not follow the guidance.After guidance ends, the occupancy detection will be called to update the occupancy status of the parking spaces. 
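The four steps above can be condensed into a small priority-queue implementation. The sketch below operates on a weighted directed adjacency dictionary such as the route graph sketched earlier; it is only an illustration of the textbook algorithm, not the paper's own code.

# Sketch of Dijkstra's algorithm over the directed route graph (illustration only).
import heapq

def dijkstra(graph, source, target):
    """graph: {u: {v: weight}}, all weights non-negative. Returns (distance, path)."""
    dist = {u: float("inf") for u in graph}
    prev = {}
    dist[source] = 0.0
    queue = [(0.0, source)]                 # min-heap of (distance, node)
    visited = set()
    while queue:
        d, u = heapq.heappop(queue)
        if u in visited:
            continue
        visited.add(u)                      # u joins the "Shortest Set"
        if u == target:                     # stop once the destination is settled
            break
        for v, w in graph[u].items():       # relax edges leaving u
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(queue, (dist[v], v))
    if dist[target] == float("inf"):
        return float("inf"), []             # target unreachable
    # reconstruct the recorded shortest path
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]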
System Simulation In this section, we will discuss the result of simulating the processes of system operations through 3D software.We implemented the functions in the system through Python and its OpenCV library.We used YOLOv5 based on PyTorch as our object detector.We used a 3D model of a parking lot using Unity3D.The structure of this model is similar to the north parking lot of Macao International Airport.It consists of 119 parking spaces, 6 cameras, and other infrastructure.The heights of the cameras are the same, but the poses are manually adjusted to cover the whole parking lot and to simulate manual installation.Figure 7 shows the aerial view of the model and camera views of the six cameras.The positions, directions, and approximate fields of view are illustrated in Figure 7d.Note that the cameras are adjusted to ensure that every parking space is covered by at least one camera. Parking Lot Coordinate Mapping For each camera, we choose four points (which are basically ends of lines or line-line intersections) as marks.Then, we find the pixel coordinates of the marks and calculate virtual coordinates.Note that the quadrilateral formed by the four points in a camera view should be as large as possible to reduce the deviation.By applying IPM, we can obtain the perspective transformation matrices, which contain the relationships between the camera views and aerial views of the cameras.The transformed views of cameras are shown in Figure 8a-g.Then, we build a camera lookup table, which records the distances and orders of distance from all cameras for each pixel in aerial view.For example, the value in the table for the pixel in the top-left corner stores an array: [Green → Red → Blue → Purple → Yellow → Gray].The array contains the colors of the cameras and the nearer camera has a smaller index.With this lookup table, a fused aerial view image can be generated.For each pixel, it copies the color of the transformed view of the nearest camera.If that camera does not cover this pixel, it will find the next nearest camera.The generated image is shown in Figure 8d. Parking Lot Modeling We use the fused aerial view image generated in the previous section to extract parking spaces.The process follows the steps mentioned in Section 3.2.1.Figure 9a-c show the result of each step in parking space extraction. In the parameter settings, the number of kernels for the blur operation is 4, the thresholds for the Canny edge detector are 150 and 250, the pixels at the edges dilate by 1 pixel in four directions, and the threshold for the Hough transformation is 50. 
We first blur the fused aerial view image.Then, we apply Canny edge detection to the image and dilate it.The result is shown in Figure 9a.Next, we detect the lines in the result and filter the lines by their direction and length.Because the parking spaces in the model have horizontal and vertical alignments, the lines perpendicular to these two directions are retained.When filtering lines by length, we set a tolerance on length to prevent incorrect filtering.The detected lines are shown in Figure 9b.Then, the lines are divided into two groups according to their direction and the boxes are extracted from them.The extracted boxes are shown in Figure 9c.The comparison of extracted boxes with the real aerial view is shown in Figure 9d.The comparison shows that every parking space is extracted and basically covered by its box.In this simulation, the parking spaces far from the camera did not encounter too much distortion because the resolution of the images was high.Figure 10 shows a downsampled example.This example was originally located at the bottom of the transformed view of Camera Gray (Figure 8g).The example image was downsampled to 50% of its original size, and the result of edge detection for the original downsampled image was disordered.For the blurred image, we use 50 and 200 as the Canny thresholds.The edges detected from the blurred image are much clearer.As described in Section 3.2.2, a directed graph is used as a routing model to help the system collect route information easily.The routing model is a graph consisting of the following components: • V, the vertices set will include the entrance and exit.• E, the edges set will be unilateral due to the guidance of the arrows in the parking lot. Figure 11 illustrates the graph representing the routes in the parking lot.The weight of each edge is shown.The weight is the distance between two vertices in thousands of pixels. 
Vehicle Detection and Locating To simulate a normal situation in a parking lot, we randomly place some vehicles in the model. PyTorch Hub is used to deploy YOLOv5. After detecting vehicles in each camera view, the center of each bounding box is extracted. As mentioned in Section 3.3.2, the centers are transformed to virtual coordinates with adjustments. Measured in the software, the height of each camera is 5.75 and the average height of vehicles is 0.75. The tolerance for accepting results is set to 200 pixels. The camera lookup table generated in Section 4.1 is used together with the tolerance to filter the results. After applying the perspective transformation matrices produced in Section 4.1 and the steps in Section 3.3.2, the results are illustrated in Figure 12. A circle in the figure shows a detected vehicle, and its color indicates the camera that detected this vehicle. The line connects the detected vehicle and the detecting camera. In this case, all the vehicles in the model are successfully detected and their calculated locations basically cover their models. Based on this model, we designed an experiment to evaluate the accuracy of vehicle detection and location. We developed a script to randomly place 5 to 15 vehicles in this parking lot and used it to generate scenes for the experiment. For each scene, we used the script to randomly place vehicles, and the images of the six cameras and the coordinates of the placed vehicles were collected. We collected data for 100 randomly generated scenes, in which 977 vehicles were placed in total. For the collected images, we used the proposed method to convert them into located points. For the coordinates of placed vehicles, we transformed them into virtual coordinates, i.e., the pixel coordinates in the aerial view. If a vehicle was close to a located point and the distance was less than 30 pixels, we considered it located; otherwise, it was considered missing. Note that the average length of the vehicles in the aerial view was around 60 pixels. In addition, if a vehicle was close to points from multiple cameras, we used the mean of the points as the located point. Two metrics are used to evaluate the method. The first is accuracy:

Accuracy = N_loc / N_total × 100%,   (6)

where N_total denotes the total number of vehicles and N_loc denotes the number of successfully located vehicles. The second is the mean distance error:

MDE = (1 / N_loc) Σ_i ||x̂_i − x_i||_2,   (7)

where x̂_i denotes the coordinates of the point located by the cameras, x_i denotes the actual coordinates, and ||x̂_i − x_i||_2 is the L2-norm giving the distance between x̂_i and x_i. Table 1 shows the evaluation results. In total, 948 out of 977 vehicles were successfully detected. The accuracy of the experiment is 97.03%. The mean distance error is 8.59 pixels. Because the average length of a vehicle is around 60 pixels and the average width is around 25 pixels, this distance error is acceptable.
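A compact sketch of this evaluation loop is given below. The use of the standard Ultralytics PyTorch Hub entry point and the greedy 30-pixel matching are written from the description above; details such as file layout and class filtering are placeholders rather than the paper's code, and the multi-camera point-averaging step described in the text is omitted for brevity.

# Sketch of the evaluation: detect vehicles, convert to aerial-view points, score them.
import numpy as np
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # standard hub entry point

def locate_scene(images, to_aerial):
    """images: list of camera frames; to_aerial maps (cam_idx, x, y) -> aerial point."""
    points = []
    for cam_idx, img in enumerate(images):
        det = model(img).xyxy[0].cpu().numpy()             # rows: x1, y1, x2, y2, conf, cls
        for x1, y1, x2, y2, conf, cls in det:
            cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # bounding-box center
            points.append(to_aerial(cam_idx, cx, cy))
    return np.array(points)

def score(located, ground_truth, match_radius=30.0):
    """A ground-truth vehicle counts as located if a point lies within 30 px of it."""
    n_loc, errors = 0, []
    for gt in ground_truth:
        if len(located) == 0:
            break
        d = np.linalg.norm(located - np.asarray(gt), axis=1)
        if d.min() <= match_radius:
            n_loc += 1
            errors.append(d.min())
    accuracy = 100.0 * n_loc / len(ground_truth)
    mde = float(np.mean(errors)) if errors else float("nan")
    return accuracy, mde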
Guidance In the simulation for the guidance system, we assumed that a vehicle enters the parking lot and requests guiding service at V 1 .The system randomly allocates a parking space to the vehicle, which is the bounding box with red hatching as illustrated in Figure 13.The process of calculating paths for this parking space follows the steps in Section 3.4.2.The system projects the location of the allocated parking space to the nearest edge and adds a temporal node as the destination.The new node is located on the edge V 4 → V 7 .The weight of the new edge V 4 → V p is calculated to be 0.37, and the weight of V p → V 7 is 0.58.Then, the Dijkstra algorithm is applied.As shown in Figure 13, until the shortest distance of V p is added, the distance of the reached nodes is shown next to the node.The shortest path to the parking space is The system follows this path to guide the vehicle until it reaches V p .When the vehicle is going to leave the parking lot, the system will guide it, following the path from V p to V 8 . Discussion The proposed system boasts significant advantages, such as its simplicity and ease of installation.It has been designed as a plug-and-play solution that can be easily deployed in most situations with cameras that can be installed.Additionally, the system has adopted many mature algorithms used in the corresponding field, which provide many solutions to troubleshooting maintenance problems.However, similar to our simulation results in the parking lot of Macao International Airport, to optimize the system's performance, certain limitations must be addressed. Another evaluation of our proposed location method in a different parking lot is illustrated in Figure 14.We encountered some challenges in detecting vehicles due to blockages caused by plants, as indicated by Figure 14b.For our proposed location method, the accuracy of the system is sensitive to such blockages and can affect the overall performance. To address this, it is important to note that the pose and position of the cameras need to be carefully designed to ensure optimal performance in challenging environments.Meanwhile, our proposed methods require precise information from the parking lot.In the simulations, we took extensive measures to ensure the precision of data to improve the performance of the functions.For instance, if the data are not precise enough, the parking space extraction process may not be able to generate a complete parking space map, because the lines crossing two responsible areas may be broken.Furthermore, bad coordinate mapping may cause bad accuracy in locating vehicles.In addition, the coordinate transformation model is relatively simple, and a more generalized transformation method can reduce the estimated distance error in different scenarios. In the guidance system, we currently use pre-calculated routes to guide users.However, when there are too many users on the route, congestion inside the parking lot may occur, and the pre-calculated routes cannot handle the congestion.A possible solution is to recalculate the shortest route when there are too many vehicles detected on the precalculated route.The weight of an edge should be updated according to the number of vehicles detected on that edge. 
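One way to realize the congestion-aware reweighting suggested above is to scale each edge's base distance by the number of vehicles currently detected on it and then re-run the shortest-path search. The penalty factor below is a made-up illustration, not a tuned value.

# Sketch of congestion-aware edge reweighting before re-planning a route.
def reweight(graph, base_weights, vehicles_on_edge, penalty=0.5):
    """graph: {u: {v: weight}}; base_weights[(u, v)]: geometric edge length;
    vehicles_on_edge[(u, v)]: current vehicle count on that edge (from the tracker)."""
    for u in graph:
        for v in graph[u]:
            n = vehicles_on_edge.get((u, v), 0)
            graph[u][v] = base_weights[(u, v)] * (1.0 + penalty * n)
    return graph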
In our paper, we assumed limited types of vehicles commonly found in parking lots for convenience.However, as discussed in Section 3.3.2, the coordinates of the detected bounding box need to be transformed before estimating its real coordinates.Here, we calculated them using one set of transformation parameters (average height, etc.), which only fit the limited types of vehicles we considered.This means that the set of parameters may not be suitable for other types of vehicles.A possible solution is to generalize the model by fine-tuning YOLOv5, which was originally trained on the COCO dataset, and updating our parameters in coordinate transformation to account for different types of vehicles and scenarios, such as buses or trucks.By doing so, we can improve the performance of our system and make it more adaptable to various scenarios. Conclusions and Future Work In this paper, we present a cutting-edge camera-based smart parking system that leverages perspective transformation.Our system goes beyond the traditional use of cameras for security purposes and utilizes them as data collectors to provide location and tracking information for vehicles.By optimizing inverse perspective mapping (IPM), we fuse frames from different cameras to create an aerial view image that facilitates parking space extraction and route modeling.The collected data were processed and analyzed using a variety of tools, including YOLOv5. To evaluate our system's performance, we conducted tests on a 3D model of the Macao International Airport parking lot.We installed six cameras at various angles and heights to capture the entire lot, and the system was able to produce highly accurate and valuable information, resulting in increased efficiency.For vehicle detection and location, our system also works well with an accuracy rate of 97.03% and a mean distance error of 8.59 pixels.However, our system has some limitations that need to be addressed, such as the need for precise data for effective parking lot modeling and sensitivity to occlusions in camera frames.Nevertheless, our camera-based smart parking system, which utilizes IPM, object detection, and a guidance system with multiple cameras, has tremendous potential for enhancing the parking experience for users, and improving profitability and management efficiency for airport managers.Our unique contributions, such as the use of perspective transformation and aerial view images for parking space extraction and route modeling, have significant potential for advancing smart city development. By highlighting the significance of our novel techniques, we aim to demonstrate the value and potential impact of our research on the field of smart parking systems.Going forward, we plan to deploy our system in the Macao International Airport parking lot to further increase efficiency and support Macao's smart city development. Figure 1 . Figure 1.Overview of the system architecture. Figure 2 . Figure 2. Overview of the parking lot coordinate mapping framework.In order to obtain an aerial view using images from cameras 1 and 2 (a,d), IPM needs to be called.After the processing of IPM, (b,e) can be obtained.Finally, the fused aerial view image by fusing (b,e) is shown in (f). Figure 3 . Figure 3.The workflow of parking space extraction. Figure 4 . Figure 4. Extracting the bounding box of a parking space with a verification line. Figure 5 . Figure 5. Overview of the structure of YOLOv5. 
Figure 7. Aerial view and different views of the cameras. (a) View of Camera Green; (b) View of Camera Red; (c) View of Camera Blue; (d) Aerial view of the 3D parking lot model with camera positions, illustrating the positions and directions of the six cameras; (e) View of Camera Yellow; (f) View of Camera Purple; (g) View of Camera Gray. Figure 8. Transformed views of cameras, lookup table illustration, and fused aerial view image. (a-c) and (e-g) show the transformed views of the cameras produced through IPM; (d) shows the fused aerial view image generated using the transformed views and the camera lookup table. Figure 9. Illustration of parking space extraction. (a-c) show the result of each step in parking space extraction; (d) shows the comparison of the extracted boxes with the real aerial view. Figure 10. Illustration of an example of a downsampled image and its result after edge detection. (a,b) show the image and the result for the original image; (c,d) show the image and the result for the blurred image. Figure 11. Illustration of the extracted parking spaces and modeled routes. Figure 12. Vehicle location results. The color of each location result indicates the camera that detected the vehicle. Figure 13. The process of route planning. The bounding box with red hatching is the target parking space. The calculated distance is shown next to each reached node. The right edge is the path guiding the vehicle to the parking space; the green edge is the path guiding the vehicle to the exit. Figure 14. Evaluation of the proposed locating method in a challenging parking lot with cars and plants, demonstrating the effectiveness of the method in accurately locating the spatial coordinates of cars using IPM; however, some objects may not be detected in some of the views. (a) The cars' locations are projected onto the ground truth, showing that all cameras could reflect most objects' locations well; (b) View of Camera Blue, showing that some objects are not detected. Table 1. Evaluation metrics for vehicle localization.
9,976.4
2023-04-18T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Efficiency of harvesting energy from colored noise by linear oscillators We investigate the performance of a linear electromechanical oscillator as an energy harvester of finitebandwidth random vibrations. We derive exact analytical expressions for the net electrical power and the efficiency of the conversion of the power supplied by the noise into electrical power for arbitrary colored noise. We apply our results to the important case of exponentially correlated noise and discuss the tuning of parameters to achieve good performance of the device. I. INTRODUCTION The concept of harvesting ambient energy to provide power for small self-contained sensors and actuators, where batteries and other power sources are inconvenient because they need to be replaced or refueled regularly, has attracted considerable interest in recent years.A wide range of energy harvesting systems and applications have been studied; for recent reviews, see, for example, Refs.[1][2][3].The most prominent type of systems are mechanical vibration energy harvesters that convert kinetic energy via electromagnetic, electrostatic, or piezoelectric transductions into electrical energy [4][5][6][7][8].A recent study considers flexoelectric membranes as the electromechanical transduction mechanism [9].Mechanical vibration harvesters can be modeled as mass-spring systems.Early studies considered linear springs and harmonic oscillators and treated the external vibrations as sinusoidal vibrations.In such systems, the harvester extracts maximum energy from the ambient vibrations, if the excitation frequency matches the natural frequency of the system, known as resonant energy harvesting [4].Nearly all current vibration transducers operate in this regime [3].Efficient operation of the vibration harvester then requires appropriate active or passive tuning [5,7,10]. The ambient energy is however rarely concentrated in a narrow band around a dominant excitation frequency and is typically distributed over a broad range of frequencies.To overcome the limitations of resonant energy harvesting, various groups have begun to study mass-spring systems with nonlinear springs and nonlinear oscillators [11][12][13][14].In particular, the power output of unimodal and bimodal Duffing oscillators has been investigated [12,13,[15][16][17][18][19][20][21][22][23].Broadband ambient vibrations are typically modeled by Gaussian white noise, and the performance of linear resonant and nonlinear energy harvesters has been investigated [17,[23][24][25].Using the Fokker-Planck equation to describe Duffing-type energy harvesters in a Gaussian white noise environment, Daqaq [17] and Green et al. [23] established that the mean power output of the device is not affected by the nonlinearity of the spring.These findings were recently confirmed by Halvorsen [25] who obtained upper bounds on the power out of linear and nonlinear energy harvesters driven by Gaussian white noise.Halvorsen [25] concluded that "nonlinear harvesters are not fundamentally better than linear ones."Nonlinear oscillators may still be advantageous in that the size of the harvesting device can be reduced without affecting the power output [23]. 
Most studies of vibration energy harvesters have considered either sinusoidal excitations or Gaussian white noise excitations.However, ambient vibrations in applications can deviate from these idealizations [26].The case of finitebandwidth random vibrations, i.e., external colored, noise has received very little attention [14,17,26].References [14,26] consist of experimental and simulation studies.Daqaq [17] employs approximate methods to obtain the power output of a Duffing oscillator driven by Ornstein-Uhlenbeck noise.A comprehensive theory of linear and nonlinear vibration energy harvesters driven by colored noise is lacking.Our aim is to provide rigorous analytical results for the power output and the efficiency of linear harvesters driven by arbitrary colored noise.The paper is organized as follows.In Sec.II we introduce the evolution equations for the vibration energy harvester.We assume transduction via piezoelectricity.We show that the stationary state is stable in the mean.In Sec.III we derive closed-form expressions for the second moments of the state variables of the harvester for arbitrary colored noise.Section IV deals with the power output and the efficiency of the energy harvesting device.We obtain analytical expressions for the power and the efficiency in terms of the correlation function of the colored noise.These results are applied in Sec.V to exponentially correlated random vibrations.We also consider the white noise limit and compare the power and efficiency for the finite-bandwidth case and the broadband case.We provide a summary of our results in Sec.VI. II. ELECTROMECHANICAL OSCILLATOR An electromechanical oscillator is a device that converts the power supplied by external noise, usually from an ambient source, into electrical energy (Fig. 
1). The conversion is done in two steps. First, the external noise drives a damped oscillator. Second, the oscillator is coupled to a capacitor that stores the electrical energy. The coupling between the first and second steps is done by a transducer mechanism based on piezoelectricity. The equation for the position of the stochastically driven damped oscillator is (2.1), where U(x) is the potential energy, b is the damping coefficient, ξ(t) is the random driving force, c(x,V) is the reaction force due to the motion-to-electricity conversion mechanism, and the dot stands for the temporal derivative. The reaction force has the same sign as the dissipative force and opposes the motion. This arises from the energy fraction that is taken from the kinetic energy and converted into electrical energy. The simplest expression for this function is c(x,V) = k_v V, where k_v > 0 is a piezoelectric parameter and V(t) is the voltage. The equation for the dynamics of the voltage has to take into account the capacitor with capacitance C, the load resistance of the piezoelectric component, and the connecting function F(ẋ,V) with the oscillator, (2.2), where τ_p = R_L C is the characteristic time of the capacitor's charging process. This time is larger than any other characteristic time of the system. The simplest form of the connecting function is linear in the velocity, F(ẋ,V) = k_c ẋ, where k_c is another positive piezoelectric parameter. Finally, the stochastic equations for the electromechanical energy harvester are given by (2.3). If we assume a harmonic potential, U(x) = ω_0² x²/2, take the Laplace transform, and combine both equations in (2.3), we obtain a single equation (2.4) for the random variable x(t), with a memory kernel M(t) given by (2.5), where V_0 = V(t = 0) is constant. The stochastic equation (2.4) resembles a generalized Langevin equation; however, M(t) and ξ(t) do not obey the fluctuation-dissipation theorem. We expect the electromechanical oscillator to reach a stable stationary state as time goes to infinity. To check this, we write (2.3) as a three-variable first-order system, (2.6) and (2.7). Taking the average of (2.6) or (2.7), we obtain an equation for the mean m = ⟨u⟩. Clearly, in the stationary state the mean m vanishes, ⟨x⟩ = ⟨v⟩ = ⟨V⟩ = 0. We apply the Routh-Hurwitz stability criterion [27] to the system (2.9) and find that all Hurwitz determinants are positive. This implies that the oscillator is stable in the mean and approaches m = 0 for large times. III. SECOND MOMENTS From (2.6) we obtain the equations for the second moments of the electromechanical oscillator. To close the system of equations for the second moments, we need to find the correlations of the electromechanical oscillator state variables with the external noise, ⟨xξ⟩, ⟨vξ⟩, and ⟨Vξ⟩. We assume that the noise is stationary with vanishing mean and correlation function C(s). We write the solution of (2.4), or (2.6), formally as (3.8), where H(t) is a Green's function to be determined. Taking the Laplace transforms of (2.4) and (3.8), we obtain x̂(s) = f(s) + Ĥ(s) ξ̂(s), (3.9), where the hat symbol denotes the Laplace transform with parameter s and f(s) is the Laplace transform of ⟨x(t)⟩, which depends on the initial conditions x_0 = x(t = 0) and v_0 = v(t = 0). Analogously, we can find for the velocity a solution of the same form, where the Laplace transform of ⟨v(t)⟩ is s f(s) − v_0 and Ĝ(s) = s Ĥ(s). Finally, from (3.7), (3.8), and (3.11) we obtain the correlations (3.13), and from (2.6c) we find (3.14).
IV. ENERGY BALANCE AND EFFICIENCY The energy of the oscillator is the sum of the kinetic and the potential energy, E = v^2/2 + U(x). The energy rate or power of the oscillator can be calculated by taking the time derivative of the energy. Taking into account (2.3a) and averaging, we obtain the mean power balance equation d⟨E⟩/dt = ⟨vξ⟩ − b⟨v^2⟩ − k_v⟨vV⟩, (4.2) where ⟨vξ⟩ is the power supplied by the noise, b⟨v^2⟩ the power dissipated by friction, and k_v⟨vV⟩ the power transferred from the oscillator to the transducer. In the steady state, the power transferred from the oscillator to the transducer, k_v⟨vV⟩, can be related to the net power converted into electrical power, k_c τ_p ⟨vV⟩/R_L, which is equal to the power dissipated by the capacitor through the resistance R_L, i.e., ⟨V^2⟩/R_L; see (3.6). Denoting steady-state values by the subscript s, we have ⟨V^2⟩_s = k_c τ_p ⟨vV⟩_s. (4.3) The transducer's efficiency of converting mechanical to electrical power is given by the quotient of k_c τ_p ⟨vV⟩/R_L and k_v ⟨vV⟩, that is, η_me = k_c τ_p/(k_v R_L) = k_c C/k_v. (4.4) Our aim is to determine the overall efficiency of the conversion from the power supplied by the noise to the final net electrical power. This efficiency can be defined as the quotient of the corresponding powers, η = η_nm η_me, where η_me is given in (4.4) and η_nm is the efficiency of conversion of the power supplied by the external noise into the power transferred from the oscillator to the transducer, that is, η_nm = k_v ⟨vV⟩_s / ⟨vξ⟩_s. By combining (3.1)-(3.6) in the steady state we obtain (4.6), where φ is an electromechanical parameter defined in (4.7). Using (4.5), (4.6), (3.13), and (3.14), we can write the efficiency as (4.8). Note that this is a general exact expression for the efficiency of energy harvesting by a linear electromechanical oscillator. Note further that the efficiency depends on the statistical properties of the noise only via the correlation function. The power dissipated by the oscillator can also be obtained in terms of the noise correlation function; in the steady state it follows from (4.2) together with (3.13) and (3.14). Let us analyze the limiting case τ_p → ∞. From Eqs. (2.4) and (2.5) it can be appreciated that the dynamical equations for the electromechanical oscillator reduce to those of the mechanical oscillator, ẍ + b ẋ + ω_0^2 x = ξ(t). Since no power is converted to net electrical power, this case lacks any practical interest. The efficiency of the conversion from the power supplied by the external noise to mechanical power is the quotient between the net power obtained and the power supplied by the noise, ⟨vξ⟩_s. The net power of the mechanical oscillator is the difference between the power supplied by the noise and the dissipated power: ⟨vξ⟩_s − b⟨v^2⟩_s. In the stationary state the mean energy is expected to be constant, so the net power is zero. This can also be checked from Eq. (4.2) by taking τ_p → ∞. Therefore, in this limit both the net power and the efficiency are zero.
V. EXPONENTIALLY CORRELATED NOISES In this section we apply our general results to an exponentially correlated stationary noise (5.1), where a and λ are the amplitude and the inverse of the correlation time, λ = τ_c^(-1). The limit λ → ∞ and a → ∞, with D = a/λ constant, corresponds to the white noise limit. It follows from (5.1), (3.13), and (3.14) that the power delivered by the noise is given by (5.2), and Eq. (4.8) then implies the efficiency (5.3). Note that the efficiency of the whole conversion process, the energy conversion from noise to net electrical energy, does not depend on the noise amplitude a but only on λ. The net electrical power follows from (4.6), (5.2), and (3.14) and is given by (5.4); the power dissipated by the oscillator is obtained from (4.9) and is given by (5.5). We would like to reiterate that the efficiency and the net electrical power depend on the noise source only via the noise correlation function C(u). The above results hold, for example, for two such different noise sources as dichotomous Markov noise (a non-Gaussian process) and Ornstein-Uhlenbeck (OU) noise (a Gaussian process). Note also that the net electrical power is proportional to the noise amplitude a; however, its dependence on λ is more intricate. To check our analytical results we have performed numerical simulations of the Langevin equations (2.3) with an OU noise. The numerical simulations are performed by solving the Langevin equations (2.6c) numerically. We have discretized the equations by using the so-called Heun algorithm [28]. At each time step we generate Gaussian white random numbers by means of the Box-Mueller-Wiener algorithm [28]. For the case of OU noise we have added to the system (2.6c) a fourth equation for the variable ξ(t), namely, dξ/dt = −λξ + a√λ η(t), where η(t) is Gaussian white noise with correlation ⟨η(t)η(t′)⟩ = 2a^2 δ(t − t′)/λ. In this case the system of stochastic equations has again been discretized using the Heun method, and the random numbers for η(t) have been generated by the Box-Mueller-Wiener algorithm. We have averaged over 10^4 realizations of the noise at large times to determine ⟨vξ⟩_s, P, and ⟨v^2⟩_s independently of each other. In Fig. 2 we show the comparison between the results provided by (5.2), (5.3), (5.4), and (5.5) and the numerical simulations. Note that the net electrical power P passes through a maximum, P*, as λ increases. On the other hand, the efficiency decreases monotonically with λ. Consequently, the maximum efficiency does not occur when the net electrical power is maximal, and vice versa. In the inset of the figure we have checked the result ⟨V^2⟩_s = k_c τ_p ⟨vV⟩_s. With the goal of optimizing the performance of the energy conversion, we have studied how the maximum net electrical power, P*, the efficiency at the maximum power, η*, and the characteristic frequency of the noise at the maximum power, λ*, depend on the parameters τ_p, k_c, k_v, and ω of the energy harvester. The first three parameters are related to the electrical circuit of the transducer, and the last one characterizes the oscillator, the mass-spring system. In each panel of Fig. 3 we have varied the specified parameter and set the other parameters to 1. Both P* and η* decrease with ω [see Fig. 3(d)]. The maximum net electrical power, P*, displays the same behavior as a function of τ_p, whereas η* increases for small values of τ_p, passes through a maximum, and then decreases monotonically [see panel (a)]. The efficiency at maximum power η* increases with k_c and k_v for small values, reaches a plateau for intermediate values, and then begins to decrease slowly (not shown) [see panels (b) and (c)]. The roles of k_c and k_v on P* are opposite to each other: P* increases with k_c but decreases with k_v [see Figs. 3(b) and 3(c)]. The characteristic frequency of the noise at the maximum power, λ*, decreases monotonically as τ_p increases, whereas it increases monotonically as the remaining three parameters increase. In conclusion, to improve performance it would be desirable to tune τ_p and ω to low values and k_c and k_v to intermediate values.
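To make the simulation procedure described above concrete, the following is a minimal Python sketch of a stochastic Heun (predictor-corrector) integration of the harvester driven by OU noise added as an auxiliary equation. Since the explicit Eqs. (2.3) are not reproduced in this text, the coupled form used here (in particular the voltage equation V̇ = −V/τ_p + k_c v), the parameter values, and the step sizes are illustrative assumptions only, and a single long time average replaces the 10^4 noise realizations quoted above.

```python
import numpy as np

# Sketch only: the voltage equation V' = -V/tau_p + k_c*v is an assumed form chosen to
# satisfy the steady-state check <V^2>_s = k_c*tau_p*<vV>_s quoted in the text; the OU
# auxiliary equation follows the text: dxi/dt = -lam*xi + a*sqrt(lam)*eta(t), with
# <eta(t)eta(t')> = 2*a^2*delta(t-t')/lam.
rng = np.random.default_rng(0)

# Hypothetical parameters (Fig. 3 of the paper sets all parameters to 1).
omega0, b, kv, kc, tau_p, RL = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
a, lam = 1.0, 1.0                                   # noise amplitude, inverse correlation time
dt, n_steps, n_transient = 1e-3, 1_000_000, 100_000

def drift(y):
    x, v, V, xi = y
    return np.array([v,
                     -omega0**2 * x - b * v - kv * V + xi,   # oscillator with piezo back-action
                     -V / tau_p + kc * v,                    # assumed transducer/voltage equation
                     -lam * xi])                             # deterministic part of the OU noise

y = np.zeros(4)
g = np.array([0.0, 0.0, 0.0, a * np.sqrt(lam)])     # white noise enters only the xi equation
sigma = np.sqrt(2.0 * a**2 / lam)                   # strength of eta(t) as stated in the text

acc_vxi = acc_v2 = acc_V2 = acc_vV = 0.0
for step in range(n_steps):
    dW = sigma * np.sqrt(dt) * rng.standard_normal()
    y_pred = y + drift(y) * dt + g * dW              # predictor
    y = y + 0.5 * (drift(y) + drift(y_pred)) * dt + g * dW   # corrector (Heun)
    if step >= n_transient:                          # accumulate steady-state averages
        x, v, V, xi = y
        acc_vxi += v * xi
        acc_v2 += v * v
        acc_V2 += V * V
        acc_vV += v * V

n_avg = n_steps - n_transient
vxi_s, v2_s, V2_s, vV_s = acc_vxi / n_avg, acc_v2 / n_avg, acc_V2 / n_avg, acc_vV / n_avg
P = V2_s / RL                                        # net electrical power dissipated in R_L
print("input power   <v xi>_s      =", vxi_s)
print("net el. power <V^2>_s / R_L =", P)
print("efficiency    eta = P/<v xi>_s =", P / vxi_s)
print("check <V^2>_s vs k_c tau_p <vV>_s:", V2_s, kc * tau_p * vV_s)
```

Under these assumptions, repeating the run for several values of λ reproduces the qualitative behavior reported in Fig. 2: the net electrical power goes through a maximum as λ increases, while the efficiency decreases monotonically.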
To compare the above results with those for white noise, i.e., broadband random ambient vibrations, we define a = Dλ and take the limit λ → ∞ with D a nonzero constant in the above expressions. We find the corresponding white-noise expressions, where the label W denotes white noise. It is not difficult to check that ⟨vξ⟩_s^W > ⟨vξ⟩_s^C, (5.10) where the label C denotes colored noise, i.e., finite-bandwidth random ambient vibrations. In consequence, the white noise delivers higher power than the colored noise, and the net electrical power obtained is also higher. However, in terms of efficiency, the electromechanical device is more efficient if the noise is colored than if it is white. VI. CONCLUSIONS We have derived exact analytical expressions for the power output and efficiency of converting ambient random vibrations into electrical power for a linear electromechanical oscillator driven by colored noise. An important finding is the fact that the efficiency and the net electric power depend only on the correlation function of the ambient noise. They do not depend on the probability distribution of the colored noise. We evaluate the net electrical power and the efficiency explicitly for exponentially correlated noise. For such noises, the twin goals of maximum power and maximum efficiency cannot be achieved at the same time. A compromise has to be struck, and we find that the best performance of the energy harvester occurs if the natural frequency of the oscillator and the characteristic charging time of the capacitor are tuned to low values, while the parameters of the piezoelectric transducer are set to intermediate values. The efficiency of the overall conversion process depends only on the correlation time and the bandwidth of the noise and not on the noise amplitude. We also find that the device operates more efficiently in a finite-bandwidth environment, i.e., with colored external noise, while the net electrical power is higher for broadband random vibrations, i.e., white noise. While linear oscillators are widely used in energy harvesting, nonlinear oscillators have gained attention recently. We plan to investigate the performance of nonlinear energy harvesters driven by colored noise in future work. FIG. 1. (Color online) Schematic of the conversion mechanism of the external random power to the net electrical power. The power transferred is shown for each step. FIG. 3. Maximum net electrical power, P*, efficiency at the maximum power, η*, and characteristic frequency of the noise at the maximum power, λ*, as a function of τ_p, k_c, k_v, and ω. In each panel we vary the corresponding parameter and set all other parameters equal to 1. For all plots we have taken C = 1.
4,250.4
2013-08-01T00:00:00.000
[ "Engineering" ]
Nilotinib-Associated Destructive Thyroiditis Protein tyrosine kinase inhibitors are currently an important drug class in the treatment of leukemia. They represent targeted cancer therapy and have become the treatment of choice in chronic myeloid leukemia. Tyrosine kinases are enzymes expressed in multiple tissues and are involved in several signaling pathways influencing cellular growth. Below we describe a patient who developed an unusual complication of tyrosine kinase inhibitor therapy: thyrotoxicosis due to destructive thyroiditis. We review the pathophysiology of tyrosine kinase inhibitor-induced thyroid dysfunction particularly with regard to new second-generation tyrosine kinase inhibitors. Introduction Nilotinib is a second-generation tyrosine kinase inhibitor approved for the treatment of adults with Philadelphia chromosome-positive chronic myeloid leukemia (Ph+ CML) [1,2]. Tyrosine kinase inhibitors (TKIs) as a class compete with the ATP binding site of oncogenic tyrosine kinases leading to blocking of pathways essential for tumor cell survival. TKIs, particularly 1st generation formulations, are associated quite commonly with thyroid dysfunction particularly hypothyroidism. Second-generation TKIs appear to have fewer side effects related to thyroid dysfunction, and if thyroid changes do occur, they are usually transient [3]. Below we discuss an unusual case of transient thyrotoxicosis related to nilotinib therapy. Case Presentation A 53-year-old female with chronic myeloid leukemia (CML) presented to our emergency department with one week of severe palpitations, tremors, and anxiety. She also complained of a rapidly enlarging painful neck mass and difficulty swallowing. The symptoms began two weeks after the initiation of nilotinib therapy. There was no history of any recent upper respiratory tract infections, fevers, or viral syndromes. She had no personal or family history of any thyroid disorders. On physical examination, the patient was afebrile and slightly agitated. She was tachycardic, with a heart rate ranging from 120 to 150 beats per minute and an irregularly irregular heart rhythm. An electrocardiogram confirmed atrial fibrillation. The thyroid gland was symmetrically enlarged to approximately 60 grams, was firm, and was diffusely tender to palpation. A fine tremor of the outstretched hands and brisk deep tendon reflexes were noted. Initial laboratory work-up revealed a suppressed thyroidstimulating hormone (TSH) with elevated total T3 and free T4. Thyroglobulin was markedly elevated, and the titer of anti-thyroperoxidase antibody was strongly positive. Thyroid-stimulating immunoglobulin was negative (see Table 1). The erythrocyte sedimentation rate was elevated at 80 mm/hr (1-25 mm/h). Anemia and thrombocytopenia were present, consistent with the history of CML. Serum electrolytes, renal function and liver function tests were all unremarkable. The patient was admitted to the intensive care unit, nilotinib was discontinued, and stress doses of hydrocortisone were administered. The atrial fibrillation was rate-controlled with propranolol infusion. The 24-hour radioactive iodine uptake was 1.7% (normal, 15-35%), and no thyroid tissue was visualized on scan (see Figure 1). Over the next five days, with continuing high dose hydrocortisone, total T3 level decreased from 229.6 to 67.2 ng/dL, and the free T4 level decreased from 4.6 to 2.6 ng/dL; the TSH level remained suppressed (see Table 1). 
Thyromegaly decreased rapidly and her neck pain was significantly reduced over a period of two to four days. Her heart rhythm reverted to normal sinus rhythm. On hospital day six, the patient was discharged home, with instructions to taper oral steroids over the following two weeks. She has subsequently followed up with an outside physician and is now euthyroid. Discussion Tyrosine kinase inhibitors block the BCR-ABL gene product responsible for the manifestations of CML. Tyrosine kinases are enzymes that catalyze the transfer of phosphate from ATP to tyrosine residues in polypeptides [4]. These enzymes are widely distributed throughout multiple tissue types and are involved in signaling pathways that regulate key cellular functions including proliferation, differentiation, growth, and apoptosis. More than 70% of known oncogenes and protooncogenes encode tyrosine kinases [5], and receptor tyrosine kinases are known to be overexpressed in malignant diseases. Small molecule tyrosine kinase inhibitors (TKIs) have been designed to disrupt the catalytic activity of tyrosine kinases, thus disrupting the growth of malignant cells. As tyrosine kinases are expressed ubiquitously, TKIs have been noted to have several "off-target" effects. Thyroid dysfunction appears to be one of the off-target effects of TKIs. Although TKIs were initially thought of as targeted therapy, normal healthy tissue may also be affected as tyrosine kinases are ubiquitously expressed, including in thyroid tissue [6,7]. Kim and colleagues retrospectively assessed the effect of nilotinib on thyroid function in 55 patients with Ph+ CML. Of the 30 patients without preexisting thyroid dysfunction, 18 patients (60%) developed a suppressed TSH, and 12 patients (40%) developed an elevated TSH. Four patients (of whom 3 had positive anti-thyroid antibodies) had evidence of thyroiditis, with an episode of transient thyrotoxicosis followed by hypothyroidism [8]. Here we present another case of new-onset thyroid dysfunction in a patient who had recently begun therapy with nilotinib. We postulated that the thyrotoxicosis was due to drug-induced thyroiditis. This was based on the presence of a diffusely enlarged and painful thyroid gland with elevated ESR and low radioiodine uptake. Nilotinib was discontinued, and the patient was treated with steroids and beta-blockers, with rapid resolution of thyrotoxicosis. Understanding the etiology of TKI-induced thyroid dysfunction is a topic of considerable interest. Torino and colleagues in a 2009 review [9] suggested several potential mechanisms by which thyroid hormone synthesis may be impaired and a destructive thyroiditis may develop later in the course of therapy. First, TKIs cause apoptosis of thyroid follicular cells, leading to destructive thyroiditis. Second, prevention of vascular endothelial growth factor (VEGF) binding to normal thyroid cells and/or inhibition of thyroid blood flow may cause thyroiditis and thyroid dysfunction. Third, inhibition of thyroid peroxidase activity may result in decreased thyroid hormone synthesis. Fourth, partial TKI inhibition of pituitary or hypothalamic thyroid hormone (TH) transporters, and hence of TH feedback, may alter TH metabolism and contribute to TKI-mediated impairment of thyroid function. Lastly, a yet-to-be-described autoimmune mechanism affecting thyroid function may result in hypo- or hyperthyroidism. TKIs are now used in the treatment of selected patients with advanced thyroid cancer.
Multiple receptor tyrosine kinases are overexpressed in thyroid cancer, including RET, rapidly accelerated fibrosarcoma (RAF) kinases, VEGF, mesenchymal epithelial transition factor, and epidermal growth factor. Because multiple receptor tyrosine kinases are overexpressed in thyroid cancer, TKIs may also represent new strategies for treating advanced thyroid cancers by targeting signaling by VEGF and its receptors. Various TKIs are currently being studied for their potential use in the treatment of advanced and metastatic thyroid cancer. Vandetanib, which inhibits VEGF receptor-dependent tumor angiogenesis as well as epidermal growth factor receptor (EGFR)- and RET-mediated tumor cell proliferation, along with cabozantinib, is now FDA approved for the treatment of medullary thyroid cancer [10]. Additionally, sorafenib and lenvatinib have recently been FDA approved for use in patients with advanced and refractory differentiated thyroid cancer [11]. Conclusion Prospective trials are needed to accurately define the incidence of thyroid dysfunction during therapy with many newly approved tyrosine kinase inhibitors used to treat various malignancies. Initial screening and periodic surveillance of thyroid function during tyrosine kinase inhibitor therapy are prudent given the frequency of thyroid dysfunction with these drugs. In addition, because multiple receptor tyrosine kinases are overexpressed in thyroid cancer, TKIs represent promising new strategies for the treatment of advanced thyroid cancers; in particular, TKIs have been shown to have potential in differentiated thyroid carcinoma [12,13].
1,695.4
2015-05-07T00:00:00.000
[ "Biology", "Medicine" ]
Chiroptical spectroscopy of a freely diffusing single nanoparticle Chiral plasmonic nanoparticles can exhibit strong chiroptical signals compared to the corresponding molecular response. Observations are, however, generally restricted to measurements on stationary single particles with a fixed orientation, which complicates the spectral analysis. Here, we report the spectroscopic observation of a freely diffusing single chiral nanoparticle in solution. By acquiring time-resolved circular differential scattering signals we show that the spectral interpretation is significantly simplified. We experimentally demonstrate the equivalence between time-averaged chiral spectra observed for an individual nanostructure and the corresponding ensemble spectra, and thereby demonstrate the ergodic principle for chiroptical spectroscopy. We also show how it is possible for an achiral particle to yield an instantaneous chiroptical response, whereas the time-averaged signals are an unequivocal measure of chirality. Time-resolved chiroptical spectroscopy on a freely moving chiral nanoparticle advances the field of single-particle spectroscopy, and is a means to obtain the true signature of the nanoparticle’s chirality. (1). The authors mentioned that the solution used is made of one part colloidal dispersion and 20 parts glycerol. How would one make sure that this dilution is sufficient for only one particle diffuses into the field of view of microscope objective ? (2). Regarding the statement in the Supporting Information, "The column dimension of the detector is used for spectral separation. Since we used a 40x objective, each row on the CCD corresponds to a rectangular area with a width of 650 nm in the sample plane. The size of the nanostructures being measured is on the order of d ≈ 150 nm. It is thus reasonable to assume that only a few adjacent rows on the detector are capturing the spectrum of the particle IP article(λ). The other detector rows will collect light that is scattered by particles as well as impurities in the vicinity and hence they are used for the background correction." (a). Is it not possible to observe multiple scattering peaks form different particles across the 650 nm width of CCD ? (b). It is not clear how the scattering from other "particles as well as impurities" serves as background ? Would that not amount to another peak from another particle? Reviewer #2 (Remarks to the Author): This manuscript represents a significant milestone in our emerging ability to probe the fundamental properties of isolated supramolecular structures by lifting the veil of statistical (ensemble) averaging that has long obscured all manifestations of heterogeneous phenomena. More specifically, the authors have employed an ingenious time-resolved method to perform linear chiroptical spectroscopy on individual plasmonic nanoparticles of prescribed chiral or achiral morphologies as they freely circulate (via random Brownian motion) through a viscous solution-phase environment. By simultaneously discriminating the left-circular and right-circular components of scattered light following unpolarized excitation in a dark-field microscope, temporal "snapshots" of wavelengthresolved circular differential response for single particles could be acquired continuously. 
These data encode effects of both intrinsic chirality (circular dichroism and birefringence) and extrinsic orientation (linear dichroism and birefringence) for the diffusing target, with the cumulative spectrum obtained by averaging a series of such "snapshots" serving to cancel orientational "artifacts" and reveal (for what may be the very first time) the true chiroptical signatures of an isolated plasmonic entity. To further validate their approach, the authors repeated measurements with magnetic nanostructures that enable alignment and circulation to be controlled by an external magnetic field, thereby highlighting the pronounced dependence of optical activity on the spatial orientation of a particle. The results obtained during this tour de force study are intriguing and informative, promising to elicit a wide range of subsequent research endeavors designed to elaborate the nature of plasmonic chiroptical processes as well as their potential applications in various forms of chiral sensing/measurement. The contents of this manuscript are eminently suitable for publication as a report in Nature Communications, with key results being presented in a logical and concise manner that should be accessible to the general scientific community. Prior to immediate publication, the authors should consider clarification of the following minor points: 1. At the bottom of page 4, the authors state that "spectra of a single particle were acquired at one specific instant in time during which it is nearly stationary and thus has a defined orientation in 3D space." Might it be possible to quantify the phrases "nearly stationary" and "defined orientation" given that the time needed to record a complete chiroptical spectrum appears to be on the order of 1 second? How much translation and rotation (on average) will the targeted nanoparticles undergo during this acquisition time? A brief discussion of these issues does appear in the "Methods" section and in the Supplementary Information, so perhaps it would be sufficient to point the reader there for further details. It also would be useful to mention how long a single nanoparticle can be tracked using the current implementation of the apparatus, as this ultimately will place limits on spectralaveraging procedures. 2. It would be most informative if the circular-differential spectra for individual nanoparticles (such as those in Fig. 2) could be placed on an absolute intensity scale akin to the circular-differential absorption (ellipticity) or anisotropy ratio metrics routinely employed for ensemble-averaged studies. This may not be possible given the nature of the measured signals (per text on page 7), but it would highlight the exceptionally large magnitude of chiroptical effects supported by plasmonic nanoparticles. Perhaps the definition of CDSI given in equation (1) already affords a reasonable counterpart for the conventional anisotropy ratio, although the instructive discussion of chiral scattering processes given in the Supplementary Information (Section 2) would tend to make this assertion a bit too simplistic. 3. In the sentence spanning pages 7 and 8 of the manuscript the authors refer to "achiral electricquadrupolar terms" responsible for circular differential intensities that "do not reflect the chirality intrinsic to the molecule." This statement needs to be clarified, as it may be a bit misleading in present form. 
Presumably these achiral effects are distinct from the chiral electric-quadrupole (E2) and magnetic-dipole (M1) interactions that separately interfere with accompanying electric-dipole (E1) interactions to yield conventional linear chiroptical signatures? In fact, the resulting E1-E2 and E1-M1 processes are believed to contribute equally to the overall chiroptical response of an oriented molecule, yet the former rotationally average to zero and thus vanish in an isotropic ensemble average. 4. Were any experimental attempts made to quantify effects of the environment (for example, the nature of the solvent, as well as its temperature and/or viscosity) on measurements of the response evoked from entrained nanoparticles? Presumably, changes in any solvent parameter could (at least) affect the orientational dynamics of a nanoparticle, thus placing bounds on the extent of temporalsnapshot averaging needed to yield true chiroptical signatures. 5. Conventional circular-differential spectra (as measured for an isotropic ensemble of chiral targets) stem from the trace of a second-rank chiroptical tensor, the individual elements of which contain invaluable information regarding subtle matter-field interactions. In particular, the relatively small value of the trace often reflects the partial cancellation of sizable positive and negative contributions that relate the spatial properties of intrinsic transition moments to those of applied electromagnetic fields. Provided that the orientation of a targeted chiral particle was known (perhaps by exploiting the novel external-field control paradigms described by the authors), it might be possible to extract the entire tensor by analyzing each of the temporal snapshots individually. Although this could prove to be very difficult to accomplish in practice (given potential artifacts arising from linear dichroism/birefringence and annular dark-field illumination), it would provide access to unique information that typically is available only for chiral species constrained under strongly perturbative conditions (for example, immobilized on a surface or imbedded in a crystal). Any comments along these lines would make a tantalizing addition to the manuscript, especially since the results highlighted in Fig. 4c might already be partially on the way to realizing this possibility! 6. The authors are encouraged to perform a critical proof reading of their manuscript in order to correct grammatical errors and improve the overall quality of presentation. For example, the final sentence in the topmost paragraph on page 3 appears to be a fragment (perhaps due to a missing comma?). More importantly, the various colored curves appearing in Fig. 2c should be identified fully in the caption (the legend really is not clear here; there seem to be two nearly identical curves for each particle). The caption also indicates that the solid and dashed curves in Fig. 2e correspond to bulk CD spectra measured at different scattering angles, but no further details are given. Finally, the first line of the caption for Fig. 3 appears to have inverted the set of panels corresponding to a gold nanosphere (should be panels (a-d)) and a gold nanorod (should be panels (e-h)). We thank both reviewers for their comments. We are delighted to receive positive feedback on our work. In what follows we reprint the initial comments in black, answer in blue and indicate the new text and changes in the paper in red. 
Reviewer #1 (Remarks to the Author): This manuscript reports circular differential scattering intensity (CDSI) for "single" nanoparticles, using a laudable idea: namely, that the time-averaged CDSI spectrum of a freely diffusing nanoparticle is equivalent to the ensemble-averaged spectrum of nanoparticles. This idea avoids the complications from orientational effects (linear dichroism, quadrupole contribution, etc.) of an oriented particle. The authors are able to not only measure the time-averaged CDSI spectra of a single nanohelix, but also show that the time-averaged CDSI spectrum of a single achiral sphere becomes zero. The authors are to be commended for having taken strenuous efforts to demonstrate that the observed effects are real. We thank the referee for the positive feedback. Clarifications for the following few questions will be useful. (1). The authors mentioned that the solution used is made of one part colloidal dispersion and 20 parts glycerol. How would one make sure that this dilution is sufficient for only one particle diffuses into the field of view of microscope objective ? The aqueous colloidal solution was sufficiently diluted so that the particles were usually well separated. If particles were too close to each other or too bright (outliers, aggregates) they were disregarded. Before acquisition of a time series one promising particle was located and the stage moved to center it in the field of view. Single particles can be distinguished from aggregates due to the scattering intensity. To address the comment, we revised the first paragraph of section 1 in the SI and added an additional figure for clarification: "The column dimension of the detector is used for spectral separation. Since we used a 40x objective, each row on the CCD corresponds to a rectangular area with a width of 650 nm in the sample plane. The size of the nanostructures is on the order of d ≈ 150 nm. It is thus reasonable to assume that only a few adjacent rows on the detector capture the spectrum of a single particle. The other detector rows will collect light stemming from other scatterers in the field of view as well as impurities in the vicinity. Such effects contribute to a pale intensity distribution, which is used for a background correction (see SI-Fig XXX). Prior to a spectral acquisition, an individual particle was selected and centered by moving the microscope stage. Single particles could be distinguished from agglomerates due to the vastly differing scattering intensities. Multiple particles in the field of view were readily identified by the corresponding peaks registered at different positions on the CCD." SI-Fig XXX: Snapshot of the CCD showing the raw signal. Particles can be distinguished from the low-intensity background due to stronger localized intensities. Spatially separated particles appear on different rows of the CCD. This permits the identification of a scatterer of interest. The particle concentration is kept dilute enough to ensure that the chance of several particles entering the field of view is very low. (2). Regarding the statement in the Supporting Information, "The column dimension of the detector is used for spectral separation. Since we used a 40x objective, each row on the CCD corresponds to a rectangular area with a width of 650 nm in the sample plane. The size of the nanostructures being measured is on the order of d ≈ 150 nm. It is thus reasonable to assume that only a few adjacent rows on the detector are capturing the spectrum of the particle I_Particle(λ).
The other detector rows will collect light that is scattered by particles as well as impurities in the vicinity and hence they are used for the background correction." (a). Is it not possible to observe multiple scattering peaks form different particles across the 650 nm width of CCD ? In general this is possible, but additional particles would show up as intense signals on different rows on the CCD. If the particles come too close to each other, they cannot be distinguished as individual particles any more. However, the particles show translational diffusion over the course of the measurement and even if the particles would be stacked above each other at the start of the experiment, they appear on different rows later (see also the answer to question 1/the new figure SI-Fig XXX). (b). It is not clear how the scattering from other "particles as well as impurities" serves as background ? Would that not amount to another peak from another particle? Yes, if a single strong scatterer in focus enters the spectrum then it appears as a peak. However, the many defocused particles and other scattering impurities lead to a low intensity constant background (see also the answer to question 1/the new figure SI- Fig XXX). Reviewer #2 (Remarks to the Author): This manuscript represents a significant milestone in our emerging ability to probe the fundamental properties of isolated supramolecular structures by lifting the veil of statistical (ensemble) averaging that has long obscured all manifestations of heterogeneous phenomena. More specifically, the authors have employed an ingenious time-resolved method to perform linear chiroptical spectroscopy on individual plasmonic nanoparticles of prescribed chiral or achiral morphologies as they freely circulate (via random Brownian motion) through a viscous solution-phase environment. By simultaneously discriminating the left-circular and right-circular components of scattered light following unpolarized excitation in a dark-field microscope, temporal "snapshots" of wavelength-resolved circular differential response for single particles could be acquired continuously. These data encode effects of both intrinsic chirality (circular dichroism and birefringence) and extrinsic orientation (linear dichroism and birefringence) for the diffusing target, with the cumulative spectrum obtained by averaging a series of such "snapshots" serving to cancel orientational "artifacts" and reveal (for what may be the very first time) the true chiroptical signatures of an isolated plasmonic entity. To further validate their approach, the authors repeated measurements with magnetic nanostructures that enable alignment and circulation to be controlled by an external magnetic field, thereby highlighting the pronounced dependence of optical activity on the spatial orientation of a particle. The results obtained during this tour de force study are intriguing and informative, promising to elicit a wide range of subsequent research endeavors designed to elaborate the nature of plasmonic chiroptical processes as well as their potential applications in various forms of chiral sensing/measurement. The contents of this manuscript are eminently suitable for publication as a report in Nature Communications, with key results being presented in a logical and concise manner that should be accessible to the general scientific community. 
Prior to immediate publication, the authors should consider clarification of the following minor points: We thank the referee for these very positive comments and for recommending publication of our article in Nature Communications. 1. At the bottom of page 4, the authors state that "spectra of a single particle were acquired at one specific instant in time during which it is nearly stationary and thus has a defined orientation in 3D space." Might it be possible to quantify the phrases "nearly stationary" and "defined orientation" given that the time needed to record a complete chiroptical spectrum appears to be on the order of 1 second? How much translation and rotation (on average) will the targeted nanoparticles undergo during this acquisition time? A brief discussion of these issues does appear in the "Methods" section and in the Supplementary Information, so perhaps it would be sufficient to point the reader there for further details. It also would be useful to mention how long a single nanoparticle can be tracked using the current implementation of the apparatus, as this ultimately will place limits on spectral-averaging procedures. The last paragraph on the bottom of page 4 should give the reader a general idea of the respective times. However, we extended the paragraph and included the following: "In our setup, a spectrum of a single particle was acquired at one specific instant in time, i.e. with an exposure time short enough to assume a quasi-stationary orientation in 3D space during the acquisition of one spectrum (see Methods and SI). If the particle moves and if the observation is repeated …" In addition, we added a sentence in the Methods section (at the end of the "Nanoparticle Sample Preparation" paragraph) to point out the limits of our setup: "Note, that a lower viscosity will lead to a much faster measurement time for a rotationally averaged chiroptical spectrum. In contrast, the translational diffusion still has to be slow enough that the particle does not leave the field of view/the focus during the course of a measurement (see SI)." In the SI (first section, beginning of second paragraph) we added the following: "Because the nanoparticles can freely diffuse, the maximum measurement time t_max for different particles and viscosities can be estimated with the Brownian mean square displacement. The dimensions of the aperture set a spatial constraint \Delta x = 5mm in the object plane: <(\Delta x)^2> = 2 D t_max. Hence, a 150nm diameter nanosphere can be observed for approximately 25 minutes (1:20 glycerol), whereas in water (\nu \approx 1mPas) it will leave the observation volume after approximately 4 seconds. In general, the nanoparticles can change their position (row on the CCD) over time between subsequent acquisitions." 2. It would be most informative if the circular-differential spectra for individual nanoparticles (such as those in Fig. 2) could be placed on an absolute intensity scale akin to the circular-differential absorption (ellipticity) or anisotropy ratio metrics routinely employed for ensemble-averaged studies. This may not be possible given the nature of the measured signals (per text on page 7), but it would highlight the exceptionally large magnitude of chiroptical effects supported by plasmonic nanoparticles. 
Perhaps the definition of CDSI given in equation (1) already affords a reasonable counterpart for the conventional anisotropy ratio, although the instructive discussion of chiral scattering processes given in the Supplementary Information (Section 2) would tend to make this assertion a bit too simplistic. Typically, single-particle spectra are normalized to the total scattering intensity in order to permit comparison with other measurements. In our opinion, it is problematic to quote an ellipticity (theta) or an anisotropy (g-factor) for a single particle. Ellipticities typically consider the sample concentration, which is challenging to be specified for a single particle measurement, since the observation volume is hard to determine in practice. The g-factor is based on a difference in extinction (which is also depended on concentration), whereas our setup only probes the scattering and not the absorption of a particle. We thus think it is better to avoid any potential confusion and we would not want to introduce a new quantity in a field that already has several established measures of optical activity. 3. In the sentence spanning pages 7 and 8 of the manuscript the authors refer to "achiral electric-quadrupolar terms" responsible for circular differential intensities that "do not reflect the chirality intrinsic to the molecule." This statement needs to be clarified, as it may be a bit misleading in present form. Presumably these achiral effects are distinct from the chiral electric-quadrupole (E2) and magnetic-dipole (M1) interactions that separately interfere with accompanying electric-dipole (E1) interactions to yield conventional linear chiroptical signatures? In fact, the resulting E1-E2 and E1-M1 processes are believed to contribute equally to the overall chiroptical response of an oriented molecule, yet the former rotationally average to zero and thus vanish in an isotropic ensemble average. We completely agree with this comment and what we meant is exactly what the referee describes. We rephrased the following in order to avoid any confusion: "For molecules it is well known from the theory of optical activity [17] and the measurement of circular dichroism [20] that an oriented sample, including molecules immobilized at an interface, displays spectra that significantly differ from rotationally-averaged (ensemble) CD spectra. In the latter case, contributions from electricquadrupole transitions average to zero, whereas those transitions have to be considered for a stationary aligned structure. Similarly, contributions due to any linear birefringence (LB) or linear dichroism (LD) in an (oriented) sample can alter the polarization state and contribute to the circular intensity difference signal [18], which does not reflect the chirality intrinsic to the molecule." 4. Were any experimental attempts made to quantify effects of the environment (for example, the nature of the solvent, as well as its temperature and/or viscosity) on measurements of the response evoked from entrained nanoparticles? Presumably, changes in any solvent parameter could (at least) affect the orientational dynamics of a nanoparticle, thus placing bounds on the extent of temporal-snapshot averaging needed to yield true chiroptical signatures. We considered such influences, in particular whether heating could locally affect the viscosity, but these effects were deemed to be small. 
The environmental parameters (temperature, viscosity) are relatively constant over the course of the measurement as can be seen by looking at the time-traces of the CDSI signals (e.g. Fig. 2b or Fig. 3a). The rate of the fluctuations and their extent does not change. One could also expect that heating due to the light would scale with the input light intensity, but we did not observe this. Changes of different solvent viscosities are also discussed in the response to question 1. 5. Conventional circular-differential spectra (as measured for an isotropic ensemble of chiral targets) stem from the trace of a second-rank chiroptical tensor, the individual elements of which contain invaluable information regarding subtle matterfield interactions. In particular, the relatively small value of the trace often reflects the partial cancellation of sizable positive and negative contributions that relate the spatial properties of intrinsic transition moments to those of applied electromagnetic fields. Provided that the orientation of a targeted chiral particle was known (perhaps by exploiting the novel external-field control paradigms described by the authors), it might be possible to extract the entire tensor by analyzing each of the temporal snapshots individually. Although this could prove to be very difficult to accomplish in practice (given potential artifacts arising from linear dichroism/birefringence and annular dark-field illumination), it would provide access to unique information that typically is available only for chiral species constrained under strongly perturbative conditions (for example, immobilized on a surface or imbedded in a crystal). Any comments along these lines would make a tantalizing addition to the manuscript, especially since the results highlighted in Fig. 4c might already be partially on the way to realizing this possibility! The reviewer highlights a fascinating possibility. There are several effects that need to be dis-entangled in recording (and interpreting) orientation-resolved spectra and they do not seem trivial. We have considered this, but feel that this needs more careful analysis before we can attempt such a measurement. For instance, the helix can still rotate about its own body axis (about the magnetic moment) and so this will cause some orientational freedom, which would need to be considered in a separate average as the coordinate frame of the particle rotates with respect to that of the incident light. Connecting these individual measurements back to a theoretical model are also very challenging as the scattering occurs from a range of angles into a separate set of angles (segment of the scattering cone). So, while fascinating we do not (yet) feel confident to attempt this. 6. The authors are encouraged to perform a critical proof reading of their manuscript in order to correct grammatical errors and improve the overall quality of presentation. For example, the final sentence in the topmost paragraph on page 3 appears to be a fragment (perhaps due to a missing comma?). More importantly, the various colored curves appearing in Fig. 2c should be identified fully in the caption (the legend really is not clear here; there seem to be two nearly identical curves for each particle). The caption also indicates that the solid and dashed curves in Fig. 2e correspond to bulk CD spectra measured at different scattering angles, but no further details are given. Finally, the first line of the caption for Fig. 
3 appears to have inverted the set of panels corresponding to a gold nanosphere (should be panels (a-d)) and a gold nanorod (should be panels (e-h)). We thank the referee for the suggestions. We have carefully re-read the manuscript and fixed smaller grammatical issues/wordings, and in addition we changed the following clarifications: Caption Fig.2: "Time-resolved chiroptical spectroscopy of freely diffusing single nanohelices. a) SEM and schematics of Au-nanohelices (scale bar 100 nm). b) Frame-wise CDSI at λ = 750±5 nm for a single left-handed nanohelix that is undergoing Brownian motion. c) Full CDSI spectra for momentarily aligned orientations in 3D space (first 30 frames shown). d) CDSI (t=400 frames) for freely diffusing single Au-nanohelices and -spheres. For each shape two measurements from different particles are shown in different shades of
5,983.6
2020-09-09T00:00:00.000
[ "Physics" ]
Addition of magnetic markers for non-destructive evaluation of polymer composites Polymer composite pipes are an appealing option as a substitute for conventional steel pipes, particularly due to their inherent corrosion resistance. However, the composite pipes currently used do not allow non-destructive evaluation (NDE) using instrumented devices which operate with magnetic sensors. The present work aims at the development of polymer composites with the addition of magnetic markers to allow the application of non-destructive evaluation techniques which use magnetic sensors. Flat, circular glass-polyester composite plates were fabricated with the addition of ferrite particles (barium ferrite and strontium ferrite), and four types of notches were introduced on the plates' surfaces. The influence of these notches on the magnetic field measured over each material was then evaluated. X-ray diffraction (XRD), X-ray fluorescence (XRF) and Brunauer, Emmett, and Teller (BET) nitrogen adsorption were used for the characterization of the ferrite particles. Particle dispersion in the polymer matrix was analyzed by scanning electron microscopy (SEM). According to the results, a particular variation in magnetic field was detected over the region surrounding each type of notch. The results suggest that the proposed technique has great potential for damage detection in polymer composites using magnetic sensors and thus constitutes a valuable contribution which may ultimately lead to the development of non-destructive evaluation techniques for assessing the structural integrity of polymer composite pipes. Introduction Steel pipelines are commonly used in the petroleum industry for the transport of oil and gas. However, corrosion leads to a reduced service life of these components and the need for frequent inspection, repairs or replacement. Polymer matrix composite pipes, alternatively, offer advantages over steel tubes for this application due to the corrosion resistance inherent to this type of material and other benefits such as weight reduction [1][2][3][4] . Many types of resins and fiber reinforcement materials are currently used in composites for oilfield applications. While glass is the typical reinforcement material, polyester, vinyl ester, and epoxy are the most prevalent resins 1 . Nonetheless, the conventional polymer composite pipes currently used do not allow non-destructive evaluation (NDE) using instrumented pigs which operate with magnetic sensors. Approaches to detect damage and internal defects in polymer composite materials are of great interest, both for quality control during the production stages and for in-service inspections and maintenance operations. The use of non-destructive evaluation (NDE) techniques is necessary to assess structural integrity without causing damage to the materials. NDE techniques currently used to examine composite materials include ultrasound, thermography, shearography and X-ray computed tomography (XR-CT) [5][6][7] . Shearography has been used for non-destructive evaluation of composites, including delamination detection, with various types of excitation loads including thermal, vacuum and internal pressure 6 . Thermography has also been proven capable of evaluating internal defects in composites. Although the technique is highly sensitive for detecting delamination, there are limitations for high-thickness measurements 6 .
X-ray CT provides very high resolution images and complete 3D reconstruction of the part evaluated and can detect defects such as voids, cracks, inclusions, dry fibers and delaminations. In this case, the parts to be tested can be thick due to the relatively low X-ray attenuation of the polymer composites. Ultrasonic techniques have been extensively used for the inspection of metallic parts for flaw detection/evaluation, dimensional measurements and material characterization 7 . In composites, ultrasonic inspection is able to detect delaminations and provide in-depth information, even in relatively thick parts. Magnetic flux leakage (MFL) is the most widely used technique for the inspection of steel pipelines in the petroleum industry 8 . However, this technique is not suitable for polymer composites since the material cannot be magnetized. Thus, the addition of magnetic particles to polymer matrix composites offers great potential to allow inspection of these materials using NDE techniques that are sensitive to changes in magnetic field. When there is a defect in the material being inspected, the resulting distortions in the magnetic field are detected by transducers and converted into electronic signals 9 . Ferrites are magnetic materials with a special characteristic: they exhibit spontaneous magnetization in the absence of an external magnetic field. Ferrites are a well established family of magnetic materials, which are inexpensive and chemically stable. Thus, this large class of ceramic materials 10 has a wide range of technological applications. The objective of this work was to study the variations in magnetic field around notches in polymer composite plates with the addition of magnetic markers. Two types of ferrite (barium and strontium) were used as magnetic markers and four types of notches were introduced on the surface of the plates. The measurements indicated that each notch produced a characteristic signature in the magnetic field. Experimental Composite circular plates were fabricated by hand lay-up with a diameter of 500 mm for the magnetic characterization. The materials used were: balanced plain-weave E-glass fabric (450 g/m^2), orthophthalic polyester, barium ferrite particles (for the type 1 disk) and strontium ferrite particles (for the type 2 disk). Both ferrites were acquired from FERMAG - Ferritas Magnéticas Ltda - BRAZIL. Polyester resin was mixed with the ferrite particles using a mix ratio in parts by weight (pbw) of 90:10 (resin:particles). A mechanical shaker with a speed of 1750 rpm was used to disperse the ferrite particles in the polymer matrix for a period of 20 seconds. Then, the layers of glass fabric impregnated with the liquid mixture (polyester resin combined with ferrite particles) were laid up to form a laminated composite with fifteen layers. Rollers were used on each layer to help remove air bubbles and promote compaction. After the cure of the polymer resin was complete, the plates were cut into a circular shape using a diamond abrasive saw. Four types of notches were then introduced onto the composite disks using a drill. Figure 1 shows the composite disk with notches. Before the fabrication of the disks, the ferrites were analyzed on a Shimadzu EDX-820 Energy Dispersive X-ray Fluorescence Spectrometer for the determination of the chemical composition. X-ray diffraction patterns of samples of ferrite particles were determined on an X-ray diffractometer XRD-6000 Shimadzu, with a copper tube (λ = 1.5418 Å). The scan was performed over the range 20-80° (2θ).
The surface area, pore diameter and average pore volume of the ferrites were obtained by determining the adsorption isotherms of N2 (g) at 77 K in a Micromeritics ASAP 2010 using the BET (Brunauer, Emmett and Teller) method. These parameters are important to determine the morphology of the particles and the existence of agglomerates. The particle dispersion in the composite was observed using a scanning electron microscope (SEM), Shimadzu SSX-550, where the images were generated using backscattered electron (BSE) imaging. SEM has been used by other authors to observe particle dispersion and the presence of agglomerates in polymeric composites with micron- and nano-sized particles [11][12][13] . Variations in the magnetic field on the composite disks with the addition of magnetic particles (barium and strontium ferrites) were measured using a custom-made apparatus (Figure 2). The disks were mounted on the shaft of a stepper motor, which made it possible to control the speed and direction of rotation. A magnet was kept fixed near the disk, and a Hall effect sensor, also fixed and connected to a FW Bell gaussmeter capable of measuring three perpendicular components of the magnetic field, was kept close to the magnet. The magnetic field produced by the magnetization of the samples and the distortion of the magnetic field produced by the notches were detected by the sensor. Data were acquired through a computer using a GPIB interface controlled by a program in LabVIEW. To verify the repeatability of the results, the direction of rotation of the disks was reversed over the limits of each notch. Symmetrical curves about the point of return would indicate a clear contribution of the notch to the deformation of the magnetic field lines over the region. Each notch region was measured three times, in both directions. Results and Discussion The results of X-ray fluorescence obtained for the two types of ferrites are presented in Table 1. The results presented in Table 1 indicate that the relations in percentage by weight for barium ferrite are very close to the values of stoichiometric barium hexaferrite. Thus, the presence of residual iron oxide can be attributed to stoichiometric deviations generated during the manufacturing process. On the other hand, the results for the strontium ferrite show a strontium excess of more than 3% by weight, which is associated with the presence of residual iron oxide identified by XRD, suggesting that the production process is more subject to variations in stoichiometry. The level of impurities found for the two systems arises from the use of industrial raw materials and ores containing S, Si, Mn and V. Figures 3 and 4 present the X-ray diffractograms for barium ferrite and strontium ferrite, respectively. The results show that the particles consist primarily of BaFe12O19 and SrFe12O19, with iron oxide as a secondary phase. It was found that the presence of iron oxide is particularly marked in the sample of strontium ferrite. The presence of iron oxide is normally due to the excess of its precursor, which is intentionally added during the production process of the ferrite. Its presence produces a magnetic response which competes with that of the ferrite. However, in the current application one detects the overall
However, in the current application one detects the overall magnetic response from the magnetic material and, as long as the composition remains unmodified, the presence of small amounts of residual iron oxide should not have any negative effect on the properties of the product. The BET specific surface area measured for the barium ferrite was 1.3928 m²/g, while that of the strontium ferrite was 1.0283 m²/g. These low surface areas of the two materials are consistent with the microscopy results, which indicated the presence of agglomerates. Figure 5 corresponds to the SEM image of the microstructure of the polymer composite containing barium ferrite. The backscattered electron images suggest the presence of agglomerates of ferrite in the polymer matrix distributed between the layers of glass fiber reinforcement. These agglomerates were observed throughout the entire composite sample analyzed. The presence of agglomerates is undesirable, since they are expected to affect the magnetic response of the material. In practical terms, these agglomerates may thwart the detection of micron-sized defects or the differentiation of the contour of some types of defects in this size domain. However, the magnetic non-destructive analysis was used in this work to detect defects larger than several millimeters in size. In addition, the distribution of particles can be improved by controlling the dispersion in the polymer, by properly mixing the composites under ultrasound or by using a mechanical shaker as done in the present work. Figure 6 shows the microstructure of the polymer composite containing strontium ferrite. This sample also revealed the presence of agglomerates, with the same negative implications as previously discussed. In both systems, it was found that the micron-sized ferrite particles formed agglomerates with sizes exceeding 10 microns. Figures 7-10 present the changes in magnetic field (Oe) measured perpendicular to the plane of the samples versus distance (mm) for notches I, II, III and IV (Figure 1), respectively. Each figure shows the results for the composites with the addition of barium and strontium ferrite. The beginning and end of each notch marked in the figures coincide with the dimensions measured on the disks. Notches I, II, III and IV have lengths of 16, 30, 28 and 32 mm, respectively, thereby confirming that the signal obtained is characteristic of each notch. The repeatability of the measured signals was also assessed by taking measurements in the clockwise and counterclockwise directions; in both cases the same shape of the magnetic field versus distance curve was obtained. The dashed line in the figures indicates the location where the rotation was reversed. Each notch region was measured three times, in both directions, and no difference was observed in the measured data. The results presented in Figures 7-10 clearly indicate the variation in magnetic field produced by the presence of the notches. It is observed that the signals measured in disks with barium ferrite are better defined, with a clear indication of the beginning and end of the notch, and a particular shape for each type of notch. The difference between the magnetic signals measured in composites with strontium and barium ferrites is believed to be related to the difference in atomic weight of Sr and Ba (the atomic weight of Sr is about 64% of that of Ba), considering the fact that both composite materials were prepared using the same parts by weight.
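The weight-fraction argument above can be checked with a quick calculation from standard atomic weights; the short script below is only an illustrative sketch added here, not part of the original analysis, and the molar masses are tabulated reference values rather than data from this study.

```python
# Illustrative check of the atomic-weight argument: at equal weight fractions,
# the lighter SrFe12O19 supplies slightly more formula units per gram of filler.
M = {"Sr": 87.62, "Ba": 137.33, "Fe": 55.845, "O": 15.999}  # g/mol, standard values

def molar_mass(metal):
    """Molar mass of the hexaferrite MeFe12O19 (Me = Sr or Ba)."""
    return M[metal] + 12 * M["Fe"] + 19 * M["O"]

m_sr, m_ba = molar_mass("Sr"), molar_mass("Ba")
print(f"SrFe12O19: {m_sr:.1f} g/mol, BaFe12O19: {m_ba:.1f} g/mol")
print(f"Sr/Ba atomic-weight ratio: {M['Sr'] / M['Ba']:.2f}")            # ~0.64, as cited
print(f"formula units per gram, Sr vs Ba ferrite: {m_ba / m_sr:.3f}x")  # ~1.05x
```

On this estimate the strontium ferrite supplies roughly 5% more formula units per gram of filler than the barium ferrite at the same loading, a smaller difference than the 64% atomic-weight ratio quoted above but in the same direction.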
Thus, the particles in the material with strontium ferrite are closer to each other than in the barium one, which, in turn, makes it more influenced by the demagnetizing dipolar interaction among those particles. A smaller separation among particles can also favor the formation of more agglomerates, as indicated by the BET data. These effects, in addition to the intrinsic magnetic response of each system, should be further evaluated in future work. Conclusions In this work, circular glass/polyester composite plates with the addition of magnetic particles of barium or strontium ferrite (10 wt.%) were produced. Four notches were introduced on the composite plates and the variations in magnetic field produced by the presence of the notches were measured. Variations in magnetic field were detected in the regions surrounding the notches. Moreover, each type of notch produced a particular change in the magnetic field measured along the circumferential path which includes the notches. SEM images indicated the presence of agglomerates of magnetic particles in some regions of the plates, which are known to affect the magnetic field even in the absence of notches. Nevertheless, the changes in magnetic field due to the notches were reproducible and characteristic of each type of notch. Composites with the addition of barium ferrite presented curves with less dispersion and better correlation with the geometry of the notches. The results indicate that the proposed technique has great potential for damage detection in polymer matrix composites.
Influence of Microparticles on Setting Time and Micromorphology of Coal Ash Geopolymers Geopolymers are inorganic materials with a zeolite-like microstructure and mechanical properties similar to those of Ordinary Portland cement materials [1]. However, their properties are highly dependent on the characteristics of the constituents (raw material and activator), as well as on the particulars of the activation process (mixing parameters, curing time and temperature, etc.). In order to explore the influence of partial replacement of coal ash with two types of fine aggregates (glass and sand microparticles) on micromorphology and setting time, four types of geopolymers were developed. The evaluations were performed by means of electron microscopy and the Vicat method. According to this study, replacing coal ash with glass microparticles results in an increase in the initial and final setting times, while replacing coal ash with sand particles results in a significant decrease. Moreover, the microstructural analysis shows different behaviour of the studied microparticles during activation. The surface of the glass microparticles reacts in the alkaline environment, while the sand particles do not. Therefore, the increase of initial and final setting time can be correlated with the dissolution of Si-O from the glass particles during geopolymerisation. Introduction Nowadays, the main challenge in developing new materials is strongly related to the environmental impact of raw material exploitation and the sustainability of the production technology. A promising class of oxidic materials, ideal for substituting Ordinary Portland Cement (OPC) materials and eliminating the negative effects they produce, is geopolymers [1,2]. A geopolymer is an aluminium- and silicon-rich material with a porous structure and characteristics similar to OPC materials [3]. However, since any powder rich in aluminium and silicon oxides can be used as a raw material source for geopolymers, multiple types of wastes have been identified as compatible worldwide [4]. Furthermore, the lack of standardization and the wide range of parameters that can vary in each geopolymer have led to multiple unknowns which are hard or impossible to predict or simulate. In order to develop specific standards for these materials, different scenarios (compositions) must be designed and analysed. According to previous studies [5,6], multiple factors influence the properties of geopolymers. As can be seen in Table 1, these are related to the raw material, the activator solution, and the mixing and curing stages [5,7]. Table 1 lists these factors as follows: raw material (particle size distribution, humidity, chemical composition); activator solution (solid to liquid ratio, sodium silicate to sodium hydroxide ratio, sodium hydroxide molar concentration); mixing stage (mixing time, mixing speed); curing stage (curing time, curing temperature). In the case of geopolymers with reinforcing elements, a few more factors must be taken into consideration, such as particle pozzolanicity or particle adhesion to the matrix [6]. According to Lateef N A et al. [8], the particle size distribution has a direct effect on the compressive strength, structure and permeable void ratio of geopolymers. For example, when a fly ash source with an average particle size of 42.3 µm was used, a compressive strength of 22 MPa was obtained for 28-day-old samples, while after the fly ash was milled and sifted (reducing the average particle size to 4.78 µm) the compressive strength increased to 67.3 MPa.
For the same type of particles, the absorption decreased by up to 21% and the microcracks resulting from the curing process were fewer in number. The quality of the final structure and its properties result mainly from the uniformity and strength of the tetrahedral structure formed by the Si-O-Al system [9]. Any impurity or chemical element (except Si, O and Al) from the raw material, especially the calcium content, can lead to the formation of defects in the structure that significantly influence the final properties of the geopolymers [10]. To evaluate the influence of calcium content on the setting time of geopolymers, P. Chindaprasirt et al. [11] replaced different percentages of ash with Portland cement type I, which contains a high content of calcium hydroxide (Ca(OH)2) and calcium oxide (CaO). According to the study, the increase in calcium content in the composition causes a sharp decrease in setting time regardless of the percentage of replacement. The chemical activator, or activation solution, plays a vital role in initiating the geopolymerization process. In general, a strongly alkaline environment is required to increase the hydrolysis of the aluminosilicate particles contained in the raw material; therefore, its concentration has a pronounced effect on the mechanical properties of the geopolymers [12]. On the other hand, the dissolution of Si4+ and Al3+ species during geopolymer synthesis depends mainly on the NaOH concentration, where the dissolution capacity of silicon and aluminum oxides is controlled by the NaOH concentration and the mixing parameters [13]. Considering that highly concentrated alkaline solutions are used in geopolymer manufacturing, adequate health and safety measures must be put in place during the production process [14][15][16]. Somna et al. [17] studied the compressive strength of fly ash-based geopolymers hardened at ambient temperature by changing the NaOH concentration from 4.5 M to 16.5 M. It was observed that increasing the NaOH concentration from 4.5 to 9.5 M produces a significant increase in the compressive strength of the samples. Increasing the NaOH concentration further, from 9.5 to 14 M, still positively influences the mechanical properties of the final product, but to a lesser extent. The influence of the alkaline activator concentration on the compressive strength is significant because the degree of dissolution of Si4+ and Al3+ directly affects the formation of sialate bonds. Above 16.5 M, a decrease is observed due to the excess of hydroxide groups, which causes the precipitation of the aluminosilicate gel at early ages. Conventional methods of manufacturing geopolymers involve mixing the solid component (the raw materials rich in aluminum and silicon, together with the reinforcing elements, if any) with a liquid component (a strongly alkaline solution). There are several studies on the ideal mixing procedure; however, the most common method involves mixing the solid components, if there are several, as well as the liquid ones, followed by mixing the liquid component with the solid one [18]. Another method is to mix the materials rich in silicon and aluminum with the sodium silicate solution, and after a few minutes the NaOH solution is introduced into the binder while mixing. However, the geopolymers obtained by the second method have lower mechanical strength, except for those based on fly ash [19].
After mixing the components, the resulting geopolymeric paste is poured into moulds, where it hardens at room temperature or at a slightly elevated temperature. At the same time, according to the study conducted by Z. Yahya et al. [20], if, after filling the moulds, the surface in direct contact with the atmosphere is covered with a foil, the evaporation rate of the water in the mixture is reduced, so the geopolymer will have a smaller number of cracks. Up to now, geopolymers have been successfully applied in multiple domains such as civil construction, water filtration, heavy metal encapsulation, heat-resistant shields, etc., owing to their notable properties, including corrosion resistance, compressive and flexural strength, and high-temperature resistance [21][22][23][25][26][27]. However, there are still a few challenges in developing geopolymers with different types of particles in the matrix. Therefore, this study aims to develop and analyse four types of coal ash-based geopolymers which contain one or two types of reinforcing elements. Materials and methods The geopolymers were manufactured by mixing a sodium silicate and sodium hydroxide solution (alkaline activator) with coal ash and reinforcing particles (solid component). Materials 2.1.1. Coal ash. Coal ash is the main waste resulting from coal combustion in power plants; this mineral powder, rich in aluminium and silicon oxides, ends up deposited over vast areas near city power plants. The coal ash used in this study comes from the CET II - Holboca, Iasi, Romania ash dumps, which occupied an area of approximately 50 hectares in 2013. The performance of coal ash in geopolymers is influenced by its characteristics (chemical composition, humidity, particle size distribution, etc.). In order to remove the negative effects produced by large impurities in the coal ash, only particles with a diameter lower than 80 μm have been used. Moreover, the powder was dried at 120 °C until its mass remained constant for 30 minutes, i.e., humidity ≈ 0. According to ASTM C618-92a, the coal ash used in this study belongs to the class F fly ashes because the sum of the three main oxides (silicon, aluminum and iron) is higher than 70% (eq. 1), as can be seen in Table 2. Glass powder. Glass waste does not decompose naturally, producing proven negative effects on the environment following storage in waste dumps [5]. Therefore, the use of glass waste in the manufacture of environmentally friendly materials has become a global concern [4]. Because glass powder can be incorporated into the geopolymer paste, its introduction into the composition of these materials can yield a geopolymer with higher compressive strength while an environmental problem is solved. As can be seen from Table 2, the glass powder, with particles smaller than 10 μm in diameter, shows higher SiO2, CaO and Na2O content, but a lower aluminium oxide concentration. In order to study the behaviour of natural aggregates (sand) compared with waste (glass), two types of geopolymers with different percentages of sand particles have been obtained. As natural aggregate, a class of river sand with particles smaller than 4 mm in diameter has been used. Activator solution. As the alkaline solution for coal ash-based geopolymer activation, a mixture of sodium hydroxide and sodium silicate solutions was used. The NaOH solution was prepared at a molar concentration of 10 M by dissolving high-purity NaOH flakes (98%) in distilled water 24 hours prior to use.
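For orientation, the stated 10 M concentration with 98% purity flakes implies roughly the following mass of NaOH per litre of solution; this is a generic stoichiometric sketch added for illustration, not a figure reported in the study.

```python
# Indicative sketch: mass of 98%-purity NaOH flakes needed per litre of a 10 M solution.
molar_mass_naoh = 39.997   # g/mol
molarity = 10.0            # mol/L, concentration stated in the text
purity = 0.98              # flake purity stated in the text

mass_pure = molarity * molar_mass_naoh   # grams of pure NaOH per litre of solution
mass_flakes = mass_pure / purity         # grams of flakes to weigh out
print(f"~{mass_pure:.0f} g pure NaOH, ~{mass_flakes:.0f} g of 98% flakes per litre")
# Dissolution is strongly exothermic; the flakes are dissolved, the solution is made
# up to the final volume, and it is left to cool (here, 24 h before use).
```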
The sodium silicate solution, with a density of 1.37 g/cm³ and a pH lower than 11.5, was acquired commercially (Scharlad Company, Barcelona). The compositions of the obtained samples are presented in Table 4. The micromorphology analysis was performed on samples cured at 70 °C for 8 hours and aged for 28 days under normal environmental conditions. Methods The chemical composition of the raw materials and the obtained samples was measured with a Bruker S8 Tiger wavelength dispersive X-ray fluorescence instrument (Bruker, Karlsruhe, Germany). The samples' micromorphology was analysed using an FEI QUANTA 450 FEG scanning electron microscope (FEI Company, WA, USA) in environmental SEM mode. The setting time of the geopolymer paste was evaluated by the Vicat test according to ASTM C 191-04a. The test was conducted at room temperature (22 ± 2 °C) with a Vicat apparatus (Figure 1.a). The Vicat method consists of testing the penetration resistance, using a flat-tip needle with a diameter of 1 mm, of a geopolymer sample poured into a frustoconical mould specific to the device used (Figure 1.b). After placing the mould on a glass plate, it is filled with freshly obtained geopolymer paste (immediately after mixing the components) and covered with a glass panel that prevents water evaporation. Every 30 minutes, the penetration resistance is checked by removing the glass panel and releasing the needle using the release screw. The needle is pushed into the sample surface under the action of its own weight, the weight of the plunger and that of the head (300 g). Depending on the depth of penetration, it is determined whether the two points (initial or final) specific to the setting time have been reached. After checking the depth on the scale, the needle is removed from the paste by lifting the rod, and the sample is covered again with the glass panel for the next 30 minutes. Chemical composition analysis As can be seen in Table 5, the chemical composition of the obtained geopolymers shows large differences in the concentrations of the main elements (Si, Na, Fe and Al). Except for oxygen, the element with the highest concentration in all samples is silicon. In the 70CA_30GP sample there is a decrease in the concentration of Fe and Al, but, due to the replacement of 30% of the coal ash with glass powder, the calcium concentration in the sample increases by up to 30%. The chemical composition of the 30CA_70RS sample shows an increase of approximately 17% in the Si concentration, but by replacing 70% of the amount of coal ash with sand particles, the Al and Ca contents decrease by 46% and 3%, respectively. In the case of the 15CA_15GP_70RS sample, the increase of the Si concentration is approximately 17%, while the decrease of the aluminum content reaches 50%. Therefore, by decreasing the coal ash content in the composition of the geopolymers, the aluminum content decreases while the silicon content increases. According to the chemical composition, the 100CA sample has better adhesion properties due to its Si to Al concentration ratio of approximately 2.4, which corresponds to a geopolymer with a linear 2D structure [27,32]. However, by increasing the percentage of reinforcing particles, the Si content increases; therefore, those samples will exhibit lower flexibility but higher hardness and compressive strength. Micromorphology analysis From the microstructural point of view, the samples show a homogeneous structure with a high compactness of the matrix that depends on the type and the percentage of reinforcing particles.
Moreover, all samples show unreacted coal ash particles, but the 100CA sample (Figure 2) shows the highest number; therefore, in some areas the matrix continuity is interrupted. Additionally, the SEM micrograph at 500X also highlights the cracks formed during the curing process due to the fast evaporation of water. However, due to the self-healing capacity of geopolymers, which is related to the continued geopolymerisation reaction between unreacted particles and the activation solution from the gel pores, some cracks are repaired over time. At the same magnification, the 70CA_30GP sample (Figure 3) shows a homogeneous structure, with a high degree of ash dissolution. The micromorphology of the geopolymers with river sand (Figure 4) and with river sand and glass powder (Figure 5) exhibits a marked increase in the number of large pores, while the number of unreacted particles and small pores decreases significantly. The effect is produced by the large particles, which block the escape of air bubbles from the geopolymeric paste during curing. In environmental SEM mode, the interface between the reinforcing particles and the matrix can be studied at high magnification (10kX). As can be seen from Figure 6, the surface of the glass particles reacted in the alkaline medium (reacted zone); therefore, the matrix adheres strongly to their surface. In the case of the samples with river sand as reinforcing elements (Figure 7, 30CA_70RS sample), the matrix does not adhere to the surface of the particles; thus, a clear delimitation can be observed between the matrix and the sand particles. Setting time The interpretation of the setting time for these materials is similar to that of the equilibrium diagram specific to alloy systems, where the liquidus line provides information about the temperature at which the first crystallization nuclei appear, and the solidus line marks the temperature at which the transition to the solid state occurs for the entire volume of material [24]. In the case of geopolymers, the first point (position 1 of the needle tip) refers to the initial setting time, which corresponds to the loss of plasticity of the mixture after a certain period of time has passed since mixing the components. The second point (position 2 of the needle tip) refers to the final setting time, which is characterized by the total time elapsed from the mixing of the components to the actual hardening of the sample (Figure 8). In other words, the initial setting time is determined by the time elapsed from the pouring of the geopolymeric paste into the mould until the needle penetrates to a depth of only 5-7 mm into the geopolymer paste (Figure 1.b). The final setting time corresponds to the moment when the needle no longer penetrates the surface of the sample. The coal ash-based geopolymers analysed in this study show an initial setting time between 4.78 and 6.88 hours and a final setting time between 22.95 and 25.00 hours at ambient temperature (Table 6). It is essential that the initial setting time be long enough to cover all the cast-in-place operations. However, according to the obtained results, the initial and final setting times of the samples with glass powder are higher than those of the 100CA and 30CA_70RS samples. In these materials, calcium silicate hydrate (C-S-H), calcium aluminate silicate hydrate (C-A-S-H) and sodium aluminate silicate hydrate (N-A-S-H) are formed during geopolymerisation.
As reported in previous studies, compounds that include calcium have a lower setting time because the dissolution rate of Si4+ and Al3+ species is lower compared to the dissolution rate of Ca2+. Therefore, the setting time of the samples with glass powder should be lower; this inversion can be explained by the dissolution of a high content of Si4+, as a consequence of the reaction of the glass particles in the alkaline environment, which results in higher initial and final setting times. Conclusions Four types of geopolymers have been obtained by activating coal ash with a solution of sodium silicate and sodium hydroxide. In order to evaluate the influence of reinforcing elements on setting time and micromorphology, two types of particles have been introduced into the geopolymer matrix. The microstructural analysis confirms a different behaviour, during geopolymerisation, between the glass powder and the river sand particles. According to this study, the glass particles react partially in the alkaline environment, and the matrix adheres strongly to them. However, the river sand particles did not react under the described conditions, showing a clear delimitation between their surface and the matrix of the samples. Due to the fineness of the glass powder, the Si4+ at its surface is dissolved in high concentration by the alkaline solution. Therefore, the samples with glass particles (70CA_30GP and 15CA_15GP_70RS) show higher initial and final setting times, even with a higher Ca content.
Food Security: 3D Dynamic Display and Early Warning Platform Construction and Security Strategy Since it affects a nation’s economy and people’s wellbeing, food security is a crucial national security requirement. In order to realize multi-angle grain data presentation and analysis and achieve the goal of deep mining, we propose a 3D dynamic visualization analysis method of multidimensional agricultural spatial–temporal data based on the self-organizing map. This method realizes the multi-angle display and analysis of grain data and achieves the purpose of deep mining. With the outbreak of COVID-19, the global food security situation is not optimistic, so it is necessary to use the food security early warning system to solve the food security issue. Machine learning has emerged widely in recent years and has been applied in various fields. Therefore, it is an excellent way to solve food security to apply the model in machine learning to construct a food security early warning system. Afterward, a food security early warning platform is developed with a support vector regression (SVR) model to ensure food security. Finally, we analyze China’s medium and long-term food security policy in line with modernization objectives. The experimental results show that the food security early warning platform based on the SVR model from 2007 to 2016 is effective compared with the actual situation every year. Through analyses, we should improve the stability, reliability, and sustainability of food supply, firmly hold the food security initiative, and construct a national food security guarantee system matching the goal of modernization. Introduction Food is a tactical issue, while food security is a strategic one. Under the influence of unilateralism, trade conservatism, and anti-globalization trend, the average international food trade is facing challenges [1,2]. Domestic grain production is affected by land resources, agricultural science, technology, and the grain market and has also encountered a bottleneck period of development [3,4]. When uncertainties and destabilizing factors rise, we must maintain strategic focus and ensure good food production and national food security [5][6][7][8]. Food security refers to a fundamental human right to life, namely, that everyone should have access to enough food for future survival and health. Specifically, according to the Food and Agriculture Organization (FAO), food security refers to all people, at all times, having physical and economic access to sufficient, safe, and nutritious food that meets their dietary needs and food preferences for an active and healthy life [9][10][11][12]. At the national level, food security means that a country's food production, supply, and sale are in a state of no danger and threat. Food security bears on political stability, national economy, and people's livelihood and is an essential foundation for overall national security. According to the FAO, nearly one-third of all food produced for human consumption (1300 Mt of food) is lost or wasted every year [13]. One of the main challenges towards the enhancement of food security is the reduction of food waste. As a result of COVID-19, the lockdown in Italy has stimulated snack consumption, particularly potato chips. 
Therefore, Amicarelli et al. used material flow cost accounting to measure the economic costs of food loss and waste. In this paper, we propose a multi-view collaborative 3D visual analysis method based on SOM. From the visualization perspective, this method reduces the dimensionality of the data, integrates a variety of visual analysis tools, and expresses the data visually after dimension reduction, addressing the inability of traditional visual analysis tools to visualize high-dimensional, multi-attribute spatial-temporal data. Moreover, the linkage between the various expression tools enables real-time, multi-angle visual expression and data analysis and enhances analysts' ability to mine hidden information, which is of great significance for the 3D dynamic display of food security. Under the influence of the international environment, China's food security situation has also been negatively affected. However, China attaches great importance to this aspect of work and makes it an important national strategy. The deterioration of the international food security situation means that China cannot remain immune [33,34]. In addition, as a major food country, China attaches great importance to food security. Therefore, research on food security, especially research that uses past data to observe current and future warning states, is developing rapidly. Under the threat to global food security, it is important to guarantee China's food security and thereby contribute to global food security [35,36]. Research on China's early warning system can make judgments and predictions about the state of domestic food security. Under various uncertainties, it is not only of theoretical significance for guaranteeing China's food security but also of practical significance for guiding real economic activities. The contributions of this paper are summarized as follows. (i) SOM-based collaborative multi-view 3D dynamic visualization of food security is presented. (ii) The SVR-based food security early warning platform is constructed. (iii) The medium and long-term food security strategy is discussed. The rest of the paper is structured as follows. In Section 2, we study the materials and methods of the construction of food security early warning platforms. Experimental results are reported in Section 3. Section 4 gives the discussion on medium and long-term food security strategy, and Section 5 concludes this paper. Materials and Methods This section presents a SOM-based collaborative multi-view 3D dynamic visualization of food security. This method uses SOM to reduce the dimension of multidimensional grain data, combines it with a parallel coordinate system, a space-time cube, and other visualization components, realizes multi-angle display and analysis of grain data, and achieves the purpose of deep mining. In addition, we construct the SVR-based food security early warning platform for food data. The overall framework is shown in Figure 1. Theoretical Background As a classical visualization method for high-dimensional data in the two-dimensional plane, the parallel coordinate plot is a geometric visualization technique [37]. Parallel coordinates are able to visually express the relationships in data through projective geometric interpretation and duality, without using vectors or other visual icons, and are easy to understand.
The disadvantage is that the increase in data volume and polyline density leads to many overlapping lines, which are difficult to identify [38,39]. In the SOM-based collaborative multi-view 3D dynamic visualization of food security, the dimension of the original grain data is first reduced and the result is then visualized with parallel coordinates. Using parallel coordinates in this way to further express the links between the multidimensional grain data better avoids the drawback of a large number of overlapping lines brought on by increased data volume and polyline density. According to the clustering results obtained from SOM dimension reduction, typical data in each class are taken as input variables. Drawing on parallel coordinate visualization techniques such as data abstraction, coordinate axis exchange, and dimension control, parallel axes are set according to the data attribute dimensions to realize the visual expression of the spatial-temporal data, and corresponding colors are assigned to each class for convenient observation and analysis. SOM-based collaborative multi-view 3D dynamic visualization of food security provides data support for the enhanced food security warning platform described below. Sood and Singh discussed how to reduce food waste by using computer vision and machine learning methods [40]. For sub-Saharan farmers who own very little farmland, Khalif and Nur discussed the role of African farmers in reducing food insecurity [41]. In the production of food, pesticides are commonly utilized. Brazil's shortcomings in monitoring the use of pesticides have been examined by Gerage et al.; in order to increase food security, sustainable agriculture must use fewer pesticides [42]. Farmland and water resources are the main factors that influence food production in India. Through their analyses, Kumar et al. found that the gap between agricultural water demand and water availability is at the core of problems with both food security and water management. As a result, they proposed appropriate solutions to the water shortage [43]. A space-time cube is mainly used to express the space-time path [44,45]. The space-time cube model aggregates the sample points into a spatial-temporal data structure using binned time series. By creating a space-time cube, spatial-temporal data can be visualized and analyzed in the form of time series analysis and integrated spatial and temporal pattern analysis, as shown in Figure 2. In Figure 2, the x-axis and y-axis represent the spatial position, while the z-axis represents the time. The bottom layer is the start time, and the top layer is the latest time. Each cube holds the attribute value corresponding to that time, and the values can be distinguished by setting different colors. A space-time cube is thus formed by multiple time planes to express the change of spatial-temporal data. Its main advantage is that the spatial-temporal data can be expressed in a three-dimensional cube, highlighting the changes in geographical phenomena over time.
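To make the binning step concrete, the sketch below aggregates point observations (x, y, t, value) into a regular space-time cube with numpy. The grid sizes, field names, and random data are illustrative assumptions, not parameters or data taken from the paper.

```python
import numpy as np

def build_space_time_cube(x, y, t, values, nx=20, ny=20, nt=10):
    """Aggregate point observations into a (nt, ny, nx) cube of mean values.

    x, y, t and values are equal-length 1-D arrays; the grid sizes are
    illustrative defaults, not parameters reported in the paper."""
    # Map each observation to a bin index along every axis.
    xi = np.clip(((x - x.min()) / (np.ptp(x) + 1e-12) * nx).astype(int), 0, nx - 1)
    yi = np.clip(((y - y.min()) / (np.ptp(y) + 1e-12) * ny).astype(int), 0, ny - 1)
    ti = np.clip(((t - t.min()) / (np.ptp(t) + 1e-12) * nt).astype(int), 0, nt - 1)

    sums = np.zeros((nt, ny, nx))
    counts = np.zeros((nt, ny, nx))
    np.add.at(sums, (ti, yi, xi), values)   # accumulate attribute values per bin
    np.add.at(counts, (ti, yi, xi), 1)      # count observations per bin
    with np.errstate(invalid="ignore"):
        return sums / counts                # NaN where a bin holds no observation

# Example: 1,000 random observations binned into a 10 x 20 x 20 cube.
rng = np.random.default_rng(0)
cube = build_space_time_cube(rng.random(1000), rng.random(1000),
                             rng.random(1000), rng.random(1000))
print(cube.shape)   # (10, 20, 20)
```

Each time plane of such a cube can then be rendered as one layer of the 3D display, with bin values mapped to colors as described above.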
Similar to parallel coordinates, when the amount of data and the number of attribute dimensions are large, problems arise such as plane overlap, path clutter, and difficulty in representing multiple attributes [46]. After the data are clustered and reduced by SOM dimension reduction technology, the number of data attribute dimensions is smaller. Color can be used to represent the classification of the data. With the help of geographical display tools such as maps, the expression of the spatial-temporal data is completed, and the spatial-temporal attribute relationships of the data after SOM dimension reduction are displayed. Thus, the visualization of high-dimensional spatial-temporal data is realized. The SOM-based 3D dynamic display platform for food security is divided into three modules: the food data layer, the mining layer, and the 3D visual interface layer for food security. The food data layer stores spatial-temporal data and supports each module in accessing food data. The mining layer classifies food data based on SOM reduction and mining of the high-dimensional food data, providing a data basis for 3D visualization of food security [47]. The 3D visualization interface layer of food security is the platform's visualization display layer, which is used for the 3D visual expression of food data. Visualization mainly includes two kinds of 3D representation, before and after food data mining. The former is mainly used to retrieve food data and acquire data sources of interest through food data retrieval, whereas the latter carries out the 3D visual expression of food data based on SOM dimensionality reduction and clustering of the data sources selected by the former. The main expression tools include the U-matrix, parallel coordinates, the space-time cube, etc.
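A minimal sketch of the SOM clustering step and a cluster-colored parallel coordinates view is given below. It uses a toy numpy SOM and matplotlib; the grid size, learning schedule, and random "grain" records are illustrative assumptions rather than settings reported in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a toy rectangular SOM and return its weight grid."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.random((gx, gy, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        lr = lr0 * np.exp(-step / n_steps)          # decaying learning rate
        sigma = sigma0 * np.exp(-step / n_steps)    # shrinking neighbourhood radius
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (gx, gy))
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighbourhood weights
        weights += lr * h * (x - weights)
    return weights

def bmu_labels(data, weights):
    """Assign each sample the flat index of its best-matching unit."""
    flat = weights.reshape(-1, weights.shape[-1])
    return np.argmin(((data[:, None, :] - flat[None]) ** 2).sum(-1), axis=1)

# Toy multidimensional "grain" records: rows are samples, columns are indicators.
rng = np.random.default_rng(1)
data = rng.random((200, 5))
weights = train_som(data)
labels = bmu_labels(data, weights)

# Cluster-colored parallel coordinates: one polyline per sample, colored by its BMU.
for row, lab in zip(data, labels):
    plt.plot(range(data.shape[1]), row, color=plt.cm.tab20(int(lab) % 20), alpha=0.4)
plt.xticks(range(data.shape[1]), [f"indicator {i + 1}" for i in range(data.shape[1])])
plt.title("Parallel coordinates colored by SOM cluster (illustrative)")
plt.show()
```

The same cluster labels can also drive the coloring of the space-time cube and map views, which is what ties the multiple views of the platform together.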
Construction of Food Security Early Warning Platform The basis of constructing the early warning platform is building a relatively complete and feasible indicator system. If the indicators are selected incorrectly, the system is not sound. Thus, appropriate indicators that can be used for prediction must be selected. The economic early warning platform plays an essential role in promoting the development of the national economy [48]. As an important part of the agricultural early warning system, the food security early warning platform plays an essential and positive role in ensuring food security. The purpose of studying the past is to guide future development. Therefore, the food security system must give early warning work an important position. Furthermore, a complete early warning system is the main means of maintaining food security in the future [49][50][51]. Specifically, constructing a food security early warning platform makes it possible to collect all kinds of data directly or indirectly related to food security and to provide data sources for security status identification [52,53]. Through the data collection system, the platform can collect all kinds of data on food security. The data collection mainly depends on the information network of each region, distributed across the main producing areas, the main marketing areas, and the areas of both production and marketing. Food security data mainly include production, consumption, circulation, grain output, climate and disaster conditions, and import and export volumes. Through such food data collection, a database can be built to prepare for identifying past and current food security situations and to provide historical material for prediction. The early warning platform of food security is an important achievement of national informatization construction and provides a basis for scientific decision-making by the food authorities. The traditional food security early warning platform is mainly based on prosperity-index forecasts grounded in economic cycle theory, or on statistical single- or multi-indicator early warning models. The amount of data involved is limited and primarily aimed at a particular aspect of early warning, such as a unilateral yield forecast.
The food security early warning platform proposed in this paper will be based on the original model, integrating various models to show the food security situation better. The decision system in the food security early warning platform includes the choice of model and the formulation of the decision scheme. The early warning platform is based on past data, that is, historical data, which tends to be relatively comprehensive, showing the situation of food security in the past and identifying the root causes of long-standing food insecurity problems [54,55]. The food security early warning platform can realize the observation of the development trend of long-term food security in the future. The early warning is prediction-based, which cannot only observe the prediction results but also process and describe the future short-term and long-term food security situation according to historical and realistic data. Through the investigation and analysis of the system operation status, different degrees of alarm can be issued to the possible problems so that the decision-makers can observe the problem, formulate the excluding warning plan in time, and reduce the loss caused by food insecurity. Food security early warning platforms can trigger the alarm and make various decision-making plans according to experience to facilitate relevant departments to select the best choice [56]. At the same time, it can dynamically monitor the implementation of relevant programs and assess the programs at any time. When the current programs are not suitable for the current situation, the platform will immediately give feedback to decision-makers and report the problems of the programs for better correction by decision-makers. Different regions have different food conditions. Through various quantified information, early warning models in the early warning platform are constructed to achieve more accurate forecasts and more practical food security early warning. The food security early warning platform is based on history and is more focused on the critical pillar of the future to ensure food security. It is the only way to modernize the future food security powers. Only a high-level warning platform can promote the progress of China becoming a great nation of food security and thus contribute to world food security. The selection of various indicators in the food security early warning platform must follow the principles of representativeness, comprehensiveness, and operability. Representativeness means that the indicators selected must be from accurate and reliable sources, closely related to the warning to be carried out. The comprehensiveness means that the selected indicators can represent the whole early warning platform, and all aspects are representative indicators. They should not be too one-sided, and the characteristics and operating rules of the system should be inferred through the selected indicators. The operability means that the selected indicators must be available, followed by statistical analysis. The data must be complete and processed by various software, rather than desultorily. Warning situation indicators refer to conditions that tend to trigger economic alarm. The determination of warning situation indicators is the most basic and vital prerequisite for early warning. 
Data Collection To comprehensively reflect the food security situation in China, the warning situation indicators are divided into three subsystems, namely production, consumption, and circulation and reserve, with a total of 10 indicators. The data cover the 10 years from 2007 to 2016 and are all taken from the China Statistical Yearbook [57]. Since the food security data from 2007 to 2016 are representative and accurate, and the data span is considerable, they may cover all levels of warning intensity. Therefore, we select the food security data from 2007 to 2016 for warning intensity analysis, and the early warning indicator system data (2007-2016) are shown in Table 1. The production subsystem includes five indicators: total output, grain planting area, the proportion of disaster area to affected area, fertilizer application amount, and irrigation area of cultivated land. The consumption subsystem includes two indicators: the proportion of grain export volume in the export volume of major agricultural products and the proportion of grain import volume in the import volume of major agricultural products. The circulation and reserve subsystem includes the grain self-sufficiency rate, the grain reserve rate, and grain foreign trade dependence. Therefore, in this paper, x1, x2, x3, x4, x5, x6, x7, x8, x9, and x10 are selected as the ten warning indicators, representing total output, grain planting area, proportion of disaster area to affected area, fertilizer application amount, irrigation area of cultivated land, proportion of grain export volume in the export volume of major agricultural products, proportion of grain import volume in the import volume of major agricultural products, grain self-sufficiency rate, grain reserve rate, and grain foreign trade dependence, respectively. The analysis of warning situation indicators is an integral part of early warning. Without them, the hazard degree of a warning cannot be predicted. Therefore, the selection of warning situation indicators is the key to determining the outbreak of a warning situation. In order to ensure the scientific soundness of the warning situation indicators, we select five suitable warning situation indicators using principal component analysis [58] (a minimal sketch of this reduction step is given below), namely, total output, grain planting area, irrigation area of cultivated land, the proportion of grain exports in primary agricultural product exports, and the grain reserve rate, to replace the original ten indicators; these are more straightforward and more representative. After selecting the independent variables, we need to select a comprehensive dependent variable that can reflect food security [59,60]. The warning boundary refers to the threshold values that divide the warning degrees. Generally, these are upper and lower limits set by scholars through qualitative research and comprehensive data [61]; they are also called threshold values. The division of warning boundaries is central to determining warning intensity; the different thresholds determine the various warning intensity levels. Warning intensity is used to judge the severity of the warning situation when a warning breaks out. The setting of warning intensity should be based on the constantly changing state of the warning situation indicators. More importantly, different standards are set according to the threshold values for different periods, to analyze the current situation within a specific warning boundary and then predict the warning situation. The warning intensity is determined as four levels: no warning, light warning, medium warning, and heavy warning, with the numbers 0, −1, −2, and −3 representing their degree, respectively.
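The indicator-reduction step mentioned above can be sketched with scikit-learn's PCA. Note one simplification: the paper keeps five of the original indicators guided by PCA, whereas the sketch simply retains five principal components. The array shape mirrors the ten yearly indicators, but the values are random placeholders, not the China Statistical Yearbook data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder matrix: 10 years x 10 warning indicators (x1..x10); real values
# would come from the yearbook series described in the text.
rng = np.random.default_rng(42)
X = rng.random((10, 10))

# Standardize the indicators, then keep five components, matching the number of
# warning situation indicators retained in the paper.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X_std)

print(X_reduced.shape)                         # (10, 5)
print(pca.explained_variance_ratio_.round(3))  # variance share per component
```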
According to the operation status of the grain market, the relevant warning intensity degree is defined, as shown in Table 2. Next, the upper and lower warning situation thresholds are determined to divide the actual range of each warning intensity level. The determination of warning intensity is generally set artificially. This paper selects reasonable warning boundaries to judge China's food security according to international standards, the studies of many scholars in the past, and the current economic situation in China, and the setting of warning boundaries is strict [52,62]. As an essential branch of support vector machine (SVM), support vector regression (SVR) is used to solve regression problems and is often used to predict the weather, stocks, and other aspects [63,64]. When SVM is used to process data, it is often assumed that the training samples are linearly separable, which can be solved by the above method. However, if the training samples cannot find a hyperplane that can be divided, the kernel function can be introduced to solve the problem. SVR can solve the problem of linear inseparability through the intervention of the kernel function. Regarding how to select kernel functions, certain kernel functions can be selected according to the experience and research of experts and scholars, or cross-validation can be adopted to test the test results of different kernel functions. If the results of the above methods are not ideal, the method of mixed kernel function can also be adapted to combine different kernel functions and establish new kernel functions. However, the complexity of calculation increases, so this paper does not adopt the mixed form and only selects different kernel functions for model analysis. Regarding total output, the standard to determine whether the total output is within a reasonable range is calculated according to the population growth. If the grain output can meet the grain demand of everyone, the ideal grain output can be obtained by multiplying the per capita grain output with the total national population, and then the security of the total grain output can be judged. The population of the country is from China Statistical Yearbook 2021. Internationally, it is generally believed that the per capita food possession reaches 400 kg, that is, it reaches the basic level. However, the food conditions in China are improving in 2020. Therefore, 400 kg per capita food possession is set as the lower limit of no warning. China used to take the per capita grain possession of 300 kg and 200 kg as the evaluation standard of having only basic needs, as early as in the 1990s, the per capita grain possession of China exceeded 300 kg. From this point of view, it is more reasonable to set 375 kg and 350 kg of grain per capita as the lower limit of light warning and the upper limit of heavy warning. Regarding grain planting area, the criterion is to calculate the standard proportion of the planting area of grain to the total planting area of crops and then multiply this value with the total planting area of crops. The total planting area of crops is from China Statistical Yearbook 2021, and the ratio is generally between 0.6 and 0.7. In order to refine this indicator and more dynamically reflect the state of food security, the lower limit of the proportion of no warning is set as 0.68, the lower limit of the proportion of light warning is set as 0.67, and the lower limit of the proportion of medium warning is set as 0.66. 
Regarding the irrigation area of cultivated land, the criterion is generally the product of the standard proportion of irrigation area to cultivated land area and cultivated land area. Considering the current grain irrigation situation, the preliminary judgment of the proportion is generally between 0.04 and 0.05. Within a reasonable range, the higher the proportion of irrigated area, the better the reduction of grain production caused by drought and other disasters. In order to better reflect the precise influence of irrigation on grain, the proportion of 0.048, 0.045, and 0.042 close to 0.05 and 0.04 are used as the proportion threshold of each warning intensity, which is multiplied by the cultivated land area to obtain the standard value of the warning boundary under the four warning intensities. Regarding grain export volume to major agricultural products, the criterion is based on the proportion of grain foreign trade dependence. It is generally believed that the higher the proportion, the lower the food security. Therefore, the range of no warning is (−∞, 0.7], light warning is (0.7, 0.8], medium warning is (0.8, 0.9), and heavy warning is (0.9, +∞). Regarding the grain reserve rate, according to the international standard, the grain reserve rate between 0.17 and 0.18 reaches the level of food security. Only when the grain reserve rate is lower than 0.17, it is considered to be lower than the safety line. When it is below 0.14, it is a food security crisis. However, this international standard is not applicable in China. In this paper, the grain reserve rate will be combined with China's grain import and export and inventory status, and the threshold will be set as the following standard. The range of no warning is (0.8, +∞), the range of light warning is (0.6, 0.8), the range of medium warning is (0.4, 0.6), and the range of heavy warning is (−∞, 0.4). After determining the warning boundary of different warning intensities, the criterion of warning intensity is formed, and then it is necessary to carry on the next detailed analysis of the warning intensity. In the early warning system of this paper, a preset provision is made, that is, when the warning situation is no warning and light warning, no warning will be triggered. When medium warning and heavy warning occur, the warning is triggered. It is important to identify all the warning indicators and determine the warning situation indicators that cause the warning. It is very important to find the warning source quickly. Only by finding out the source can we clear the dangerous situation in time and formulate the relevant detailed strategy, so as to truly alleviate the crisis. Early warning systems formulate the rules of the warning trigger mechanism, which is equivalent to the function of a switch, according to whether to pass the switch or to implement different strategic arrangements. Through the above calculation, the upper and lower thresholds of each warning intensity of food security can be preliminarily determined. By substituting the data of each indicator in each year into the threshold, the security degree of each indicator in each year can be determined, as shown in Table 3. As seen in Table 3, from 2008 to 2014, the increase in planting area and other factors led to the rapid growth of total output, which made the two warning intensities quickly turn into no alarm state. At the same time, the irrigated area showed different degrees of light warning and medium warning states, mainly from 2009 to 2010. 
The warning intensity of grain export volume compared to the export volume of primary agricultural products showed a declining trend. After 2009, a light warning state appeared that did not need a warning, while the security degree of grain reserve rate mainly showed a medium warning state. In 2013, the warning intensity showed a decreasing trend and rapidly entered a light warning state and no warning state. The improvement of the security degree of these two indicators mainly depends on the change in grain collection and storage policy. In 2015, there was no warning state; only the planting area was lightly warned, while all other indicators showed no warning signs. In 2016, there was no warning, in which the planting area was a light warning, and the proportion of grain export volume in the export volume of major agricultural products was a medium warning. Since other indicators were in no warning state, which could make up for the warning situation in export, it was consistent with the fact that there was no warning state this year. The SVR-based food security early warning platform divides the samples into training and test sets. The training set is used to simulate data. In contrast, the test set is used to test whether the model presented by the training set has a solid explanatory ability to judge whether the model is adequate. The training set and test set proportion is 80% and 20%, respectively. The model has ten samples and five features. Therefore, ten samples of the training set and three samples of the test set were determined. The training samples were input into the SVR model to adjust the parameters and determine the optimal model. Specific parameter settings are shown in Table 4. Results In SVR model, y_true represents the true value, and y_pre_kernel represents the predicted value under different kernel functions. When epsilon = 0.1 and other parameters remain unchanged by default, the predicted results are shown in Table 5. As seen from Table 5, in the case of epsilon = 0.1, when the kernel function is rbf, the predicted value in each year is 1.6238, which differs significantly from the true value without any fluctuation. In the case of linear and poly, the values predicted by the linear kernel function are closer to the true values than those predicted by the poly kernel function. Generally, when epsilon is enlarged, the model's prediction accuracy will be improved. When epsilon is enlarged to a certain extent, no matter how much epsilon is enlarged, the prediction result will not be affected. Therefore, selecting an appropriate epsilon is necessary, defining epsilon as 1. The predicted results are shown in Table 6. As seen from Table 6, when epsilon is increased, not only can the prediction result not be more accurate, but also the predicted value of the three kernel functions deviates further from the true value. Moreover, the predicted value of the three kernel functions each year is 1.6961, and the true value of the three years is more than 2. Obviously, this model is not applicable, and epsilon cannot be defined as 1. Therefore, epsilon is defined as 0.2 in the parameter adjustment, and the predicted results are shown in Table 7. It can be seen from Table 7 that the predicted value of the linear kernel function is very close to the true value, while the predicted value of the rbf kernel function is quite different from the true value. The predicted value of the poly kernel function is also close to the true value but not as good as the linear fitting. 
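The kernel and epsilon comparison just described can be sketched with scikit-learn's SVR, as below. The placeholder feature matrix, the three held-out years, and the threshold values inside warning_level are illustrative assumptions; only the epsilon = 0.2 setting, the kernel choices, and the 0/−1/−2/−3 intensity coding follow the text.

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder data: yearly rows of the five retained warning situation indicators
# and a comprehensive dependent variable; real values come from the paper's tables.
rng = np.random.default_rng(7)
X = rng.random((10, 5))
y = rng.random(10) * 3

# Three held-out years for testing, mirroring the 2014-2016 predictions in the text.
X_train, X_test = X[:7], X[7:]
y_train, y_test = y[:7], y[7:]

# Compare the three kernels discussed above at epsilon = 0.2.
for kernel in ("rbf", "linear", "poly"):
    pred = SVR(kernel=kernel, epsilon=0.2).fit(X_train, y_train).predict(X_test)
    print(kernel, np.round(pred, 4), "true:", np.round(y_test, 4))

def warning_level(value, bounds=(1.0, 2.0, 3.0)):
    """Map a predicted value to the 0/-1/-2/-3 warning intensity coding.

    The bounds are placeholders; the paper derives its warning boundaries from
    international standards and China's actual grain market situation."""
    lower, middle, upper = bounds
    if value >= upper:
        return 0    # no warning
    if value >= middle:
        return -1   # light warning
    if value >= lower:
        return -2   # medium warning
    return -3       # heavy warning

best = SVR(kernel="linear", epsilon=0.2).fit(X_train, y_train)
print([warning_level(v) for v in best.predict(X_test)])
```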
Therefore, the linear kernel is the most suitable of the three. Since the linear kernel performs best on both the training and test sets and does not require tuning of additional parameters, the current parameters are taken as optimal. The predictions of the linear-kernel model are then mapped onto the warning boundaries, yielding the warning intensity of the average selling price of major grains per kilogram and the predicted warning intensity for 2014 to 2016, as shown in Table 8. There is almost no difference between the warning intensity of the predicted values and the actual warning intensity, which indicates that the SVR model is effective for food security early warning: its output is consistent with the actual state of food security operation, and the SVR-based early warning platform is therefore well founded. Discussion By constructing a 3D dynamic display and early warning platform for food security, we can intuitively grasp the real-time food situation, which is of great significance in promoting the development of agriculture [65,66]. Ensuring food security is the foundation of economic development and social stability. Currently, COVID-19, extreme weather, and other uncertain events have triggered a chain reaction along the global food industry and supply chains, and the overall situation of the international food market is not optimistic, placing higher demands on China to ensure food security and to participate effectively in global food governance. Next, we will study China's medium- and long-term food security strategy. In the face of the current international food situation and new developments, ensuring domestic food security requires attaching great importance to, and resolving, the short-term impact of the international market. First, strengthen early warning and monitoring of domestic and international grain markets, effectively guide and manage expectations, and keep information, transportation, and logistics unimpeded in domestic grain markets and circulation. Second, stabilize grain yields through cost-reduction and supplementary measures, release part of the potash and other fertilizer reserves for disaster relief ahead of schedule, and curb the rapid rise in domestic fertilizer prices. In response to the rapidly rising prices of agricultural supplies, one-time subsidies were provided to grain-growing farmers, and the establishment of a subsidy system linked to actual grain production was promoted in major rice- and wheat-producing areas. Third, improve the domestic grain emergency supply system: depending on the scale of emergency grain reserves, meet the grain consumption needs of large and medium-sized cities for 10 to 15 days, and speed up the construction of emergency grain processing enterprises, emergency supply outlets, emergency storage and transportation enterprises, and regional distribution centers. As a bulk agricultural product, grain's market competitiveness depends on its price. In recent years, China's minimum purchase price policy for rice and wheat has played an essential role in maintaining grain production capacity and stabilizing the incomes of grain farmers. However, the grain price mechanism has not been fundamentally adjusted, resulting in an insufficient supply of high-quality grain. 
To this end, we should continue to adhere to the direction of grain market reform, and the minimum purchase price policy of grain should not be changed in the short term as long as it conforms to the WTO micro allowance rules after limited purchase and storage. In order to reduce the excess ration stocks, we should strengthen the market function to regulate the output, improve the social tolerance of reasonable fluctuation of ration prices, and expand the reasonable fluctuation range of ration prices. In the long run, we will adhere to the principle of market pricing and separation of prices from subsidies to improve the mechanism for setting grain prices. Currently, grain production in the water-rich south of China is decreasing while that in the water-deficient north is increasing. The risk of mismatch of water, soil, light, and heat resources in food production is becoming increasingly prominent. In terms of the proportion of grain output in the three-year average from 2006 to 2008 and 2016 to 2018, the share of the seven central grain-producing provinces in northern China increased from 60.3 percent to 63.9 percent, while that of the six major grain-producing provinces in southern China decreased from 39.7 percent to 37.1 percent, widening the gap by 3.6 percentage points over the past decade. By 2035, the proportion of major grain-producing areas in the north will approach 70 percent, and that in the south will approach 30 percent. At the same time, as the population continues to gather in critical regions and cities, especially in recent years, the trend of population migration from north to south is noticeable, and food consumption and security risks are also concentrated in these regions. In a state of emergency and under national logistics constraints, food supplies may be strained in some regions if food is transported externally. Therefore, we must attach great importance to the responsibility of food security in urban agglomerations and metropolitan areas and take adequate measures to deal with it. In the coming period, China's agricultural production mode will be changed entirely, the elderly farmers will gradually withdraw from food production, the new agricultural operation subject will gradually grow, and the main body of grain-growing will experience a replacement process. However, the overall progress of the rural land system reform in China is slow, the grain price mechanism has not been fundamentally straightened out, the income of grain growing is not high, resulting in moderate scale operation, and scientific and technological grain growing is restricted. The cultivation of new business subjects is not fast. Based on this, considering China's primary national conditions and agricultural conditions, we should speed up the improvement of the agricultural production service system, strengthen the construction of interest linkage mechanism, introduce a large number of small farmers into the track of modern agricultural development through effective forms such as land trusteeship, promote moderate scale operation of agriculture, and promote the joint development of family operation and cooperative operation, enterprise operation and collective operation based on grain subject dominated by family operation. 
At the same time, we will accelerate the reform of the rural land and collective property rights systems, strengthen the application of modern information technology, comprehensively upgrade equipment for whole-process mechanized production, and promote mechanized grain growth. Conclusions Ensuring critical agricultural products, especially food security, is the foundation of national economic and social stability and development. First, this paper combines the clustering dimension reduction algorithm with several other visualization methods to realize the 3D visualization of food security with multi-window collaboration, overcomes the problems of dimension and sample size limitations existing in a single visualization method, and dramatically improves the efficiency of mining, provides new ideas for analysis and mining of multi-dimensional spatial-temporal data, and can provide proper technical support for data mining and analysis of mass agriculture. It benefits the development and promotion of fine agriculture and has specific economic and social benefits. However, different clustering methods have different abilities to fit the topological characteristics of datasets, leading to differences in clustering accuracy. Different visualization methods have different emphases on data visualization expression. In this regard, for different agricultural datasets, finding a suitable clustering dimensionality reduction algorithm, determining its topological distribution, judging its clustering accuracy, and choosing appropriate visual tools to display its spatial-temporal relationship is worth further research. Then, we expound on how to construct the food security early warning platform. Among them, the indicator to select the right is a top priority. We select total output, grain planting area, irrigation area of cultivated land, the proportion of grain exports to major agricultural product exports, and grain reserve rate to determine four warnings: no warning, light warning, medium warning, and heavy warning. The relevant scope is determined through careful division of warning boundaries, trigger rules are formulated, and alarm measures are taken for medium and heavy warnings that trigger alarms. Finally, the SVR model was used to predict China's food security situation from 2014 to 2016, and it was found that the predicted value was not significantly different from the true value. The warning intensity is consistent with the actual situation, which shows that the food security early warning platform is effective.
9,447.2
2022-09-01T00:00:00.000
[ "Computer Science" ]
Proposal For A Management System For The Operation Of A Hybrid Network Platform Based On TDM Network Technology Migrating To Next-Generation NGN Networks Develop a proposal for a management model that allows, through knowledge management, to preserve the quality of services during the technological evolution of networks in a telecommunications company. Today, BaghTel is considered the main telecommunications company in Iraq and some time ago it was acquired by the Iraqi government, which has generated a significant organizational transformation, added to the fact of the rapid technological evolution and the strong competition of the sector. These factors give rise to concern on the part of certain units regarding which management model can best adapt to these changes. In addition to these reasons, there is a need for companies like BaghTel to improve the management of this type of updates or migrations, which would later translate into improvements in the quality of services provided by the company. This study seeks to propose a management model that allows optimizing the administration of networks and services, during the organizational transformation processes that occur in a telecommunications company as a result of today's technological evolution. The importance of this work lies in the opportunity to optimize, through a management model, the way to control an important technological change, which will ensure and improve the service levels that any telecommunications company provides to its end users. The final process, measures and assessments, include a viewing and use of management indicators of the results obtained in BaghTel's Specialist Networks Management; this consists essentially of a periodic monitoring and evaluation of the key variables in the management of the TDM (Time Division Multiplexing) and NGN (next generation of networks) networks; Introduction Currently, telephone companies worldwide are in different stages of the transition process to a next generation of networks based on IP (Internet Protocol), BaghTel (Baghdad Telecom) 1 , Iraq is no exception to this process. Telecom networks have recently undergone a great technological shift as a result of the first steps in the modernization of the traditional TDM (Time Division Multiplexing) next generation of networks (NGN). Companies migrating to NGN have to standardise operations and business processes, define new roles, train staff, negotiate relationships and set timetables according to the company's needs and commitments. Telecommunications companies have a desire to be constantly improving, and they are faster than their competitors 2 . The project management unit requires difficult time from the operating units to meet, which could result in service failures, poor quality of service, and day-to-day operational problems in the services provided to end customers. Network services, by means of IP packets, are based on a technology that modifies basic access and communications parts in telecommunications networks. The most important change is the introduction of an IP layer independent of the network layer. From a managerial point of view, the new technologies and the need to acquire new knowledge on ICTs are common in companies that are immersed in this evolutionary process 3 . All technological change has marked a fundamental change in organisations 4 . Organizations can only keep up by managing their internal processes well. 
In the BaghTel case study, there is a need to design a management model that ensures profit and transition with the least possible impact. There have been several methods of management: empowerment, intellectual capital, coaching, ISO standards, just in time, total quality management, etc.; they helped the organisation improve their products and services, and later provided greater customer satisfaction. Today, many telecommunication companies are switching to a packet switching type of network, known as a next generation network (Next Generation Network). Changes are generally made to reduce costs, both operation and capital. When making changes, changes usually are made quickly, causing a peak of work in the operations areas responsible for managing the network, possibly causing the quality of services offered to end customers. The networks of large telecommunications companies are made up of various technologies designed by different providers. This type of company usually has at least three large interrelated networks: a PSTN (Public Switched Telephone Network), an intelligent network, and the next generation network. Through these services, the company offers prepayment, presubscription, free calling, call with destination charge, nongeographic services. According to Thurow (2000), "knowledge has become the only source that guarantees a long-term competitive advantage". However, today in telecommunications companies like BaghTel, competitiveness can be measured apart from knowledge, depending on the contribution that companies make to society and their ability to adapt to changes. There have been several proposals to improve the way of working at BaghTel and to correctly manage the rapid technological evolution; despite this, the strong resistance to change in relation to paradigms oriented to information technology (IT) versus telecommunications technologies (Telco), time demands, high operational complexity and recent changes in society, may compromise quality of current and future services. The aim of this research work is, how to manage the technological innovation of networks, during the migration from a TDM network to NGN, in telecommunications operating companies, guaranteeing the quality of the services offered to end customers? And specifically, to identify the internal processes related to the management of the technological evolution of a telephony network and services platform, through a study that allows to reflect the information flow used by a technical support department in a telecommunications company, to explore the characteristics of the changes assumed due to the rapid implementation of new technologies in a telecommunications company and to identify how technological changes affect the operational development of a telecommunications company, compromising its evolution. This proposal is applicable to the telecommunications company BaghTel, since it is in constant technological evolution with a changing TDM and NGN hybrid network platform. The estimated time to carry it out is eight months. In the managerial aspect, previously with TDM technology, telecommunications companies such as BaghTel devoted time and effort to training their operational and managerial personnel in the different technical tools, turning this training into an important asset of collective knowledge, key to measuring production 5 . 
Today, with the transition to NGN technology, although there is a lot of technical training, it is observed that the knowledge in BaghTel is scattered with a focus on the individual, and is managed informally, this leads to a process of changes and intellectual construction by pieces (Telco's Vs. IT), which are responsible for day-to-day operations. In order to carry out this research, it was not possible to have the availability of all the necessary personnel, since there is a lot of resistance to technological change 6 . Methodological Framework Methodology used: The present work represents an investigation of the feasible project type. Said methodology refers to the development of a proposal for a viable model whose purpose is to satisfy a need or solve a problem. Feasible projects are those that are developed as an action to solve a particular situation, offering solutions in a methodological way. According to Arias, Fidias (2001) "a feasible project formulates action proposals and / or operating models as alternative solutions to the problem posed." Level of Investigation: Currently many telecommunications companies have already started a process of migration from TDM networks to NGN such as BaghTel, however there are other companies in which this change represents only a vision for the future. The truth in both cases is that although there is a lot of documentation regarding technical architectures and best practices to manage platforms, there are operational problems and failures that affect customers during migration, a process that, due to the rapid technological implementation, has been little studied. In this sense, the level of the feasible project presented is considered an e xploratory type investigation, since according to Marciniak, (2008) 7 "It is carried out when the objective is to examine a little-studied topic." Research design: The design used in the feasible project refers to a combined documentary and field type research, initially it was necessary to study and analyze information regarding knowledge management in organizations that provide ICT services, PSTN and NGN networks, their operation and management, signalling protocols, and the migration process. Additionally, to know the operating environment of the networks installed in BaghTel, it was necessary to collect data, information and facts directly from reality, that is, in the Company, which in the same order of ideas as "Field research is one that consists of collecting data directly from the investigated subjects, or from the reality where the events occur (primary data), without manipulating or controlling any variable, that is, the researcher obtains the information, but not it alters the existing conditions" 8 . Population and sample: This research covered the knowledge management functionality during the technological migration process from PSTN networks to NGN, the study environment of the research was the BaghTel Specialized Network Operations Management, the unit responsible among other functions of the implementation, operation and maintenance of PSTN and NGN networks. Techniques and instruments for data collection: To compile the data of the present investigation, the Internet facilities (Web pages), specialized publications, texts and recent magazines were used, in order to obtain updated information on the research area. 
To find out how the PSTN and NGN networks are currently managed, open interviews were conducted with consultants, specialists and technicians from BaghTel's Specialized Networks Operations Management. PSTN Network Management Before Migration to NGN Networks The case study of the present investigation is BaghTel, which, from its foundation until today, has been characterized as a company with a very large technological platform, where various supplier brands make possible the wide variety of services offered to end customers. This diversity of providers has created a robust but complex network to manage, requiring investment in management tools (hardware and/or software) that attempt to provide somewhat more centralized control of the elements that make up the PSTN network 9. At BaghTel, several operating units use various proprietary tools to manage particular communication protocols such as MFC-R2 and SS7. The Company has specialists and technicians to correct situations in each network model; however, the diversity of technological brands, together with constant changes, has increased the complexity and cost of managing both networks. Additionally, each operational management has its own internal processes, which probably makes it difficult to observe clearly the methodology that each area uses to deal with network and service management. BaghTel's Specialized Networks Management is responsible, among other activities, for managing the signalling of the TDM and NGN network elements. Signalling, as observed in the theoretical framework, can be considered the nerve centre of the different services provided by a telecommunications company, which makes it necessary for the personnel working in this area to have the minimum knowledge needed to relate the interacting network elements, the communication between them, and their relationship with the services offered to end customers 10. Currently, this management has processes associated with offering technical support on signalling issues among the different elements that make up the network; among them, as a result of interviews with the personnel working in this management, the general process for providing technical support for signalling failures in the TDM network was identified. Although the inputs to the signalling-support process come from other units in BaghTel, and although there are several ways to record the incidents that occur daily, the Specialized Networks Management regularly observes services whose operability was compromised without any fault report related to the incident being received, which is an indication of management deficiencies. In conclusion, network management is individual to each resolution area: each area or unit has its own knowledge, tools, methodologies and processes to provide the necessary support for the eventualities that affect its platform on the network and compromise the operability of the services. Management During the Migration from PSTN to NGN Networks (Current Situation) Some time ago, BaghTel began a process of migration to NGN networks. At that time, the Planning Management presented the NGN project to the Company with a vision to simplify the voice network, centralize control and applications, provide savings in maintenance and energy consumption, and optimize operation and maintenance, among other benefits of the new technology, such as convergence with other IP networks 11. 
To this day, the Company continues to replace many analogue exchanges with access gateways and Media Gateways interconnected with digital exchanges have been incorporated, in such a way as to packetize the traffic originated in exchanges with circuit-switched technology. In the same way as when there was only the TDM network in BaghTel, the management of each service is individual, where the Network Operations Center (COR) has continued to be the unit responsible for monitoring and following up on the resolution units of all those failures or situations that at an operational level affect any service. Through telephone contact, an email and / or a ticket on Nettrip, the COR documents the events that require support from specialists and / or technicians (who regularly provide their services in other managements). The Nettrip system contains statistics reported by analysts, and it usually documents the following: Currently, the Company continues to incorporate elements into the NGN network according to its growth plan, although in both networks there are various management systems, depending on the technology provider of each element. Having a management system for each technology provider makes BaghTel a company that requires a high investment in specialized human capital, both in the resolution area and in each brand (call it Huawei, ZTE, Nec, Alcatel, Siemens, Ericsson, HP, Nextone, among others), which creates a highly varied and complex work environment, full of diverse knowledge, especially for performing operations and maintenance tasks nationwide. In IT management terms, this is known as isolated technology pool management. 12 BaghTel's Specialized Networks Management currently sees the efficiency in the management of networks and services compromised during the evolution project to NGN networks; basically between the factors of resistance to change (mentioned in the theoretical framework: people, culture, task, technology, design and strategy), which normally occur in any organization due to an important change, which This management is most concerned about the strategy to manage the knowledge related to the evolution of networks and services. All this situation previously exposed, together with the theoretical framework presented, serve as the basis for the present investigation; Without a doubt, the migration to NGN networks poses an important change in the way of managing the network platform, all the knowledge behind this project can represent for companies like BaghTel, a group of useful integration functionalities to improve or maintain the quality of services. Proposal of a management model that allows, through knowledge management, to preserve the quality of services during the technological migration from TDM to NGN networks BaghTel's current hybrid network platform is being managed, using individual process and technology groups. However, having well-designed and implemented processes is of little use if the day-to-day execution of these processes is not properly organised. There is no possibility of making improvements if operations data collection and daily performance measurement activities are not systematically conducted. The process of management of special networks at BaghTel is in an inception phase, in which the required characteristics and requirements needed to properly manage specialised networks are not yet clearly defined. 
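Since systematic collection of operations data and daily performance measurement are identified here as prerequisites for improvement, the following is a minimal sketch, under stated assumptions, of how simple management indicators could be computed from exported trouble-ticket records. The field names, and the idea of exporting such records from the Nettrip system, are hypothetical illustrations, not a description of BaghTel's actual tooling.

```python
# Sketch of daily performance measurement from hypothetical ticket records.
# Field names (opened, closed, network, phase) are assumptions for illustration.
from datetime import datetime
from statistics import mean

tickets = [
    {"opened": "2021-03-01 08:00", "closed": "2021-03-01 12:30",
     "network": "TDM", "phase": "before_km"},
    {"opened": "2021-03-02 09:00", "closed": "2021-03-02 10:00",
     "network": "NGN", "phase": "after_km"},
    {"opened": "2021-03-03 14:00", "closed": "2021-03-03 15:30",
     "network": "NGN", "phase": "after_km"},
]

def hours_to_resolve(ticket):
    """Elapsed hours between opening and closing a ticket."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(ticket["closed"], fmt) - datetime.strptime(ticket["opened"], fmt)
    return delta.total_seconds() / 3600

def indicators(records, phase):
    """Ticket count, mean resolution time and per-network breakdown for one phase."""
    subset = [t for t in records if t["phase"] == phase]
    return {
        "tickets": len(subset),
        "mean_hours_to_resolve": round(mean(hours_to_resolve(t) for t in subset), 2),
        "per_network": {n: sum(1 for t in subset if t["network"] == n)
                        for n in ("TDM", "NGN")},
    }

for phase in ("before_km", "after_km"):
    print(phase, indicators(tickets, phase))
```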
In order to improve network management, the technical staff contacted managers, consultants, specialists and technicians to share knowledge and create a work team. The ability to communicate, network, and share knowledge resides within the organisation; it is cultivated by "People" and "Relationships" as well as by the organisation's technical documents, reports, presentations and technical training centre (for example, TTC-BaghTel). Two scenarios related to the management of networks and services that occurred in BaghTel during the migration to NGN networks are used to visualise, in part, the characteristics of the changes brought about by the rapid implementation of new technologies. In both scenarios the Company probably compromised the quality of the services provided to its end customers, which illustrates that a dedicated knowledge management role is necessary for a successful technological transition. Scenario 1: Management of Telephone Lines, from TDM to NGN Network Elements When TDM systems were first introduced, a set of data was stored at each physical switch; technicians were notified of changes at each physical port, and new facilities were eventually built with TDM technology. To implement remote management, a central book was used, containing information about the zones, exchanges and numbering plans of each line. This information was used with the tool called the "central book", which made it possible to know the status of a line and perform any needed corrections (remotely or on site, as the case may be). Table 1 gives an example of information from the central book. In order to meet the needs of its customers, BaghTel implemented a special register with a database of customers whose telephone line had undergone a change, so that the information on the zone, exchange and numbering plan was correct at the time the tasks were carried out. However, the personnel's knowledge of the system was not updated at the same pace as the migration of the clients, which probably meant that, when management tasks had to be carried out, only a small group of people dispersed across different units was trained to use the management system of the NGN network. When this scenario occurred, the technicians who knew how to install and maintain telephone lines on the existing network needed to acquire new knowledge about managing lines on the NGN network, which required a relationship with the NGN providers (at that time Huawei and ZTE) in order to jointly operate and maintain the already installed network. The failure to manage this situation effectively, together with insufficient training of personnel, resulted in slower work and unprepared staff, which is why technical incidents at BaghTel would not have been well handled. Scenario 2: Management of Changes Related to the Routing of Telephone Traffic in TDM and NGN Networks The telephone traffic management coordination unit had been responsible, for more than five years, for configuring the routing rules for local calls, national long distance (LDN) and international long distance (LDI) calls. This coordination continued to configure the routing rules only in the TDM network, while another work team tried to carry out the same configuration in the NGN network, specifically in the Softswitch. In an analogous way to SMS messaging, when using the SMS gateway the client code must be used to match the number, whereas when using the wireless gateway the group code must be used. 
That work team was made up of professionals with both telecommunications and information technology training. After the TDM and NGN networks were brought up, it was discovered that badly configured routing rules caused failures in the delivery of local, long-distance and international calls. Examples of this would be: 1) a call from a location where the NGN network is not present, or 2) traffic from a certain series of calls in which the telephone exchanges appear to keep trying to connect to each other. This illustrates how information technology organisations have used ITIL (Information Technology Infrastructure Library) as a means of communication and shared knowledge, and how change can now be implemented in a more coordinated way. Organisations tend to start the implementation of ITIL with the management of problems and incidents in the operational areas, while others start with other management disciplines; either choice shapes how the organisation handles processes of technological change. Aligning services with IT service management requires a paradigm shift, and it is therefore initially proposed that, given the importance of technologies, processes and network management, the personnel working in these areas receive training on service management based on ITIL best practices. Table 2 shows the most significant differences between traditional management and management based on the ITIL best practices model. • Staff are regularly engaged in day-to-day failures (a firefighter role), are overworked, give priority to urgent tasks, and probably do not have the time to document important settings or changes in a formal way. • As there is no documentation on configurations or recent changes, the operating personnel generally tend not to trust the data recorded in the platforms, so before carrying out any work they always resort to different ways of searching for the related information and its approval. • The services are generally not well documented; the documentation is sometimes incomplete or not adapted to the different areas, and therefore in practice the information that resides in these documents is of little use. • The data on the configuration of networks and services is recorded in the elements that make up both networks (TDM and NGN), but there is no subsequent review or operational audit procedure, so the information available is generally out of date or incomplete. • Resistance to change: personnel who are used to managing networks and services according to isolated procedures may reject the possibility of a change in which it is necessary to 1) share knowledge, 2) apply best practices such as those of the ITIL model, 3) accept the figure of a knowledge manager as something normal, and 4) get out of their "comfort zone". To minimise these difficulties, where there is a wide variety of knowledge about different technologies (including TDM and NGN networks), it is suggested to appoint a Knowledge Manager, known by the English acronym CKO, "Chief Knowledge Officer". The Knowledge Manager or CKO is responsible for knowledge management; such figures may also be referred to as Knowledge Gatekeepers, Knowledge Producers, Intellectuals or Intellectual Asset Managers. 
In BaghTel the Knowledge Manager could represent an important strategic role in initiating, promoting and developing knowledge about networks and services, especially during the technological migration that represents the jump of networks TDM to NGN, and surely in the future from NGN to IMS. The Knowledge Manager suggested to BaghTel, specifically to the Specialized Networks Management, to manage the technological migration from TDM to NGN networks could, according to the following characteristics:  Entrepreneur: Refers to a professional with a base and / or experience in telecommunications or ICT, whose enthusiasm, initiative and knowledge are focused on developing new projects related to ICT networks and services, regularly the profile of this type of people is oriented to act in the face of novelty, adventure and risk, typical of any major technological change; they are visionaries who attract new ways of doing things, focusing on visible results.  Environmentalist: It is an attitude aimed at strengthening explicit and tacit knowledge at all times, by creating social environments such as teamwork meetings, learning spaces and interpersonal relationships. They are regularly technical leaders, with the ability to integrate different knowledge, making use of best practices such as ITIL.  Consultant: Refers to operational experience in the installation, implementation and administration of telecommunications networks and services; with the ability to listen to new ideas and to be consistent in applying them and adjusting them to a vision with a focus on knowledge management. You must be willing to allow others to play the leading role, it is understood that your role is to serve as an agent of change.  Technologist: Able to understand which technologies favor the knowledge management process, at a managerial level he must have technical experience in ICT projects, as well as technical support in telecommunications networks and services, and he must favor the process of applying knowledge in prevention and solution of technical failures. Appointing a Knowledge Manager or CKO, can be a good start for a knowledge management program at BaghTel, subsequently it is suggested to design an architecture through stages, which serves to stimulate the importance that the efficient administration of knowledge on management of knowledge could take networks and services, allowing the generation of greater capacities and greater sustainability of competitive advantages. The Proposed Model and its Stages To suggest a management model in the face of the change from TDM to NGN technology, using knowledge management, a detailed study of the theoretical framework presented was carried out, the presence of a role known as Knowledge Manager (CKO) was recommended and an analysis of two operational management scenarios in BAGHTEL, which allows identifying the need from a technological and organizational point of view of different stages, represented in figure 2. Figure 2. Structure of the Proposed Model The initial stage refers to the need to analyse the current situation and the necessary knowledge in the face of technology changes (specifically from TDM to NGN), which play an important role in maintaining the quality of services offered to end customers. 
Analysis of the Current Situation The objective of this initial stage, with an approximate duration of one (1) month, is to suggest to BaghTel's Management of Specialized Networks, an analysis that allows understanding the importance of current technical knowledge as a significant role around the value of the Organization, Identifying the sources of knowledge, the best practices for network management, and the current and desired level of quality of service. For this, it is recommended to carry out at least 5 tasks: Create the profile of the Knowledge Manager (CKO): Appoint her and identify the objectives and their scope within the Organization. Establish a work team: Made up of a Knowledge Manager (CKO) responsible for coordinating the team and team agents, represented by responsible technicians and specialists. Establish practical definitions: It is suggested to establish a clear definition of what BaghTel's Specialized Networks Management interprets as "knowledge". In certain managements this concept is related to experience, and in some others to capabilities. The importance of defining "knowledge" is to visualize the relationship that may exist with other concepts such as "value", "limits", "time window", "best practices", among others. 4. Identify the current strategy: It refers to an analysis that allows to know the levels of participation, and the interest of the staff in relation to the operational management of TDM and NGN networks, identifying the methodology and the current action plan to manage the different network elements and services. Analysis of capacities: Within this analysis it is suggested to cover the technical and managerial capacities that the Management of Specialized Networks currently has to ensure the operational management of TDM and NGN networks in different scenarios. Development of a Knowledge Strategy This stage, with an approximate duration of one (1) month, consists of establishing a kind of bridge that will allow the Management of Specialized Networks of BAGHTEL, to go from where it is currently, to where it wants to be (in relation to maintaining the operation of services during the migration from PSTN to NGN networks), basically this stage aims to establish development plans that will allow, through knowledge management, to achieve the general objective of this research. For this, it is suggested to carry out at least 5 activities:  Evaluation of core and secondary competencies: It is necessary to identify those core and secondary competencies necessary to efficiently manage the services and the network (TDM and NGN). Core competencies are those that have a high degree of participation, while secondary competencies, although they do not have a high degree of participation, may be related to some key knowledge factor.  Analysis of knowledge gaps: Once the central and secondary competencies have been identified, it is advisable to establish the existing deficiencies on the sources of knowledge that support said competences, in this way it is possible to establish the current and desired level in relation to knowledge, This analysis will make it possible to establish the differences between what the Specialized Networks Management knows and should know in terms of managing TDM and NGN networks. 
Analysis of the actual situation Developing a strategy Development of a knowledge architecture Implementation measurements and evaluation  Definition of the goal and strategic objectives: The goal refers to the direction around which the actions should be aimed, it is suggested to develop the knowledge gaps identified above, and goal must be specific, measurable, consensual and realistic. Subsequently, groups can be defined by means of a work division structure, responsible for developing the objectives that will allow the goal to be reached, figure 3 represents an example: Figure 3. Structure of the Division of Objectives to Achieve a Goal  Conversion of knowledge: Once the objectives that will make possible the goal "efficiently manage the TDM and NGN network" have been identified, it is through the interaction proposed by Huang, (2002) 13 that they emphasize the dynamic model of the creation of the knowledge, where it is proposed that through the conversion of knowledge, the individual and technological requirements required by BaghTel's Specialized Networks Management can be defined.  Development of short, medium and long term plans: It refers to the plans that broadly define the activities and actions necessary to give continuity to the goal of "efficiently managing the TDM and NGN network". In these plans, there could be the creation of another objective according to the need for knowledge, a result of the conversion mentioned above. Design of a Knowledge Architecture This stage lasts approximately 2 months, the purpose is to define the logical and technical basis on which the proposed management model will be developed, for this it is suggested to take into account the following:  Requirements analysis: The objective of this analysis is to generate a specification of functional requirements on the environment and nature of the project, from a technological and social point of view. In this research, the technical environment is aimed at improving operational management during the migration of TDM networks to NGN, in the social environment through the use of ICT, the workers of the Specialized Networks Management will have the opportunity to reduce barriers and knowledge gaps in the operation and maintenance tasks of both networks, benefiting the society that makes use of the services offered by BaghTel.  Investments in ICT: In the analysis of the current situation, it is necessary to identify the barriers that affect knowledge, in this sense, the investments suggested for ICT should ensure at least three aspects: 1. Reduction of spatial barriers: through the use of tools that make it possible to identify the stock of knowledge and inventory it, describing where it is located, people or work groups, if they exist in sources outside the Organization, as well as facilitating access to knowledge regardless of their physical location, including: corporate portals, virtual communities, videoconferencing, email and forums. Reduction of temporary barriers: It is suggested to use tools that allow storing knowledge through chronological structures, facilitating simultaneous, structured and controlled access to knowledge, as well as communication and transmission of information in multiple formats in real time, in this sense, the use of: Data warehouse, multimedia systems, simulation software, intranets, email and forums is important. 
Reduction of social barriers: These are those that facilitate formal and informal communication, temporary or continuous, regardless of hierarchical structures, among them the use of: virtual communities, Internet, Intranet, corporate portals and knowledge maps stand out.  Hardware and software development and integration schemes: Refers to how BaghTel's Specialized Networks Management will establish the guidelines for developing applications (software) and acquisition of equipment (hardware) necessary for the knowledge management implementation process.  Technological analysis: This activity aims to determine those technologies that will support the knowledge management process (identification, selection of useful knowledge, structured storage, transfer and use of created and stored knowledge), for this it is suggested to the Management of Specialized Networks evaluate the existing technological options in the market taking into consideration five aspects: 1. Personalization: It refers to sharing knowledge through person-to-person contact. To facilitate this activity, you can use ICT such as portals, discussion groups, chats, Lotus Notes, among others. 2. Discovery: It is the activity of searching and obtaining knowledge from repositories and databases, for this it is suggested to use techniques of knowledge engineering, such as data mining (Data Mining) and text mining (Text Mining), among others. 3. Coding: It refers to the activity of capturing existing knowledge and placing it in repositories in a structured way, for this the use of techniques for knowledge acquisition (KA) is suggested. 4. Creation / innovation: Through this activity it is possible to generate new knowledge, the technology will be required to allow the use of the "Brainstorming" methodology or brainstorming, which consists of a conference technique, in which a group of people looks for the solution to a specific problem, gathering the ideas contributed by the participants. 5. Capture / monitoring: It refers to the process of capturing the knowledge transported in daily tasks as a result of the interaction between people and ICT, among them the use of useful tools for decisionmaking and knowledge portals can be highlighted, whose idea whether to measure the level of knowledge in a certain area, for example TDM and NGN networks, migration, management, signalling, among others. Each of these activities can be aligned with the knowledge management process, in order to define an architecture of the management model by means of this relationship, (figure 4). • Architecture of the knowledge management model: Represents the relationship of the activities mentioned above in the technological analysis and the knowledge management process. Implementation The implementation phase, which lasts approximately two months, has as its main objective the development of the aforementioned activities, as well as establishing the basic guidelines for implementation. For this, four activities are suggested: 1. Adaptation of the organizational structure: BaghTel's Specialized Networks Management represents the organizational structure of this research, in this sense, managerial and technical support is necessary in order to fulfil the goal "efficiently manage the TDM and NGN network", as well as the decision-making necessary for the implementation of the activities indicated in the knowledge strategy. 
As far as possible, it is suggested that one person be responsible for each objective and for the implementation of the strategies for their achievement. 2. Creation of the organizational climate: Within the implementation process, the creation of the organizational climate represents the most complex task, due to the resistance to change that will surely occur in the people who make up the organizational structure. To generate an organizational climate that supports knowledge management, it is suggested to communicate: 1) the expected benefits, 2) the objective, 3) the goal, and 4) the expected and obtained results. 3. Execution of activities: It refers to documenting and making known to all personnel the main characteristics of each activity in each of the objectives. It is important to reflect the origin, purpose and description of each activity, additionally, the capacity analysis must be taken into consideration. Each of the activities must be executed according to a planning hierarchy, that is, according to the detail established at the time of developing them. 4. Periodic review: Once the implementation of activities in each objective that lead to a final goal has been carried out, it is suggested to follow up through management measurements, the main idea is to visualize the results obtained with the incorporation of knowledge management Measurements and Evaluation The final stage, measurements and evaluation, consists of visualizing the results obtained in BaghTel's Specialized Networks Management with the implementation of knowledge management, with the use of management indicators; it basically consists of a periodic monitoring and evaluation of the key variables to manage the TDM and NGN network, below three evaluative categories: 1. Quantitative measurements: Describes measurements for the collection of data in conjunction with the tracking of the NETTRIP reporting system, showing how the information management (KM) system is being used, before, after and after its implementation. 2. Qualitative measurements: The breadth of the knowledge is evaluated by assessing the knowledge by analysing its quality by examining the score of all the inputs, the verbalization and the relative classification. Management workers feel comfortable with the network management by best practises and consistent tasks according to a specific goal. 3. Trained observers: Refers to certain measurements that employees trained in network and service management of the Specialized Network Management consider important to visualize. Conclusion Convergence represents a major technical evolution, but it is not about deploying new networks and services, but how to handle them to guarantee the correct activity of the same. Maintaining information in times of technological changes is an essential strategic advantage for organisations that provide these services. It needs a vision aimed at increasing the quality of service and the satisfaction of the customers. Answering the issue of how to handle the technical advancement of networks, during the migration from a TDM network to NGN in telecommunications operating companies, ensuring the quality of the services that are provided to customers. 
It means that technology migration processes must be closely controlled by the company that makes the transition, as well as knowledge management, without a doubt, contributes to improving the quality of decisions that need to be taken in every company, particularly in technical ones, where new knowledge is developed daily as a result of rapid evolution. The processes are managed by information management, which ensures the resources are used more effectively, work is not duplicated, and errors can be avoided. Businesses historically have relied on higher levels of management to formulate strategies. The problem is that ICT leaders are only now beginning to play a role in the region. IT knowledge complements best practises for management. Organizational growth, leadership, knowledge management, and others are essential to provide a body of knowledge to strengthen businesses. The key challenge of knowledge management is the lack of knowledge by many organisations with respect to the opportunity that an effective knowledge management model can create, and also the lack of interest in implementing a knowledge management method that has been shown to produce a profit. The challenge for professionals in information systems and ICT is the transition from information processing to knowledge processing; new technologies and strategies are required to facilitate the transformation of people from passive recipients of information to active agents contributing to the production of information and knowledge.  The implementation of a knowledge management model can provide BaghTel Specialized Networks Management with the key to efficiently manage the process of migration from TDM to NGN networks, guaranteeing the correct operation of services, as well as improvements in the process of decisionmaking necessary to promote growth and competitiveness, in this sense it is recommended to take into consideration the following aspects:  Training is a fundamental aspect when you want to tackle a knowledge management project, so it is recommended to plan and structure it appropriately at different professional, technical, specialist, consultant, coordinator and managerial levels. Efficiently managing TDM and NGN networks is the key to success not only for the Management of Specialized Networks, but for the entire  Setting realistic goals, the implementation of both knowledge management, represents a complex and cumbersome process that takes time; starting with four basic processes such as incident, problem, change and configuration management, can help define objectives, while facilitating resistance to change in relation to current network and service management.  Delimit the activities of the proposed management model, with realistic times according to BaghTel's organizational culture, avoiding placing insufficiently prepared people in strategic positions.
9,170
2021-04-11T00:00:00.000
[ "Business", "Computer Science" ]
Unitarizing Higgs Inflation We consider a simple extension of the Standard Model Higgs inflation with one new real scalar field which preserves unitarity up to the Planck scale. The new scalar field (called sigma) completes in the ultraviolet the theory of Higgs inflation by linearizing the Higgs kinetic term in the Einstein frame, just as the non-linear sigma model is unitarized into its linear version. The unitarity cutoff of the effective theory, obtained by integrating out the sigma field, varies with the background value of the Higgs field. In our setup, both the Higgs field and the sigma field participate in the inflationary dynamics, following the flat direction of the potential. We obtain the same slow-roll parameters and spectral index as in the original Higgs inflation but we find that the Hubble rate during inflation depends not only on the Higgs self-coupling, but also on the unknown couplings of the sigma field. Introduction Inflation is believed to be the phenomenon that determined the necessary initial conditions for the cosmological evolution of our universe. Although there is mounting observational evidence in favor of inflation, the nature of the inflaton is still a mystery and a compelling link with an established particle theory is still missing. An interesting proposal that aims at filling this gap between cosmology and particle physics is the idea that the Standard Model Higgs boson could play the role of the inflaton [1] (see also ref. [2]- [6]). This appears to be possible if a Higgs bilinear term is coupled to the scalar curvature with an unusually large constant ξ of the order of 10 4 . At first sight, this scenario suffers from a potential problem. Because of the large coupling constant ξ, the theory violates unitarity at the energy M P /ξ. This energy is comparable to the inflationary Hubble rate and is parametrically smaller than the scale of the Higgs field during inflation, which is as low as M P / √ ξ. The violation of unitarity [7,8] at the scale M P /ξ occurs in the theory expanded around the vacuum in which the Higgs field takes a small value, of the order of the electroweak scale. But, as emphasized in ref. [9] (see also ref. [10]), this result does not necessarily spoil the self-consistency of the Higgs inflationary scenario. The energy cutoff, dictated by unitarity arguments, is field dependent. While being equal to M P /ξ at small field value, the energy cutoff grows as the Higgs background field is increased. Thus, the region of the scalar potential relevant during the inflationary epoch could be within the domain of calculability. Nonetheless, there are reasons to be concerned with the unitarity issue of the theory. First of all, the cutoff associated with the would-be Goldstone bosons in the Higgs doublet is lower than the one read from the potential of the Higgs boson. In unitary gauge, the problem becomes manifest in the gauge sector and it has been shown [9] that the cutoff during inflation is given by M P / √ ξ, which is parametrically close to the energy scales involved during inflation. Moreover, if we assume that the Higgs theory is eventually embedded into a more complete scheme that can be reliably extrapolated all the way up to the Planck mass, we will necessarily find new particles or new dynamics appearing at the scale M P /ξ. 
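As a rough numerical illustration (assuming the reduced Planck mass $M_P \simeq 2.4\times10^{18}$ GeV, a standard value not quoted in the text), a non-minimal coupling $\xi \sim 10^4$ gives
$$\frac{M_P}{\xi}\simeq 2.4\times10^{14}\ \mathrm{GeV},\qquad \frac{M_P}{\sqrt{\xi}}\simeq 2.4\times10^{16}\ \mathrm{GeV},$$
so the naive unitarity cutoff lies a factor $\sqrt{\xi}\sim 10^{2}$ below the Higgs field values relevant for inflation.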
It is quite reasonable to expect that the new degrees of freedom with mass of the order of M P /ξ will affect the Higgs potential at these scales and modify its form for values of the Higgs field relevant for inflation, of the order of M P / √ ξ. If this is the case, any assessment about the viability of Higgs inflation will require knowledge of the physics responsible for unitarization at the scale M P /ξ. In this paper we will address the issue of unitarization of Higgs inflation. We will present a simple extension of the model, with only one new scalar field, that allows to raise the unitarity cutoff up to the Planck mass. The inflationary process, which can then be reliably computed, occurs in a fashion analogous to the original Higgs model. The procedure for unitarization that we follow is very similar to the one that is used to promote the non-linear sigma model into its linear version. The new scalar field that we introduced, called σ, plays the role of nearly linearizing the non-renormalizable Higgs interactions, as explained in sect. 2. The paper is organized as follows. First, we sketch the procedure for unitarizing Higgs inflation and provide a simple model that implements this scheme with one scalar field. Then we discuss the dynamics of the σ field both in the vacuum and during the inflationary regime. In sect. 3 we calculate the slow-roll parameters in our model and compare them to the original Higgs inflation. To prove unitarity of our model in sect. 4 we consider the gauge-Higgs interactions and the Yukawa couplings and study their impact. Finally, some conclusions are drawn. There are two appendices dealing with the formalism for multi-field inflation and the details of the calculation of the slow-roll parameters in our model. The model In this section, we first explain the procedure for unitarizing Higgs inflation with a new real-scalar field. Then we consider a simple model realizing this scheme with general renormalizable interactions and non-minimal couplings to gravity for the Higgs doublet and the new scalar field. Next we discuss the vacuum structure and the inflationary dynamics along the flat direction in our model. The procedure for unitarizing Higgs inflation The original model of Higgs inflation is based on the Jordan-frame Lagrangian [1] where ξ 0 is the non-minimal coupling of the Higgs doublet. The Einstein-frame Langrangian is obtained after the Weyl rescaling This form of the Lagrangian clearly exhibits the unitarity problem in Higgs inflation, which originates from the first term in the second line of eq. (2). The non-renormalizable dimension-6 operator involving four Higgs fields and two derivatives is suppressed by the mass scale M P /ξ 0 , which plays the role of the energy cutoff at small field value. A procedure for unitarization is suggested by the analogy between the Einstein-frame kinetic term for the Higgs and the non-linear sigma model, which is more transparent in the real representation with H T = 1 √ 2 (φ 1 , φ 2 , φ 3 , φ 4 ), with φ 2 ≡ i φ 2 i . Just as a non-linear sigma model can be completed in the ultraviolet by the presence of a sigma field, we introduce a real-scalar sigma field satisfying the constraint σ 2 = Λ 2 + φ 2 , with Λ 2 ≡ M 2 P /ξ 0 , and rewrite the Higgs kinetic term in the form Here κ(x) is the Lagrange multiplier and F is an arbitrary function satisfying F (0) = 0. 
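The display equations referred to in this subsection (the Jordan-frame Lagrangian, the Weyl rescaling, and the Einstein-frame Lagrangian) did not survive extraction. As a hedged reconstruction, the standard form they presumably follow is
$$\mathcal{L}_J=\sqrt{-g}\left[\frac{M_P^2+2\xi_0 H^\dagger H}{2}\,R-|D_\mu H|^2-\lambda\Big(H^\dagger H-\tfrac{v^2}{2}\Big)^2\right],$$
with the Weyl rescaling $g_{\mu\nu}\to\Omega^2 g_{\mu\nu}$, $\Omega^2=1+2\xi_0 H^\dagger H/M_P^2$, which gives in the Einstein frame
$$\mathcal{L}_E=\sqrt{-g}\left[\frac{M_P^2}{2}R-\frac{|D_\mu H|^2}{\Omega^2}-\frac{3\xi_0^2}{M_P^2\,\Omega^4}\big(\partial_\mu(H^\dagger H)\big)^2-\frac{\lambda}{\Omega^4}\Big(H^\dagger H-\tfrac{v^2}{2}\Big)^2\right].$$
At small field values ($\Omega\simeq1$) the second kinetic term reduces to the dimension-6 operator with four Higgs fields and two derivatives, suppressed by the scale $M_P/\xi_0$, which is the origin of the unitarity problem discussed in the text.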
The Higgs kinetic term does not yet correspond to a flat metric of the target space, but rather it looks similar to the metric of Euclidean AdS 5 space with AdS radius 1/Λ. However, as suggested by the constraint for the sigma field, it is possible to complete the theory into a linear sigma-model type, in which the sigma field vev is dynamically determined to be (Λ 2 + φ 2 ) 1/2 by the full potential. This effectively corresponds to replacing the Lagrangemultiplier term by an appropriate scalar potential whose minimum lies at the field value σ 2 = Λ 2 + φ 2 . Then, after the field redefinition σ = Λ exp[χ/( √ 6M P )], we find that the canonically-normalized field χ has only Planck-suppressed non-renormalizable interactions. This allows to raise the unitarity cutoff up to M P . In ref. [11] it was claimed that Higgs inflation could be unitarized by introducing additional non-renormalizable operators in the Jordan frame with their coefficients carefully chosen to cancel exactly the dangerous interactions causing the loss of unitarity. We believe that a dynamical solution is necessary to solve the problem. In the next section we will propose a simple model that implements our procedure for unitarization, completing Higgs inflation in the ultraviolet. Higgs inflation with the sigma field Our model, which extends the original Higgs inflation by adding to the SM Higgs doublet H a real scalarσ, is based on the Jordan-frame Lagrangian HereM ,Λ, and v are parameters with dimension of mass. We assume that the electroweak scale v is much smaller than the other masses involved in the Lagrangian (v ≪M ,Λ). This assumption, technically unnatural, is just an expression of the hierarchy problem, which cannot be addressed in the SM using conventional symmetry arguments. Since we are working in the context of the SM, we must accept this assumption without a known justification. In eq. (5), the parameters ξ, ζ are the non-minimal couplings of the sigma field and the Higgs doublet to the scalar curvature. As described later, inflation requires a large coupling ξ, of the order of 10 4 . On the other hand, we will take ζ of order unity, in order to avoid the reappearance of the unitarity problem in the Higgs sector. It is technically unnatural to set ζ = 0, because ζ can be generated by loop effects, but it is possible to keep it significantly smaller than ξ. We assume that this is the case. Finally κ, α, λ are dimensionless coupling constants. The Lagrangian (5) contains the most general renormalizable terms compatible with the Z 2 symmetry under whichσ transforms asσ → −σ. It is useful to choose the unitary gauge for the Higgs doublet, H T = 1 √ 2 (0, φ), and introduce the field variable σ with the definition σ 2 =σ 2 + M 2 with M 2 ≡M 2 /ξ. With this transformation, the above Lagrangian can be rewritten in a form in which the role of the Planck mass is expressed only in terms of fields, where and Λ 2 ≡ M 2 +Λ 2 . Thus, the Planck mass is traded off for the non-canonical kinetic term for the new sigma field in Jordan frame. Since the minimization of the potential sets σ = Λ (up to negligible corrections of order v 2 ), the field σ determines the effective Planck mass. So, we need to choose We can now rewrite the Lagrangian in the Einstein frame by performing a Weyl rescaling of the metric, Note that the field σ is such that σ 2 > M 2 , because of its definition in terms ofσ, and thus the sign of the kinetic term for σ is well defined and no ghost-like instabilities exist. 
In the limit ζ = 0 and M = 0 the Lagrangian in eq. (9) exhibits a form similar to the one in eq. (4), suggested by the sigma-model discussion, apart from the coefficient of the sigma-field kinetic term which is (1 + 6ξ) instead of 6ξ. In our case, the scalar potential V E contains the term playing the role of the Langrange multiplier in setting the constraint σ 2 = Λ 2 + αφ 2 . The limit M = 0 corresponds to the case of induced gravity [12]. In the limit α = 0 the theory has strong similarities with the model proposed in ref. [13]. Dynamics with the sigma field Let us now study the structure of the theory in the Einstein frame. The vacuum of the model lies at For v ≪ Λ and for large ξ, the kinetic mixing between σ and φ in eq. (9) becomes negligible and the fields χ = √ 6M P ln(σ/Λ) and φ are approximately canonically normalized. The mass of χ can then be read off from the potential in eq. (10), with the result As expected, the mass of the new degree of freedom described by the field σ turns out to be of the order of M P /ξ, the energy scale at which the original Higgs model violates unitarity. Below the scale M P /ξ we can integrate out the field σ and obtain an effective theory, which corresponds to the original Higgs inflation model. Up to some higher-dimensional terms suppressed by M P / √ ξ, the effective theory is described by the Lagrangian (2) with ξ 0 = αξ + ζ. Above the scale M P /ξ, the sigma field cures the unitarity breakdown of the original Higgs inflation, as is easily understood by replacing σ in the Lagrangian of eq. (9) with its expression in terms of the χ field, σ = Λ exp(χ/ √ 6M P ). All the non-renormalizable interactions are suppressed by the Planck mass. Let us now consider the theory for large values of the Higgs background. For |σ|, |φ| ≫ Λ, the Einstein-frame potential (10) becomes approximately a function of only the ratio between φ and σ. It is then convenient to rewrite the Lagrangian (9) in terms of the field φ = Λφ/σ and obtain At the leading order, the potential forφ is whose minimum is atφ ≃ κα λ+κα 2 Λ for ζ ξ ≪ 1. Onceφ is frozen at its minimum value, the potential presents a flat direction along the field component orthogonal toφ. From eq. (13) we find that, during inflation, the fieldsφ and χ = √ 6M P ln(σ/Λ) are approximately canonically normalized and their kinetic mixing is negligible. The scalar potential for χ is obtained by keeping higher orders in Λ 2 in eq. (10) and by freezingφ into its minimum. Ignoring terms proportional to ζ/ξ, we then find The potential V E along the χ direction is exponentially flat, in perfect analogy with the original Higgs inflation. Note that the mass of the heavy modeφ during inflation is equal to mφ ≃ √ 2κα Λ, which is of the order of M P / √ ξ. Therefore, the mass of the heavy mode, which is about M P /ξ in the vacuum, is raised to M P / √ ξ when the fields obtain large background values. In the original Higgs inflation, although new dynamics has to appear at the mass scale M P /ξ to cure the unitarity problem, during inflation the energy cutoff is higher, and is equal to about M P / √ ξ. Thus, our model gives an explicit realization of the mechanism advocated in ref. [9]. In our model, the inclusion of the field σ allows for full control of the theory up to the Planck scale. Slow-roll inflation As discussed in the previous section, our Higgs inflation model with the sigma field is reduced to a single field inflation along the flat direction. 
In this section, we show that the flat direction indeed drives inflation by explicitly computing the slow-roll parameters. In Appendices A and B, we perform the calculation using the general formalism for multi-field inflation and we show that, in our case, the single-field approximation is adequate. We are interested in the case in which the non-minimal coupling of the Higgs doublet is much smaller than the one of the sigma field. The ε slow-roll parameter, see eq. (B.10), for |σ| ≫ Λ is given by The differential number of e-foldings, see eq. (A.11), is dN ≃ − ∂σ V E 2εV E for ∂V E ∂φ = 0 along the flat direction, so the total number of e-foldings is given by Here we take σ 2 f = (2/ √ 3)Λ 2 , corresponding to the field value at which ε = 1 and the dynamics exit the slow-roll regime. The η parameter is given by the minimum between the two expressions η 1 and η 2 , corresponding to the two independent field directions. For |σ| ≫ Λ, from eqs. (B.19) and (B.20), we obtain Consequently, we find that the mass of the heavy field orthogonal to the flat direction is m 2 2 ≃ η 2 V inf /M 2 P ≃ 2καΛ 2 , in agreement with the result in the previous section. On the other hand, the η parameter for inflaton is given by η = η 1 , so the inflaton mass is m 2 1 ≃ η 1 H 2 . Combining eqs. (16), (17), and (18), we obtain the slow-roll parameters in terms of the number of e-foldings as Since d ln k = dN for an approximately constant Hubble parameter during inflation, using eq. (A.14) and ∂ σ ε ≃ (∂ σ ln V E )(−2ε + η) we obtain the spectral index The combined WMAP 7-year data with Baryon Acoustic Oscillations and Type Ia supernovae [14] show that the spectral index is n s = 0.963 ± 0.012 (68% CL). For N ≃ 60, we obtain ε ≃ 2.0 × 10 −4 and η ≃ −1.6 × 10 −2 , leading to the spectral index n s ≃ 0.966 and ratio of the tensor to scalar perturbations, r = 12.4 ε ≃ 2.4 × 10 −3 , both compatible with observations. All these results for the slow-roll parameters, number of e-foldings, and spectral index are identical to those of the original Higgs inflation. The COBE normalization of the power spectrum constrains the inflation parameters V 1/4 ε 1/4 ≃ 6.7 × 10 16 GeV. For the vacuum energy during inflation V inf in eq. (15), the COBE normalization (22) leads to This determines the scale Λ to be about 10 16 GeV. This result suggests the interesting possibility that the vev of the singlet field σ could be responsible for the scale of the right-handed neutrino masses. The constraint on the non-minimal coupling ξ of the sigma field depends on all the dimensionless parameters of our model. On the other hand, in the original Higgs inflation, the COBE normalization gives ξ 0 / √ λ ≃ 5 × 10 4 . Here ξ 0 is the non-minimal coupling of the Higgs doublet, which is related to the parameters of our model by ξ 0 = αξ, considering the effective theory with σ integrated out. Therefore the constraint on ξ coincides with the one of the original Higgs inflation only when κα 2 ≫ λ. In general, however, it depends on the values of the various unknown coupling constants and cannot be simply related to the observable Higgs quartic coupling. There is another important difference between the Higgs inflation and our model. In our case, at the end of inflation the field configuration will be at φ/σ ≃ κα λ+κα 2 as discussed below eq. (14). Therefore the inflaton is a combination of the Higgs and sigma fields with a mixing angle determined by the values of the various coupling constants. 
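The slow-roll numbers quoted above can be reproduced to good approximation from the standard plateau-inflation expressions ε ≈ 3/(4N²) and η ≈ −1/N; this parametrization is an assumption taken from the general Higgs-inflation literature rather than from the equations of the text, but it reproduces the quoted values for N ≈ 60:

```python
# Approximate slow-roll parameters for a plateau potential after N e-foldings.
N = 60.0

eps = 3.0 / (4.0 * N**2)          # ~2.1e-4, close to the 2.0e-4 quoted in the text
eta = -1.0 / N                    # ~-1.7e-2, close to the -1.6e-2 quoted in the text

n_s = 1.0 - 6.0 * eps + 2.0 * eta # spectral index, ~0.965-0.966
r = 12.4 * eps                    # tensor-to-scalar ratio, using the relation r = 12.4*eps from the text

print(f"epsilon = {eps:.2e}, eta = {eta:.2e}")
print(f"n_s = {n_s:.3f}, r = {r:.1e}")
```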
This mixing angle suppresses the decay of the inflaton, because only the Higgs is directly coupled to the SM particles. Thus, the reheating temperature in our case is smaller than the one in Higgs inflation, which is estimated to be about 10 13 GeV [3]. Moreover, it has been observed in ref. [5] that, in Higgs inflation, the loop corrections to the Higgs self-coupling are important for determining the spectral index with a precision measurable by PLANCK. In our case, see eq. (9), the Higgs kinetic term is close to a canonical form, independently of the background field values and so, it would be sufficient to consider the SM running of the Higgs self-coupling. However, the dependence on the running effect coming from the sigma field interactions prevents us from making a simple testable prediction. Gauge and fermion interactions The analysis of the scalar potential performed in sect. 2.3 has shown that no unitarity violation occurs below the Planck scale, independently of the background scalar field values. However, the choice of the unitary gauge hides part of the problem. The interactions of the would-be Goldstone bosons could introduce unitarity violations at a lower cutoff scale. This is actually happening in the case of the original Higgs inflation [7,9]. In the unitary gauge, the extra degrees of freedom contained in the Higgs doublet are gauged away by a local gauge transformation and their information is encoded in the gauge-Higgs interactions. It is therefore important to analyze also the gauge and Yukawa couplings of the Higgs in order to check the absence of any unitarity violation. In this section, we consider the power counting both in the true vacuum and in the inflationary background for gauge-Higgs interactions as well as Yukawa couplings and compare it to the original Higgs inflation. We again neglect contributions coming from the non-minimal coupling of the Higgs doublet with respect to the σ coupling. Field fluctuations around the vacuum • Gauge interactions The gauge kinetic terms are conformally invariant under the Weyl rescaling of the metric. So, the Higgs interactions with two gauge bosons in unitary gauge are given by Expanding around the vacuum σ = Λ + χ/ √ 6ξ and φ = v + h, the above gauge interaction becomes Therefore, the Higgs-gauge interactions are identical to the SM and there is no unitarity violation, while the coupling of the sigma field to the gauge sector is suppressed by the Planck scale. This result is a direct consequence of the fact that the physical Higgs is not rescaled, contrary to the original Higgs inflation. In the original model, because of the correction to the Higgs kinetic term in the Einstein frame, the gauge-Higgs interactions are modified as compared to the SM: HI where Λ HI = M P ξ 0 . So, the unitarity cutoff of the original Higgs inflation must be identified with Λ HI . • Fermion interactions Let us consider the fermion kinetic terms in the Einstein frame We can make the kinetic terms canonical by rescaling the fermions: Then, applying the same expansions of the scalar fields around the vacuum as for the gauge interactions, the Yukawa couplings become Again, the Yukawa couplings to the physical Higgs are the same as in the SM. On the other hand, in the original Higgs inflation, there was a dimension-6 operator, HIψ ′ R ψ ′ L , which is suppressed by Λ HI . Field fluctuations during inflation During inflation, the fields reside at large values, |σ| ≫ Λ and φ 2 ≃ κα λ+κα 2 σ 2 . 
We can expand the scalar fields around the inflationary background as follows, where σ 0 , φ 0 are the background field values during inflation and χ, h are perturbations having canonical kinetic terms. • Gauge interactions From eq. (24), the gauge interactions become where V ≡ Λφ 0 /σ 0 . Thus, the Higgs-gauge interactions are of the standard form with the Higgs vev being replaced by V , so there is no unitarity violation in the gauge sector. Because of the large scalar values during inflation, the gauge boson mass is increased and saturates at m A ≃ gV ≃ gΛ κα λ+κα 2 . However, even during inflation, there is no unitarity violation below the Planck scale as in the vacuum case. In the original Higgs inflation, the gauge interactions are given by − 1 Thus, due to the suppressed Higgs couplings, unitarity is broken at V HI . • Fermion interactions From eq. (27), the Yukawa interactions become Thus, we find that the fermions have large masses but there is no unitarity violation below the Planck scale. The Yukawa couplings during inflation are not suppressed as compared to the SM ones, unlike the original Higgs inflation in which the Yukawa interactions are given by Conclusions The idea that the Higgs boson could play the role of the inflaton is very intriguing. A scalar theory with quartic interaction in the potential and large non-minimal coupling ξ to the curvature can support inflation. However, the inflationary dynamics occurs at such large values of the scalar field that the identification of the inflaton with the Higgs boson remains suspicious, in view of the existence of the intermediate scale M P /ξ at which the theory around its true vacuum violates unitarity. If we insist that the theory can be extended up to the Planck mass, it is quite plausible that the necessary new physics occurring at the scale M P /ξ will modify the Higgs potential in the regime relevant for inflation. Any conclusion about the viability of Higgs inflation will then require knowledge of the new dynamics that unitarizes the theory. We have considered a simple model, with one additional scalar field σ, which cures the unitarity violation at the intermediate scale and allows for an extrapolation of the theory up to M P . The procedure we followed to construct the model is reminiscent of the unitarization of the non-linear sigma model into its linear version. In our model, the σ field has a mass of order M P /ξ and the effective theory below this scale essentially corresponds to the original model of Higgs inflation [1], namely the SM with a large non-minimal coupling between the Higgs and the curvature. The analysis of our model in the regime above M P /ξ shows that the theory can support inflation in a way completely analogous to the case of the original Higgs inflation. The predictions for the slow-roll parameters and the spectral index are identical in both theories. It is interesting that, in our model, the mass of the heavy mode increases with the field background. Being equal to M P /ξ around the true vacuum, the mass of the new state is about M P / √ ξ during inflation, giving an explicit realization of the mechanism advocated in ref. [9], for which the scale of unitarity violation is raised at large background field. In spite of the similarities with the original Higgs inflation, the σ field plays a crucial role. Besides unitarizing the theory, σ directly participates in the inflationary dynamics. 
Its role is also reflected in the fact that the relation between ξ and the inflationary scale does not only depend on the measurable Higgs quartic coupling, but also on unknown coupling constants determining the σ interactions. The reheating temperature in our model can be smaller than in the original Higgs inflation. Taking the metric ds 2 = −dt 2 +a 2 (t)δ ij dx i dx j , and time-dependent scalars ϕ I , the Einstein equation and the equation of motion of the scalars are The ε slow-roll parameter for multi-field inflation is defined as Forφ I + Γ I JKφ JφK ≪ G IJ V ,J , we can rewrite the above slow-roll parameter as The counterpart of the η slow-roll parameter in multi-field inflation is defined as η = min a η a where η a are eigenvalues of the matrix N I J , The power spectrum for the multi-field inflation is given by Using the approximate formula, and eq. (A.6), we get the power spectrum as for the single-field inflation, (A.14) Finally, the spectral index is We apply the general formulas for multi-field inflation to calculate the inflationary observables in our model. We consider the case in which the non-minimal coupling of the Higgs doublet ζ is much smaller than ξ. Working in the Einstein frame, we first minimize the scalar potential with two fields. For convenience in the following discussions, we enumerate the terms of the scalar potential (10) with ζ = 0 as follows, Compared to the scalar potential (10), we have chosen the parameters as Then, at the minimum of the total potential, we can determine the Planck scale with a nonzero σ vev and the electroweak scale with a nonzero Higgs vev. From eq. (B.1), for ∂V E ∂φ = ∂V E ∂σ = 0, we obtain the minimization conditions For m 2 φ = m 2 σ = c = 0, we find that there is a flat direction along σ 2 = a b φ 2 . For the Einstein-frame action (9), from the formula (A.6), we get the ε parameter for the two-fleld inflation as The contribution coming from the σ derivative is suppressed by a large non-minimal coupling. So it is reasonable to take the inflaton direction to be along the line with ∂V E ∂φ = 0, which is equal to σ 2 = a b φ 2 + m 2 φ b . This inflaton direction corresponds to the flat direction in the limit of vanishing dimensionful parameters. For the inflaton direction, we simplify the potential and its derivatives, (B.10) In order to compute the η parameter for the scalar kinetic terms given in eq. (9), we first consider the non-zero components of the Christoffel symbol Then, the matrix elements of N I J in eq. (A.7) are For the inflaton direction with large non-minimal coupling satisfying 6ξ ≫ σ 2 σ 2 −M 2 , from eqs. (B.12)-(B.15) we obtain Therefore, plugging eqs. (B.5)-(B.9) in the above, we obtain the eigenvalues η 1 , η 2 of the matrix N I J as (B.20)
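As a small worked illustration of the gradient form of the multi-field ε parameter used in the appendix, ε = (M_P²/2) G^{IJ} ∂_I V ∂_J V / V², the sketch below evaluates it numerically for a placeholder two-field potential and field-space metric (neither is the potential or metric of the model above):

```python
import numpy as np

M_P = 1.0  # reduced Planck units

def slow_roll_epsilon(fields, V, G, h=1e-6):
    """epsilon = (M_P^2 / 2) * G^{IJ} (d_I V)(d_J V) / V^2, derivatives taken numerically."""
    fields = np.asarray(fields, dtype=float)
    grad = np.zeros(fields.size)
    for i in range(fields.size):
        step = np.zeros(fields.size)
        step[i] = h
        grad[i] = (V(fields + step) - V(fields - step)) / (2.0 * h)
    G_inv = np.linalg.inv(G(fields))
    return 0.5 * M_P**2 * grad @ G_inv @ grad / V(fields) ** 2

# Placeholder example: two fields, flat field-space metric, simple polynomial potential.
V = lambda f: 0.25 * (f[0] ** 2 - 1.0) ** 2 + 0.1 * f[1] ** 2
G = lambda f: np.eye(2)

print(slow_roll_epsilon([3.0, 0.5], V, G))
```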
6,785.8
2010-10-07T00:00:00.000
[ "Physics" ]
Proteome-Wide Effect of 17-β-Estradiol and Lipoxin A4 in an Endometriotic Epithelial Cell Line Endometriosis affects approximately 10% of women of reproductive age. This chronic, gynecological inflammatory disease results in a decreased quality of life for patients, with the main symptoms including chronic pelvic pain and infertility. The steroid hormone 17-β Estradiol (E2) plays a key role in the pathology. Our previous studies showed that the anti-inflammatory lipid Lipoxin A4 (LXA4) acts as an estrogen receptor-alpha agonist in endometrial epithelial cells, inhibiting certain E2-mediated effects. LXA4 also prevents the progression of endometriosis in a mouse model via anti-proliferative mechanisms and by impacting mediators downstream of ER signaling. The aim of the present study was therefore to examine global proteomic changes evoked by E2 and LXA4 in endometriotic epithelial cells. E2 impacted a greater number of proteins in endometriotic epithelial cells than LXA4. Interestingly, the combination of E2 and LXA4 resulted in a reduced number of regulated proteins, with LXA4 mediating a suppressive effect on E2-mediated signaling. These proteins are involved in diverse pathways of relevance to endometriosis pathology and metabolism, including mRNA translation, growth, proliferation, proteolysis, and immune responses. In summary, this study sheds light on novel pathways involved in endometriosis pathology and further understanding of signaling pathways activated by estrogenic molecules in endometriotic epithelial cells. structural similarity with the weak estrogen estriol, is an estrogen receptor alpha agonist in endometrial epithelial cells (8). In these studies, LXA4 altered estrogen-regulated gene expression as well as functional parameters, notably proliferation, in human endometrial epithelial cells while also demonstrating antiestrogenic potential in a manner similar to that previously shown for estriol (9,10). LXA4 also prevented the progression of endometriosis in a mouse model via anti-inflammatory and anti-proliferative mechanisms and by impacting mediators downstream of ER signaling, including epithelial-expressed molecules such as growth regulation by estrogen in breast cancer (11). As such, the proteomic changes induced by this eicosanoid in in vitro models of endometriosis warrants further study. As the small size of peritoneal endometriotic lesions precludes the culture of sufficient numbers of endometriotic epithelial cells, we used the well-characterized, ER-positive 12Z endometriotic epithelial cell line, originally isolated from peritoneal lesions (12). These cells express inflammatory molecules and exhibit a migrating and invading potential and are therefore an ideal model to study molecular and cellular aspects of endometriosis (13)(14)(15). In recent years, Mass spectrometry (MS) has become a powerful technology to carry out large-scale analyses of cellular proteomes, as it allows qualitative and quantitative analysis of complex mixtures. One widespread quantitative method is based on Isobaric Tags for Relative and Absolute Quantification (iTRAQ) (16), which allows the comparison of multiple combined samples by chemically labeling peptides after digestion of protein extracts. With this labeling, small amine-reactive isobaric mass tags are attached to the N-terminus and lysine residues of peptides. Quantification is then based on a reporter ion generated during peptide fragmentation within the mass spectrometer and specific to each treatment in the mixed sample. 
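Before continuing, the reporter-ion quantification idea can be made concrete with a short sketch: each peptide carries one reporter intensity per iTRAQ channel, and a protein-level ratio is obtained by aggregating its peptides. The channel layout and the use of the median below are illustrative assumptions, not a description of the study's exact pipeline.

```python
import numpy as np

# Toy reporter-ion intensities for one protein: rows = peptides, columns = iTRAQ channels.
channels = ["control", "E2", "LXA4", "LXA4E2"]
intensities = np.array([
    [1.0e5, 1.3e5, 1.1e5, 0.9e5],
    [2.0e5, 2.4e5, 2.1e5, 1.9e5],
    [0.8e5, 1.1e5, 0.9e5, 0.7e5],
])

# Protein-level value per channel: median over the protein's peptides.
protein = np.median(intensities, axis=0)

# log2 ratio of each treatment relative to the control channel.
log2_ratio = np.log2(protein / protein[channels.index("control")])
for name, value in zip(channels, log2_ratio):
    print(f"{name}: log2(treatment/control) = {value:+.2f}")
```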
The intensities of the reporter ion fragments enable the measurement of relative peptide abundance, and thus of their corresponding proteins (17). The combination of state-of-the-art experimental strategies and advances in computational methods enables the global study of cellular proteomes. Computer-aided data mining such as annotation enrichment analysis enables more efficient mapping of complex proteomics data to biological processes (18). Searching for precise Gene Ontology (GO) terms allows comprehensive summary analysis of large data sets, and the recently developed Cytoscape software is a useful tool to visualize protein-protein or protein-annotation networks (19)(20)(21). The goal of the present study was to perform proteomic profiling of endometriotic epithelial cells to compare and contrast responses to E2, LXA4, and both in combination. LC-MS/MS analyses were carried out for relative quantification of proteins between treatments based on an iTRAQ approach. Interactome data, GO terms and pathway annotations were used to create protein-protein and protein-annotation networks to generate new information on estrogen and lipoxin signaling. Cell Culture 12Z cells were maintained at 37°C in humidified air containing 5% CO2 in phenol red-free DMEM F-12 (Sigma) supplemented with 10% charcoal stripped-fetal bovine serum (CSFBS) (Invitrogen), 1% penicillin/streptomycin, and 1% glutamine (Sigma, Switzerland). Sample Preparation 6 × 10^5 cells were seeded in a 100-mm dish and treated with either vehicle (denoted control), 10 nM E2, 100 nM LXA4, or 10 nM E2 and 100 nM LXA4 in combination for 24 h. All treatments were carried out in duplicate. After the incubation time had elapsed, cells were rinsed with PBS and harvested by centrifugation. The pellet was washed twice with PBS and resuspended in 100 μl homogenization buffer (8 M urea containing protease inhibitors). Samples were then sonicated three times for 15 s to solubilize the proteins. Samples were subsequently centrifuged for 15 min at 13,000 rpm and the supernatants were transferred into clean Eppendorf tubes and stored at −20°C. Proteins were quantified using a Bradford protein quantification kit (BioRad, Switzerland). Protein lysates were reduced with 5 mM DTT (Applichem, Switzerland) for 30 min at room temperature and then alkylated with 20 mM iodoacetamide (Sigma, Switzerland) for 30 min at room temperature protected from light. Proteins were precipitated with ethanol-acetate and resuspended in TEAB buffer (500 mM tetraethyl ammonium bicarbonate, pH 8). For each sample, 220 μg of protein was digested overnight at 37°C with 5 μg of trypsin. Thereafter, samples were aliquoted and stored at −80°C. After sample clean-up with Proxeon SCX StageTips, labeling was controlled by LC-MS/MS separately for each sample before mixing of the eight digests. The mixed samples were redissolved in 4 M urea with 0.1% Ampholytes pH 3-10 (GE Healthcare, Switzerland) and fractionated by off-gel focusing as previously described (22). The 24 fractions obtained were desalted on a microC18 96-well plate (Waters Corp., Milford, MA, USA), dried, and resuspended in 0.1% (v/v) formic acid, 3% (v/v) acetonitrile for LC-MS/MS analyses. LC-MS/MS Analyses Extracted peptides were analyzed on a hybrid LTQ Orbitrap Velos mass spectrometer (Thermo Fisher Scientific, Bremen, Germany) interfaced to an Ultimate 3000 RSLC nano HPLC system (Dionex, Switzerland).
Peptides were separated for 120 min on a reversed-phase nanocolumn Acclaim PepMap RSLC 100A (75 µm ID × 25 cm, 2 µm, Dionex) at a flow rate of 300 nl/min using a H2O: acetonitrile gradient method. Lock mass option was used for full MS scan recalibration with a polydimethylcyclosiloxane ion from ambient air (m/z 445.12003). In data-dependent acquisition controlled by Xcalibur 2.1 software (Thermo Fisher Scientific, Switzerland), the 15 most intense precursor ions detected in the full MS survey performed in the Orbitrap (range 300-1700 m/z, resolution 30,000 at m/z 400) were selected and fragmented. MS/ MS was triggered by a minimum signal threshold of 3,000 counts, carried out with stepped relative collision energy between 40 and 50% with an isolation width of 2.1 amu. Only precursors with a charge higher than 1 were selected for HCD fragmentation, and fragment ions were analyzed in the Orbitrap at a resolution of 7,500. The m/z of fragmented precursors was then dynamically excluded, with a tolerance of 10 ppm, for 25 s. To identify peptides, MS/MS files were analyzed with Proteome Discoverer 1.3 (Thermo Fisher Scientific, Switzerland) using Mascot 2.3 (Matrix Science, UK) for database searching. Mascot was set up to search the SwissProt database 1 restricted to human taxonomy (database release used was 2011_03, 20,234 sequences after taxonomy filter), using the decoy database search option. Trypsin (cleavage at K and R, not before P) was used as the enzyme definition. Mascot was searched with a fragment ion mass tolerance of 0.02 Da, a parent ion tolerance of 10 ppm, allowing one missed cleavage. Iodoacetamide derivative of cysteine and iTRAQ (eight-plex) modification of lysine and peptide N-terminal were specified as fixed modifications. Deamidation of asparagine and glutamine, oxidation of methionine, and acetylation of protein N-terminal were specified as variable modifications. A False discovery rate (FDR) filter of 1% was applied. statistical analysis Quantitative protein values were calculated as the median of peptide reporter ion intensities. Proteins with at least one identified peptide with a high confidence (FDR <0.01) in both replicates were considered for quantitative analyses. The protein quantification was normalized using the Lowess function (23). The log2 ratio of the treatment value divided by the control value was calculated for each protein and for each replicate. The local pooled error test (LPE) was computed for each comparison (E2 versus control, LXA4 versus Control, and LXA4 + E2 versus control) as previously described (24). The FDR-adjusted p-value was computed using the Benjamini-Hochsberg (BH) correction. In our analysis, we defined two sets of proteins of interest, one with LPE p-values <0.05 and a subset group with FDR-adjusted p-values <0.1. Calculations were performed using R statistical software version 2.10.0. gene Ontology analysis Gene Ontology analysis was performed on significantly differentially expressed proteins for one or more treatments versus control. GO terms were analyzed using DAVID (18) 2 . Biological process annotations with a minimum set of two annotated proteins in our query and an enrichment p-value <0.05 were used. The results of this analysis were subsequently visualized in Cytoscape v3.2.0 3 to model the protein-annotation network. 
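The multiple-testing step described above can be made explicit. The sketch below implements the Benjamini-Hochberg adjustment in plain NumPy (it is not the R code used in the study) and applies the FDR < 0.1 threshold mentioned in the text to a toy set of LPE p-values.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg (FDR) adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity, working backwards from the largest p-value.
    adjusted_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(adjusted_sorted, 0.0, 1.0)
    return adjusted

lpe_pvals = [0.0005, 0.001, 0.02, 0.03, 0.20, 0.45]
adjusted = benjamini_hochberg(lpe_pvals)
print(["%.3f" % a for a in adjusted])
print("kept at FDR < 0.1:", [p for p, a in zip(lpe_pvals, adjusted) if a < 0.1])
```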
Pathway annotation retrieval KEGG Pathway annotations were retrieved using DAVID (18) with a minimum of two annotated proteins, which were significantly differentially expressed (LPE p-value <0.05) in E2 or LXA4/E2-treated cells. The results of this analysis were subsequently utilized in Cytoscape to model the protein-annotation network. Protein network analysis Cytoscape software for network visualization was employed to build a protein-protein interaction network (20,21) using physical interactions, pathway interactions, and genetic regulation interactions from all available sources within GeneMANIA (25). A maximum of 20 proteins were automatically added to our query by GeneMANIA in order to fill potential gaps in our network, using the guilt by association principle. Western Blotting Cells were treated with indicated concentrations of E2, LXA4, and both in combination for 24 h in E2-free medium. After the incubation time had elapsed, cells were rinsed with PBS and harvested in 2× SDS sample buffer [125 mM Tris-HCl (pH 6.8), 4% SDS, 20% glycerol, 100 mM DTT, 0.01% bromophenol blue and protease inhibitors (Sigma-Aldrich, Switzerland)]. Thirty micrograms of total protein were loaded and separated on 10% sodium dodecyl sulfate (SDS)-polyacrylamide gels. Proteins were electrophoretically transferred onto nitrocellulose membrane (Biorad Laboratories, Switzerland). The membrane was blocked overnight at 4°C in a 5% fat-free dry milk solution in TBScontaining 0.05% Tween 20 (TBS-T) and subsequently incubated for 1 h at RT with a rabbit anti CSN5 antibody (Abcam, UK) diluted 1:2,000 in 5% fat-free dry milk solution in TBS-T or with a mouse anti β-actin antibody (Sigma Aldrich, Switzerland) diluted 1:8,000. After washing, the membrane was incubated for 1 h with HRP-conjugated anti-rabbit secondary antibody or anti-mouse secondary antibody at a 1:3000 dilution in 5% non-fat milk in TBS-T. Immunoreactive bands were visualized using chemiluminescence (PerkinElmer, Wellesley, MA, USA) and densitometric analysis was performed using ImageJ software. resUlTs Our LC-MS/MS analyses ( Figure 1A) resulted in the identification of 26,979 peptides (Table S1 in Supplementary Material). Proteins with at least one identified peptide in both replicates were considered for further quantification and analysis, culminating in 3,232 quantified proteins in Figure 1A. A representative spectrum of an iTRAQ quantitative analysis of CSN5 is represented in Figure 1B. CSN5 is a subunit of the COP9 signalosome implicated in diverse biological processes, such as signal transduction, development, and the cell cycle (26). CSN5 exhibited a log fold change of 0.3 between treatments versus control. To benchmark our quantitative approach, we carried out Western blotting for CSN5 ( Figure 1D). The Western blot and MS data show a similar pattern (Figures 1C,E) and are highly correlated with a R 2 linear regression value of 0.92. Densitometric analysis revealed that CSN5 levels were significantly increased in LXA4E2 conditions versus control (p < 0.05). The variability of both replicates in each condition was checked using the logarithm of geometric mean intensity compared to the logarithm of the intensity ratio of the two replicates ( Figure S1 in Supplementary Material). Approximately 95% of the protein log2 ratio values were between −0.5 and 0.5 in each condition, indicating a low variability between the two replicates. 
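Conceptually, the DAVID enrichment analyses described above (GO and KEGG terms with at least two annotated proteins and p < 0.05) reduce to a hypergeometric test per annotation term. A minimal sketch of that test is given below; the background size of 3,232 quantified proteins comes from the text above, while the query size, term size, and overlap are chosen purely for illustration.

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_background, n_term, n_query, n_overlap):
    """P(overlap >= observed) for a term annotating n_term of n_background proteins,
    given a query list of n_query proteins with n_overlap annotated members."""
    return hypergeom.sf(n_overlap - 1, n_background, n_term, n_query)

# 3,232 quantified proteins as background; term size, query size, and overlap are illustrative.
p = enrichment_pvalue(n_background=3232, n_term=120, n_query=348, n_overlap=25)
print(f"enrichment p-value = {p:.3g}")
```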
Using the LPE test, 348 significantly differentially expressed proteins were identified for at least one treatment, with a p-value <0.05, while a high quality subset of 27 proteins was identified with a FDR-adjusted p-value <0.1, after correction for multiple testing with the BH method. The volcano plot (LPE p-value versus log fold change) representation of E2 versus control (Figure 2A), LXA4 versus control (Figure 2B), and LXA4 in combination with E2 ( Figure 2C) provides a general overview of significantly differentially expressed proteins. Many significantly differentially expressed proteins, which passed the LPE test, had a low log2 ratio of approximately 0.3. Interestingly, our data suggest that the combination of both treatments exert a suppressive effect on proteins impacted by E2 alone (Figure 2D). The proteins modulated by both LXA4 and E2 in combination showed only an overlap of 59 proteins, and 148 proteins that were impacted by E2 alone were no longer significantly impacted in cells treated with the combination. The quantitative information for each significantly differentially expressed protein is shown in Table S2 in Supplementary Material. The general GO terms for biological processes were retrieved with DAVID (18) for each significantly differentially expressed protein in each treatment condition versus control ( Figure 3A, Table S3 in Supplementary Material). Proteins with GO terms linked to cell proliferation, cell death, growth, cell adhesion, and immune processes, as the most relevant to endometriosis pathology, were identified. This analysis demonstrates an antagonist effect of LXA4 on E2 signaling at the protein level ( Figure 3A). Indeed, LXA4 in combination with E2 modulates the cellular macromolecular complex disassembly process, suggesting an inhibition in the growth and proliferation of endometriotic epithelial cells. LXA4 alone affects proteins annotated with a single biological process, the humoral immune response, which is perhaps unsurprising as only 25 proteins in total were detected ( Figure 2D). A subsequent quantitative analysis of the 348 protein subset annotated by these GO terms (80 proteins) was performed. The Z-score of log ratio of each treatment versus control in 12Z cells is depicted in heat maps using hierarchical clustering for differentially expressed proteins ( Figure 3B). Using Cytoscape and GO biological process annotations, a protein-annotation network (in Figure 3C) was modeled, linking our quantitative analysis with protein function. Next, we wished to delineate the contextual relevance using the most recently developed protein-protein interaction bioinformatics resources. The p-values of the LPE test as a function of the FDR are presented in Figure 4A. The 27 proteins that fell in the pink area on the graph were selected for network analysis. It can be seen from Figure 4A that a greater number of significantly differentially expressed proteins with a controlled FDR were detected in cells stimulated with LXA4E2 versus control and in cells stimulated with E2 versus control. In contrast, a smaller number of significantly differentially expressed proteins with a controlled FDR were detected in cells stimulated with LXA4 versus control. Accordingly, the FDR decreases in a steeper manner for cells stimulated with LXA4 versus control. 
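The heat-map construction mentioned above (per-protein Z-scores of the log ratios, then hierarchical clustering) can be reproduced along the following lines; the small matrix is a stand-in for the real quantification table, and average linkage on Euclidean distance is an assumption rather than the study's documented choice.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in log2(treatment/control) values: rows = proteins, columns = treatments.
treatments = ["E2", "LXA4", "LXA4E2"]
log_ratios = np.array([
    [0.30, 0.10, 0.05],
    [0.25, 0.05, 0.02],
    [-0.20, -0.05, -0.25],
    [0.05, 0.30, 0.35],
])

# Z-score each protein across treatments, then cluster the proteins hierarchically.
z = (log_ratios - log_ratios.mean(axis=1, keepdims=True)) / log_ratios.std(axis=1, keepdims=True)
tree = linkage(z, method="average", metric="euclidean")
print(fcluster(tree, t=2, criterion="maxclust"))  # cluster label per protein
```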
The proteinprotein interaction network of these 27 significantly differentially expressed proteins with a controlled FDR <0.1, listed in Table 1 ( Figures 4A,B) was generated using GeneMANIA 4 and interactome data from physical interactions, pathway interactions, and genetic interactions (Figure 4C). The functions of these proteins, as retrieved from Uniprot, are detailed in Table 2. Again, the log2 ratio of each treatment divided by control for differentially expressed proteins is depicted in heat maps using hierarchical clustering (Figure 4B). These 27 proteins interact together to a high degree suggesting their implication in closely related biological processes. Using GeneMANIA, based on the guilt-by-association principle, we completed our query network with 20 interaction partner proteins, which were not detected by the mass spectrometer due to limitations in dynamic range detection but intricately associated with our high quality protein set. ERβ (ESR2) and 17-beta Hydroxysteroid dehydrogenase 10 (17HSDb10), implicated in estrogen metabolism, are among these interaction partner proteins depicted by gray circles in Figure 4C. Basal ERα and ERβ expression by 12Z cells was confirmed by Western blotting prior to performing experiments (data not shown), indicating that these are estrogen-responsive cells. In this high quality protein subset, the antagonist effect of LXA4 on E2 signaling was also observed. LXA4 in combination with E2 decreased the expression of GORASP2, NANS, CRYZ, PDXK, LXN, TYMS, RFC5, DDX55, MRPS25, and CHMP2B compared to LXA4 or E2 alone. One of the most impacted proteins was Golgi reassembly stacking protein 2 (GORASP2), implicated in Golgi fragmentation and the subsequent entry into mitosis (27) and therefore cell-cycle progression. Similarly, Sialic acid synthase (NANS), which functions in sialic acid biosynthetic pathways (28) and Replication factor C subunit 5 (RFC5), implicated in proliferation, were less up-regulated by LXA4E2 than by either treatment alone, suggesting that treating 12Z cells with E2 and LXA4 in combination resulted in a decrease of cellular metabolic processes and proliferation. Lipoxin A4 and E2 in combination increased PSMD4, AGFG1, KPNA3, PSMB4, RALY, LAMP1, KRT1, BOP1, and NUTF2 compared to LXA4 or E2 alone. LXA4 increased the expression of Keratin, type II cytoskeletal 1 (KRT1), which could however also be a contaminant in the lysates. The 26S proteasome non-ATPase regulatory subunit 4 (PSMD4) is involved in antigen processing and presentation of exogenous peptide antigen via MHC class I. With the objective of confirming our observation that LXA4 antagonized E2-mediated effects, we compared 12Z cells treated by E2 and LXA4 in combination (LXA4E2) versus E2 alone. Employing the LPE test, the corresponding volcano plot (Figure 5A) was generated. 146 proteins were significantly differentially expressed in cells treated with the LXA4E2 combination versus cells treated with E2 alone (Table S4 in Supplementary Material). Among these proteins, 64 were significantly up-regulated whereas 82 proteins were significantly down-regulated, confirming our hypothesis. There was a 63% overlap between significantly differentially expressed proteins in any of the treatments versus control and LXA4E2 versus E2. Among these 146 proteins (Figure 5B), we found several protein families involved in E2-induced signaling that were impacted by LXA4. The DDX, RPS, and KRT families were among those most affected. 
DEAD box proteins (DDX) are attenuated by the LXA4E2 combination compared to E2 alone. Mitochondrial phosphoenolpyruvate carboxykinase (PCK2) was likewise attenuated, suggesting a decrease in insulin/glucose metabolism and the TCA cycle, as this enzyme mediates the rate-limiting step in the metabolic pathway that produces glucose from lactate and other precursors derived from the citric acid cycle. The alpha subunit of NAD(+)-specific isocitrate dehydrogenase 3 (IDH3A) was also markedly affected by LXA4E2 compared to E2. These enzymes catalyze the oxidative decarboxylation of isocitrate to alpha-ketoglutarate, the rate-limiting step of the TCA cycle (30). In order to study the role of these differentially expressed proteins, we subsequently retrieved all pathway annotations (with a minimum of two proteins annotated) using DAVID (Figure 5C). In summary, several pathways of relevance to endometriosis, such as metabolic pathways, notably pyrimidine, fructose, and mannose metabolism, as well as the cell cycle, oxidative phosphorylation, the proteasome, regulation of autophagy, mTOR signaling, and adipocytokines were affected, demonstrating that LXA4 modulates several cellular processes which are impacted by E2. Discussion Here, we report the first comprehensive proteomic analysis of changes induced by LXA4 and E2 in an endometriotic epithelial cell line. We have performed proteome-wide biomarker and therapeutic target discovery using the most recent bioinformatics tools and databases, coupled with standard statistical analysis. In order to verify our method, we performed Western blots for CSN5/JAB1, a component of the COP9 signalosome, a complex which regulates several cellular and developmental processes (31), and observed that this protein was significantly increased in LXA4E2 conditions versus control. ERα was previously shown to co-immunoprecipitate with CSN5, and overexpression of CSN5 caused an increase in ligand-induced ERα degradation (32), indicating the functional relevance of this observation. There was a high degree of correlation between the Western blot and the MS data, confirming the robustness of our results. Most proteins whose expression was induced by LXA4 were also induced by E2, as would be expected from our previous observations in endometrial epithelial cells, where we characterized this eicosanoid as an estrogen receptor agonist which bound ER, activated canonical ER signaling, and also inhibited E2-mediated responses including gene expression and cellular proliferation (8). As observed in that previous study, where LXA4 exhibited less potent responses than E2, the natural ligand, in the present study fewer proteins are regulated by LXA4 than by E2, and LXA4 also inhibited certain E2-mediated changes in protein expression. Protein functions retrieved from UniProt (Table 2) include the following. KRT1 (KRTA): may regulate the activity of kinases such as PKC and SRC via binding to integrin beta-1 (ITB1) and the receptor of activated protein kinase C (RACK1/GNB2L1); in complex with C1QBP, it is a high-affinity receptor for kininogen-1/HMWK. GORASP2 (GOLPH6): plays a role in the assembly and membrane stacking of the Golgi cisternae. NUTF2 (NTF2): facilitates protein transport into the nucleus; interacts with the nucleoporin p62 and with Ran; acts at a relatively late stage of nuclear protein import. NANS (SAS): produces N-acetylneuraminic acid (Neu5Ac) and 2-keto-3-deoxy-D-glycero-D-galacto-nononic acid (KDN).
NANS can also use N-acetylmannosamine 6-phosphate and mannose 6-phosphate as substrates to generate phosphorylated forms of Neu5Ac and KDN. RFC5: the elongation of primed DNA templates by DNA polymerases delta and epsilon requires the action of the accessory proteins proliferating cell nuclear antigen (PCNA) and activator 1. KPNA3 (QIP2): functions in nuclear protein import as an adapter protein for the nuclear receptor KPNB1; binds specifically and directly to substrates containing either a simple or bipartite NLS motif. Docking of the importin/substrate complex to the nuclear pore complex (NPC) is mediated by KPNB1 through binding to nucleoporin FxFG repeats, and the complex is subsequently translocated through the pore by an energy-requiring, Ran-dependent mechanism. At the nucleoplasmic side of the NPC, Ran binds to importin-beta, the three components separate, and importin-alpha and -beta are re-exported from the nucleus to the cytoplasm, where GTP hydrolysis releases Ran from importin. The directionality of nuclear import is thought to be conferred by an asymmetric distribution of the GTP- and GDP-bound forms of Ran between the cytoplasm and nucleus. In vitro, KPNA3 mediates the nuclear import of human cytomegalovirus UL84 by recognizing a non-classical NLS. PSMB4: the proteasome is a multicatalytic proteinase complex characterized by its ability to cleave peptides with Arg, Phe, Tyr, Leu, and Glu adjacent to the leaving group at neutral or slightly basic pH; the proteasome has an ATP-dependent proteolytic activity and mediates the lipopolysaccharide-induced signal macrophage proteasome (by similarity); the SMAD1/OAZ1/PSMB4 complex mediates the degradation of the CREBBP/EP300 repressor SNIP1. Elevated local E2 levels and increased expression of estrogen-regulated molecules, which promote proliferation, are features of endometriotic lesions (33)(34)(35)(36), and molecules that inhibit estrogen production and/or signaling represent potential therapeutics. We and others have demonstrated that LXA4 and its analogs decrease endometriosis progression in rodent models and may represent future therapies (11,37,38). The latter study reported attenuated expression of estrogen-regulated mediators implicated in cell proliferation in mice with endometriosis treated with LXA4 compared to vehicle-treated controls. It is perhaps unsurprising that LXA4, as a molecule that acts via several known receptors, notably ALX/FPR2 and ERα, also impacts diverse pathways as well as metabolism. For example, mevalonate kinase (MVK), a key early enzyme in isoprenoid and sterol synthesis and therefore in cholesterol production via the generation of squalene (39), exhibited a 25% reduced induction in cells stimulated with E2 and LXA4 in combination compared to cells treated with E2 alone. The mevalonate pathway is a key metabolic pathway, as its blockade has been linked to mitochondrial dysfunction and defective autophagy, possibly causing inflammasome activation and subsequent cell death (40). This antagonistic modulation of E2-driven responses by LXA4, also observed for several other proteins, is consistent with the notion that the regulation of these proteins is mediated via ER, or by another common receptor. The LXA4E2 combination appeared to attenuate insulin/glucose metabolism and the TCA cycle by impacting IDH3A and PCK2; the latter enzyme was previously studied in decidualized stromal endometrial cells (41), but its role in endometriotic epithelial cells remains unclear.
RPS6, another protein detected, is a component of the 40S ribosomal subunit and is therefore thought to be involved in regulating translation. While its precise function is currently under investigation, studies have shown that RPS6 is involved in the regulation of cell size, cell proliferation, and glucose homeostasis (42). RPS6 has been implicated in mTOR signaling in hormone responsive cells (43,44). Of note, E2 has been shown to trigger protein synthesis in mouse uterine epithelial cells via the PKC-ERK1/2-mTOR pathway (45). Furthermore, the PI3K/ mTOR/AKT pathway is activated in endometriosis, and inhibition of this pathway represents a potential therapeutic modality (46,47). Consistent with our previous studies (8), both LXA4 and E2 appear to increase cellular proliferation individually but in combination, this effect appears to be blunted, which may be occurring via crosstalk between the ER and mTOR pathways. Indeed, it is recently appreciated that such crosstalk mechanisms exist in cancer cells (48). Our study also provides novel insights into common intracellular signaling pathways activated by LXA4 and E2, for example those involved in protein synthesis. eIF5B, reported to be the most catalytically active of the translation initiation factors (49), was one of the top 25 proteins regulated. Interestingly, this protein was previously demonstrated to be increased by E2 in MCF-7 estrogen-responsive breast cancer cells via ER (50). Though a cell line cannot reproduce complex tissue conditions, 12Z cells proved useful to optimize the technique, and the data generated will allow the study of potential new therapeutic targets or endometriosis biomarkers. However, proteomic methods are still characterized by some heterogeneity, and further standardization and optimization for clinical use is necessary, especially at the sample preparation level to extract proteins from urine, menstrual blood, or other body fluids (51,52). Although the effects of the different treatments are modest and our experimental design includes only two replicates, which limits our statistical power, we found several relevant processes implicated in endometriosis physiopathology. The proteins identified in this study require further validation in order to be considered as potential therapeutic targets. Emerging fractionation techniques to capture and concentrate low-abundance proteins, new non-gel-based proteomic technologies combined with protein labeling strategies and the ongoing development of advanced bioinformatics tools will hopefully allow the development of relatively non-invasive diagnostic methods for endometriosis. This is a major unmet medical need as it would facilitate earlier diagnosis of this common, chronic disease. aUThOr cOnTriBUTiOns JS: Study design, sample preparation, data analysis, bioinformatics, and manuscript writing. PW: Study design, mass spectrometry expertise, critical discussion, and manuscript corrections. IG: Western blot, cell culture, and manuscript corrections. MQ: study design, mass spectrometry expertise, critical discussion, and manuscript corrections. GC: study conception and design, data analysis, critical discussion, and manuscript writing. acKnOWleDgMenTs We thank Jachen Barblan for advice regarding mass spectrometry, Alexandra Potts for sample preparation supervision, Chiara Pellegrini for assistance with cell culture, Yoima Rodriguez for helping with Western blotting and Frederic Schutz for statistical advice. 
Funding This study was funded by the Roche Research Foundation and by the Swiss National Science Foundation (grant number 310030-120761) (to GC) and by funding from the University of Lausanne to MQ. Supplementary Material The Supplementary Material for this article can be found online at http://journal.frontiersin.org/article/10.3389/fendo.2015.00192
6,339.2
2016-01-06T00:00:00.000
[ "Biology", "Medicine" ]
Comparison of Antimicrobial Treatment Incidence Quantification Based on Detailed Field Data on Animal Level with the Standardized Methodology of the European Medicines Agency in Veal Calves, Switzerland, 2016–2018 Estimation of COVID-19 Dynamics in the Different States of the United States during the First Months of the Pandemic † : Precise quantification of antimicrobial treatment incidence (TI) is crucial for benchmarking. Two widespread methods for treatment incidence quantification were compared for agreement. Field data were obtained from 38 veal farms from 2016 to 2018 (1905 calves, 1864 treatments). Calculation of TI swiss for calves was based on detailed treatment records using pharmacokinetic values from the Swiss Veterinary Medicines Compendium. The method published by the European Medicines Agency was used to calculate TI in defined daily doses (TI DDD ). For each calf and treatment, TI swiss and TI DDD were calculated on level of the antimicrobial class, drug, application route, and farm. The quotient (Q) of TI swiss and TI DDD was calculated. Divergence in results between the two methods of ≤ 25% was arbitrarily set as good agreement. The agreement between TI swiss and TI DDD was mostly good. On class level, good agreement was observed for treatments representing 71.5% of the TI DDD, and 74.5% of the total TI DDD on drug level. Poor agreement was mainly observed for tylosin and sulfadimidine. The agreement was better for parenteral than for oral treatments (81.6% vs. 72.3%). For practically orientated calculation on farm level, good agreement was observed (77.5% of the TI DDD ). The TI DDD method showed mostly good agreement, especially for parenteral treatments. Abstract: Estimation of COVID-19 dynamics and its evolution is a multidisciplinary effort, which requires the unification of heterogeneous disciplines (scientific, mathematics, epidemiological, biological/bio-chemical, virologists and health disciplines to mention the most relevant) to work together towards a better understanding of this pandemic. Time series analysis is of great importance to determine both the similarity in the behavior of COVID-19 in certain countries/states and the establishment of models that can analyze and predict the transmission process of this infectious disease. In this contribution, an analysis of the different states of the United States will be carried out to measure the similarity of COVID-19 time series, using dynamic time warping distance (DTW) as a distance metric. A parametric methodology is proposed to jointly analyze infected and deceased persons. This metric allows comparison of time series that have a different time length, making it very appropriate for studying the United States, since the virus did not spread simultaneously in all the states/provinces. After a measure of the similarity between the time series of the states of United States was determined, a hierarchical cluster was created, which makes it possible to analyze the behavioral relationships of the pandemic between different states and to discover interesting patterns and correlations in the underlying data of COVID-19 in the United States. With the proposed methodology, nine different clusters were obtained, showing a different behavior in the eastern zone and western zone of the United States. Finally, to make a prediction of the evolution of COVID-19 in the states, Logistic, Gompertz and SIR models were computed. 
With these mathematical models, it is possible to gain a more precise knowledge of the evolution and forecast of the pandemic. Introduction Antimicrobial resistance (AMR) is likely to become an obstacle to effective treatment, as antimicrobials may become ineffective for disease control and treatment costs may rise considerably [1]. There is a large body of evidence showing that antimicrobial use (AMU) is a major driver for the development of AMR through the selection of resistant strains, which emphasizes the need to monitor and reduce AMU. Different methods are used to quantify and compare AMU, with the aim of developing strategies to monitor drug use and to identify inappropriate use. Sales figures of antimicrobial therapeutic products have been published for various countries for more than a decade [3][4][5][6]. However, a limitation in the interpretation of sales figures is that antimicrobials cannot be attributed to individual animal species or production types. Introduction The COVID-19 epidemic started in Hubei Province, China, around December 2019. Since then, the disease has spread to all continents and countries of the world, being categorized as a pandemic by the World Health Organization on 11 March 2020. In recent months, contributions have been made that analyze the evolution of the pandemic in different countries, implementing mathematical models to predict its evolution. Traditional predictive models for infectious diseases mainly include models based on differential equations and time-series models based on statistics and random processes. For example, in [1] a methodology with the aim of estimating the actual number of people infected with COVID-19 in France is presented, since, according to the authors, the number of screening tests carried out does not directly yield the actual number of cases or the infection fatality rate (IFR). A mechanistic-statistical approach was developed that combines an epidemiological SIR model describing these unobserved epidemiological dynamics, a probabilistic model describing the data collection process, and a method of statistical inference. The logistic growth model, the generalized logistic growth model, the generalized growth model, and the generalized Richards model were used to model the number of infected cases in the 29 provinces of China (and several countries), with a detailed analysis of the heterogeneous situations across four phases of the outbreak in China [2]. In [3] the Kermack-McKendrick SEIR model (Susceptible, Exposed, Infectious and Recovered) is presented to analyze the effects of behavioral changes on the reduction in community transmission in Mexico. A variable contact rate over time is proposed, and the consequences of disease spread in an affected population under suspension of non-essential activities are analyzed. The behavior of the virus in Japan has also been analyzed [4]. By 29 February 2020, in addition to the 619 confirmed cases (passengers and crew members) infected with COVID-19 on a cruise ship (near Tokyo), 215 locally transmitted cases had also been confirmed in Japan. To evaluate the effectiveness of reaction strategies based on avoiding large accumulations or crowded areas and to predict the spread of COVID-19 infections in Japan, in [4] a stochastic transmission model, produced by expanding the SIR (Susceptible-Infected-Removed) epidemiological model, was presented. The simulation results showed that the number of Infected and Removed patients will increase rapidly if there is no reduction in the time spent in crowded zones.
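Since several of the works cited above rely on the SIR family of compartmental models, a minimal numerical integration of the classic SIR equations is sketched here; the population size and the rates β and γ are purely illustrative and are not taken from any of the cited studies.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR model: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    S, I, R = y
    N = S + I + R
    new_infections = beta * S * I / N
    return [-new_infections, new_infections - gamma * I, gamma * I]

N0 = 1_000_000            # population size (illustrative)
y0 = [N0 - 10, 10, 0]     # initial susceptible, infected, removed
beta, gamma = 0.35, 0.10  # illustrative transmission and removal rates (R0 = 3.5)

sol = solve_ivp(sir, (0, 180), y0, args=(beta, gamma), dense_output=True)
days = np.linspace(0, 180, 7)
S, I, R = sol.sol(days)
print(list(zip(days.astype(int), I.astype(int))))  # (day, currently infected)
```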
In [5], using the Metropolis-Hastings (MH) parameter estimation method and the SEIR model, the spread of COVID-19 and its prediction in South Africa, Egypt, Nigeria, Senegal, Kenya, and Algeria under three intervention scenarios (suppression, mitigation, mildness) are presented. In addition to the most relevant epidemiological models used in the literature, models based on time series have also been used to analyze the behavior of the pandemic in different countries. The autoregressive integrated moving average (ARIMA) model is a mathematical model widely studied in the context of time series that has been successfully applied in the field of health (to estimate the incidence and prevalence of influenza mortality, malaria incidence, hepatitis, and other infectious diseases) as well as in other fields in the past, due to its simple structure, fast applicability and ability to explain the data set. In [6], ten Brazilian states are analyzed using the autoregressive integrated moving average (ARIMA), cubist regression (CUBIST), random forest (RF), ridge regression (RIDGE), support vector regression (SVR) and stacking-ensemble learning for time series forecasting of the number of patients infected with COVID-19 one, three, and six days ahead. A forecasting model based on ARIMA has also been presented in [7] for Pakistan, showing the high exponential growth in the number of confirmed cases, deaths and recoveries. In [8], ARIMA time series models were applied to forecast the total confirmed cases of COVID-19 for the next ten days, using the models ARIMA (0,2,1), ARIMA (1,2,0) and ARIMA (0,2,1) for Italy, Spain, and France, respectively. Currently, the analysis of the evolution of COVID-19 in America is of great importance due to the impact of this epidemic on the continent. In this contribution we will focus on the United States. The first patient detected in the United States was a travel-associated case from Washington state on 19 January 2020. The preponderance of initial COVID-19 cases in the United States was associated with travel to a "high-risk" country or with close contact with previously identified cases, in line with the testing criteria adopted by the Centers for Disease Control and Prevention (CDC) (https://www.cdc.gov/, accessed March 2020). From 1 to 31 March 2020, the number of reported COVID-19 cases in the United States rapidly increased from 30 to 188,172, the number of deaths rising from 1 to 5531, with the virus being detected in all the states. At the end of April, the number of infected reached 1,069,424 and the number of deceased stood at 62,996. At the time of writing this contribution (14 June 2020), the number of infected is more than 2 × 10^6 and the number of deaths is more than 100,000, the United States being one of the countries suffering the most from COVID-19. In a recent paper [9], an attempt was made to estimate the actual number of infected people, even if they have not been counted. It was estimated that the true number of COVID-19 cases in the United States is likely in the tens of thousands, suggesting substantial undetected infections and spread within the country [10]. This contribution presents a methodology to analyze the evolution patterns of COVID-19 in the states of the United States (including Puerto Rico and the District of Columbia).
A parametric similarity measure is presented, based on a robust distance measure between time series, the dynamic time warping distance (DTW), with which the number of infected and dead in each of the states can be compared simultaneously, even though the epidemic started on different dates in each zone (therefore, the time series that need to be compared have different lengths). To the best of our knowledge, this contribution is the first study that tries to develop a hierarchical clustering time series algorithm in order to globally compare and classify the behavior of all the states of the United States simultaneously in their evolution of infected and deceased patients suffering COVID-19. Carrying out this classification is very useful, since it allows the establishment of similarities and patterns in the evolution of the pandemic among the states of the United States. Material and Methods A time series is a sequence of numerical (temporal) data points in successive order, which is naturally high dimensional and large in data size. There are two main operations that can be performed when working with time-series data: (a) the analysis of a single time series; (b) the analysis of multiple time series simultaneously. This contribution concentrates on the analysis of multiple time series for all the states of the US suffering COVID-19, with the purpose of finding similarities between multiple time series by performing a time-series clustering methodology. Clustering such complex objects is particularly advantageous because it may lead to the discovery of interesting patterns in time-series datasets, which contributes to a better understanding of the COVID-19 spread in different regions of the United States. Clustering of time-series sequences has received noteworthy attention [11,12], not only as a formidable exploratory method and powerful tool for discovering patterns, but also as a pre-processing step or subroutine for other tasks [13]. In this section, the database used is presented first (Section 2.1). Subsequently, a review of the most popular distance measures for time series is given (Section 2.2) and a new parametric distance is proposed. Data Set The COVID-19 epidemic data set used in this contribution was collected from Johns Hopkins University [14]. In this platform, the numbers of confirmed, death and recovered cases until 14 June 2020 for different countries are presented. For the United States, two additional .csv files are provided, in which detail at the administration and province/state level is reported (including Puerto Rico and the District of Columbia). In order to compare the behavior of the different states, the time-series data are divided by state population. Similarity/Distance Measure in Time Series In a simplified way, the similarity of two simple time series with the same number of points (denoted by m), defined by X = {x_1, x_2, ..., x_m} and Y = {y_1, y_2, ..., y_m}, can be assessed by simply calculating the Minkowski (or Euclidean) distance (shortest path between two points) between points on both time series that happen at the same time. This distance is the measure of similarity, denoted as d(X, Y), and it is a function that takes both time series (X, Y) as input and calculates their distance "d", defined as d(X, Y) = (Σ_{i=1}^{m} |x_i - y_i|^k)^{1/k}; when k = 2, the distance between two series is called the Euclidean distance.
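As a point of reference for the discussion that follows, a direct implementation of this point-to-point Minkowski distance could look like the sketch below (it assumes the two series are already synchronized and of equal length m, exactly the situation in which the metric is adequate):

#include <cmath>
#include <vector>

// Minkowski distance between two equally long, synchronized series.
// k = 2 gives the Euclidean distance used as the baseline in the text.
double minkowski(const std::vector<double>& x,
                 const std::vector<double>& y, double k) {
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        sum += std::pow(std::fabs(x[i] - y[i]), k);
    return std::pow(sum, 1.0 / k);
}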
The Minkowski distance is a good metric to analyze the similarity of two time series if these time series are synchronized (that is, all similar events in both time series occur at exactly the same time) and have the same length. The time series of the different states of the United States have different start dates, both for the number of confirmed and death cases, and therefore their lengths also differ. Consider, as an analogy, the time series of the sound of a mother's voice when she speaks slowly to her child. If the mother says the same phrase quickly, the child will most likely still recognize his mother's voice. However, if the Euclidean distance between both series were used as a metric, these two time series would have a very low similarity and would not be considered fundamentally equal. This would lead to the conclusion that the two voices did not come from the same person. To solve this problem, the dynamic time warping distance (DTW) method is frequently used in the bibliography [15]. DTW is a technique that can be considered an extension of the Euclidean distance between series [16]; it calculates an optimal match between two given time series under certain restrictions, warping the series non-linearly (by stretching or shrinking them along the time axis). This distortion (denoted as warping) between two time series is used to find corresponding regions and determine the similarity between them. The DTW of two series X and Y, defined as X = {x_1, x_2, ..., x_n} and Y = {y_1, y_2, ..., y_m}, is computed in the following way. An n-by-m matrix D is computed, whose (i, j)th element defines the local distance between two elements: D(i, j) = |x_i - y_j|. The point-to-point alignment between series X and Y can be represented by a time warping path W, defined as W = {W(1), W(2), ..., W(p)} with W(k) = (w_x(k), w_y(k)), where p is the length of the warping path W, and w_x(k) and w_y(k) represent the indexes in time series X and Y, respectively. The element W(k) = (w_x(k), w_y(k)) indicates that the w_x(k)th element in time series X maps to the w_y(k)th element in time series Y. There are some constraints and rules for the construction of the warping path: • Every index from the first time series must be matched with one or more indices from the other time series (and vice versa). • The first (and, likewise, the last) index from the first time series must be matched (not necessarily exclusively) with the first (last) index from the other time series. That is, the warping path should start at W(1) = (1, 1) and end at W(p) = (n, m). • The mapping of the indices from the first time series to indices from the other time series must be monotonically increasing, and vice versa. The adjacent elements of path W, W(k) and W(k + 1), must satisfy w_x(k + 1) - w_x(k) ≥ 0 and w_y(k + 1) - w_y(k) ≥ 0. • The warping path should also have the property of continuity, mathematically expressed as: the adjacent elements of path W, W(k) and W(k + 1), must satisfy w_x(k + 1) - w_x(k) ≤ 1 and w_y(k + 1) - w_y(k) ≤ 1. The optimal match is the one that satisfies all the restrictions and rules and has the minimal cost, where the cost is computed as the sum of absolute differences, for each matched pair of indices, between their values. The DTW (minimal distance and optimal warping path) can be found using a dynamic programming algorithm: RD(i, j) = D(i, j) + min{RD(i - 1, j), RD(i, j - 1), RD(i - 1, j - 1)}, where RD(i, j) is the minimal cumulative distance from (0, 0) to (i, j) in matrix D, and DTW(X, Y) = RD(n, m).
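A compact dynamic-programming implementation of this recursion is sketched below (illustrative only: it uses the absolute difference as the local distance D(i, j) and returns the cumulative distance RD(n, m); practical DTW libraries usually add window or slope constraints that are omitted here):

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// DTW distance between series x (length n) and y (length m),
// filling the cumulative-distance matrix RD row by row.
double dtw(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size(), m = y.size();
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<std::vector<double>> RD(n + 1, std::vector<double>(m + 1, INF));
    RD[0][0] = 0.0;
    for (std::size_t i = 1; i <= n; ++i) {
        for (std::size_t j = 1; j <= m; ++j) {
            const double d = std::fabs(x[i - 1] - y[j - 1]);   // local distance D(i, j)
            RD[i][j] = d + std::min({RD[i - 1][j],             // insertion
                                     RD[i][j - 1],             // deletion
                                     RD[i - 1][j - 1]});       // match
        }
    }
    return RD[n][m];
}

With this building block, the parametric metric DTW_α introduced next reduces to a weighted combination of two such distances, one computed on the confirmed-case series and one on the death series.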
In the methodology proposed in this paper, for each of the states analyzed, both the time series of the number of infected and the time series of deaths will be simultaneously taken into account. Since each of these time series may need to be weighted differently, the following parametric metric, DTW_α(S_A, S_B), is defined: DTW_α(S_A, S_B) = α · DTW(TSC_A, TSC_B) + (1 - α) · DTW(TSD_A, TSD_B), which measures the similarity in the evolution of the COVID-19 time series for two states of the United States (S_A and S_B); TSC_A and TSC_B represent the time series of the number of infected, and TSD_A and TSD_B represent the time series of the number of deaths for the states S_A and S_B, respectively. The parameter α (with 0 ≤ α ≤ 1) indicates the relative relevance given to the similarity measure computed on the time series of infected or deaths. Clustering Method for Time Series Clustering is a data mining technique in which similar data are divided into related or homogeneous groups in an unsupervised way, that is, without a priori advanced knowledge of the data. For the problem presented in this contribution, working with time series of states of the United States suffering COVID-19, given a set of individual time series data, the objective is to group similar time series into the same cluster. The problem of grouping time series data is formally defined as follows: given a dataset of N time series Q = {X_1, X_2, ..., X_N}, find in an unsupervised way a partition of Q into K clusters, denoted as C = {C_1, C_2, ..., C_K}, such that homogeneous or similar series are grouped together based on a certain similarity/distance measure. In this paper, the parametric metric DTW_α(S_A, S_B) is used and there is no intersection between clusters; therefore, C_i ∩ C_j = ∅ for i ≠ j, and C_1 ∪ C_2 ∪ ... ∪ C_K = Q. The methods used in the area of time series clustering [11,17] are usually based on a conventional clustering algorithm, either substituting standard distance measurements with a more suitable distance to compare time series (raw methods) or converting the series into conventional data and directly using classical algorithms (feature-based methods and models). Among the most popular clustering algorithms, hierarchical clustering and the k-means algorithm are widely used in time series clustering. In this contribution hierarchical clustering is used, mainly due to its great visualization power and its simple and intuitive interpretation. Hierarchical clustering creates a nested hierarchy of similar time series, according to a pair-wise distance matrix of the time series analyzed. The similarity measure DTW_α(S_A, S_B) used is therefore essential in this time-series clustering process. The most widely used linkage criteria, such as the single, average and complete linkage variants [18], were analyzed. Hierarchical clustering can be converted into a partitional clustering, with k clusters, by cutting the first k links. Results and Discussion To evaluate the performance of the proposed method, several experiments are conducted in this section for three values of the parameter α in the distance metric DTW_α(S_A, S_B). The time series of the states of the United States were taken from the Johns Hopkins database. Data range from 22 January 2020 to 14 June 2020, covering all the states (including Puerto Rico and the District of Columbia). For the computation of the distance metric, a threshold I_min was defined as the minimum number of infected people required to start the time series; in this study, I_min = 5 (the number of confirmed cases had to be greater than 5).
Therefore, the length of the time series is different for each state, being on average 101 days (Figure 1). The index of each of the states is presented in Table 1. Table 1. Distribution of the states obtained by means of hierarchical clustering with 9 clusters (α = 0.5 and, in bold, the state for which the SIR model is calculated). D_Cluster is the distance between the elements that make up a cluster (its value is zero in the case that there is only one element in a cluster). Cluster Number (C_N) | D_Cluster | States (α = 0.5). The values of the parameter α analyzed are {0, 0.5, 1}. This section of results begins with the value α = 0.5, that is, the information of the confirmed-patient time series is given the same relevance as the time series of deaths in the final computation of the distance DTW_α(S_A, S_B). The distance matrix between the different states is presented in Figure 2 (for a better visual representation, the distance matrix was multiplied by a constant and the states are ordered according to the cluster to which each one belongs). It is important to highlight the existence of various clusters with only one state (corresponding to the District of Columbia, New Jersey and Rhode Island). Cluster 7 (New Jersey, listed as 48 in Table 1) links directly to cluster 8 ((49) Connecticut, (50) Massachusetts, (51) New York), which denotes similar behavior between these states. For cluster 6 (District of Columbia, number 47), the linkage is performed with both cluster 3 ((32) Michigan, (33) Pennsylvania) and cluster 4 ((34) Delaware, (35) Illinois, (36) Louisiana, (37) Maryland). There are two large clusters (cluster 2 and cluster 5) that contain 29 and 9 states, respectively. Their linkage is performed through cluster 1, which contains only two states (Nebraska and South Dakota). The similarities and distances between the different states and clusters obtained can be analyzed using the results presented in the hierarchical clustering (Figure 3) and the distance matrix (Figure 2). Figure 4 presents a geographical representation of all the states according to the cluster grouping obtained in Table 1, using the similarity between the clusters given by the dendrogram in Figure 3. The hierarchical clustering previously presented was obtained taking into account the time series of the number of confirmed and death cases simultaneously (α = 0.5). Conclusions A powerful tool for the analysis of time series is grouping through clustering. Clustering time series is usually an unsupervised process, with the aim of finding behavioral similarities between the different time series that are analyzed. This article proposed a parametric metric, based on the dynamic time warping distance, in order to measure the distance or similarity between time series corresponding to different states of the United States, taking into account the behavior of the number of COVID-19 confirmed cases and of persons deceased due to COVID-19 simultaneously. The proposed parametric metric, named DTW_α(S_A, S_B), is robust to the different lengths of the data sequences (different beginning of the epidemic in the different states of the United States). - Using the Calinski-Harabasz criterion, the optimal number of clusters in which the different states of the United States can be grouped was obtained, taking a value of α = 0.5 (same relevance for the time series of confirmed and death patients). A total of nine heterogeneous clusters were found, in the sense that there are clusters with a large number of states (there are two large clusters, which encompass 29 and 9 states) and other clusters with only one state (indicating that their behavior was unique, as they do not have excessive similarities with the rest of the states). - With the proposed hierarchical clustering procedure, it is possible to identify and summarize interesting patterns and correlations in the underlying data of the time series of the states of the United States suffering COVID-19 and therefore determine similar behaviors that different states may have. Informed Consent Statement: Not applicable.
7,197.4
2021-07-13T00:00:00.000
[ "Economics" ]
Interfacing Arduino Boards with Optical Sensor Arrays: Overview and Realization of an Accurate Solar Compass In this paper, an overview of the potentiality of Arduino boards is presented, together with a description of the Arduino interfacing with light multi-sensors. These sensors can be arranged in linear arrays or in a matrix configuration (CCD or CMOS type cameras) and are equipped with tens, hundreds, or even thousands of elements whose sizes range from a few microns to tens of microns. The use of these sensors requires electronics that have high time accuracy, since they work through regular pulses sent by an external source and, furthermore, have the ability to digitize and store voltage signals precisely and quickly. We show that, with the appropriate settings, a simple Arduino board can handle both 1D and 2D optical sensors. Finally, we describe a solar compass made with such a board coupled to one of the tested optical array sensors that is capable of providing the north direction with a very high degree of accuracy. Introduction The hardware platform called 'Arduino' has become a standard for use among electronics amateurs as well as in research laboratories.The reasons for its success are its ease of use, the versatility of its applications, the countless peripherals that can be connected to it, and the copious literature consisting of algorithms, libraries, forums, and help of all kinds that can be found on the Internet [1], together with scientific papers [2][3][4] and books [5,6]. Nowadays, many research laboratories exploit the Arduino electronics, but in most cases, it is used as an analog-to-digital converter and storage device [7], as a temperature datalogger [8], as a servo-mechanism controller [9], as a brightness and contrast regulator [10], as a triggering system [11], as an RGB sensor controller [12], and so on.In contrast, information on the interface between Arduino and an optical sensor array is difficult to find in the scientific literature.Some examples involve an apparatus including 32 light-dependent resistors [13] or the use of the 128-pixel TSL1401CL array [14,15]. In this paper, we briefly introduce the board hardware and some applications in a research laboratory, and then we focus on the use of the board in interfacing with 1D-and 2D-array light sensors.The optical sensors considered here are standard silicon sensors with a spectral response ranging from infrared to approximately 300 nm.However, some important advances in this field should be taken into account when an electro-optical device has to be designed, especially for UV detection [16,17]. 
In our laboratory, the main goal was the possibility of using a photosensor array to measure the viewing angle of a light source from a specific observation point with extreme accuracy, as in the case of the solar compass that ENEA designed and patented [18,19]. In the case of the solar compass, the light sensor was initially made up of a webcam managed by a laptop PC. Then, this was replaced, respectively, by a matrix sensor and a microcontroller communicating via a serial protocol. The time required to capture a frame to measure the viewing angle of the sun and download and analyze it is about 10 s. This fact led us to search for possible alternatives, both for the sensor and the electronics, possibly also improving other aspects of the device, such as the use of a graphic display or communication with a mobile phone via Bluetooth. Moreover, due to a specific request to design a new compass prototype, we found that some components are no longer commercially available; therefore, we searched for other sensors and electronic boards. Our choice fell on an Arduino board and linear photosensors. The Arduino Board The main features of Arduino boards are described in numerous books and articles [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. Figure 1 shows a photo of an Arduino UNO board, where the various elements that make it up are indicated. It is possible to program the board microcontroller through the classic connection to an external programmer, as happens with a common PIC (Peripheral Interface Controller). More simply, this can also be performed through the dedicated development environment, which, once installed on the PC, allows us to edit a C/C++ program, compile it, and send it directly to the board via the USB serial port. Looking at the number of digital (input/output) and analogue ports (input only, except for the DUE board, where there are also analogue output ports), it is evident that applications in which it is necessary to connect many sensors or actuators do not represent a problem.
In the Laboratory of Plasma Applications and Interdisciplinary Experiments at the ENEA Research Center in Frascati, several experiments made use of Arduino boards, as in [8], where UV light irradiation was carried out on samples to test their resistance to solar radiation degradation outside the atmosphere. In that case, an Arduino NANO board was interfaced both with some light sensors (photoresistors) to monitor the intensity of the light as a function of time and with a probe to control the temperature of the sample and possibly turn off the light if the temperature exceeded the alert value. On another occasion, an Arduino UNO board was used to drive linear actuators to simulate the movements of a structure. This is suitable for verifying the behavior of optical fiber sensors under mechanical stress. After a positive final test, the box containing the power supplies, the Arduino, and the control console was delivered to an ENEA group that used it within a national project [20] aimed at an experimental study of a seismic isolation system for structures equipped with and monitored by fiber optic sensors. Finally, an Arduino board was used for characterization measurements of a UV-C LED sanitization system. The aim was to measure the angular distribution of the light energy emitted by an LED. For this purpose, we arranged a setup where the LED was rotating around a vertical axis lying on the LED emitting plane, while a photodiode was fixed. An Arduino NANO board was equipped with a motor driver, allowing a stepper motor to rotate the LED, while the photodiode output was connected, via a series resistor, to an analog input on the same board. The Arduino program moved the motor by a certain angle and, at the same time, measured the voltage from the photodiode, recording both the angle and the light intensity, in such a way as to reconstruct the angular distribution of the LED.
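A stripped-down version of that angular-scan program could look like the following sketch, which uses the standard Arduino Stepper library; the pin assignments, steps per revolution and settling delay are placeholders and not the values of the actual ENEA setup.

#include <Stepper.h>

const int stepsPerRevolution = 200;               // placeholder motor resolution
Stepper motor(stepsPerRevolution, 8, 9, 10, 11);  // driver pins are assumptions
const int photodiodePin = A0;                     // photodiode voltage across a series resistor

void setup() {
    Serial.begin(9600);
    motor.setSpeed(30);                           // rotation speed in rpm
}

void loop() {
    for (int step = 0; step < stepsPerRevolution; ++step) {
        motor.step(1);                                   // rotate the LED by one step
        delay(50);                                       // let the reading settle
        int intensity = analogRead(photodiodePin);       // light intensity at this angle
        float angleDeg = 360.0 * step / stepsPerRevolution;
        Serial.print(angleDeg);
        Serial.print("\t");
        Serial.println(intensity);                       // angle/intensity pair for the profile
    }
    while (true) {}                                      // one full scan, then stop
}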
The advantages of using a programmable board equipped with several interfaces and to which a large number of sensors can be connected are evident.The following paragraphs illustrate the feasibility of using these boards to also drive light sensors made up of hundreds or thousands of sensors, arranged in linear arrays or matrices.The use of these sensors is a more challenging task, because for their correct functioning, it is necessary to provide a clock signal of tens of kHz or more, and furthermore, the amount of data generated by these sensors can exceed the memory size of the processors themselves. Light Sensors A very simple light sensor, the photoresistor, whose electrical resistance varies as a function of the light power that excites it, was used during the experiment described in [8].When this sensor is inserted into an electrical circuit, the light intensity is measured by checking the voltage drop across the resistor pins. Other common light sensors are photodiodes, made up of a semiconductor junction, in which the impinging light allows, through the voltage-powered junction, the passage of current, which flows through a series-connected resistor, providing the voltage signal at its ends.Compared to the photoresistor, the operating principle is different, but from a measurement point of view, it is very similar.The situation is unlike that of the linear array or matrix sensors.In this case, there are hundreds of photodiodes aligned along a row or thousands of cells arranged in a grid.These sensors are internally equipped with an electrical circuit, which 'downloads' the signals coming from the individual microdetectors and sends them sequentially to the external processor.As anticipated in the Introduction, the need to use a sensor of this type arose during the development of an electronic solar compass in order to measure, with extreme precision, the arrival direction of the sun's rays with respect to a vertical reference plane and then to determine the true north direction.Consequently, an investigation was conducted to verify which sensors were available on the market and which of these could be compatible with the Arduino hardware. The linear sensors taken into consideration were the following: -TSL1401CL by AMS; -LF1401 by IC HAUS; -S9226 by Hamamatsu; -ILX554A from Sony; -TCD1304DG from Toshiba. 
The option of using a 2D-array was also explored, in particular, the Omnivision OV7670 CMOS (Complementary Metal-Oxide Semiconductor) camera.Table 1 shows the main characteristics of the various sensors examined.It is clear that, apart from the AMS and IC Haus sensors, which are practically identical, their characteristics vary a lot from each other, both in terms of geometry, in particular, the size of the pixels, and in terms of the clock speed for the operation of the internal circuit of the chip that drives the photodiodes.The cost of these sensors is quite low, and some of them can be found in the catalogs of large electronics vendors on the Internet.The request for a high spatial resolution leads to the choice of Sony or Toshiba chips, while the need to use a low clock time while maintaining the execution of a measurement within a few fractions of a second requires the number of pixels to not be excessively high.In the last case, the choice would hence fall on the Hamamatsu array or on the twins of AMS or IC Haus.Clearly, if the purpose of the measurement is to obtain an image or if two-dimensional information is needed, the only possibility is to opt for the CMOS camera.As we show in the next paragraph, with Arduino, it was possible to drive all of the sensors listed in Table 1, except for the Toshiba one, probably due to pulse synchronization issues. The Characteristics of Arduino and the Requirements of Linear Sensors The photodiode arrays are made up of a light-sensitive area, where hundreds or thousands of sensors are aligned, and an externally controlled electronic circuit, which regulates the operation of the electro-optical elements and sends voltage data proportional to the local value of the light intensity.These data are sent sequentially, and timing synchronization occurs via a clock signal provided by the external driver.In the case of the CMOS matrix, the external clock is needed to not only synchronize the sending of the intensity data but also to allow all the operations of the on-board electronics.The number of communication pins between the driver and sensor is always three for all models, except for the IC Haus and the Toshiba models.The three pins represent the clock signal (input), start signal (input), and output signal.In the IC Haus and Toshiba models, there is an additional input signal to regulate the light integration time.The temporal graph representing the sequence of signals to be sent to the arrays is represented in Figure 2 in its most general form (the true sequence may vary slightly from sensor to sensor).In addition to the stability of the clock frequency and the necessity of falling between the minimum and maximum allowed timing values, there are other time requirements that are essential for successful communication between the driver and the sensor.These requirements consist of the speed of the rising (and falling) edge of the clock signal and the temporal distance between the rise (or falling) of the start pulse with respect to the clock pulse and the digitization speed of the output signal, which must conclude within a duration of approximately half a clock period to avoid an overlap between the acquisition of one pixel and the next.Figure 3 shows, for example, the indication given in the datasheet of the Hamamatsu model regarding the speed of the rising and falling edges and the delay between the clock and the start. 
Although very high frequencies are not required, the electronics responsible for driving these sensors must still be able to maintain the high-low transition of the logical states of the digital outputs within the limit of a few tens of nanoseconds. Moreover, at the same time, the stable sending of a train of pulses with a period of a few microseconds and the simultaneous digitization and storage of the input analog signals must be guaranteed. The clock speed of the ATmega processor on almost all Arduino boards is 16 MHz, so the execution time of an elementary operation is in the order of 60 ns. A high-level computer instruction requires numerous elementary operations and, consequently, some periods, or a few dozen periods, of execution time for a single instruction. For more stringent time requirements, it is possible to use the Arduino DUE board, which has a microcontroller with a clock that is about five times faster. The language used to program the processor was C/C++, and the high-level instructions that generate a square wave signal, i.e., the 'clock' signal, are as follows: digitalWrite(10, HIGH); // set pin 10 to logic level '1'; digitalWrite(10, LOW); // set pin 10 to logic level '0'. By executing a cycle in which these instructions are continuously sent to the processor, a square wave signal whose period is equal to 9 µs is obtained, that is, a whopping 150 times the elementary operation duration. Using this signal as an oscillator for the sensors, it is clear that neither the Toshiba chip nor the CMOS camera would be manageable (see Table 1). For the other sensors, this value is within the limits, and the signal transfer of all points shall occur within an acceptable period of time (with a maximum of approximately 37 ms in the case of the Hamamatsu sensor, for which the reading of a pixel occurs every four cycles of clock pulses). It is not difficult, however, to increase the frequency of the square wave emitted by the digital pins of the Arduino, resorting to direct manipulation of the processor registers rather than using high-level instructions. The two instructions given previously, if transformed into low-level ones, would change as follows: PORTB = 0b00000100; // set pin 10 to logic level '1'; PORTB = 0b00000000; // set pin 10 to logic level '0'. This sequence of instructions generates a square wave with a period of only 125 ns, just two elementary cycles. Figure 4 shows the waveforms recorded by an oscilloscope corresponding to a train of pulses sent from an Arduino UNO board using high- and low-level instructions, respectively. If the clock speed appears to be sufficient, we need to verify whether the digitization speed of the sensor output signal sent to an analog input of the Arduino board is also within the limits. This digitization must necessarily be inserted within the clock cycle; therefore, the clock/read cycle should follow this scheme: - Send a high clock signal; - Read and digitize the analog signal output from the sensor; - Send a low clock signal.
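In code, one iteration of this clock/read cycle can be written, for example, as follows (a sketch based on the high-level instructions shown above; CLOCK_PIN repeats the pin 10 of the earlier examples, while SENSOR_OUT is an assumed name for the analog input wired to the array output):

const int CLOCK_PIN = 10;     // clock line to the sensor (same pin as in the examples above)
const int SENSOR_OUT = A0;    // analog output of the array (assumed wiring)

void setup() {
    pinMode(CLOCK_PIN, OUTPUT);
}

int readPixel() {
    digitalWrite(CLOCK_PIN, HIGH);       // send a high clock signal
    int value = analogRead(SENSOR_OUT);  // read and digitize the analog output of the sensor
    digitalWrite(CLOCK_PIN, LOW);        // send a low clock signal
    return value;
}

void loop() {
    int pixel = readPixel();             // one pixel per clock/read cycle
    (void)pixel;                         // stored or transmitted in a real program
}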
In this way, however, since the reading operation lengthens the high-clock time, the pulses no longer have a 50% duty cycle, which is required (even if not strictly) by the sensor electronics. To bring the cycle close to 50%, therefore, the times must necessarily be extended. To find out the digitization time of a signal using high-level instructions, we just run the following loop: for (n = 1; n < 10000; n++) { int value = analogRead(analogPin); }. This cycle of ten thousand digitizations is completed in approximately 1.1 s; therefore, a single operation takes place in 100 µs. This is an unacceptable speed, because it should be comparable to at least the slowest clock period (9 µs). Beyond the fact that 100 µs is excessive, even for the Hamamatsu chip (as well as for the Toshiba one), the reading of the 1024 pixels would take place in almost half a second, which would be intolerable if several readings per second were expected. Actually, the digitization speed is limited by an Arduino hardware setting that can be changed with a few instructions. In practice, the ADC also has its own clock, which is derived from the main clock (the 16 MHz one) scaled by a numerical factor. This factor, called the 'prescaler', is equal to 128 by default, which, when combined with the 13 cycles used by the digitization instruction, leads to the 100 µs previously measured. Decreasing the prescaler from 128 to, for example, 16 brings the ADC clock to the value of 1 MHz. This is achieved via the following instructions (see Appendix A): #define cbi(sfr, bit) (_SFR_BYTE(sfr) &= ~_BV(bit)) #define sbi(sfr, bit) (_SFR_BYTE(sfr) |= _BV(bit)) sbi(ADCSRA, ADPS2); cbi(ADCSRA, ADPS1); cbi(ADCSRA, ADPS0). By setting the prescaler to 16, the time to complete the same cycle seen previously becomes 150 ms, that is, 15 µs for each analog-digital conversion. If the ADC clock is further pushed up to 2 MHz (prescaler equal to 8), a complete cycle that includes sending the clock signal to the sensor, during which the voltage signal is read and digitized, would have a period of approximately 16 µs (8 µs for the conversion and another 8 µs to have a 50% duty cycle). Using this cycle duration, Table 2 shows the times of a complete scan for each of the linear sensors (for the sake of comparison, in the table we also report those that do not allow such a slow clock). For completeness, the same information is shown if the Arduino DUE board is used (optimizing the ADC times). However, if a light sensor does not require a particularly high refresh rate, the board with the 16 MHz processor is suitable for this purpose. Results of the Tests Carried Out on the Linear Sensors All sensors presented in Table 2 and shown in Figure 5 were interfaced to an Arduino board to verify the ability of this board to drive the sensors and acquire data, as well as to evaluate the reliability of a measurement device. TSL1401CL The first sensor to be tested was the AMS model TSL1401CL [21]. This sensor, prepared for surface soldering, was soldered to semirigid wires and put onto a prototype board. For the test, the board was inserted into a light-shielding container with a 2 mm wide slit centered above the sensor, 2 cm away. The timing requirements for this array are rather moderate and the number of pixels is low, so we decided to work at a low clock frequency using exclusively high-level instructions. The sequence of pulses sent to the sensor is illustrated in Figure 6. When the program is started, Arduino sends a START signal to empty the content that the photodiodes have previously accumulated. After this signal, the light is collected by the photodiodes until the second START signal. In this sensor, the minimum light integration time (the interval between two START signals) is equal to 110 times the clock duration plus 20 µs. Using high-level instructions, therefore, with a minimum clock time of 9 µs, the minimum integration time is equal to about 1 ms. After launching the acquisition program, we used two cycles of 150 clock signals plus another cycle of 128 pulses for the voltage reading. In order to adjust the amount of light recorded by the sensor to exploit the maximum signal/noise ratio, a parameter was inserted into the program that allows us to increase or decrease the clock period and therefore modify the integration time. In this way, the time taken for a complete acquisition ranges from a minimum of 16 ms to a maximum of 40 ms. This procedure immediately gave good results.
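For reference, a minimal acquisition routine of this kind might be organized as in the sketch below; the pin assignments, the number of flushing clocks and the serial output are placeholders rather than the exact values and structure of our program.

const int SI_PIN = 9;     // START (SI) line of the TSL1401CL (assumed pin)
const int CLK_PIN = 10;   // clock line (assumed pin)
const int AO_PIN = A0;    // analog output of the array (assumed pin)
const int NPIXELS = 128;
int frame[NPIXELS];

void clockPulse() {
    digitalWrite(CLK_PIN, HIGH);
    digitalWrite(CLK_PIN, LOW);
}

void startPulse() {                       // START pulse: SI is high during one rising clock edge
    digitalWrite(SI_PIN, HIGH);
    clockPulse();
    digitalWrite(SI_PIN, LOW);
}

void acquireFrame() {
    startPulse();                         // first START: empty the previously accumulated charge
    for (int i = 0; i < 150; ++i)         // extra clock cycles set the integration time (placeholder count)
        clockPulse();
    startPulse();                         // second START: begin the readout
    for (int i = 0; i < NPIXELS; ++i) {
        digitalWrite(CLK_PIN, HIGH);
        frame[i] = analogRead(AO_PIN);    // one pixel value per clock cycle
        digitalWrite(CLK_PIN, LOW);
    }
}

void setup() {
    pinMode(SI_PIN, OUTPUT);
    pinMode(CLK_PIN, OUTPUT);
    Serial.begin(115200);
}

void loop() {
    acquireFrame();
    for (int i = 0; i < NPIXELS; ++i)
        Serial.println(frame[i]);         // dump the 128 pixel values
    delay(500);
}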
LF1401 The situation for the IC Haus sensor [22] is similar to that of the AMS.The two chips, in fact, are identical in terms of both the pixel dimensions and electrical characteristics of the electronics.Therefore, the test carried out on this sensor gave results comparable to those of the TSL1401 using the same instructions.The IC Haus chip, however, has some advantages over the AMS one.First of all, the sensor is placed on a plate equipped with holes, on which the electrical contacts can be soldered with a passing-through wire (see Figure 5).Therefore, unlike the twin TSL1401, there is no need to perform surface soldering and the risk of contacts detaching is drastically reduced.Furthermore, in the LF1401, the signal integration time can be directly adjusted using a dedicated input.If this input is connected to the ground, integration is active, while if it is connected to the power supply, integration is inhibited.In this way, the integration time can be adjusted down to zero, which can be useful in case of particularly intense light sources, or increased as desired without having to change the clock period and the acquisition time. S9226 The Hamamatsu sensor [23] is similar to the previous two in size but contains eight times more pixels.Among all of the sensors taken into consideration in this paper, this is the most expensive and not even the one with the most interesting features, but it is a device with excellent quality and reliability.Unlike 128-pixel devices, this one comes in a package that resembles a common dual in-line package integrated circuit (as the Sony and Toshiba sensors discussed below).In fact, it is equipped with pins that can be inserted into a common eight-pin socket, so there is no need to perform soldering.The number of pixels, although not at the level of the Sony or Toshiba model, is still very high, and the size is the smallest among the linear arrays.A small pixel accomplishes the accuracy of the measurement of the width of a light line, since, for the same-sized illuminated area, the line will intercept a greater number of pixels, thus reducing errors due to background noise or the non-uniformity of the response.This chip requires a maximum value of 100 μs for the clock oscillation period, which is equal to the duration of the digitizing operation on an ATmega board when using high-level instructions (see Section 4).Furthermore, some requirements regarding the rise/fall times of the pulses and the duty After launching the acquisition program, we used two cycles of 150 clock signals plus another cycle of 128 pulses for the voltage reading.In order to adjust the amount of light recorded by the sensor to exploit the maximum signal/noise ratio, a parameter was inserted into the program that allows us to increase or decrease the clock period and therefore modify the integration time.In this way, the time taken for a complete acquisition ranges from a minimum of 16 ms to a maximum of 40 ms.This procedure immediately gave good results. 
LF1401

The situation for the IC Haus sensor [22] is similar to that of the AMS. The two chips, in fact, are identical in terms of both the pixel dimensions and the electrical characteristics of the electronics. Therefore, the test carried out on this sensor gave results comparable to those of the TSL1401 using the same instructions. The IC Haus chip, however, has some advantages over the AMS one. First of all, the sensor is placed on a plate equipped with holes, on which the electrical contacts can be soldered with a passing-through wire (see Figure 5). Therefore, unlike the twin TSL1401, there is no need to perform surface soldering and the risk of contacts detaching is drastically reduced. Furthermore, in the LF1401, the signal integration time can be directly adjusted using a dedicated input. If this input is connected to ground, integration is active, while if it is connected to the power supply, integration is inhibited. In this way, the integration time can be adjusted down to zero, which can be useful in the case of particularly intense light sources, or increased as desired without having to change the clock period and the acquisition time.

S9226

The Hamamatsu sensor [23] is similar to the previous two in size but contains eight times more pixels. Among all of the sensors taken into consideration in this paper, this is the most expensive and not even the one with the most interesting features, but it is a device with excellent quality and reliability. Unlike the 128-pixel devices, this one comes in a package that resembles a common dual in-line package integrated circuit (as do the Sony and Toshiba sensors discussed below). In fact, it is equipped with pins that can be inserted into a common eight-pin socket, so there is no need to perform soldering. The number of pixels, although not at the level of the Sony or Toshiba model, is still very high, and the size is the smallest among the linear arrays. A small pixel improves the accuracy of the measurement of the width of a light line, since, for the same-sized illuminated area, the line will intercept a greater number of pixels, thus reducing errors due to background noise or the non-uniformity of the response. This chip requires a maximum value of 100 µs for the clock oscillation period, which is equal to the duration of the digitizing operation on an ATmega board when using high-level instructions (see Section 4). Furthermore, some requirements regarding the rise/fall times of the pulses and the duty cycle (see Figure 3) are such that it is better to drive this chip through the Arduino DUE board. Furthermore, this board works at 3.3 V and can be used to power the S9226. The digitization time of the Arduino DUE is only 4 µs (using high-level instructions), and it is possible to increase the resolution of the conversion up to 12 bit. The electronics of this chip provide data every four clock cycles. Then, the minimum acquisition time of the 1024 pixels is just over 32 ms without the need to manipulate the processor registers. As in the case of the 128-pixel arrays, with the Hamamatsu chip, the first tests gave a comforting outcome, so we decided to design and create the new solar compass with this sensor, as detailed in Section 7.
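The 4 µs conversion time and the 12-bit option mentioned above can be checked with a few lines on the DUE. This is only an illustrative timing test under assumptions (the analog input pin is an arbitrary choice), not part of the S9226 driver itself.

```cpp
// Arduino DUE: raise the ADC resolution and time a single high-level conversion.
void setup() {
  Serial.begin(115200);
  analogReadResolution(12);            // the DUE ADC supports up to 12 bits
}

void loop() {
  unsigned long t0 = micros();
  int value = analogRead(A0);          // one sample of the sensor video output
  unsigned long dt = micros() - t0;    // expected to be a few microseconds
  Serial.print(value);
  Serial.print(" (conversion time ");
  Serial.print(dt);
  Serial.println(" us)");
  delay(500);
}
```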
ILX554A

The only drawback of this Sony chip [24] is that it is no longer in production, although, at the time of writing this paper, many samples are still being sold by several international retailers. Compared to the sensors just illustrated, the ILX554A has the notable advantage of having a much larger number of photodiodes (2048) and a very small single photosensor (14 µm instead of the 63.5 µm of the 128-pixel sensors and comparable with the 7.8 µm of the Hamamatsu one). Moreover, it has a much wider sensitive length (28.7 mm versus approximately 8 mm), and therefore, it is more suitable for precision measurements such as the reading of bar codes or the measurement of electromagnetic spectra. On the other hand, the large number of pixels is not suitable for a modest clock speed, and even storing 2048 values requires hardware that is beyond the capabilities of the Arduino boards. The Arduino analog-digital converter, in fact, works at 10 bits, which implies the use of 2-byte variables (unless you wish to lose resolution or make complicated algorithms to pack 10-bit numbers into 1-byte variables). Therefore, storing 2048 numbers requires 4096 bytes of RAM, double that available on the UNO and NANO boards and half that of the MEGA2560 board. A trick that is useful for testing the Arduino-ILX554A combination is to memorize only part of the array or to send the digitized data to an external interface such as a display or a computer connected serially. The latter system allows us to store the data (on the computer), but it has the disadvantage of lengthening the acquisition times, since between one digitization and another, it is necessary to send the data to the serial port. Another possibility is to use the Arduino DUE board, which is equipped with 96 kB of memory and also has a higher clock speed. On the other hand, the 3.3 V working voltage of the Arduino DUE is technically incompatible with that of the Sony chip, which is 5 V. To use this board, therefore, you need a voltage signal adapter. From an electrical point of view, this sensor is slightly more demanding than those previously seen. The clock, in fact, does not set an upper limit to the oscillation period while, on the contrary, the rise and fall times must be contained within 100 ns, and the duty cycle can vary from 40% to 60% at most.

With these assumptions, we decided to verify the possibility of driving this sensor via an Arduino UNO board Rev. 3 using low-level instructions to produce a fast clock and storing the 1500 central pixels using only 1 byte per pixel, thereby decreasing the analog-to-digital resolution to 8 bits. To exploit the full potential of this sensor, a prototype that uses an Arduino to drive the ILX554A could be interfaced to an external memory (an EEPROM or an SD card) to save all 2048 values in a reasonably short time. Alternatively, if the objective of the measurement is to find a single peak, after having verified the presence of a peak (without storing the data), a second scan could be restricted to those pixels that are located around the peak itself. The algorithm for driving and downloading data from the ILX554A sensor does not change much compared to that used for the TSL1401, apart from the use of low-level instructions and the prescaler of the analog-digital converter decreased from 128 to 32 (see Section 4), providing a frequency of 0.5 MHz. In order to respect the 50% duty cycle, however, the two clock half-periods, where digitization is performed during the first, must have similar durations. With the instructions used, the digitization time was 28 µs, so each half-oscillation of the clock in which there was no digitization was lengthened to 20 µs via a software delay (with a duty cycle of 20/48 ≈ 42%). The global acquisition time, taking into account a period of 48 µs, an integration time of 20 clock cycles, and 2088 cycles for a complete scan (40 optically inactive pixels and 2048 active pixels), was approximately 100 ms.

A problem that must be taken into consideration when testing such sensors, which are extremely sensitive to light, is the difficulty of distinguishing a malfunction of the entire sensor from saturation due to excessive light intensity. The photodiodes of the ILX554A saturate when hit by a light intensity of 4 millilux for one second. An excessive amount of light compared to this limit, which hits even just a part of the sensor, does not simply lead to a maximum signal (which, for this sensor, corresponds to a minimum output voltage), but to anomalous behavior of all of the pixels, as for a broken sensor. To minimize this problem and verify the correct functioning of the chip, we shielded almost the entire sensor with cardboard, leaving it uncovered for a few millimeters, and we inserted the sensor in a box that was completely closed, except for a hole of approximately 6 mm², not directly facing the sensor, from which light could enter. In this way, a very small amount of ambient light could reach the sensor area. Figure 7 shows images of the behavior of the signal acquired with the oscilloscope under these conditions, the same signal in which the central pixels are weakly saturated, and finally, the signal obtained when the box containing the sensor was opened on one side. Taking care to avoid sensor saturation, the trial test of the ILX554A via Arduino UNO had a positive outcome.
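The register-level prescaler change mentioned above (from 128 down to 32 on the ATmega) is usually done by rewriting the ADPS bits of the ADCSRA register; a minimal sketch of this low-level step is shown below. This is the common AVR idiom, not necessarily the exact code used for the ILX554A test.

```cpp
#include <avr/io.h>

void setup() {
  // Default Arduino UNO setting: prescaler 128 -> 16 MHz / 128 = 125 kHz ADC clock.
  // Clear the three prescaler bits ADPS2:0 and set them to 101 (divide by 32),
  // giving 16 MHz / 32 = 0.5 MHz, i.e. a conversion of roughly 26-28 us.
  ADCSRA = (ADCSRA & ~(_BV(ADPS2) | _BV(ADPS1) | _BV(ADPS0)))
           | _BV(ADPS2) | _BV(ADPS0);
}

void loop() {
  int v = analogRead(A0);   // conversions now run with the faster ADC clock
  (void)v;
}
```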
TCD1304DG

This Toshiba sensor [25] appears to be even more promising than the ILX554A if you need to make highly accurate measurements. Its 3648 pixels of 8 µm width, in fact, if used to measure the angle of a solar ray passing through a slit as in the case of our solar compass, can cover an angular aperture of greater than 70 degrees with a resolution of 2/100 of a degree per pixel. Furthermore, this sensor is still produced by the parent company, and its cost is definitely low. Unfortunately, this was also the only sensor that could not be driven with Arduino. Probably, the demand for a very high clock frequency (at least 800 kHz) combined with a rather demanding requirement regarding the synchronization between the integration pulse and the start pulse (not exceeding a microsecond) did not allow us to obtain any useful results. In fact, the signal output from the sensor never showed a trend proportional to the light intensity. Even the use of the Arduino DUE board did not give positive results. The only viable solution at the moment is to have additional electronics to send the clock to the chip and synchronize the other two signals (the SH and ICG pins) and to delegate to the Arduino only the digitization of the signal.
Arduino Interfacing a Matrix Point Sensor

Interfacing a matrix sensor is significantly different from that of a linear array. Matrix sensors (CCD or CMOS) are essentially used for imaging, that is, reproducing a subject through the use of a lens and, possibly, ensuring an image processing speed such that it is attainable to view or record a movie; therefore, it should provide no less than 15 frames per second. Unlike the sensors previously seen, in the OV7670 [26], the signal is digitized at the start signal, and the value corresponding to the brightness of a single pixel is sent at each clock cycle via a parallel connection made up of eight pins. At each clock cycle, therefore, the microcontroller must simultaneously read eight digital inputs. Although paralleling eliminates the slow timing problem of serial communication, a camera with VGA resolution (640 × 480 pixels) used to shoot color video at 15 frames per second must send 640 × 480 × 2 × 15 ≈ 9.2 million bytes per second (the sensor uses one byte for chrominance and one byte for luminance information). The required clock period, at least theoretically, should therefore be no longer than about 0.1 µs. This requirement is extremely stringent, and a 16 MHz board cannot achieve it. Even if using just one byte per pixel (obtaining a black and white image), the clock would be excessively short. Furthermore, the 300,000 bytes that make up the matrix require a memory that goes beyond that of any Arduino board. This prevents storage for subsequent processing directly on the board's hardware, and the use of external memory becomes essential.
However, there is a commercial version of the OV7670 camera where, on the same sensor board, there is 384 kB of memory and an oscillator responsible for sending the fast clock to the CMOS. When using this camera, the Arduino processor interfaces only with the memory, whose clock is less demanding (it can reach 1 µs), and also sends commands directly to the sensor and does not need to keep the data for processing, because it can exploit this external memory to read the already acquired frame, even several times. The disadvantage is a small increase in price (of a few euros) and an increase in time, since there is an extra step. On the other hand, the hardware is simplified, and for our purposes, the use of a microsecond clock guarantees an acceptable acquisition time (about 300 ms). The described version is an OV7670 camera with AL422B memory [27] (see Figure 8). For the sake of clarity, we also tried to use an OV7670 without the AL422B, but the result was never very reproducible when we tried to exploit the maximum resolution. The problems essentially arose from electromagnetic noise that interfered with the clock signal (sent by Arduino and traveling along shielded cables for about 30 cm), so that the images obtained from the CMOS were often affected by spurious signals.
The use of the same sensor, equipped with external memory, however, gave much more satisfactory results. Compared to linear sensors, the connections between the board and the sensor are much more numerous. As already mentioned, the signal is sent on eight lines in parallel, rather than on just one serial line. There are two communication channels (via the I2C protocol) for the setting commands and for other signals like the clock and reset. We used the bare minimum number of pins, i.e., 17 out of 22 (see Table 3).

The board used was an Arduino DUE, and the instructions both for the clock and for reading the digital pins were written by directly driving the processor registers. The sequence of operations to be performed before capturing a picture is quite complex and concerns the setting of the CMOS registers. There are 201 registers, and a large number of them must be modified with respect to the default values for the entire image acquisition chain to be successful. Fortunately, algorithms can be found on the web that include instructions for setting these registers and that can be adapted according to specific needs. In Appendix B, for the benefit of those who wish to venture into interfacing an Arduino board with the OV7670, there are instructions for the initial setting of the registers to be modified, which, in our case, produced a positive outcome. This CMOS can be set with different resolutions: 120 × 160, 320 × 240, and 640 × 480. Considering that resolution is an important element in any type of measurement and that the acquisition time was not decisive, as long as it was below 1 s, we set the program so that the memory exclusively acquired a VGA resolution image (that is, the highest). Furthermore, since a black and white image was more than sufficient for our purposes, we reduced the signal of each pixel to one byte. Even with only one byte per pixel, however, a 640 × 480 byte matrix exceeds the memory limits of the Arduino DUE. Therefore, to verify the correct acquisition of a VGA frame, there were only two possibilities: dividing the frame into sectors and sending them to a computer for an "a posteriori" reconstruction of the entire image, or giving up storing the bytes in Arduino and sending them directly to the computer. Initially, we checked the correct functioning of the electronics by acquiring a small portion of the entire frame, after which we opted to send the bytes directly to the computer. This solution involved a strong increase in the clock time due to the serial communication timing and, since the AL422B limits the oscillation time of the reading clock (one microsecond at most, according to the datasheet), the insertion of a command to send any input data to the serial port could lead to malfunctions. The duration of the oscillation of a clock pulse, in fact, was found to be equal to 40 µs, of which more than 39.5 µs was due to the use of the serial line (despite using a baud rate of 250,000). Despite this, the test was successful, and it was possible to obtain the VGA image of Figure 9, in black and white, in about 12 s.
The sequence of instructions to send to the chip, after setting the parameters, is quite simple and is summarized here in pseudo-code:
- Wait for the vertical sync pulse (start of frame);
- Send a reset pulse to the WRST pin;
- Enable writing to memory by raising the WR pin;
- (at this moment, the AL422B memory acquires the image from the camera);
- Wait for the vertical sync pulse (end of frame);
- Disable writing to memory by lowering the WR pin;
- Lower the RRST pin to bring the read pointer to the beginning of the frame;
- Wait a few clock cycles and then raise the RRST pin;
- Now, at each clock cycle, the byte relating to the brightness of each pixel arrives on pins D0-D7, sequentially.

Without inserting the serial transmission between one clock pulse and another, the acquisition speed of an entire frame, considering the waiting times of vertical synchronization, is approximately 300 milliseconds. Any mathematical processing on the image, therefore, can take place extremely quickly, without prejudice from the fact that only a portion of the entire frame can be stored on the Arduino. The presence of the AL422B external memory, however, allows you to acquire a frame and then download various portions of the same image several times at successive moments.
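As an illustration, the pseudo-code above can be translated into an Arduino-style sketch along the following lines. It is a simplified sketch under assumptions: the pin numbers are arbitrary (the real assignment follows Table 3), the digital pins are driven with high-level calls rather than the direct register access used in the actual tests, and the camera register initialization of Appendix B is omitted.

```cpp
// Hypothetical pin assignment for the OV7670 + AL422B module on an Arduino DUE.
const int PIN_VSYNC = 22;                        // vertical sync from the camera
const int PIN_WRST  = 24, PIN_WR  = 26;          // memory write reset / write enable
const int PIN_RRST  = 28, PIN_RCK = 30;          // memory read reset / read clock
const int PIN_D[8]  = {32, 33, 34, 35, 36, 37, 38, 39};  // parallel data D0-D7

void setup() {
  pinMode(PIN_VSYNC, INPUT);
  pinMode(PIN_WRST, OUTPUT); pinMode(PIN_WR, OUTPUT);
  pinMode(PIN_RRST, OUTPUT); pinMode(PIN_RCK, OUTPUT);
  for (int i = 0; i < 8; ++i) pinMode(PIN_D[i], INPUT);
  Serial.begin(250000);
  // ... camera register setup over I2C (see Appendix B of the paper) would go here ...
}

void waitVsyncPulse() {                          // wait for the next vertical sync pulse
  while (digitalRead(PIN_VSYNC) == LOW) {}
  while (digitalRead(PIN_VSYNC) == HIGH) {}
}

byte readByte() {                                // one read-clock cycle -> one pixel byte
  digitalWrite(PIN_RCK, HIGH);
  byte b = 0;
  for (int i = 0; i < 8; ++i) {
    if (digitalRead(PIN_D[i])) b |= (1 << i);
  }
  digitalWrite(PIN_RCK, LOW);
  return b;
}

void captureFrame() {
  waitVsyncPulse();                              // start of frame
  digitalWrite(PIN_WRST, LOW);                   // reset the memory write pointer
  digitalWrite(PIN_WRST, HIGH);
  digitalWrite(PIN_WR, HIGH);                    // the AL422B now stores the frame
  waitVsyncPulse();                              // end of frame
  digitalWrite(PIN_WR, LOW);                     // stop writing
  digitalWrite(PIN_RRST, LOW);                   // bring the read pointer to the start
  for (int i = 0; i < 8; ++i) readByte();        // a few clock cycles, then release
  digitalWrite(PIN_RRST, HIGH);
}

void loop() {
  captureFrame();
  for (long p = 0; p < 640L * 480L; ++p) {
    byte y = readByte();                         // luminance byte of one pixel
    Serial.write(y);                             // or process only a portion on-board
  }
}
```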
Design and Construction of a Solar Compass with Arduino and a Linear Sensor

The ENEA Solar Compass is a pointing measurement instrument, patented in 2013 [18], that is able to accurately determine the viewing direction of any observed object of interest with respect to the observation point. The operating principle of a solar compass is based on the knowledge of the position of the Sun in the sky, that is, its angular coordinates viewed from a particular observation point on Earth. These coordinates consist of the elevation of the Sun above the horizon and its azimuth, i.e., the angle between the vertical plane containing the Sun and the local meridian plane (the plane where the two Poles and the observation point lie) [19]. Once the direction of view of the Sun has been measured with respect to a vertical reference plane on the solar compass, we consequently know the angle that the direction of view of the reference plane makes with respect to the geographical north.

Thus, on one side, the instrument is based on a dedicated simplified algorithm for the calculation of the ephemeris and, on the other, on the measurement of the direction of the Sun with respect to the vertical reference plane of the compass, obtained with an innovative electro-optical device which uses a slit and an optical sensor. The measurement of the direction of the Sun is performed by calculating the center of the line of light projected on the sensor (2-D or 1-D) with respect to the reference column (2D case) or pixel (1D case).
This instrument has been proven to be one of the most precise methods for determining true north (with an RMS accuracy within 1/100 of a degree). Unfortunately, the prototype that we built and patented with a 2D sensor cannot be replicated, because some components are no longer available on the market. Anyway, since a linear sensor is sufficient to measure the line of sight of the Sun, after the first tests were carried out on the Hamamatsu chip, we decided to design and create a new solar compass based on this sensor and an Arduino board. Clearly, the accuracy of such an instrument is highly dependent on the geometrical features of the optical configuration. In order to keep the system compact and to avoid excessive broadening of the light line, the sensor cannot be placed at a distance of more than 25 mm from the slit. Keeping the sensor at a distance of 20 mm, for example, to distinguish the direction of view of the Sun with a resolution of 0.1 degrees, a sensor with a pixel size of less than 35 µm is required (indeed, 20 mm × tan 0.1° ≈ 35 µm). In this respect, the 7.8 µm pixel size of the Hamamatsu array is a very good choice.

The electronics supplied with the compass, in addition to these two elements, include a GPS receiver, an alphanumeric display, an SD card reader, and a Bluetooth module. Everything (except the GPS) is inserted into a console which also contains the battery pack and the voltage regulation modules; externally, there are buttons, LEDs, and various connectors. The linear sensor is placed, as in the original compass, in an aluminum case to be placed above a pointing system, such as a theodolite. The wall facing the Sun, behind which the sensor is placed, is inclined at 45° and presents a vertical slit for the passage of Sun rays. The slit is 4 cm high and approximately 70 µm wide and has been realized by the Institute of Photonics and Nanotechnologies of the CNR in Rome by depositing a thin layer of metal (chromium) on a glass slide and removing part of it using a lithographic technique. The compass head and electronics are connected via a common eight-pole ethernet cable.

Once all the interfaces connected to the Arduino board were verified, the most difficult problem to solve was to calibrate the electro-optical device. In particular, it was necessary to know, with high accuracy, the distance between the sensor and the slit (D), the identification number of the pixel representing the projection of the slit on the sensor (XR0), and the two rotation angles of the sensor with respect to the vertical plane containing the slit (α) and to the direction of the slit itself (β). Finally, the deviation angle (ψ) along the horizontal plane between the direction of the vertical reference plane of the compass and that pointed to by the sighting system (a telescope) of the support where the compass head is placed (see Figure 10) also has to be taken into account. This is because, due to mechanical errors, the last two planes might not coincide.
Calibration took place as follows: the compass was placed on a theodolite equipped with a spirit level and a high-precision goniometer. Then, some LEDs were placed in front of the compass along a vertical line to simulate the Sun at different heights. For each of these LEDs, the pixel number xa, corresponding to the center of the light line formed on the sensor, was measured for different positions of the goniometer. By plotting the values of this line center as a function of the angle of the goniometer, the result obtained is a series of points placed along different lines, each corresponding to a different height of the light source.
The slope and y-axis intercept of these lines depend on the construction parameters of the compass, so by acting on these parameters, it is possible to optimize the correspondence between the theoretical curve and the experimental data. Assuming that the sensor is mounted perfectly parallel to the wall with the slit and orthogonal to the slit itself (i.e., with α = 0 and β = 90°), the equation that determines the point xa hit by a ray of light coming from a source placed at elevation ϕ and azimuth ϑ is Equation (1), where xa and XR0 are given in mm (the details are in Appendix C). The distance xa (measured with respect to the edge of the array, i.e., with respect to the first pixel) is exactly XR0 when ϑ = 0, i.e., when the light is orthogonal to the sensor. The values of ϑ, ϕ, and xa are obtained experimentally, while the D and XR0 values must be determined so that the difference between the two members of Equation (1) is minimized. In Figure 11, it is possible to appreciate the sensitivity to these parameters by observing the difference between the experimental points and the straight line obtained through Equation (1) for a particular elevation value ϕ and for different values of the parameters D and XR0.
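The minimization of the difference between the two members of Equation (1) can be done with a simple search over candidate values of D and XR0. The sketch below shows one possible (and deliberately naive) grid search; since Equation (1) itself is only given in the paper's Appendix C, the model() function here is a placeholder standing in for it, and the search ranges are arbitrary examples.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Measurement { double theta, phi, xa; };   // azimuth, elevation (rad), line center (mm)

// Placeholder for Equation (1): the real relation (Appendix C of the paper) also
// involves the elevation phi and the 45-degree wall geometry.
double model(double theta, double /*phi*/, double D, double XR0) {
  return XR0 - D * std::tan(theta);
}

// Brute-force search for the (D, XR0) pair minimizing the sum of squared residuals.
void fitDXR0(const std::vector<Measurement>& data, double& bestD, double& bestXR0) {
  double bestErr = 1e30;
  for (double D = 15.0; D <= 25.0; D += 0.01) {          // mm
    for (double XR0 = 0.0; XR0 <= 8.0; XR0 += 0.01) {    // mm along the array
      double err = 0.0;
      for (const auto& m : data) {
        double r = m.xa - model(m.theta, m.phi, D, XR0);
        err += r * r;
      }
      if (err < bestErr) { bestErr = err; bestD = D; bestXR0 = XR0; }
    }
  }
}

int main() {
  std::vector<Measurement> data = { /* calibration points from the goniometer scans */ };
  double D = 0.0, XR0 = 0.0;
  if (!data.empty()) fitDXR0(data, D, XR0);
  std::printf("D = %.2f mm, XR0 = %.2f mm\n", D, XR0);
  return 0;
}
```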
Once the calibration parameters have been determined (ψ is measured subsequently), the second step is to check the behavior of the compass once it is exposed to the Sun. The correctness of the parameters can be estimated by observing the azimuth values provided while keeping the compass stationary for a certain period of time. In this case, its azimuth should be constant since, regardless of the apparent motion of the Sun, the compass always points in the same direction. A trend other than a series of values oscillating around an average value, for example, values that tend to increase, decrease, or form a parabola, proves that some calibration parameters are wrong. In that case, by using specifically developed software to simulate the behavior of the compass, it is possible to change the values of the construction parameters to see the expected output given by the compass and to identify the parameter responsible for the wrong result.

To this end, we performed several scans with the compass fixed and illuminated by the Sun and recorded a series of line center values xa vs. time. At the same time, we used our ephemeris calculation software (written in C++ using Microsoft Visual Studio Community 2019, release 16.11.21), which, given the geographical data such as the date, time, latitude, and longitude, provides the azimuth of the Sun, and calculated the foreseen position of the light line on the sensor, depending on the given calibration parameters. In this way, firstly, the correctness of the calibration parameters is verified by comparing the sequence of theoretical and experimental azimuth values and, secondly, if necessary, the estimate of the parameters can be refined by forcing this trend to be constant (with the sensor stationary, as already mentioned, the azimuth of the compass should not change). The simulation program, moreover, is able to optimize the parameters automatically, while leaving the possibility of changing them manually. Figure 12 shows both the azimuth measured by the compass and the azimuth resulting from the simulation with this program (left graph) and the result after optimization (right graph).

Once D and XR0 have been optimized, the deviation between the reference plane of the compass and the direction aimed at by the pointing system (the angle ψ in Figure 10) remains to be included in the calculations.
In order to estimate this last parameter, it is crucial to know the true azimuth of a given target seen from the observation point. If the azimuth between points A (observer) and B (target) is known, in fact, it is sufficient to aim at point B from A, measure the azimuth provided by the compass, and calculate the difference between this value and the known azimuth. To obtain, with a suitable level of accuracy, the true azimuth of the line joining points A and B, it is necessary to know the geographical coordinates of the two points and use a method that takes into account the curvature of the Earth [28]. For a typical accuracy of a few meters on the geographical coordinates, a distance between A and B larger than 10 km is sufficient to reduce the error on the true azimuth to values smaller than the error on the experimental azimuth given by the compass. Obviously, the ψ angle should be measured any time the compass is removed from the pointing instrument and then placed on it again, unless a very accurate mechanical system for reproducible positioning is adopted.

Once this last calibration phase has been completed, the compass is ready to be used as a measuring instrument. A few days after completing the whole calibration procedure, we checked whether the compass continued to provide the correct azimuth values. Therefore, we placed the compass on an observation point at the ENEA Center in Frascati, whose azimuth values with respect to some reference points located in Rome are known (using the geographical coordinates method, as described above), and we carried out some measurements. The results are shown in Table 4.

Table 4. Azimuth values measured using the compass made with the Arduino DUE board and the Hamamatsu S9226 sensor a few days after completing the calibration process and a comparison with the actual values. From these data, both the precision and accuracy of this instrument can be appreciated. Note that the differences between the data obtained from the maps (the expected values) and the experimental data are expressed in arc minutes.

Even if the error of a single measure doubles when compared with the 2D-sensor case [19], these data show that this device has an error of less than 0.5 arc minutes (0.008 degrees) during repeated measurements.

Although the solar compass is a very old orientation tool, with some interesting handy examples [29], it is still one of the most accurate devices for determining geographic north. Apart from magnetic compasses, which point towards the magnetic pole and not towards the true geographic north, other kinds of compasses are based on the gyroscopic effect and on GPS satellites. Gyroscopic compasses are the most accurate ones (with an uncertainty of a few arc seconds), but their cost is very high (tens of thousands of euros), and the time necessary to set up and align them is very long [30]. The compasses based on GPS satellite constellations are cheaper than gyroscopic compasses, but their accuracy is worse than that of our electronic solar compass. Recent works testing the validity of GPS measurements report accuracies ranging from 0.07 to 1.5 degrees [31] and from 0.09 to 0.21 degrees [32], and, in a paper concerning a measurement campaign for the orientation of paleomagnetic drill cores, the RMS deviation of a GPS compass was shown to be well above 0.1° [33].
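Returning to the determination of ψ described above: the true azimuth between two points of known geographical coordinates can be computed, to a first approximation, with the standard spherical-Earth forward-azimuth formula sketched below. The method of [28] accounts for the Earth's shape more accurately, so this snippet is only an illustrative approximation, and the example coordinates are made up.

```cpp
#include <cmath>
#include <cstdio>

// Initial bearing (forward azimuth) from point A to point B on a spherical Earth,
// in degrees clockwise from geographic north. Inputs are in decimal degrees.
double initialBearingDeg(double latA, double lonA, double latB, double lonB) {
  const double d2r = 3.14159265358979323846 / 180.0;
  double phi1 = latA * d2r, phi2 = latB * d2r, dLon = (lonB - lonA) * d2r;
  double y = std::sin(dLon) * std::cos(phi2);
  double x = std::cos(phi1) * std::sin(phi2) -
             std::sin(phi1) * std::cos(phi2) * std::cos(dLon);
  double bearing = std::atan2(y, x) / d2r;
  return std::fmod(bearing + 360.0, 360.0);     // normalize to [0, 360)
}

int main() {
  // Example with made-up coordinates for an observer near Frascati and a target in Rome.
  double az = initialBearingDeg(41.81, 12.67, 41.90, 12.49);
  std::printf("true azimuth A->B: %.3f deg\n", az);
  return 0;
}
```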
Therefore, with respect to other survey devices, we can conclude that our solar compass, based on the Arduino-Hamamatsu pair, is an instrument with excellent accuracy and compactness and a competitive cost.

Conclusions

The versatility and reliability of Arduino boards are well known, and the possibility of finding a considerable quantity of sensors on the market at a low price, as well as retrieving, on the Internet, the codes for interfacing them, makes them devices that can also be used in a research laboratory. In this paper, we illustrated tests of light sensors, in both linear and matrix arrangements, connected to Arduino boards to verify their compatibility and the possibility of using these systems as measuring instruments. We took into consideration five linear sensors manufactured by AMS, IC Haus, Hamamatsu, Sony, and Toshiba, ranging from a minimum of 128 pixels to a maximum of 3648 pixels, and a 640 × 480 pixel CMOS camera from Omnivision. In all tests except for the Toshiba sensor, the trials were positive, and we were able to verify how the Arduino/photosensor combination can be a valid device for measurements involving light, such as a spectrometer or a bar code reader, or even for imaging systems. In particular, a solar compass was designed and created based on an Arduino DUE board and the Hamamatsu chip for measuring the direction of geographic north (or the viewing direction of any object of interest with respect to geographic north) by exploiting the equations of terrestrial motion. This solar compass obtained results that are similar to those achieved with the patented ENEA solar compass, whose performance is, in turn, comparable with the best devices available on the market, but whose cost is at least two orders of magnitude greater than that of our device.

Figure 1. The Arduino UNO board with its main components.

Figure 2. Example of the sequence of the clock, start, and output pulses for the acquisition of the light signal.

Figure 3. Rising and falling edge requirements of the Hamamatsu sensor.
Figure 4. A square wave emitted from an Arduino UNO pin at the maximum frequency. (Left) The output signal when high-level instructions are used (the sharp peaks visible at any voltage flip are due to an imperfect balance of the electronic circuit); (Right) the same signal when the processor registers are directly manipulated. Note that the time scale of the right figure is 20 times shorter than that of the left one.

Figure 6. Sequence of the pulses sent by Arduino to the TSL1401CL sensor for the acquisition of the 128 pixel signals.

Figure 7. Light signals measured by the ILX554A sensor, driven by an Arduino UNO and recorded by the oscilloscope. (A) Signal under normal conditions (curve that drops to a minimum); (B) signal under weak saturation conditions; (C) signal under strong saturation conditions. Situation (C) could be mistaken for a malfunction of the sensor, since it seems to saturate in an unlit area and give zero signal (indeed, even below the minimum) in the part hit by the light.

Figure 8. The camera OV7670 with the AL422B memory chip. The classic version does not have the integrated circuit on the back side, and there are 18 connection pins instead of 22.

Figure 10. Representation of the variables and parameters to take into account when calibrating the solar compass.

Figure 11. Line of light position measured on the array, from a light source with an elevation of 56.1° and its azimuth varied with respect to the normal of the sensor, compared with the corresponding theoretical value for different distances D (left) and XR0 (right). We can see how very small differences in the parameters easily lead to a disagreement with the experimental data.
Figure 12. (Left) Experimental data of the azimuth measured by the stationary compass (circles) and the corresponding values calculated by the software (continuous curve), in which the construction parameters of the compass and the light line position, resulting from the measurement, were entered; (Right) the experimental data values (circles) and the new data calculated using the optimized parameters (squares). With optimization, by changing only the parameter D by 0.4 mm, the points are arranged around a constant value within ±1.5 arc minutes, i.e., ±1/25 of a degree.

Table 1. Optical/electrical features and costs of the tested light sensors.

Table 2. Acquisition times for a complete scan of the arrays to be tested using a clock of 16 µs for the UNO board and the lowest possible clock (around 3 µs) for the DUE board.

Table 3. List of pins on the board with the CMOS OV7670 and the AL422B memory with indications of those actually used and their functions.
19,418
2023-12-01T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Integration Method of Local-Global SVR and Parallel Time Variant PSO in Water Level Forecasting for Flood Early Warning System

Flood is one type of natural disaster that cannot be predicted; one of its main causes is continuous rain (a natural event). In meteorological terms, floods originate from high rainfall and high sea tides, which result in an increased water level. Analyzing rainfall and water level in each period separately is still not able to solve the existing problems. Therefore, in this study, the proposed integration of Parallel Time Variant PSO (PTVPSO) and Local-Global Support Vector Regression (SVR) is used to forecast the water level. The implementation combines SVR as the regression method for forecasting the water level, the Local-Global concept to minimize the computing time, and PTVPSO inside the SVR to obtain maximum performance and more accurate results by optimizing the SVR parameters. Hopefully, this system will be able to solve the existing problems of flood early warning systems caused by erratic weather. Flood detection by forecasting the water level is needed by various parties to obtain information on weather conditions more quickly and accurately.

Forecasting is the process of estimating the future based on past historical data [8]. Various forecasting methods have been developed; the approaches include stochastic and deterministic ones. Forecasting models mostly use a deterministic approach, which relates the condition at time t to those at t-1 and t+1, with a daily period [9]. The usual methods for predicting linear data are exponential smoothing models, moving average (MA), and autoregressive integrated moving average (ARIMA), while the methods used to forecast non-linear data are threshold autoregressive (TAR), autoregressive conditional heteroskedasticity (ARCH), artificial neural networks (ANN), and Support Vector Regression (SVR) [8]. In a previous study comparing multivariate adaptive regression splines (MARS) with support vector regression (SVR) to predict the sales of IT products, SVR gave a smaller root mean squared error (RMSE) of 9.36 for notebook sales, compared with an RMSE of 293.3 for the MARS method [10]. In addition, according to research [11], a comparative study of the K-Means method and SVR on six opinion sectors about internet hotspot use showed that SVR gives much better results than K-Means in five sectors. In order to obtain optimal performance from SVR, many methods have been used to optimize it; among the optimization algorithms are cross entropy (CE), bee colony, ant algorithms, and particle swarm optimization (PSO). A study using an adaptive fuzzy k-nearest neighbor method optimized by parallel time variant particle swarm optimization for automatic bankruptcy prediction was shown to have higher accuracy, 81.67%, compared with 78.75% for the plain FKNN method [12], where the PTVPSO approach was aimed at searching for the optimal coefficients of the model. Another study mentions that a hybrid SVR-PSO can choose the appropriate discriminative input features, reduce the average execution time, and improve forecasting accuracy [13]. Therefore, in this study, the proposed integration of parallel time variant PSO (PTVPSO) and support vector regression (SVR) is used to forecast the water level.
The implementation in this study combines SVR as the regression method for forecasting the water level, while PTVPSO is used inside the SVR to obtain maximum performance and more accurate results. Hopefully, this system will be able to solve the existing problems of flood detection due to erratic weather. The structure of this paper is as follows. In Section 2, some important definitions are given. Section 3 describes the details of our improvement on the SVR-PTVPSO algorithm for forecasting the water level. Section 4 presents some simulation results and compares the improved method. The final section provides the conclusion.

Definitions and Preliminaries

The kernel method is a class of algorithms for analysis or pattern recognition, whose best-known member is the SVM. The general task of pattern analysis is to find and study common types of relationships such as clusters, classifications, and rankings. Kernel methods map the data into a higher dimensional space with the hope that, in the higher dimensional space, the data can be structured. The most frequently used type of kernel is the Gaussian kernel, which is one implementation of the Radial Basis Function kernel [14]. SVM has been efficiently applied for classification and regression [15]. In this paper, we use SVM for regression (SVR) [16]. One of the simplest methods to obtain the optimal hyperplane is the sequential learning algorithm developed by [17]: the multipliers of the training points are updated iteratively, and the process returns to the update step until the maximum number of iterations (IterMax) is reached or the magnitudes of the updates fall below a tolerance; the points with non-zero multipliers are the support vectors, and the forecast function is built from them.

PSO (Particle Swarm Optimization) is an optimization technique based on the metaphor of social interaction and communication in groups of birds or fish. The model is simulated in a space with a specific number of dimensions over a number of iterations, so that in each iteration the particles' positions increasingly approach the intended target (minimizing or maximizing a function). This is done until the maximum iteration or another termination criterion is reached [18]. PSO is extended with an inertia weight and acceleration coefficients to improve its performance. Each particle has a position and a velocity, which are updated at every iteration [19] according to

v_i(t+1) = w * v_i(t) + c1 * r1 * (P_i - x_i(t)) + c2 * r2 * (P_g - x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)

where the vector P_i represents the best position found so far by particle i, known as pbest, and the vector P_g is the best position among all of the particles in the population, known as gbest. r1 and r2 are random numbers generated in the interval [0,1]. The variables of the SVR method used for forecasting the water level are the target of this optimization: PTVPSO (Parallel Time Variant PSO) is coupled to the SVR to dynamically conduct parameter optimization and feature selection simultaneously. The features obtained from the optimization are used as input to the SVR models applied in parallel, so that an optimized SVR method is obtained. The inertia weight w is used to balance local exploitation and global exploration: a large inertia weight is used for global search, while a small inertia weight is used for local search [20]. The inertia weight is updated according to the following equation:

w(t) = w_max - (w_max - w_min) * t / t_max

where w_min and w_max are the minimum and maximum weights, predefined before training the models, t_max is the maximum number of PSO iterations, and t is the current iteration.
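The following self-contained sketch shows the particle update with the time-varying inertia weight (and the time-varying acceleration coefficients introduced next) applied to a toy two-dimensional fitness function. It is only an illustration of the update rules: in the actual LGSVR-PTVPSO method the fitness is the SVR performance on the training data and the particle dimensions are the SVR parameters.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

static double rand01() { return std::rand() / (double)RAND_MAX; }

// Toy fitness standing in for the SVR training error (to be minimized).
static double fitness(const std::vector<double>& x) {
  return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0);
}

int main() {
  const int nParticles = 20, dim = 2, tMax = 100;
  const double wMin = 0.4, wMax = 0.9;                       // TVIW bounds
  const double c1i = 2.5, c1f = 0.5, c2i = 0.5, c2f = 2.5;   // TVAC bounds

  std::vector<std::vector<double>> x(nParticles, std::vector<double>(dim)),
      v(nParticles, std::vector<double>(dim, 0.0)), pbest;
  std::vector<double> pbestFit(nParticles), gbest(dim);
  double gbestFit = 1e30;

  for (int i = 0; i < nParticles; ++i) {                     // random initialization
    for (int d = 0; d < dim; ++d) x[i][d] = -5.0 + 10.0 * rand01();
    pbest.push_back(x[i]);
    pbestFit[i] = fitness(x[i]);
    if (pbestFit[i] < gbestFit) { gbestFit = pbestFit[i]; gbest = x[i]; }
  }

  for (int t = 0; t < tMax; ++t) {
    double w  = wMax - (wMax - wMin) * (double)t / tMax;     // TVIW
    double c1 = (c1f - c1i) * (double)t / tMax + c1i;        // TVAC
    double c2 = (c2f - c2i) * (double)t / tMax + c2i;
    for (int i = 0; i < nParticles; ++i) {
      for (int d = 0; d < dim; ++d) {
        double r1 = rand01(), r2 = rand01();
        v[i][d] = w * v[i][d] + c1 * r1 * (pbest[i][d] - x[i][d])
                              + c2 * r2 * (gbest[d] - x[i][d]);
        x[i][d] += v[i][d];
      }
      double f = fitness(x[i]);
      if (f < pbestFit[i]) { pbestFit[i] = f; pbest[i] = x[i]; }
      if (f < gbestFit)    { gbestFit = f; gbest = x[i]; }
    }
  }
  std::printf("best fitness %.6f at (%.3f, %.3f)\n", gbestFit, gbest[0], gbest[1]);
  return 0;
}
```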
Usually the value of w_min is 0.4 and the value of w_max is 0.9, because these values give good results [12]. This equation is also known as the time-varying inertia weight (TVIW). On the other hand, c_1 and c_2 are acceleration coefficients whose function is to balance global exploration and local exploitation for better PSO performance; the time-varying acceleration coefficients (TVAC) are updated according to c_1(t) = (c_1f - c_1i)(t / t_max) + c_1i and c_2(t) = (c_2f - c_2i)(t / t_max) + c_2i, where c_1f, c_1i, c_2f, and c_2i are constants, t_max is the maximum number of PSO iterations, and t is the current iteration. The values used are c_1i = 2.5, c_1f = 0.5, c_2i = 0.5 and c_2f = 2.5, because these values give good results [12]. The error of the forecasting method is determined by calculating the difference between the original water level data and the forecasted data. Several kinds of accuracy measures are in common use. Many researchers prefer MAPE (mean absolute percentage error) because it is easy to understand, expressing the error as a percentage [21]: MAPE = (100/n) * sum(|y_i - y_hat_i| / y_i). Besides MAPE, we also use MAE (mean absolute error) to show the error value [22]: MAE = (1/n) * sum(|y_i - y_hat_i|), where y_hat_i is the forecasted water level, y_i is the actual water level, and n is the number of predicted data points. LGSVR-PTVPSO Algorithm In this section we discuss SVR and PTVPSO, how each of the two methods performs, and how they are integrated with each other, with SVR acting as the basic forecasting method and PTVPSO as the optimization method. The first step is to initialize and calculate the PTVPSO parameters such as c_1, c_2, r_1, r_2 and w, and to determine the ranges for the SVR parameters. Those parameters are then trained with the SVR model, so the fitness can be calculated on the training data. From the fitness we determine pbest and gbest, which become the references for the next iterations. The parameters optimized by PSO are λ, ε, cLR and C. It can be said that, in the training phase, the SVR method runs inside the PTVPSO method; once the optimal parameters with the largest fitness value on the training data have been obtained, they are used for the testing phase. In the testing phase, only the SVR method itself is used to forecast the water level. The kernel used in this study is the Gaussian RBF. Figure 1 shows the proposed LGSVR-PTVPSO (Local Global SVR-PTVPSO) method: Global SVR means that in the testing phase a large number of iterations is used to find the optimal SVR parameters, whereas Local SVR means that in the training phase a small number of iterations is used to find them. Experiment And Analysis This section discusses the experiments that have been performed as well as their analysis. Several trials were conducted for the SVR features used in water level forecasting, with feature variations of 3, 4 and 5 days. The data in Table 1 are expressed in meters (m); to forecast the 1st day, the values F1 to F5, the actual values of the 5 days prior to day 1, are taken as the inputs for the target Y. For forecasting the 2nd day, the actual value of the 1st day is used as the last feature F5, and likewise for the following days. A minimal sketch of this lag-feature construction and SVR evaluation is given below.
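The following is a minimal sketch (Python) of the lag-feature construction and error metrics described above. It uses scikit-learn's SVR with an RBF kernel as a stand-in for the paper's sequential-learning SVR (whose parameters are λ, ε, cLR and C), and the synthetic series, function names and parameter values are illustrative assumptions only.

```python
import numpy as np
from sklearn.svm import SVR

def make_lag_features(levels, n_lags=5):
    # Each row uses the previous n_lags daily water levels (F1..Fn) to predict day t.
    X = [levels[t - n_lags:t] for t in range(n_lags, len(levels))]
    y = levels[n_lags:]
    return np.array(X), np.array(y)

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

# Synthetic data standing in for the daily water-level series (in meters).
levels = np.sin(np.linspace(0, 12, 200)) + 3.0
X, y = make_lag_features(levels, n_lags=4)
split = int(0.8 * len(X))
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"MAPE = {mape(y[split:], pred):.4f}%, MAE = {mae(y[split:], pred):.5f}")
```

In the proposed method, the C, epsilon and kernel-width values would not be fixed as above but would be the positions of PTVPSO particles, evaluated by this kind of fit-and-score loop.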
The visualization of the prediction experiment using 5 features, i.e. using the 5 days in advance, is presented in Table 1. The second experiment on the SVR features uses four features, i.e. the values of the 4 days prior to the actual day, to forecast the water level. Figure 2 shows the best training and testing result using 4-day SVR features; the best trial gave a MAPE of 0.0459 and an MAE of 0.177882 for the training phase, and a MAPE of 0.03708 and an MAE of 0.14336 for the testing phase. The third experiment uses five features, i.e. the values of the 5 days prior to the actual day, to forecast the water level. Figure 3 shows the best training and testing result using 5-day SVR features; the best trial gave a MAPE of 0.04706 and an MAE of 0.18088 for the training phase, and a MAPE of 0.039335 and an MAE of 0.15208 for the testing phase. Table 2 compares the execution time and error rate of the feature variations used in this experiment. For training, the best value was obtained using 4-day SVR features, i.e. the values of the 4 days prior to the actual day; the predictions closely match the actual data in the training phase. For testing, the best value was obtained using 3-day SVR features, i.e. the values of the 3 days prior to the actual day; the predicted and actual water levels agree very closely with 3-day features. The fastest execution time was obtained with 3-day features for the training phase and 5-day features for the testing phase. The main factor affecting the computation/running time (RT) is the amount of tabulated data used in each phase, since the training phase uses considerably more data than the testing phase. Since both the training and testing phases obtain their best values when using 3 to 4 days in advance, the results indicate that the water level of the preceding 3 to 4 days has a significant influence on the actual day, whether the level is going to increase or decrease, whereas with 5 days in advance the data may be too far away to predict the actual day. Conclusion In this paper we propose the Local-Global SVR with Parallel Time Variant PSO algorithm, which is successfully used to forecast the water level for a flood early warning system. For SVR, the input features are a crucial issue; we obtained 4 days of features as optimal for the training phase and 3 days of features as optimal for the testing phase. The results indicate that using more days of features did not make the forecasting more accurate; the water level was highly influenced by the 3 to 4 days before the actual day. In this algorithm, PTVPSO was used to optimize the SVR parameters λ, ε, cLR and C, and the method can obtain suitable SVR parameters that improve the forecasting performance of the proposed model. In the future we will study water level forecasting further, using additional attributes that influence the water level.
3,243.8
2018-02-01T00:00:00.000
[ "Computer Science" ]
Graves' Disease Associated with Cerebrovascular Disease and Antiphospholipid Antibody Syndrome Thyroid disorders are commonly associated with coagulopathy. Patients with hyperthyroidism have an increased risk of developing thromboembolic events, which is further favoured by the simultaneous presence of the antiphospholipid antibody syndrome. In this paper, we describe the case of a patient with Graves' disease who developed strokes together with the antiphospholipid antibody syndrome. Introduction Hyperthyroidism is associated with an increased risk of thromboembolic complications. The association between Graves' disease and ischemic stroke is very rare and might be due to different aetiologies. The authors hope that the discussion herein may influence the choice of treatment. Observation We report the case of a 39-year-old man without a family history of thyroid dysfunction, autoimmune disease, or thrombosis. He presented with acute right hemiplegia, for which he was hospitalized in a neurology department. A computed tomography (CT) scan confirmed the presence of a lacunar infarction in the internal capsule of the left cerebral hemisphere. He was referred to our endocrinology unit because of signs of thyrotoxicosis including palpitations, polydipsia, and significant weight loss (6 kilograms). Physical examination revealed a body mass index of 25 kg/m2, a blood pressure within the normal range at 130/80 mm Hg, a heart rate of 96 beats/min, bilateral exophthalmos, a homogeneous goitre, and right hemiparesis. The electrocardiogram showed regular sinus rhythm without atrial fibrillation. Thyroid function studies revealed undetectable serum thyroid-stimulating hormone (TSH) (below 0.05 mUI/L) and positive anti-thyroid-stimulating hormone receptor antibodies, confirming the diagnosis of Graves' disease (Table 1). A thyroid scan with technetium 99m (Tc-99m) showed an enlarged thyroid gland with diffuse increased uptake. Fasting blood glucose was 14.3 mmol/L and remained high at the subsequent assessment, confirming the presence of diabetes according to World Health Organization criteria. Antibodies to glutamic acid decarboxylase (GAD) were negative. For the assessment of his cerebrovascular accident, further investigations were performed and showed positive antiphospholipid (APL) antibodies, with positive IgG anti-β2-glycoprotein-I (β2GP-I) and positive IgM anticardiolipin antibody, which remained positive 3 months later (Table 1). Thrombophilic factors including protein C activity, antithrombin III, protein S, and prothrombin time were within the normal range. Antinuclear antibodies were negative. The diagnosis of Graves' disease associated with a primary antiphospholipid syndrome (APS) was confirmed. The patient was treated with aspirin (250 mg/day) and benzyl thiouracil (25 mg) at a dose of 12 tablets/day, with progressive regression. Improvement was shown in clinical symptoms and laboratory studies; glycaemia levels and glycated haemoglobin returned to normal without any antidiabetic treatment. Discussion The association between cerebrovascular disease and Graves' disease is very rare. Sometimes, cerebral arterial thrombosis can be explained by rhythm disorders such as atrial fibrillation, which is frequent in Graves' disease. This disorder is present in 9% to 22% of cases of hyperthyroidism compared with 0.4% in the general population. It can reveal the hyperthyroidism, and 15 percent of strokes occur in people with atrial fibrillation [1].
The risk is also increased in cases of preexistent heart disorder or in preferential hypersecretion of T3 [2]. Our patient had normal sinus rhythm on the electrocardiogram. The cerebral symptoms could be explained by autoimmune encephalopathy, but such patients are usually euthyroid or hypothyroid with high antibody titers. They show a mild to moderate elevation of cerebrospinal fluid protein levels; rarely, the findings are suggestive of demyelination, such as oligoclonal bands and myelin basic protein. The clinical picture comprises variable symptoms, from behavioral and cognitive changes, myoclonus, pyramidal tract dysfunction, and cerebellar signs to psychosis and coma, with a relapsing and progressive course. The diagnosis is often overlooked at presentation, but it is crucial [3]. In our case, we found right hemiplegia on examination and a left lacunar infarct on computed tomography, without clinical or laboratory signs of autoimmune cerebral vasculitis. During hyperthyroidism, the influence of thyroid hormone on the coagulation-fibrinolytic system is mediated by the interaction between the hormone and its receptors; various abnormalities have been described, ranging from subclinical abnormalities to major hemorrhages or fatal thromboembolic events. Various changes in the coagulation-fibrinolytic system have been described in patients with an excess of thyroid hormones, and an increased risk of thrombosis is found in hyperthyroidism [4,5]. Carotid artery dissection is a cause of ischemic stroke in young people; the possibility that a disorder of immunity might play a role in the mechanism of inflammatory alterations has recently been suggested, and the hypothesis of an association between carotid artery dissection and thyroid disease has been raised in a few case reports [6,7]. Our patient did not have headache or neck pain, and neuroimaging did not show dissection of the carotid vessels. Stroke in a young patient with Graves' hyperthyroidism can also be caused by Moyamoya disease, which causes multiple intracranial stenoses, whereas this patient had a focal neurologic deficit and a single lacunar infarction on the CT scan [8,9]. Thyrotoxicosis, probably through a factor VIII-mediated hypercoagulability, may be a predisposing factor for cerebral venous thrombosis [10]. The complex haemostatic balance can be influenced by autoimmune mechanisms such as APS, but this rarely occurs. Graves' disease can be associated with APS. APS is diagnosed in the presence of at least one of the clinical criteria and one of the laboratory criteria listed in Table 2 [11]. This syndrome may arise in autoimmune diseases, mainly systemic lupus erythematosus, or may occur de novo, in which case it is known as primary APS. Our patient had primary APS. These antibodies may lead to a number of clinical syndromes including venous and arterial thrombosis [12,13]. Two similar cases have been reported. The first is a 48-year-old woman with recurrent venous thrombosis due to primary APS associated with hyperthyroidism; the presence of Human Leukocyte Antigen (HLA) DR7 and a high titer of anti-TSH receptor antibodies was noted. Her hyperthyroidism was treated with carbimazole and propranolol, and for the APS she was given heparin rather than coumadin, with recanalization of her thromboses [4]. The second is a 33-year-old woman known to have Graves' disease who subsequently developed cerebrovascular disease, which was explained by the increase in APL antibodies [14].
On immunogenetic analyses of families presenting with autoimmune diseases and APS, we demonstrated that the presence of HLA DR4 and DR7 predisposes patients with Graves' disease to develop APS [17]. Other possibilities have been advanced to explain the relation between APS and Graves' disease. The first is the existence of a cross-reaction between β2-glycoprotein-I and an epitope of the TSH receptor. The second is that anti-TSH receptor antibodies can generate antibodies against phospholipid epitopes [17,18]. Conclusion The association between APS and Graves' disease is very rare. It may have dangerous consequences such as cerebrovascular disease. We suggest that patients with Graves' disease should be examined for the presence of APS, especially when there is a thrombotic event. Table 2: Clinical and laboratory criteria of the antiphospholipid antibody syndrome [6].
Clinical criteria:
(1) Vascular thrombosis: one or more clinical episodes of arterial, venous, or small vessel thrombosis, in any tissue or organ.
(2) Pregnancy morbidity: one or more unexplained deaths of a morphologically normal fetus at or beyond the 10th week of gestation, one or more premature births of a morphologically normal neonate before the 34th week of gestation, or three or more unexplained consecutive spontaneous abortions before the 10th week of gestation.
Laboratory criteria:
(1) Lupus anticoagulant present in plasma on two or more occasions at least 12 weeks apart.
(2) Anticardiolipin antibody of IgG and/or IgM isotype present in high titer (>40 GPL or MPL).
(3) Anti-β2-glycoprotein-I antibody of IgG and/or IgM isotype present on two or more occasions.
1,750
2010-09-02T00:00:00.000
[ "Medicine", "Biology" ]
Molecular Mechanics Study of Flow and Surface Influence in Ligand–Protein Association Ligand–protein association is the first and critical step for many biological and chemical processes. This study investigated the molecular association processes under different environments. In biology, cells have different compartments where ligand–protein binding may occur on a membrane. In experiments involving ligand–protein binding, such as the surface plasmon resonance and continuous flow biosynthesis, a substrate flow and surface are required in experimental settings. As compared with a simple binding condition, which includes only the ligand, protein, and solvent, the association rate and processes may be affected by additional ligand transporting forces and other intermolecular interactions between the ligand and environmental objects. We evaluated these environmental factors by using a ligand xk263 binding to HIV protease (HIVp) with atomistic details. Using Brownian dynamics simulations, we modeled xk263 and HIVp association time and probability when a system has xk263 diffusion flux and a non-polar self-assembled monolayer surface. We also examined different protein orientations and accessible surfaces for xk263. To allow xk263 to access to the dimer interface of immobilized HIVp, we simulated the system by placing the protein 20Å above the surface because immobilizing HIVp on a surface prevented xk263 from contacting with the interface. The non-specific interactions increased the binding probability while the association time remained unchanged. When the xk263 diffusion flux increased, the effective xk263 concentration around HIVp, xk263–HIVp association time and binding probability decreased non-linearly regardless of interacting with the self-assembled monolayer surface or not. The work sheds light on the effects of the solvent flow and surface environment on ligand–protein associations and provides a perspective on experimental design. INTRODUCTION Molecular association is the first critical step in all chemical and biological processes such as the immune response, signal transduction, drug-protein binding, and chemical catalysis (Ozbabacan et al., 2010;Dill and Bromberg, 2012;Baron and McCammon, 2013;Lin et al., 2020). Simulation techniques play a crucial role in investigating the environment that may affect ligand-protein binding, which provides a fundamental understanding of molecular recognition and reduces the cost and time in molecular design. Although diffusion-controlled association rate constants may be approximated analytically, most ligand-protein systems have slower association rates than the diffusion-limited rate because the association event involves multiple steps (Di Cera, 2017;Pang and Zhou, 2017). Conformational rearrangement of both molecules largely determines their binding kinetics, but the two molecules must have an initial encounter first. This first step may be greatly affected by the environment, which can result in different measured association-rate constants. Although many ligand-protein bindings take place in a closed system without water flow such as in an experimental beaker or cell, ligand-protein association can also occur in a more dynamic environment. Different compartments within a cell also create various membrane environments when ligands and proteins associate and function (Zotter et al., 2017). 
For example, techniques using continuous flow biocatalysis have been developed recently to improve the efficiency of chemical synthesis, such as improved mixing, mass transfer, and automation (Planchestainer et al., 2017;Britton et al., 2018). Surface plasmon resonance (SPR), a powerful technique to measure molecular binding kinetics and thermodynamics, also utilizes flow chemistry and a continuous flow environment (Hinman et al., 2018;Prabowo et al., 2018;Ershov et al., 2020). However, how the flow may affect the ligand-protein association processes is unclear. Moreover, these methods need to immobilize one molecule on a surface that may have intermolecular interactions with the molecules to be tested. Therefore, the choice of surface and flow rate usually need to be optimized for various systems. In addition to experimentally measured values such as association rate constants, molecular simulation can reveal atomistic details of molecular association to further interpret experimental results and understand binding mechanisms. Molecular encounter usually involves two molecules searching for each other in a vast space and for longer than a nanosecond search time. Brownian dynamics (BD) simulations have been used for computational assessment of the encounter processes for several decades (Northrup et al., 1984;Huber and McCammon, 2019). Many software packages are available for studying a variety of problems related to bimolecular association. For example, MacroDox, UHBD, BrownDye Simulation of Diffusional Association (SDA), BD_BOX, BROMOC, ReaDDy, Smoldyn, SEEKR, and BDpack are used to probe ligand-protein associations with a flexible or rigid biomolecular model (Madura et al., 1995;Northrup et al., 1999;Huber and McCammon, 2010;Długosz et al., 2011;Schöneberg and Noé, 2013;De Biase et al., 2015;Martinez et al., 2015;Saadat and Khomami, 2015;Andrews, 2017;Votapka et al., 2017). Our group has been developing the GeomBD program, which focuses on using BD to investigate inter-enzyme intermediate transfer and substrate association on surface environments in biosensor and nanoenzyme structures Chang, 2015, 2016). Because of the diverse biosystems and binding environments, modifying an existing BD package to answer various questions is common practice. Here we used the HIV protease (HIVp) and xk263 as a model system to study the effect of flow and surface on ligand-protein associations. HIVp belongs to the class of aspartyl proteases and is essential for maturation and assembly of infectious virions (Kohl et al., 1988). The protein cleaves the large polyprotein precursors to mature viral proteins (Huang et al., 2014), an essential function for viral replication, and is a major drug target for AIDS treatment. This is a well-studied system with rich experimental binding data for many inhibitors and FDA approved drugs (Ghosh et al., 2016). The flexible flaps of HIVp can serve as a gate: its opening and closing affects ligand binding. In proteins, an open/closed rate of a gate related to the diffusion of their binding partners determines a fast or slow gating for ligand binding McCammon, 2011). Because of the complex gating behavior of HIVp and its importance in therapeutics, earlier work applied BD to study drug-HIVp binding and also developed a specialized coarse-grained model for HIVp to model the large-scale motions of the flaps (Tozzini and McCammon, 2005;Chang et al., 2006;Kang et al., 2011;Li et al., 2012;Bernetti et al., 2019). 
In this work, we used BD simulations to examine the xk263-HIVp association under the influence of ligand diffusion flux and a surface environment. The simulation applied rigid-body BD movements and considered atomistic details using precomputed grids when computing intermolecular interactions and driving forces. HIVp was immobilized on a surface or artificially placed 20 Å above the surface. We introduced a self-assembled monolayer (SAM) with a CH3 terminal group to model a hydrophobic surface environment (Cholko et al., 2019). We analyzed ligand association time and binding probability with different diffusion fluxes of xk263 and different surface and HIVp orientations. The non-polar SAM provided only weak intermolecular attractions with xk263, but the surface did not accelerate the xk263 encounter processes. The x-direction diffusion flux of xk263 significantly reduced the weak intermolecular interactions and the searching near the surface, which shortened the association time but also significantly reduced the probability of successful binding. The concentration gradient of xk263 was affected by the xk263 diffusion flux as well. Our studies suggest that the flow may have noticeable effects on molecular encounter processes and provide insights into the differences in ligand-protein association in diverse conditions, such as in static cells or a continuous flow biocatalysis environment. Model System The model system is a rectangular prism (400 × 400 × 220 Å³), closed at the top, consisting of HIVp, xk263, and a SAM surface (Figure 1A). The crystal structures of HIVp and the xk263 inhibitor were obtained from the Protein Data Bank (codes 1HHP and 1HVR, respectively) (Spinelli et al., 1991; Lam et al., 1994). The HIVp flaps are two polypeptides that cover the active site (Huang et al., 2017). HIVp was kept fixed in the model (Figure 1B). The SAM surface, with HIVp at its center, consisted of undecanethiol chains on a gold sheet of 1-atom thickness (Supplementary Methods) (Cholko et al., 2019). The size of the SAM was 400 × 400 Å², with a hexagonal packing pattern and a packing density on the order of 10^14 cm^-2, which gives an average chain separation of 4.98 Å. The system had periodicity in the y-direction and a termination boundary in the +x-direction, as the ligand flows in the +x-direction. HIVp had two orientations (Figure 2). Simulation Details The in-house modified GeomBD2 program was used for all BD runs (Roberts and Chang, 2016). The program first creates three grid files: the exclusion volume, the screened Coulombic potential, and the 12-6 Lennard-Jones potential for HIVp and SAM. Intermolecular interactions between the ligand xk263 and HIVp or SAM are grid-based calculations, and the forces are estimated numerically (Roberts and Chang, 2016). To speed up the calculations, these precalculated grids extend up to 40 and 15 Å around HIVp or SAM for the screened Coulombic and 12-6 LJ potentials, respectively. The force field parameters for HIVp and CH3-SAM were obtained from Amber-ff14SB. The partial charges of xk263 and the SAM were computed using the AM1-BCC charge method with the antechamber program (Cholko et al., 2019). The motion of the ligand is Brownian, governed by an overdamped Langevin equation in implicit water (Northrup et al., 1984):
Δr_i = -(D_i Δt / k_B T)(δE/δr_i) + R_i, where D_i is the translational or rotational diffusion coefficient, k_B is Boltzmann's constant, T is the temperature, δE/δr_i is the potential gradient computed numerically from the potential grids, Δt is the time step, and R_i is a stationary Gaussian random displacement with zero mean. To impose the flow on the ligand, the overdamped Langevin equation is modified by adding N, a small fractional number, to the displacement in the x-direction. For each molecular system, we terminated the simulation when the runs had generated at least 600 xk263-HIVp association events and the computed average association time had converged to within a standard deviation of <3%. A successful xk263-HIVp association is defined as xk263 coming within 11.5 Å of ASP25, a residue in the active site of HIVp, at which point the association time is recorded. Notably, after 600 successful bindings, the computed average association time for each system is converged to within 3%. In the system, xk263 starts diffusing from the yz-plane at the -x-direction boundary (Figure 1A). A run ends when xk263 reaches the binding pocket or the +x-direction boundary, and a new run is then started by randomly placing xk263 at a new position on the yz-plane (Figure 3). The system clock is used as the random number seed, so no two runs can be identical. In addition, we can save the trajectories of our BD runs to visualize ligand diffusion and binding/unbinding events. Calculations for System Analysis For the calculations of diffusion flux, diffusion coefficients, CH3-SAM interaction, and xk263 distribution, we used trajectories with no termination at the +x-direction boundary; the trajectories used for these calculations are therefore continuous and 0.3 µs long for each case. xk263 Diffusion Flux The diffusion flux is the number of molecules transferred across a plane per unit time per unit area, representing the flow of xk263. It was calculated from the continuous BD trajectories and is reported in units of molecules/s·m². The plane is an imaginary plane at the +x-direction boundary, perpendicular to the direction of flow. The calculated flux densities for the different N values (in Equation 2) are reported in Supplementary Table 1. Diffusion Coefficient The diffusion coefficient was calculated from Einstein's relation, <r²> = 2nDt, where r is the displacement in time t, n is the dimensionality, and D is the diffusion coefficient. To calculate the interaction energetics of xk263 with CH3-SAM, we used the portions of the continuous trajectories in which xk263 diffused within 25 Å above the CH3-SAM. The interaction energy includes the 12-6 Lennard-Jones potential and the screened Coulombic potential; the cutoffs for vdW and electrostatic potentials were 12 Å and 40 Å, respectively. The 12-6 Lennard-Jones potential is calculated as U_LJ = Σ_ij 4ε_ij[(σ_ij/r_ij)^12 - (σ_ij/r_ij)^6], where r_ij is the distance between atoms i and j, ε is the pairwise well-depth parameter and σ is the distance at which the atomic radii meet and the potential changes sign.
FIGURE 3 | Schematic top view of the simulation box. HIVp is placed on CH3-SAM, and two initial starting positions of xk263, A and B, are on the yz-plane. The simulation is terminated after xk263 binds to HIVp, and a new replica is then started. If xk263 reaches the +x-direction wall, the simulation is also terminated, and a new replica is started at a different position on the yz-plane.
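As an illustration of the propagation step described above, the following is a minimal sketch in Python/NumPy of one rigid-body BD translation step with the added x-direction drift term N. The function name bd_step, the callable force evaluation, and the unit choices are illustrative assumptions and are not the GeomBD2 implementation.

```python
import numpy as np

def bd_step(r, D, dt, kBT, grad_E, N_drift=0.0, rng=None):
    """One overdamped Langevin (Brownian dynamics) translation step.

    r       : current position (3-vector)
    D       : translational diffusion coefficient
    dt      : time step
    kBT     : thermal energy, in the same energy units as grad_E
    grad_E  : callable returning the potential gradient dE/dr at r
    N_drift : small fractional displacement added along +x to impose the flux
    """
    rng = rng or np.random.default_rng()
    # Deterministic drift down the potential gradient.
    drift = -(D * dt / kBT) * grad_E(r)
    # Stochastic displacement: Gaussian, zero mean, variance 2*D*dt per axis.
    random_kick = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=3)
    step = drift + random_kick
    step[0] += N_drift  # extra x-displacement that produces the ligand flux
    return r + step
```

A run would call bd_step repeatedly, checking after each step whether the ligand has come within 11.5 Å of ASP25 or has crossed the +x-direction boundary.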
Termination: reaching the +x-direction wall or coming within 11.5 Å of ASP25′. The screened Coulombic potential is calculated as U_elec = Σ_ij k_e q_i q_j exp(-k r_ij)/(ε r_ij), where r_ij is the distance between two point charges i and j from the ligand and receptor, respectively, q_i and q_j are the partial electron charges of ligand point charge i and receptor point charge j, k is the screening parameter, k_e is the Coulomb constant, k_B is the Boltzmann constant and ε is the solution dielectric constant. xk263 Distribution The distribution of xk263 in the system was calculated from the continuous BD trajectories. The volume of the system was divided into 11 slices of 20 Å height each, and the total number of appearances of xk263 replicas was computed for each slice over 0.3 µs. The distribution g(z) is calculated according to Equation 8, g(z) = N/M with M = n_R (v/V) f, where N is the total number of appearances of xk263 replicas in a slice within a certain number of frames, M is the expected total number of appearances approximated from random diffusional motion, n_R is the number of xk263 replicas in the system, v is the volume of the slice, V is the total volume of the system, and f is the number of frames analyzed. RESULTS AND DISCUSSION Understanding ligand-receptor associations in various environments is of vital importance for drug binding in cells and in different experiments. In this study, we analyzed the effects of the SAM surface, the diffusion flux of the ligand xk263, and the position of HIVp relative to the surface, in comparison with results for a static environment (Table 1). HIVp has flexible flaps, which are mostly in closed conformations (Katoh et al., 2003; Kang et al., 2011). In addition, the orientation of HIVp with respect to the +x-direction diffusion flux of xk263 can affect ligand association. Therefore, we examined two HIVp orientations, perpendicular and parallel to the ligand flux (Figure 2). We modeled 60 different systems (Figure 4) and chose values of the xk263 diffusion flux that gradually show how the modeled quantities change with the flux. Environmental Factors Involved in Ligand Diffusion and Distribution Intermolecular Interactions Between xk263 and CH3-SAM We first examined the interactions between xk263 and CH3-SAM as the ligand diffusion flux varies. Table 2 shows weak van der Waals (vdW) interactions between xk263 and CH3-SAM regardless of the flux, which resulted in no significant adsorption of xk263 on the SAM (Figure 6A). Because of the hydrophobic nature of CH3-SAM and xk263, it is not surprising that the electrostatic interaction was close to zero. Notably, our BD movement yielded a translational diffusion coefficient of xk263 of D_lig = 5.58 ± 0.33 × 10^-6 cm²/s when the system had no SAM and no ligand diffusion flux. The modeled ligand diffusion coefficient is in good agreement with the analytical value, D_analytical = 4.91 × 10^-6 cm²/s, which also validates our simulation setting and BD algorithm (Supplementary Methods). Although the intermolecular vdW interaction was weak, the attraction still affected the diffusion of xk263, and the molecule diffused a little more slowly in the presence of the SAM, with or without xk263 diffusion flux (Figure 5, Supplementary Table 1). Because the flow rate is inversely proportional to the viscosity (Pfitzner, 1976), the diffusion coefficient of xk263 increased with increasing xk263 diffusion flux, as anticipated.
However, because the water flow rate was not equal to the xk263 diffusion flux, we did not see a simple linear relationship between the two values (see Supplementary Table 1 for numerical data). Distribution of xk263 in the Simulation Tube We further investigated whether the environment may affect the concentration gradient of xk263 along the z-direction when the molecule diffuses in the square tube. The system was partitioned into slices of 20 Å each along the z-direction, and Figure 6 shows the distribution of xk263 as a function of distance above the surface. The distribution of xk263 for a slice, g(z), is the ratio of the number of observed xk263 to the number expected from random diffusional motion. The systems include HIVp in the orientation shown in Figure 2A. When there was no SAM and the xk263 diffusion flux was small (green line in Figure 6A), the distribution of xk263 was the same throughout the z-direction (distribution ratio ≈ 1). In contrast, when CH3-SAM was present (Figures 6A-C), the xk263 concentration was doubled in the first slice (within 20 Å of the SAM) when the diffusion flux of xk263 was <1.09 × 10^22 molecules/s·m². The results suggest that even if the intermolecular attraction between xk263 and CH3-SAM is weak, the local concentration can be increased 2-fold, which helps to increase the probability of a molecular encounter. Studies of various systems have also shown that local molecular concentration can be affected by a surface; for example, surface-catalyzed biomolecular reactions in nanoreactors, nucleotidase colocalization, and absorption of biomacromolecules on hydrogels (Roa et al., 2017; Pérez-Mas et al., 2018; Rahmaninejad et al., 2020). The SAM retains xk263 near the surface only when the diffusion flux of the ligand is small. When the flux increases, the local concentration of xk263 near the surface decreases, and xk263 has its highest concentration near the middle of the z-direction (see ~100 Å in Figures 6D-G). The concentration gradient along the z-direction was the same with or without the SAM, which suggests that the weak intermolecular interactions can be altered easily when xk263 has a diffusion flux in the x-direction. Our results are consistent with experiments showing a quadratic curve of concentration gradients with flow. With larger diffusion flux, a significantly smaller amount of xk263 diffused near the surface where HIVp is located, and this reduced distribution also contributed to the small xk263-HIVp association probability, discussed later. HIVp Immobilized on the Surface We modeled the average xk263-HIVp association time and the binding percentage when HIVp was 0 Å above the surface, with SAM (Tables 3, 4) or without SAM. Theoretically, if xk263 undergoes only 3-dimensional diffusion within a sphere, the average association time to bind to HIVp located at the center of the sphere is 2648.4 ns when the system has no external flux (Supplementary Methods) (Adam and Delbrück, 1968). The association time modeled by our simulation, 1491.55 ns, is close to the theoretical value. The binding time faster than theory is due to the use of a box, so xk263 did not need to search the whole sphere to associate with HIVp. Notably, existing association rate theories describe systems without diffusion flux (Berg and Purcell, 1977; Szabo et al., 1980; Shoup and Szabo, 1982), and further theoretical work on molecular association with flowing buffer is needed.
FIGURE 7 | Plot of average association time vs. xk263 diffusion flux. HIVp is placed (A) perpendicular to the yz-plane and (B) parallel to the yz-plane; similar data imply no effect of HIVp orientation on the association time. In each plot, the coinciding lines (red, green, blue, and magenta) show no effect of the hydrophobic CH3-SAM or of the height of the receptor above the surface.
Although the interactions between xk263 and CH3-SAM were small, to examine the environmental effects we turned off the intermolecular vdW and electrostatic interactions, so that the system was in a confined space without intermolecular attractions or repulsions (without SAM). The x-direction diffusion flux of xk263 was gradually increased from zero to 1.7 × 10^23 molecules/s·m², and the average xk263-HIVp association time decreased non-linearly. A similar correlation was observed in a previous study, which showed that the association time for human immunoglobulin G (IgG) and Staphylococcus aureus protein A (SpA) decreased with increasing flow rate (Ogi et al., 2008). The average association time did not differ with or without SAM in the presence of diffusion flux (Table 3). This is not surprising because the CH3-SAM provides only weak attractions to xk263, and the surface neither increased the local concentration of xk263 to assist ligand association nor kept the ligand from binding to HIVp (Figure 6) (Cholko et al., 2020). However, if the attraction between a ligand and a surface is large, it may affect experimental results. As a result, existing experimental studies have considered the choice of surface for the sensor or SPR chip to optimize experimental sensitivity and accuracy. For example, phospholipid/alkane bilayers might be a better option for molecular binding studies of biomolecular systems using SPR (Plant et al., 1995). Such phospholipid-derivatized surfaces may introduce non-specific interactions with xk263 that are absent in our current model, which could result in a slightly longer association time. We report the inverse of the association time per molarity and the association percentage (Table 4). When xk263 diffused freely in the tube without diffusion flux, the association rate approximated from the average association time, 1/1491.55 ns per M, yielded k_on = 7.8 × 10^9 1/Ms. Using the diffusion coefficient from the BD run (Supplementary Table 2), our simulated k_on was approximately half of the analytical value, k_on = 4πRD = 1.2 × 10^10 1/Ms, because the analytical formula does not consider molecular geometry in the association. Our modeled k_on was also slightly slower than that measured by SPR for another highly similar cyclic urea compound, 2.5 × 10^10 1/Ms (Markgren et al., 2002). The first phase in SPR utilizes buffer flow and detects binding signals, which requires successful collisions with the correct ligand and HIVp orientation and enough energy. The flow may influence successful collision, especially for biomolecular systems, which usually have complex molecular geometry. Of note, the inverse association time increased linearly with the xk263 diffusion flux (Supplementary Figure 1), which suggests that the association rate may also increase linearly with the flow. However, our modeled diffusion coefficients (Supplementary Table 2) show an exponential increase with the xk263 diffusion flux (Figure 5).
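To make the two estimates above concrete, the following is a minimal sketch (Python/NumPy) of how a diffusion coefficient can be extracted from BD displacements via the Einstein relation and how the diffusion-limited k_on = 4πRD is converted to molar units. The function names, the single-lag MSD estimate, and the numeric inputs are illustrative assumptions, not the paper's analysis scripts.

```python
import numpy as np

AVOGADRO = 6.02214076e23

def diffusion_coefficient(positions, dt, dim=3):
    # Einstein relation <r^2> = 2 n D t, estimated here from the mean squared
    # displacement over a single time-step lag; positions in cm, dt in s.
    disp = positions[1:] - positions[:-1]
    msd = np.mean(np.sum(disp**2, axis=1))
    return msd / (2.0 * dim * dt)

def smoluchowski_kon(R_cm, D_cm2_per_s):
    # Diffusion-limited rate constant k_on = 4*pi*R*D, converted to 1/(M*s).
    k_per_molecule = 4.0 * np.pi * R_cm * D_cm2_per_s  # cm^3 / s
    return k_per_molecule * AVOGADRO / 1000.0          # 1 / (M * s)

# Illustrative numbers only: a reaction distance of 11.5 Å and a diffusion
# coefficient of ~5e-6 cm^2/s give a k_on on the order of 10^9-10^10 1/(M s).
print(f"{smoluchowski_kon(11.5e-8, 5.0e-6):.2e} 1/(M s)")
```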
When particles have flux moving along the +x direction, the ligand did not freely diffuse in all directions, and the equation for the diffusion-controlled association system, k on = 4πRD, was no longer suitable to approximate the association rate constants. Therefore, we cannot directly estimate k on from the increased diffusion coefficient. The additional +x direction drift velocity also reduced the initial encounter probability. When the flux was >3.43 × 10 22 molecules/s.m 2 , xk263 passed the tube without sufficient time to freely diffuse in the y and z directions, which resulted in reduced search space and tremendously decreased association percentage. Large diffusion flux also yielded fewer xk263 molecules staying near both the SAM and HIVp surface. Notably, xk263 may diffuse on the surface of HIVp before binding to the pocket. The xk263 diffusion flux in the +x direction perturbed the interactions between xk263 and HIVp, and the weakened intermolecular attractions also led to decreased association percentage. In typical experimental settings, an immobilized protein can have multiple orientations. Therefore, we chose two representative orientations-the binding pocket of HIVp perpendicular or parallel to the flux of xk263-to examine whether the orientation against the flow affects ligand binding. Tables 3, 4, Figures 7, 8 show no difference in average ligand association time and binding percentage with the two orientations. Because the binding pocket of HIVp is widely accessible for the ligand, the impact of the orientation was insignificant. However, some proteins exhibit a certain direction for ligands to enter the binding pocket, and their orientation against the flux of the ligands can affect the ligand-protein association. Artificially Localized HIVp 20 Å Above the Surface Although different HIVp orientations did not affect xk263 association time, existing studies showed that the dimer interface (the bottom part of HIVp) is the most popular region for ligands' initial encounter of HIVp (Roberts and Chang, 2016). After reaching the dimer interface, the ligand can undergo surface diffusion to reach the binding pocket. When we placed HIVp 0 Å above the surface, this bottom region became partially inaccessible to xk263. Therefore, immobilized HIVp was artificially placed 20 Å above the surface so that xk263 can easily reach the bottom region. When the whole HIVp surface was accessible to xk263, the association time was reduced by 10-20% when the xk263 diffusion flux was <1.09 × 10 22 molecules/s.m 2 , which suggests that the additionally available bottom region accelerates xk263 binding (Table 3) and slightly increases the association percentage (Table 4). Even when HIVp was placed 20 Å above the surface, the location was not far enough to eliminate the intermolecular xk263-SAM interactions. As a result, xk263 spent longer time near SAM when xk263 diffusion flux was small, thus resulting in longer time to associate with HIVp when SAM was present rather than absent. When the flux exists, the association time was the same when the SAM was present or absent, regardless the position of HIVp (Table 3). CONCLUSION Ligand-protein encounters can occur in any environment that may provide additional intermolecular interactions or the transporting forces to the ligand. For example, in experimental settings such as using SPR to study ligand-protein binding, the choice of the surface and the flow rate are all optimized for measurements. 
In this study, we used BD simulations to investigate the effect of the CH3-SAM surface on xk263-HIVp association. The non-polar surface provided weak intermolecular vdW attractions with xk263, slightly increasing the ligand binding time. These effects quickly vanished when xk263 had an x-direction diffusion flux, and the association time decreased as the diffusion flux increased, regardless of whether the SAM was present or only a plain, non-interacting plane. With no diffusion flux, xk263 was twice as concentrated within 20 Å of the SAM surface because of the xk263-SAM interactions. This concentration gradient did not increase the binding time but increased the xk263-HIVp binding probability. When the xk263 diffusion flux increased, the middle region of the square tube had the highest xk263 concentration, consistent with existing experiments showing a quadratic curve of concentration gradients with flow. The results also show that when HIVp was placed on the surface (0 Å above the surface), the bottom part of the protein, a known high-probability site of the xk263-HIVp first encounter, was not accessible to xk263. Because xk263 could bind nonspecifically on the HIVp surface and then utilize surface diffusion to reach the binding site, occluding this bottom region increased the ligand binding time by 10-20% in the static environment. However, with a large xk263 diffusion flux, all the weak intermolecular interactions and the searching along the HIVp surface were eliminated, which resulted in a fast association time and a small binding probability. This work provides insights into how ligand diffusion flux and the environment may affect ligand-protein association. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS SK ran and analyzed the simulations, produced figures, and wrote the manuscript. C-eC designed experiments, analyzed the simulation, and wrote the manuscript. Both authors contributed to the article and approved the submitted version. FUNDING This study was supported by the US National Institutes of Health (GM-109045), US National Science Foundation (MCB-1932984), and NSF national supercomputer centers (TG-CHE130009).
6,403.6
2021-05-10T00:00:00.000
[ "Chemistry", "Physics", "Biology" ]
Conserved genomic organisation of Group B Sox genes in insects. Background Sox domain containing genes are important metazoan transcriptional regulators implicated in a wide range of developmental processes. The vertebrate B subgroup contains the Sox1, Sox2 and Sox3 genes that have early functions in neural development. Previous studies show that Drosophila Group B genes have been functionally conserved, since they play essential roles in early neural specification and mutations in the Drosophila Dichaete and SoxN genes can be rescued with mammalian Sox genes. Despite their importance, the extent and organisation of the Group B family in Drosophila has not been fully characterised, an important step in using Drosophila to examine conserved aspects of Group B Sox gene function. Results We have used directed cDNA sequencing along with the output from the publicly available genome sequencing projects to examine the structure of Group B Sox domain genes in Drosophila melanogaster, Drosophila pseudoobscura, Anopheles gambiae and Apis mellifera. All of the insect genomes contain four genes encoding Group B proteins, two of which are intronless, as is the case with vertebrate group B genes. As has been previously reported, and unusually for Group B genes, two of the insect group B genes, Sox21a and Sox21b, contain introns within their DNA-binding domains. We find that the highly unusual multi-exon structure of the Sox21b gene is common to the insects. In addition, we find that three of the group B Sox genes are organised in a linked cluster in the insect genomes. By in situ hybridisation we show that the pattern of expression of each of the four group B genes during embryogenesis is conserved between D. melanogaster and D. pseudoobscura. Conclusion The DNA-binding domain sequences and genomic organisation of the group B genes have been conserved over 300 My of evolution since the last common ancestor of the Hymenoptera and the Diptera. Our analysis suggests insects have two Group B1 genes, SoxN and Dichaete, and two Group B2 genes. The genomic organisation of Dichaete and another two Group B genes in a cluster suggests they may be under concerted regulatory control. Our analysis suggests a simple model for the evolution of group B Sox genes in insects that differs from the proposed evolution of vertebrate Group B genes. Background The family of Sox-domain containing proteins encompasses a group of metazoan transcriptional regulators first identified by their similarity with the mammalian testis-determining factor SRY. Membership of the Sox family is conferred by the presence of an HMG1-type DNA-binding domain sharing greater than 60% amino-acid sequence identity to that of SRY [1]. Mammalian genome sequencing projects indicate that in humans and mice there are twenty Sox genes [2], divided into eight subgroups (A-H) on the basis of sequence identity within and outwith the HMG-domain. Aside from mammals, Sox genes have been identified in all metazoans examined to date, including birds, fish, amphibians, basal chordates, insects and nematodes [3]. The B subgroup is of particular interest since members of this group are most closely related to SRY and appear to be functionally conserved during evolution. Sequence analysis and functional studies suggest that, in vertebrates, the five members of the B subgroup can be subdivided into two further groups: B1 (Sox1, Sox2 and Sox3) [4] and B2 (Sox14 and Sox21) [5].
It has been suggested from studies in the chick that the three group B1 proteins act as gene activators whereas the B2 proteins act as gene repressors [6]. In terms of genomic organization, all five of the group B genes are devoid of introns. Sox3 is located on the mammalian X chromosome and is believed to be the ancestor of Sry [7,8]. In humans, the remaining four autosomal group B genes are arranged in two pairs, each comprising one B1 gene and one B2 gene: Sox2 and Sox14 map together on chromosome 3 [9,10] and Sox1 and Sox21 map together on chromosome 13 [5,11]. This organization is conserved, at least in part, in other vertebrates with Sox2-Sox14 mapping together in the chick and the monotreme, O. anatinus, and Sox1-Sox21 mapping together in the chick [12,13]. There is, however, no linkage of Group B Sox genes in the mouse genome [14,15]. A model suggesting the evolution of group B genes and Sry from a single ancestor has been proposed, which suggests that pairs of B1 and B2 genes arose by a tandem duplication and then a chromosomal duplication [13]. The fruitfly, Drosophila melanogaster, has proved to be a tractable system for studying conserved aspects of eukaryotic gene function and, with the production of other insect genome sequences, a useful baseline for evolutionary studies of gene organisation [16]. Whole-genome sequence is now available for three insects, Drosophila melanogaster, Drosophila pseudoobscura (which diverged from melanogaster some 46 million years ago) and Anopheles gambiae, which diverged from melanogaster approximately 250 million years ago [17,18]. Sequencing and assembly of a further ten Drosophila species is currently underway [19] promising an unparalleled data source for evolution-ary studies. In addition to the diptera, the sequencing of the Hymenoptera, Apis mellifera (honey bee ~280 million years from Drosophila), is now well underway, allowing fragments of a fourth insect genome to be assessed. In functional terms, Drosophila is a useful model for studying SOX gene function due to its genetic tractability. For example, we have previously shown that, in the case of the Drosophila group B gene Dichaete, there is functional conservation between insect and mammalian genes [20]. In addition, we, and others, have demonstrated a degree of in vivo functional redundancy between Dichaete and SoxN [21,22] as had been proposed for the mammalian group B genes [23]. Of particular interest is the fact that the expression patterns and functional studies of group B genes suggest that they participate in the earliest events of CNS differentiation in all organisms that have been studied to date including Drosophila, Xenopus, chick, mouse, ascidians and hemichordates [24]. To further explore the relationship between group B Sox genes we examined the extent and organization of the family in insects. Our studies show that group B Sox gene organisation is similar in four different insects. We find conservation in the sequence and genome organization of the group B genes in D. melanogaster, D. pseudoobscura, A. gambiae and A. melifora. In contrast to mammals and in agreement with a previous report [25], we find that two group B2 genes contain introns and are organized as a single genomic cluster along with the intronless Dichaete gene. Our studies indicate a potentially different evolutionary path for members of the group B family in insects and vertebrates. 
Results To explore the structure of the group B Sox genes in insects we first accurately determined the extent and structure of the family in Drosophila melanogaster. The group B genes, Dichaete and SoxNeuro (SoxN) have already been well described in the literature [26][27][28]. Two other group B gene fragments have been identified [25], Sox21a and Sox21b, but their structure and genomic organisation have not been reported. Using a combination of database searching and DNA sequencing we characterised both of these genes in detail. We find no evidence for any other group B genes in Release 3.2 or Release 4 of the Drosophila genome sequence, indicating that there are a total of four in the D. melanogaster genome. Sox21a Blast searches of the Drosophila genome identified a group B HMG-domain interrupted by a 1655-bp intron in the 70D region of chromosome arm 3L. Using primers designed against each of these predicted exons we amplified a fragment of 1238 bp from the LD cDNA library produced by the BDGP [29]. For reasons that are as yet unclear, we have been unable to recover a clone from this or any of the several other cDNA libraries that we have screened. The fragment amplified from the library was sequenced in its entirety and found to contain a long open reading frame encoding a 389 amino acid Sox domain protein. The predicted polypeptide initiates with a methionine and probably contains the entire coding sequence for the gene. When aligned with the genome sequence we predict a gene with two exons spanning 2.8 kb. Blast searches with the predicted protein find over 90% identity with a range of group B Sox proteins in the HMG DNA-binding domain. The best scores are with the DNA-binding domains of the vertebrate Sox21 and Sox14 proteins; however, there is little significant similarity outside of the DNA-binding domain. The Sox21a gene has previously been reported as SoxB2-3 (CG7345) and it has been suggested that it may represent a pseudogene [25]. As we show below, RT-PCR and in situ hybridisation studies indicate that Sox21a is expressed in both D. melanogaster and D. pseudoobscura indicating that it is not a pseudogene. Sox21b Along with Sox21a, Blast searches indicated a second interrupted Sox domain in the same region of the genome. In this case, database searches found a potential cDNA clone from the BDGP (GH07353), which was obtained and sequenced in its entirety. The sequence of the clone revealed a long open reading frame, initiating with a methionine, encoding a predicted polypeptide of 571 amino acids. Alignment with the genome sequence indicates that the gene spans 19 kb of genomic DNA and is composed of 7 exons, the first of which is non-coding. The DNA-binding domain contains two introns; the first, 6388 bp in size, is in the middle of the DNA-binding domain and the second, of 59 bp, is in the same position and frame as the Sox21a intron described above. Blast searches with the predicted amino acid sequence find greater than 90% amino acid identity the DNA-binding domains of group B Sox proteins, the highest scores being with Dichaete. The sequence indicates that Sox21b corresponds to the SoxB2-2 gene fragment previously reported [25] and whole mount in situ hybridisation with probes derived from genomic DNA and the cDNA clone confirm the pattern of expression previously reported ( Figure 2). Thus, both Sox21a and Sox21b are expressed group B Sox genes that have their DNA binding domains interrupted by introns. 
To verify the gene predictions and gain some insight into their possible biological functions we determined the developmental expression of each of the four group B genes by RT-PCR using RNA templates isolated from different stages of the Drosophila lifecycle. In the case of Sox21a and Sox21b, we used primers to the Sox-domain encoding exons spanning a predicted intron. All RT-PCR reactions included a reverse transcriptase minus reaction and the amplified products were verified by sequencing. The results of this analysis are presented in Table 1 and can be summarised as follows: the expression profiles of Dichaete and SoxN are very similar, they are expressed during embryonic, larval and pupal stages of development, the level of expression reducing during the later stages of pupal development. Both genes are expressed in adult male as well as female flies, with bodies showing stronger expression than heads. Sox21a is expressed throughout development and in adults it is stronger in heads than in bodies. In contrast to the other group B genes, Sox21b has a more complex expression pattern during development. It is strongly expressed in embryos but is below detectable levels for much of larval and pupal life. After eclosion it is weakly expressed in male heads but not bodies. Thus, in common with mammalian group B genes, all four D. melanogaster group B Sox are expressed during embryogenesis and at other stages throughout development. Group B genes in other insects Our findings show that Drosophila melanogaster has four group B Sox genes compared to the five found in vertebrates and, unlike vertebrates, two of the genes contain introns. To investigate whether this particular organization is unique to D. melanogaster we searched the available genome sequence of other insects to find potential Sox domain genes. Using the Dichaete DNA-binding domain as a query, we searched the Drosophila pseudoobscura, Anopheles gambiae and Apis mellifera genome and EST sequence databases using Blast-P and Blast-N (see materials and methods for EST and genome scaffold accessions). In all three cases we found evidence for four Group B genes and were able to build gene models, from the genome sequence alone or with the addition of EST data where available. The initial characterization of the insect group B genes, based on the HMG-domain sequence, suggests that there is a single orthologue of each Drosophila gene in the other three species. SoxN The alignment presented in Figure 1a. shows the similarity between the insect SoxN proteins and mouse Sox1. As previously reported, conservation between vertebrate and invertebrate Sox proteins is mostly restricted to the DNAbinding domains [26,27]. Between the insect proteins there are more extensive regions of homology outwith the DNA-binding domain. The Drosophila SoxN sequences show over 90% sequence identity over their entire length and, as expected from the phylogenies based on rDNA and protein coding sequences, the other insect sequences are more diverged [18,30]. A. gambiae is overall 64% identical to the melanogaster sequence with particularly well conserved regions in the N-terminal 50 amino acids and more patchy conservation C-terminal to the DNA-binding domain. A. mellifera is further diverged (52% identity with Drosophila). Conserved regions outside the DNA-binding domains among all four sequences are restricted to a stretch of amino acids C-terminal that may represent conserved functional motifs important in transcriptional regulation. 
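The identity figures quoted above can be reproduced with a simple pairwise comparison of aligned sequences; the short Python sketch below is illustrative only. The sequences shown are placeholders, not the actual SoxN or Dichaete HMG domains, and the calculation assumes the sequences have already been aligned (gaps marked with '-').

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences.

    Positions where either sequence has a gap ('-') are excluded from the
    denominator, one common convention for reporting identity.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("aligned sequences must have equal length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Placeholder fragments standing in for aligned HMG-domain sequences.
hmg_a = "DRVKRPMNAFMVWSRGQRRKMA"
hmg_b = "DRVKRPMNAFMVWSQHERRKIA"
print(f"{percent_identity(hmg_a, hmg_b):.1f}% identity")
```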
Dichaete The situation with Dichaete is similar to that observed with SoxN, and the figures for amino acid identity are virtually identical (Figure 1b). Outside of the DNA-binding domain the Dichaete sequences show even less similarity comparing the Drosophila species and the other two insects; conservation between all four being restricted to limited regions C-terminal to the DNA-binding domain. Interestingly, we have shown that the C-terminal region of D. melanogaster Dichaete contains sequences required for activity in a context-specific manner [20] and C-terminal regions of the mouse and chicken Sox2 protein are believed to be involved in aspects of correct Sox2 function [31]. Sox21a This gene is the least conserved between the four species and outside of the DNA-binding domain they show little similarity with vertebrate group B2 proteins ( Figure 1c). There is extensive homology between the two Drosophila species, however, the Anopheles and Apis sequences are very diverged outside of the DNA binding domain. As with D. melanogaster, there are no EST sequences available that support the structure of Sox21a in the other insects. Sox21b The predicted Drosophila Sox21b proteins are again very similar, over 88% identical over their length. The other insect sequences are less well conserved, although the Anopheles sequence has a block of conservation C-terminal to the DNA-binding domain, including a Glutamic acid-rich domain ( Figure 1d). The predicted Apis sequence is less well conserved, we note, however, that all four proteins are identical at the extreme C-terminus. With both the Anopheles and Apis proteins we cannot confidently predict the N-terminal exons and are unable to find any regions with amino acid similarity to the first 2 coding exons of the Drosophila sequences in the Anopheles or Apis genomic sequence between the end of Dichaete and the Sox21b Sox-domain encoding exons. Our current models are, however, supported by the available EST sequences for both species although the EST sequences are not fulllength. Therefore, the definitive structure of these two insect Sox21b genes will require further investigation. Nevertheless, it is clear from the available sequence that orthologues of Sox21b are present in other insects. Embryonic expression of group B genes in D. melanogaster and D. pseudoobscura To confirm the identification of four group B genes in both D. melanogaster and D. pseudoobscura, we performed whole-mount in situ hybridization to embryos of both species using exon-specific probes generated by PCR from genomic DNA. In all four cases we find very similar patterns of expression during embryogenesis. In the case of Dichaete, we find blastoderm expression including a broad central domain and a region of expression in the cephalic neuroectoderm (Figure 2A and 2A'). After gastrulation there is extensive expression in the developing CNS ( Figure 2B and 2B') including the midline (not shown). With SoxN we find conserved blastoderm expression, including an identical restriction from the ventral region of the embryo, followed by extensive expression throughout the developing CNS ( Figure 2C to 2D'). With Sox21a, we identified conserved expression in the anlage of the foregut and hindgut at stage 12 ( Figure 2E and 2E') with later expression in specific cells of the midline after stage 14 ( Figure 2F and 2F'). Sox21b shows conserved expression in abdominal epidermal stripes from stage 13 ( Figure 2G to 2H'). 
These observations indicate that all four group B genes have conserved expression patterns during embryogenesis. Genomic organisation of group B genes in Drosophila: the Dichaete complex In some vertebrates the two classes of group B genes, B1 and B2, are linked on the same chromosome. In contrast, in Drosophila a single gene, SoxN, maps to the second chromosome and the remaining three all map to chromosome 3. We examined the organisation of the group B genes in the other insect genomes and found that the situation was very similar to that observed in Drosophila. In melanogaster, SoxN is intronless and sits alone in the middle of an 80 kb island with no flanking genes for 35 kb proximal and 45 kb distal, an unusual organisation for a Drosophila gene. We have previously shown that Dichaete is controlled by extensive 3' regulatory sequences, suggesting that perhaps the paucity of genes flanking SoxN may also indicate the presence of extensive regulatory sequences. In support of this, we find several clusters of predicted transcription factor binding sites from 35 kb upstream to 20 kb downstream of SoxN when we use stringent search criteria with the CisAnalyst analysis software [32,33] (data not shown). Similar searches with Dichaete find previously identified regulatory sequences, suggesting that SoxN may indeed be subject to complex regulation. Comparative analysis of the melanogaster and pseudoobscura genomes with the Vista genome alignment viewer [34,35] indicates that the genomic organization is very similar in the two species. The Ensembl annotation of the Anopheles genome indicates that the region around SoxN is also sparsely populated, with only two short stretches of EST homology in the 150 kb flanking SoxN. Therefore, it is possible that SoxN is subject to complex regulatory control in Anopheles. There is currently insufficient contiguous genomic sequence from Apis to assess the organization of the SoxN region. In the 70D region of Drosophila melanogaster chromosome arm 3L the remaining three group B genes are clustered within a 77 kb region (Figure 3). As we have previously reported, Dichaete is an intronless gene controlled by at least 30 kb of regulatory sequence 3' to the transcription unit [36]. 16 kb further distal to these regulatory sequences we find the start of Sox21b, and a further 28 kb distal to this the start of Sox21a. The region ends with the Fat Body Protein 1 (Fbp1) gene 6 kb downstream of Sox21a. The genomic organization of Sox21b is highly unusual for a group B Sox gene. It is split into seven exons, the first of which is non-coding, and exons 3, 4 and 5 contain the DNA-binding domain. All of the predicted splice junctions have consensus GT-AG sequences. Sox21a comprises two exons, each containing a portion of the DNA-binding domain. As we note above, the position of its intron, which has consensus splice junction sequences, is the same as that of the second DNA-binding domain intron of Sox21b (intron 4). In the case of D. pseudoobscura, the homology extends from upstream of the Dichaete coding region to at least the Fat body protein 1 gene downstream of Sox21a. The organization of the three Sox genes is virtually identical comparing the two species, and we could construct gene models including all of the Sox21a and Sox21b exons. There is absolute conservation of the intron positions between both Drosophila species; furthermore, the sizes of the introns are also similar, although nucleotide similarity is lower than in coding sequences, ranging from 40-75%. 
As with melanogaster, we find no evidence for additional genes in the intergenic region between Dichaete and Sox21b. We used the OWEN sequence alignment programme to plot the conservation between the Dichaete-Sox21b intergenic region in both species (Figure 4). Throughout the entire region we see that there is a high degree of sequence conservation; since we know that at least 30 kb of this region contains essential Dichaete regulatory sequences in melanogaster, we predict that the conserved sequences throughout this intergenic region are also likely to have a regulatory function.
Figure 3. The genomic organisation of the insect Dichaete regions.
In Anopheles, Dichaete is intronless and Sox21b is located approximately 110 kb downstream of this. There are no other predicted genes in the region. The Anopheles Sox21b gene has a similar structure to that of the Drosophilids; however, it is not identical. We have been unable to find a 5' non-coding exon and, as we note above, the second intron found in the DNA-binding domains of the Drosophila Sox21b genes is absent in Anopheles, with exons 4 and 5 fused. The other introns are, however, conserved in position (Figure 1d). With the Anopheles Sox21a gene, the single intron position is conserved with the Drosophila species; however, the intron is considerably larger and contains an insertion of a Q-class retrotransposon in the sequenced strain [37]. We find no evidence for an Fbp1 orthologue in the vicinity, the nearest similar sequence being some 5 Mb away on the same chromosome arm. The available sequence in the region is more fragmentary in the case of Apis. Here we find an intronless Dichaete gene and can define two sets of exons corresponding to the split DNA-binding domains of Sox21a and Sox21b. Overall, the organization is similar to the other insects; like Anopheles, the intergenic region between Dichaete and Sox21b is large (~90 kb), however, unlike the other insects the distance between Sox21b and Sox21a is also large (~80 kb). In the case of Sox21b we have used EST sequence to support the gene model we have derived. The EST confirms the first four exons and we predict the terminal exon on the basis of homology with the other species, particularly the terminal 30 amino acids. As with Anopheles, the Apis Sox21b gene has a single DNA-binding domain intron in the same position as the first Drosophila DNA-binding domain intron. The intron immediately downstream of the DNA-binding domain is also conserved in all four insects; however, the remaining two intron positions differ between Apis and the other insects. Although the Apis assembly is preliminary in this region, with several gaps still present in the sequence, the fact that the gene models are very similar to the other insects and that the Dichaete and Sox21b predictions are supported by EST data suggests that the gene models we propose are likely to be accurate for the majority of the coding sequence. We compared the Dichaete to Sox21b intergenic regions of Anopheles and Apis to the melanogaster sequence with the OWEN alignment tool and failed to detect any significant stretches of similarity, even at relatively low stringency. This suggests that if there is conservation in gene regulatory sequences between these diverged insects it may be difficult to detect or may have undergone extensive rearrangement. Evolutionary perspective on insect group B genes To attempt a classification of the insect group B Sox genes, we performed a multiple sequence alignment with the DNA-binding domains of the predicted proteins along with representative group B-like sequences from other organisms (Figure 5). 
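To make the windowed-comparison idea concrete, the short Python sketch below computes percent identity in sliding windows across a pair of pre-aligned sequences. It is only an illustration of the kind of conservation plot described above, not the OWEN algorithm itself, and the two sequences in the example are invented placeholders rather than the real intergenic regions.

```python
# Illustrative sketch (not the OWEN algorithm): windowed percent identity
# over a pair of pre-aligned sequences. The toy alignment below is a
# hypothetical placeholder, not real intergenic sequence.

def windowed_identity(aln_a: str, aln_b: str, window: int = 100, step: int = 50):
    """Return (alignment_position, percent_identity) for each window.

    aln_a and aln_b must be equal-length aligned strings (gaps as '-').
    """
    if len(aln_a) != len(aln_b):
        raise ValueError("aligned sequences must have equal length")
    points = []
    for start in range(0, len(aln_a) - window + 1, step):
        a = aln_a[start:start + window]
        b = aln_b[start:start + window]
        # Count columns where both sequences have a base, and where they match.
        comparable = sum(1 for x, y in zip(a, b) if x != "-" and y != "-")
        matches = sum(1 for x, y in zip(a, b) if x == y and x != "-")
        identity = 100.0 * matches / comparable if comparable else 0.0
        points.append((start, identity))
    return points

if __name__ == "__main__":
    # Toy aligned fragments standing in for the two Drosophila intergenic regions.
    mel = "ATGCGT--ACGTTTGACCTGATCGGATCCA" * 10
    pse = "ATGCGTTTACGTATGACCTGATAGGATCCA" * 10
    for pos, ident in windowed_identity(mel, pse, window=60, step=30):
        print(f"window at {pos:4d}: {ident:5.1f}% identity")
```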
The aligned ClustalX output suggests that the insect Sox domains may be subdivided into 3 classes. The first clearly groups the SoxN proteins from each of the insects with the mammalian Sox1, 2 and 3 proteins. Along with these we find representative sequences from nematodes (C. elegans, S. ratti, and W. bancrofti), hemichordate Acorn worms (S. kowalevski and P. flava) and the sea squirt (H. roretzi). Thus, together these are likely to represent a single class, orthologous to vertebrate group B1 proteins. The second class, the Sox21a proteins, have sequences similar to the mammalian group B2 proteins, Sox14 and 21 and may represent an insect group B2 protein. The third class, containing Dichaete and Sox21b, are clearly differentiated from all other group B proteins by the presence of a Leucine/Isoleucine residue at position 18, an Isoleucine residue at position 23 and a divergent set of C-terminal amino acids. These two insect proteins may represent an insect-specific group B family. This suggests that a single group B1 protein, represented by SoxN-Sox3 like sequences, was present in a common ancestor before the divergence of vertebrates and invertebrates. Similarly, the close association of the insect Sox21a proteins with nematode and vertebrate Sox14 proteins suggests that these were also present in a common ancestor. The alignments clearly highlight the distinction between the Dichaete-Sox21b pair and other group B proteins, emphasizing a distinct evolutionary history for these proteins in the insects. Taken together, the analysis presented here shows that the genomic organization and sequence of group B Sox genes have been conserved during insect evolution. Particularly striking is the clustering of three genes in a small region of the genome. The structure of these genes and their relationship with vertebrate Group B genes suggest that SoxN and Sox21a are homologous to vertebrate group B1 and B2 genes respectively, whereas Dichaete and Sox21b may represent insect-specific group B genes. Discussion The sequence alignments of the HMG DNA-binding domains from insect and mammalian group B Sox proteins suggests that the insect proteins may be separated into three distinct groups. The first, containing SoxN, aligns with the vertebrate Sox1, 2 and 3 proteins and most likely represents an orthologue of the vertebrate group B1 class. This conclusion, based on sequence, is supported by the functional analysis of group B1 proteins in vertebrates and Drosophila. In both cases, group B1 genes are expressed from the earliest stages of CNS development and are implicated in regulating early neural specification [21,22,38,39]. In addition, we have evidence that mammalian Sox1 genes can rescue SoxN phenotypes in the Drosophila CNS, supporting the view that these proteins are functionally conserved (P. Overton and S.R. unpublished observations). The group B sequences isolated from the basal chordates, acorn worm and sea squirt, have also been shown to be expressed early in the specification of the CNS [40,41].
Figure 5. Group B Sox-domain alignment. Clustal X alignment of DNA-binding domain sequences from the insect proteins and representative group B proteins from other species. The insect sequences are highlighted in grey.
Thus, it appears that all metazoans studied to date have at least one group B gene with expression marking neural lineages early in development. 
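The diagnostic residues mentioned above lend themselves to a simple check. The following Python sketch flags aligned HMG-domain sequences carrying a Leucine or Isoleucine at position 18 and an Isoleucine at position 23; the positions follow the description in the text, and the two example sequences are hypothetical placeholders rather than the actual insect domains.

```python
# Minimal sketch: flag aligned HMG-domain sequences that carry the
# Dichaete/Sox21b-type residues described above (Leu or Ile at position 18,
# Ile at position 23; positions are 1-based within the aligned domain).
# The example sequences are invented placeholders, not real domain sequences.

DIAGNOSTIC = {18: {"L", "I"}, 23: {"I"}}

def looks_dichaete_like(aligned_domain: str) -> bool:
    for pos, allowed in DIAGNOSTIC.items():
        if len(aligned_domain) < pos or aligned_domain[pos - 1] not in allowed:
            return False
    return True

hmg_domains = {
    "hypothetical_SoxN":     "GHIKRPMNAFMVWSRGQRRKMAQENPKMHN",
    "hypothetical_Dichaete": "GHIKRPMNAFMVWSRGQLRKMAIENPKMHN",
}

for name, seq in hmg_domains.items():
    label = "Dichaete/Sox21b-like" if looks_dichaete_like(seq) else "other group B"
    print(f"{name}: {label}")
```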
Further studies of primitive invertebrates will determine whether group B Sox expression is a universal marker for CNS development. In a previously published phylogenetic study it was suggested that Dichaete be classified as a Group B2 protein [3]. However, while the analysis clearly differentiates between the group B proteins and other fly Sox proteins, it could not unambiguously resolve the relationship between each of the group B proteins. In terms of function and expression, the Dichaete gene behaves very much like a group B1 gene: it is expressed early during CNS development and is required for neural differentiation [20,42]. We have previously shown that the mouse Sox2 gene efficiently rescues Dichaete phenotypes, further supporting a functional similarity between Dichaete and vertebrate group B1 genes [20,42]. In contrast to the conclusion based on functional studies, the sequence analysis suggests that insect Dichaete DNA-binding domain sequences are markedly different from other group B1 proteins and are more similar to group B2 proteins. The conservation of the insect sequences indicates that a Dichaete-like sequence was present at least 300 million years ago, when Apis and the Diptera last shared a common ancestor [18]. We believe that the functional evidence is more convincing than the arguments based on sequence alignments and therefore suggest that Dichaete represents a group B1 function that has diverged from the canonical group B1 sequence, presumably due to selection for insect-specific functions. For example, Dichaete is required for early segmentation in the Drosophila embryo, a highly derived function, and it may be that sequence changes in the HMG-domain have been selected for such a function while still allowing a role in CNS specification. As with Drosophila, both Anopheles and Apis are long germ insects that share some aspects of early development, such as the early appearance of striped domains of even skipped expression [43,44]. Thus it is possible that insect Dichaete genes have a common role in early patterning events. It will be of considerable interest to examine the complement of group B Sox genes in Coleoptera, Homoptera or Orthoptera to see if the HMG domain sequence and gene organisation are the same as in the insects sequenced so far. To investigate this we used the Dichaete DNA-binding domain to search the available sequence of the silk moth Bombyx mori [45] and found a single Group B gene that was clearly an orthologue of the Dichaete genes discussed here, containing the diagnostic Leucine and Isoleucine residues described above. As with vertebrate group B1 genes, SoxN and Dichaete are expressed in broadly overlapping domains and act partially redundantly in CNS specification [21,22]. The close similarity between the expression and function of SoxN and Dichaete in the CNS raises the possibility that they arose from a common ancestor by a duplication event and may thus share some common regulatory sequences. However, when we compared the sequences 5' or 3' to SoxN with the Dichaete 3' sequence, we could not detect any sequence similarity, indicating that any conservation in regulatory sequences is not visible at a large scale; this is not entirely surprising since we cannot detect any sequence similarity between the Dichaete regulatory sequences from Drosophila and Anopheles, while our analysis indicates the divergence of SoxN and Dichaete predates the Drosophila-Anopheles divergence. 
Based on the sequence alignment of insect Sox21a DNA-binding domains with those of vertebrate Sox14 proteins, it is possible that Sox21a may be an orthologue of the group B2 class. It has been suggested that in chicken Sox14 and Sox21 act as antagonists of group B1 function in a subset of the developing CNS [6]. The function of Sox21a in Drosophila is not known at present; however, Sox21a is expressed late in the development of the embryonic CNS midline, a site of SoxN and Dichaete expression, indicating there is the potential for the type of antagonistic interaction proposed for vertebrates. The Sox21b DNA-binding domain sequence indicates that it is closely related to Dichaete. Both these proteins have a set of unique residues in their DNA-binding domains that are not found in any other group B proteins identified to date. The Sox21b gene is conserved between the insects, and its close similarity to Dichaete suggests that both genes arose from a common origin in the ancestor of the arthropods after their divergence from the nematodes, since there is no close sequence in C. elegans or its relatives. In terms of expression, Sox21b is expressed in the large hindgut along with Dichaete, supporting the possibility that it may also antagonise the activity of Dichaete. In this respect, then, Sox21b may represent a group B2 function. It is therefore possible that insects contain two group B1 class activities, involved in early CNS development, and two B2 class genes. Again we emphasise that the functional assignment of the insect genes may contrast with the data derived from sequence analysis, which predicts a single group B1 gene and three group B2 genes. We suggest that the separation of group B Sox domains into a B1 class and a B2 class based solely on sequence does not reflect meaningful functional differences in insects. We have initiated a functional analysis of Sox21a and Sox21b in the hope that we can clarify this issue. The genome organisation of the Dichaete cluster is unusual: not only are three genes clustered together in the genome, but two of them, Sox21a and Sox21b, have introns within the HMG-domain. The single Sox21a intron is conserved in all four of the insect genes, suggesting that it is ancestral to the insects. Sox21b is more complex: there are six introns in melanogaster and pseudoobscura; four of these are conserved in Anopheles and two are conserved in Apis. In the Drosophila species, there are two introns in the DNA-binding domain, the first of which is present in all four insects. The second intron, in an identical location to the Sox21a intron, is only found in the two Drosophila species. A simple model of a single intron loss is therefore unlikely to account for this, since neither Apis nor Anopheles has the intron. It is possible that Apis and Anopheles lost the intron independently or, alternatively, that the common ancestor of the Drosophila species gained the intron, perhaps via a gene conversion event with Sox21a. Interestingly, the two group B genes from C. elegans also contain introns in the DNA-binding domain, in identical positions in both genes, but they are in different positions to the Sox21a and Sox21b introns. This suggests that the common ancestor of insects and nematodes did not contain DNA-binding domain introns and that these have been acquired independently in both lineages. The conservation of genome structure within the insect Dichaete cluster suggests that there may be functional constraints on the organisation. 
We suggest that this is likely to be a reflection of shared regulatory sequence, since the region between Dichaete and Sox21b in melanogaster contains extensive regulatory sequences essential for correct Dichaete expression. We note that both Sox21a and Sox21b have expression domains that overlap with Dichaete, in the midline for Sox21a and in the hindgut for Sox21b. These expression domains may therefore be controlled by common regulatory sequences, and the need to maintain coordinated regulation of the three genes has maintained the integrity of the cluster in the insects. The conservation in expression between D. melanogaster and D. pseudoobscura is consistent with this view; it will be of interest to examine the expression of all of the Sox genes in Anopheles to further explore this hypothesis. Conclusion Taking our observations together, we propose a simple model for the evolution of group B Sox genes (Figure 6). We base our model on the proposal of Kirby et al. [13], who suggest that a single group B gene underwent a duplication to generate two Sox3-like genes. We propose that these are represented by SoxN and Dichaete in the insects. A further tandem duplication of one of these genes generated linked group B1 and group B2 genes. We propose that this is represented by Dichaete duplicating to generate Sox21a. Sox21a would then acquire the sequence changes characteristic of the group B2 class of proteins. We suggest that these events predate the Protostome-Deuterostome divergence over 650 My ago [46] and provide the basal Sox gene complement of the Bilateria of three group B Sox genes. After the divergence of the lineages leading to vertebrates and invertebrates, Dichaete diverged from the canonical group B DNA-binding domain sequence and then underwent a further duplication event, at least predating the divergence of the holometabolous insects, to generate Sox21b. An analysis of the group B family in other insects and basal chordates will be required to definitively describe the ancestral situation. Genome sequences The following sources were used to obtain genome sequence: D. melanogaster (Release 3.2, [47,48]) from FlyBase [49] and Sox21b (ENSANGG00000009947). Anopheles EST sequences representing Dichaete (TC44994) and Sox21b (TC45155) were obtained from The Institute for Genome Research [53]. The Apis mellifera genome assembly Amel_1.1 was obtained from HGSC-BCM [54] and the following scaffolds used: for the Dichaete region, Group8.12 (Dichaete and Sox21b) was found to overlap by 4.5 kb with GroupUn.570 containing Sox21a, and the sequences were combined into a single contig. SoxN was contained within Group17.6. In addition, a search of the Honey Bee Brain EST project [55,56] uncovered two EST sequences corresponding to Dichaete (BB170009A10D01) and Sox21b (BB170011B20A11). These were used to verify the exon predictions from the genome sequence. Vertebrate group B sequences were obtained from Uniprot. Nematode sequences were recovered by Blast searches of the EST collections at Nematode.net (Genome Sequencing Center, Washington University, St Louis, [57]). Informatics tools Homology searching was performed using the Blast algorithm [58] at the Sanger, HGSC-BCM and Berkeley Drosophila Genome Project [59] web sites. Genomic sequences were imported into Artemis v5 [60,61] and annotated manually using the Blast output as a guide. Multiple sequence alignments were performed locally using ClustalX v1.8 [62] and graphically represented with BoxShade [63]. 
The alignment of intergenic regions was performed with OWEN [64]. Molecular biology A cDNA clone for Sox21b (GH07353) was obtained from the Drosophila gene collection [29] and sequenced on both strands using an ABI prism kit in the Genetics Department sequencing core. PCR and RT-PCR amplifications were carried out using minor modifications to standard techniques [65] using the following primer combinations:
Dp-Sox21A Exon 2 R: GACTTCACGCAGCCGTAGGAT
Dp-Sox21B F: CGTCTATCCACACACCTGTC
Dp-Sox21B R: GACGATGTCTGCTGCTGTT
Whole-mount in situ hybridisation to Drosophila embryos was performed using minor modifications to a standard protocol [66]. All genetic nomenclature is according to FlyBase [49].
8,654.8
2005-05-19T00:00:00.000
[ "Biology" ]
PLA-PEG Nanoparticles Improve the Anti-Inflammatory Effect of Rosiglitazone on Macrophages by Enhancing Drug Uptake Compared to Free Rosiglitazone Among cardiovascular diseases, atherosclerosis remains the first cause of death in the United States of America and Europe, as it leads to myocardial infarction or stroke. The high prevalence of heart diseases is due to the difficulty in diagnosing atherosclerosis, since it can develop for decades before symptoms occur, and to the complexity of the treatment since targets are also important components of the host defenses. The antidiabetics thiazolidinediones, among which is rosiglitazone (RSG), have demonstrated anti-atherosclerotic effect in animal models, and are therefore promising candidates for the improvement of atherosclerosis management. Nevertheless, their administration is hindered by the insurgence of severe side effects. To overcome this limitation, rosiglitazone has been encapsulated into polymeric nanoparticles, which permit efficient delivery to its nuclear target, and selective delivery to the site of action, allowing the reduction of unwanted effects. In the present work, we describe nanoparticle formulation using polylactic acid (PLA) coupled to polyethylene glycol (PEG), their characterization, and their behavior on RAW264.7 macrophages, an important target in atherosclerosis treatment. RSG nanocarriers showed no toxicity on cells at all concentrations tested, an anti-inflammatory effect in a dose-dependent manner, up to 5 times more efficient than the free molecule, and an increased RSG uptake which is consistent with the effect shown. These biodegradable nanoparticles represent a valid tool to be further investigated for the treatment of atherosclerosis. Introduction Atherosclerosis is a chronic inflammatory disorder, the complications of which are the major cause of death among cardiovascular diseases [1]. The most severe aspect of atherosclerosis is the rupture of the plaques, which can result in myocardial infarction or stroke [2]. The formation of a plaque is due to the accumulation of lipids in a dysfunctional region of the arterial endothelium. Particularly, recruited monocytes that differentiate into macrophages and eventually give rise to foam cells play a key role in the development of inflammation [3]. Therefore, macrophage-targeting strategies represent a valid approach to selectively deliver anti-inflammatory drugs. Nanoparticles developed for medicine in the last decades could be an advantageous tool for two main reasons: at a cellular level, nanoparticles have been extensively demonstrated to be easily and efficiently taken up by circulating monocytes and macrophages without inducing any toxicity [4][5][6], thus improving the drug's bioavailability. From a systemic point of view, PLA-PEG Synthesis and Characterization To a two-neck round-bottom flask previously flame-dried and under a flow of inert gas, DL-Lactide, HO-PEG-OCH 3 , and stannous octoate (1.5 mol PEG/mol stannous octoate) were dissolved into 10 mL of anhydrous toluene, and placed into an oil bath at 125 • C. Polymerization was carried out under stirring for 55 min. The reaction was stopped by simply dipping the flask in cold water. After evaporation of toluene under vacuum, the polymer was dissolved into chloroform and precipitated into cold diethylether. 
Then, the polymer was resuspended in tetrahydrofuran and precipitated into water before freeze drying for 24 h using an Alpha-1-2 LD apparatus (Christ, Coueron, France), leading to a white powder. The molecular weight was estimated by 1H NMR spectroscopy and the dispersity was evaluated by size exclusion chromatography (SEC). 1H NMR spectroscopy was performed in 5 mm diameter tubes in CDCl3 on an Avance-300 (300 MHz) spectrometer (Bruker, Billerica, MA, USA). SEC was performed at 30 °C with two columns from Polymer Laboratories (PL-gel MIXED-D; 300 × 7.5 mm) and a differential refractive index detector (Spectrasystem RI-150, Thermo Electron Corp., Waltham, MA, USA), using chloroform as an eluent, a Waters 515 pump at a flow rate of 1 mL/min, and toluene as a flow-rate marker. The calibration curve was based on poly(methyl methacrylate) (PMMA) standards from Polymer Laboratories. Lactide conversion ≥ 95% (1H NMR). 1H NMR (300 MHz, CDCl3, δ, ppm): 5.10-5.28 (CHCH3COO), 3.64 (OCH2CH2), 1.52-1.61 (CHCH3COO). Nanoparticle Formulation and Characterization Nanoparticles were formulated following two known protocols, namely emulsion-evaporation or nanoprecipitation. For the emulsion-evaporation method, 100 mg of PLA-PEG and 10 mg of rosiglitazone were dissolved in 4 mL acetone:dichloromethane (1:1) or ethyl acetate:dichloromethane (1:1). The organic phase was then emulsified with 20 mL of 1.5% sodium cholate solution using a vortex for 1 min followed by sonication (Branson Sonifier, Danbury, CT, USA) for 1 min at 30% amplitude over ice. The solvent was evaporated by magnetic stirring at 300 rpm in a thermostated bath at 20 °C for 4 h. For the nanoprecipitation method, 100 mg of PLA-PEG and different amounts of rosiglitazone (4, 7 and 10 mg) were dissolved in 10 mL of acetone or acetonitrile (ACN). Twenty mL of 1.5% sodium cholate solution were added with a syringe and needle (21G) using a syringe pump (11 Plus, Harvard Apparatus, Les Ulis, France), at 7 mL/min under vigorous stirring. The solvent was evaporated under reduced pressure in a bath at 30 °C using a rotary evaporator (Büchi, Flawil, Switzerland) cooled at −10 °C. All nanoparticles obtained by nanoprecipitation were also prepared on smaller scales, namely starting with 50 or 25 mg polymer. For both techniques, nanoparticles were filtered through 0.45 µm PVDF filters and purified from free drug and polymer by ultracentrifugation at 27,400× g for 1 h at 4 °C. Pellets were resuspended in water during 10 min of vortexing. Nanoparticle size and polydispersity index were determined in triplicate by dynamic light scattering using a NANO ZS (Malvern Instruments, Malvern, UK) at a 173° scattering angle. Prior to size measurements, nanoparticles were diluted 1:6 in water to obtain a suspension at 0.83 mg/mL polymer. Measurements were performed at 20 °C for 60 s. The surface charge of nanoparticles was evaluated by measuring the zeta potential on the same instrument after diluting the samples 1:3 in a 1 mM NaCl solution. Encapsulation Efficiency The amount of RSG encapsulated into the nanoparticles was determined through UV-Vis measurements on a LS25 spectrophotometer (Perkin Elmer, Villebon-sur-Yvette, France). Nanoparticles were separated from free molecules through ultracentrifugation (50,400× g for 1 h at 4 °C) and then dissolved by diluting them 1:10 in acetonitrile. A calibration curve of RSG in the same solvent ratio was previously performed, by measuring the intensity of the peak at λ = 246 nm. 
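As an illustration of the calibration step just described, the sketch below fits a linear calibration of absorbance at 246 nm against known RSG concentrations and back-calculates an unknown sample. All numerical values are invented for the example and do not come from the study.

```python
# Sketch of the calibration step described above: fit absorbance at 246 nm
# versus known RSG concentrations, then back-calculate an unknown sample.
# All numbers below are invented for illustration.
import numpy as np

conc_ug_ml = np.array([1.0, 2.5, 5.0, 10.0, 20.0])          # standards
absorbance = np.array([0.052, 0.128, 0.260, 0.515, 1.030])   # measured A246

slope, intercept = np.polyfit(conc_ug_ml, absorbance, 1)

def concentration_from_absorbance(a246: float) -> float:
    """Invert the linear calibration A = slope*C + intercept."""
    return (a246 - intercept) / slope

sample_a246 = 0.40
print(f"slope = {slope:.4f} AU per ug/mL, intercept = {intercept:.4f} AU")
print(f"sample concentration ~ {concentration_from_absorbance(sample_a246):.2f} ug/mL")
```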
Encapsulation efficiency (EE) was then obtained as the percentage of the difference between the current concentration and the one at the beginning of the preparation. Drug loading (DL) corresponds to the weight of RSG with respect to the total weight of the nanoparticle (polymer+RSG) expressed as a percentage. Release Studies Radioactive nanoparticles with a drug loading of 2.1% (w/w) were prepared by nanoprecipitation by adding 12 µCi of radiolabelled rosiglitazone to the organic solution before nanoparticle formation. After purification, nanoparticles were diluted to yield a rosiglitazone concentration 10 times lower than its solubility in the release medium (PBS) (1.5 µg/mL, sink conditions). Nanoparticle aliquots were incubated under agitation at 37 • C and withdrawn after scheduled times. Nanoparticles and release medium were separated through ultracentrifugation (50,400× g for 1 h at 4 • C) and their radioactivity content was determined separately after addition of the liquid scintillation cocktail Ultima Gold (Perkin-Elmer, Villebon-sur-Yvette, France), using a Beckman Coulter (LS 6500 Multi-Purpose Scintillation Counter, Villepinte, France) instrument. Nanoparticles toxicity towards RAW264.7 cells was determined by measuring mitochondrial activity using the 3-[4,5-dimethylthiazol-2-yl]-3,5-diphenyltetrazolium bromide (MTT) assay [27]. Cells were seeded in a 96-well plate at a density of 8000 cells/well and incubated for 24 h. Nanoparticles at concentrations ranging from 0.8 µg/mL to 2.4 mg/mL were added on the cells for 24 h and subsequently replaced by a 0.5 mg/mL MTT solution for 2 h. MTT was removed and DMSO was added to permeabilize the cells and dissolve the crystals. Absorbances were measured using a Labsystems Multiskan MS plate reader at 570 nm. Inflammation of RAW264.7 cells was evaluated by the detection of 4 cytokines (MCP-1, TNF-α, IL-10, IL-6) using the BD Cytometric Bead Array (CBA) Mouse Inflammation Kit (BD). Cells were pre-incubated for 24 h and then exposed for 3 h to lipopolysaccharides (LPS) at 0.1 µg/mL to induce inflammation. Nanoparticles or free RSG at different concentrations were then added to the cells and incubated for 24 h, after which the supernatants were collected for cytokines measurement. The samples were prepared following manufacturer instructions and analyzed using a BD Accuri C6 flow cytometer (Becton Dickinson, Rungis, France). Statistical analysis was performed using a unilateral Student's t-test as provided by Microsoft Excel. Nanoparticle uptake by macrophages was evaluated by using radiolabeled RSG. Free RSG and nanoparticles at 3 different concentrations (7.6, 59, and 227 µM of RSG) were added for 24 h to the RAW264.7 cells after 3 h incubation with LPS. Subsequently, supernatants and cells were separated, cells were rinsed with PBS and their radioactivity content was measured separately. Results and Discussion The aim of the present study was to optimize the encapsulation of RSG into PLA-PEG nanoparticles and to evaluate their capacity to improve drug cellular uptake and anti-inflammatory activity of RAW264.7 macrophages. Nanoparticle Formulation Nanoparticles were first obtained using the emulsion-evaporation technique. To achieve the simultaneous solubilization of the polymer and RSG, 1:1 mixtures of dichloromethane (DCM):acetone or DCM:ethyl acetate were used. Nanoparticle diameter was measured immediately after centrifugation to separate them from free drug. 
Nanoparticles obtained after purification display a diameter around 125 nm, a polydispersity index (PDI) of 0.15 and are negatively charged (−18 mV) independently of the solvent mixture used (Table 1). Nanoparticles were also obtained by the nanoprecipitation technique using either acetonitrile (ACN) or acetone to solubilize the polymer-drug mixture. Consistent with the literature [28], smaller nanoparticles were obtained with this method: around 115 nm when using ACN (Table 2) and surprisingly smaller (around 80 nm) when they were prepared with acetone (Table 3). It is generally known that the use of more polar solvents like acetone results in smaller nanoparticles, due to the easy diffusion of the solvent in the aqueous phase. Yet, ACN has a higher polarity index than acetone [29] and should therefore diffuse more rapidly in the water phase. Nevertheless, other parameters affect the final nanoparticle size, for instance the affinity between the solvent and the polymer [9]. This is the reason why in some cases like the present one smaller nanoparticles can be obtained with acetone [9,30].
Table 1. Size, PDI, and zeta potential of nanoparticles prepared by the emulsion-evaporation technique (10 mg RSG starting amount). Columns: Solvent Mixture; Size (nm) ± SD; Polydispersity Index ± SD; Zeta Potential (mV).
Table 3. Size, PDI, and zeta potential of nanoparticles prepared by the nanoprecipitation technique using acetone. Columns: Drug Amount; Size (nm) ± SD; Polydispersity Index ± SD; Zeta Potential (mV).
For further encapsulation efficiency studies, different starting amounts of RSG were tested (4, 7, and 10 mg) without significantly affecting nanoparticle characteristics (Tables 2 and 3). For both solvents, less debris or aggregates were observed with nanoprecipitation as compared to the solvent evaporation technique. Encapsulation Efficiency Encapsulation efficiency was determined for all formulations. Nanoparticles were separated from free drug by ultracentrifugation and dissolved in acetonitrile to assay the entrapped RSG. Results are expressed as encapsulation efficiency and drug loading (Figure 1 and Table 4). Drug loading is calculated as the ratio between the actual amount of RSG entrapped and the sum of the initial amount of polymer plus the actual amount of RSG entrapped. Actual values might be higher as possibly not the totality of the polymer is finally involved in the nanoparticle formation, especially in the case of emulsion-evaporation, for which a lot of debris was removed by filtration as evidenced by the difference in size before and after filtration (Table 1). As shown in Figure 1, the encapsulation of RSG in nanoparticles obtained by nanoprecipitation using ACN (A) or acetone (B) as solvent presents a similar trend, reaching a drug loading of around 2% in both cases. For ACN nanoparticles, the drug loading seems to level off at 2% already with a starting RSG amount of 7 mg, which corresponds to encapsulation efficiencies of 20-30%. In the case of acetone, the drug loading seems to increase with increasing starting amount, reaching levels slightly above 2% with a starting amount of RSG of 10 mg. These relatively low drug loadings are nevertheless in agreement with the literature, as they strongly depend on drug-polymer interactions [9,20,23,30]. 
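For clarity, the following sketch computes encapsulation efficiency and drug loading as defined above (entrapped drug over initial drug, and entrapped drug over polymer plus entrapped drug). The masses in the example are invented, chosen only to be of the same order as the amounts reported here.

```python
# Helper reflecting the definitions used above: encapsulation efficiency (EE)
# as the fraction of the initial drug recovered inside the particles, and
# drug loading (DL) as entrapped drug over (polymer + entrapped drug).
# The example masses are invented.

def encapsulation_metrics(drug_initial_mg: float,
                          drug_entrapped_mg: float,
                          polymer_mg: float) -> tuple[float, float]:
    ee = 100.0 * drug_entrapped_mg / drug_initial_mg
    dl = 100.0 * drug_entrapped_mg / (polymer_mg + drug_entrapped_mg)
    return ee, dl

ee, dl = encapsulation_metrics(drug_initial_mg=10.0,
                               drug_entrapped_mg=2.1,
                               polymer_mg=100.0)
print(f"EE = {ee:.1f}%, DL = {dl:.2f}%")
```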
As it can be clearly seen from Table 4, nanoprecipitation leads to higher encapsulation efficiency and drug loading compared to emulsion-evaporation. Therefore, nanoparticles obtained by nanoprecipitation were selected for further experiments. The two solvents were very similar in terms of EE and DL, so the acetone formulation was chosen for safety reasons, being a class 3 solvent whereas acetonitrile is a class 2 (International Council for Harmonisation (ICH) guideline Q3C (R6) on impurities: guideline for residual solvents EMA/CHMP/ICH/82260/2006, 6 December 2016). Ten mg RSG starting amount was used in the following experiments since it results in the highest DL. RSG Release from Nanoparticles Release experiments were then performed to evaluate the behavior of the drug and assess the extent of the burst effect. In other words, it was important to analyze how much of the drug would be released before the nanoparticle reached the target. PBS was chosen as typical release medium that mimics physiological conditions. 
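Before the release results, a brief sketch of how a Korsmeyer-Peppas exponent can be estimated from cumulative release data may be helpful; the fit is a log-log linear regression restricted to the early part of the curve. The time points and release fractions below are invented placeholders, not the measured data.

```python
# Sketch of how a Korsmeyer-Peppas exponent (Mt/Minf = k * t**n) can be
# estimated from cumulative release data by a log-log linear fit, usually
# restricted to the early part of the curve (release fraction <~ 0.6).
# Time points and release fractions below are invented, not the study's data.
import numpy as np

t_hours = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
released_fraction = np.array([0.30, 0.42, 0.55, 0.72, 0.80])

mask = released_fraction <= 0.6           # keep the region where the model applies
log_t = np.log(t_hours[mask])
log_m = np.log(released_fraction[mask])

n, log_k = np.polyfit(log_t, log_m, 1)    # slope of the fit = release exponent n
print(f"release exponent n ~ {n:.2f}, k ~ {np.exp(log_k):.2f} h^-n")
```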
Indeed, nanoparticles presented a burst release effect of almost 70% during the first hour of incubation (Figure 2), most likely due to surface-associated molecules [31]. This first part of the curve corresponds to a Fickian diffusion as suggested by the calculated release exponent value of the Korsmeyer-Peppas equation (n = 0.41; 0.43 corresponds to Fickian diffusion from spheres). Subsequently, the release is slower, reaching a sort of plateau around 80% for the following hours, possibly due to the diffusion of the drug encapsulated in the inner core of the nanoparticles. This release behavior is commonly observed and in agreement with literature about polyester nanoparticles [10]. Cell Viability in the Presence of RSG Nanoparticles In the context of the atherosclerotic disease, plaque macrophages represent a relevant target of anti-inflammatory drugs as they play a key role in many processes involved in the progression of the disease. Among others, they can undergo apoptosis and therefore contribute to the plaque necrotic core, are involved in the activation of immune responses, promote the platelet aggregation on the inflammation site, and produce metalloproteinases which are responsible for the thinning of the fibrous cap. Therefore, RAW264.7 murine macrophages have been chosen to test nanoparticles efficiency in reducing the inflammation. Preliminarily, it is pivotal to assess the nanoparticles' safety towards the chosen cell line. An MTT test was performed which evaluated the mitochondrial activity of treated cells. Cells were incubated 24 h with RSG nanoparticles and the corresponding empty nanoparticles at different concentrations. As depicted in Figure 3, no significant difference in cell viability was found among all concentrations tested. Both formulations grant a cell viability of around 70% for concentrations between 8 µg/mL and 2.4 mg/mL. These nanoparticles can be considered safe towards RAW264.7 cells even at relatively high concentrations and have therefore been tested further for their anti-inflammatory activity. 
Inflammation Evaluation in RAW264.7 Cells After having established the non-toxicity of PLA-PEG RSG nanoparticles towards RAW264.7 cells, their anti-inflammatory effect has been investigated and compared to that of the free molecule. Inflammation in cells was quantified as the amount of cytokines released in their supernatant, namely tumor necrosis factor (TNF-α), monocyte chemoattractant protein-1 (MCP-1), interleukin-6 (IL-6), and interleukin-10 (IL-10). Cells were incubated for 24 h with different concentrations of RSG and corresponding nanoparticles. Inflammation was previously developed by a 3 h incubation with 0.1 µg/mL LPS. As can be seen by the cytokines production (Figure 4), this protocol was adequate to produce inflammation in cells. At low RSG concentration (Figure 4A,B), almost no difference can be observed among all the LPS-treated cells, whether they further received the addition of RSG or NPs or not. The only exception is IL-10, where NPs were able to reduce the production of this cytokine. As RSG concentration is increased (Figure 4C,D), the NP effect can be seen as compared to non-treated cells (LPS+) as well as to free RSG-treated cells (RSG+). This effect is even more obvious and significant at the highest RSG concentration tested (Figure 4E,F). In the case of IL-10, NPs can reduce its expression by 5 and 16 times more than the free RSG and the medium, respectively (Figure 4F). Therefore, PLA-PEG RSG nanoparticles show potential in efficiently reducing cytokine levels in inflamed macrophages and can be considered for further in vivo evaluations. 
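The cytokine comparisons above rely on the unilateral Student's t-test mentioned in the methods (performed there with Excel). The sketch below shows an equivalent one-sided test in Python with SciPy, using invented replicate values purely for illustration.

```python
# Sketch of the one-sided comparison described in the methods: test whether
# cytokine levels in NP-treated wells are lower than in LPS-only wells.
# The replicate values are invented placeholders (pg/mL), not measured data.
from scipy import stats

lps_only   = [1250.0, 1310.0, 1180.0, 1275.0]
np_treated = [640.0, 710.0, 580.0, 665.0]

# Student's t-test with a one-sided alternative (np_treated < lps_only).
t_stat, p_value = stats.ttest_ind(np_treated, lps_only, alternative="less")
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
```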
Free RSG and NP Uptake by RAW264.7 Cells To explain the mechanism of the improved nanoparticles' efficiency in reducing the inflammation as compared to the free molecule, an uptake study has been performed for the evaluation of intracellular levels of RSG in both cases. Radioactive RSG has been used as a tracer and the same protocol as for the cytokines detection has been applied. As Figure 5 shows, no internalization difference between free RSG and NPs can be seen for the smallest RSG concentration. However, the difference in the internalization increases as the RSG concentration does, up to 40% NP RSG uptake as compared to 2% for the free molecule, when the highest concentration was tested. This corresponds to 30 mmol of RSG delivered per million of cells (Figure 5B). The increased RSG uptake in the case of NPs correlates with their superior efficiency in reducing the inflammation, due to the greater amount of molecule delivered inside the cells at its site of action. Furthermore, the presence or absence of LPS does not affect the cell's capability to take up nanoparticles. Conclusions In this paper, we presented the formulation of rosiglitazone-loaded PLA-PEG nanoparticles prepared with different techniques and conditions. All nanoparticles have been characterized in terms of size, surface charge, and encapsulation efficiency. Nanoparticles prepared by nanoprecipitation in the presence of acetone were selected as the best formulation for further investigation. These nanoparticles showed no toxicity on RAW264.7 cells even at high concentrations as proved by the MTT test and were able to reduce cytokine production by 16 times at a lower level than the control. Finally, the enhanced anti-inflammatory effect could be explained by increased drug uptake into the cells when rosiglitazone is encapsulated into nanoparticles, as compared to the free molecule. Therefore, these delivery systems showed their potential as carriers for anti-inflammatory drugs, which could be further developed and thus lead to an improvement in the current atherosclerosis treatment. Author Contributions: G.G. technically realized the work, L.M. and H.C. provided technical assistance, N.T. cosupervised the work, and E.F. was the principal investigator. Funding: This research received support from Laboratory of Excellence LERMIT thanks to an ANR grant (ANR-10-LABX-33). 
6,108.2
2018-09-27T00:00:00.000
[ "Medicine", "Materials Science" ]
Semiflow selection for the compressible Navier–Stokes system Although the existence of dissipative weak solutions for the compressible Navier–Stokes system has already been established for any finite energy initial data, uniqueness is still an open problem. The idea is then to select a solution satisfying the semigroup property, an important feature of systems with uniqueness. More precisely, we are going to prove the existence of a semiflow selection in terms of the three state variables: the density, the momentum, and the energy. Finally, we will show that it is possible to introduce a new selection defined only in terms of the initial density and momentum; however, the price to pay is that the semigroup property will hold almost everywhere in time. Introduction Consider the compressible Navier-Stokes system, where ρ = ρ(t, x) denotes the density, u = u(t, x) the velocity, p = p(ρ) the pressure, and S = S(∇_x u) the viscous stress. We will consider the system on the set (t, x) ∈ (0, ∞) × Ω, where Ω ⊂ R^N, N = 2, 3, is a bounded domain with ∂Ω of class C^(2+ν) for a certain ν > 0. As our goal is to handle a potentially ill-posed problem, we have deliberately omitted the case N = 1, for which the problem is known to be well posed, see Kazhikhov [8]. Finally, we assume a barotropic pressure p ∈ C[0, ∞) ∩ C^1(0, ∞) such that p(0) = 0 and a growth condition of order γ holds for certain constants a_1 > 0, a_2 and b, with γ > N/2 the adiabatic exponent; and we take the viscous stress tensor to be a linear function of the velocity gradient, more specifically to satisfy Newton's rheological law with μ > 0 and λ ≥ 0. We would like to point out that (5) allows the pressure to be a general non-monotone function of the density, besides the standard case p(ρ) = aρ^γ with a > 0 and γ > N/2. Still, as we shall see below, the problem admits global-in-time weak solutions and retains other fundamental properties of the system, notably the weak-strong uniqueness, see [5]. We will consider dissipative weak solutions, i.e., solutions satisfying Equations (1) and (2) in a distributional sense along with the energy inequality, see Sect. 1.1. Although the existence of global in time solutions has already been established for any finite energy initial data, see, e.g., [10] and [6], uniqueness is still an open task. Then, a natural question is whether it is possible or not to select a solution satisfying at least the semiflow property, an important feature of systems with uniqueness: letting the system run from time 0 to time s and then restarting and letting it run from time s to time t gives the same outcome as letting it run directly from time 0 to time t. The result presented in this manuscript can be seen as the deterministic version of the stochastic paper by Breit, Feireisl and Hofmanová [2]. The construction of the semigroup arises from the theory of Markov selection in order to study the well-posedness of certain systems; it was first developed by Krylov [9] and later adapted by Flandoli and Romito [7], and Cardona and Kapitanski [4] in the context of the incompressible Navier-Stokes system. Breit, Feireisl, and Hofmanová [3] used the deterministic version motivated by [4] to show the existence of the semiflow selection for dissipative measure-valued solutions of the isentropic Euler system. Following the same strategy, we will establish the existence of a semiflow selection for the compressible Navier-Stokes system (1)-(6). 
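The displayed equations (1)-(6) are not reproduced in the extracted text above. The following LaTeX block is a hedged reconstruction based on the surrounding description (continuity and momentum equations, a no-slip boundary, a barotropic pressure with a γ-type growth condition, and Newton's rheological law); the precise form and numbering of the growth condition are assumed from the standard formulation used in this literature rather than taken from the original.

```latex
% Hedged reconstruction of the system referred to as (1)-(6); the displayed
% equations are missing from the extracted text, so this follows the standard
% barotropic compressible Navier-Stokes formulation described in the prose.
\begin{align}
&\partial_t \varrho + \mathrm{div}_x(\varrho \mathbf{u}) = 0, \\
&\partial_t(\varrho \mathbf{u}) + \mathrm{div}_x(\varrho \mathbf{u} \otimes \mathbf{u})
  + \nabla_x p(\varrho) = \mathrm{div}_x\, \mathbb{S}(\nabla_x \mathbf{u}), \\
&\mathbf{u}|_{\partial \Omega} = 0, \\
&p \in C[0,\infty) \cap C^1(0,\infty), \qquad p(0) = 0, \\
&a_1 z^{\gamma - 1} - b \le p'(z) \le a_2 z^{\gamma - 1} + b
  \quad \text{for all } z > 0, \qquad \gamma > \tfrac{N}{2}, \\
&\mathbb{S}(\nabla_x \mathbf{u}) = \mu\Big(\nabla_x \mathbf{u} + \nabla_x^T \mathbf{u}
  - \tfrac{2}{N}\,\mathrm{div}_x \mathbf{u}\,\mathbb{I}\Big)
  + \lambda\, \mathrm{div}_x \mathbf{u}\,\mathbb{I},
  \qquad \mu > 0,\ \lambda \ge 0.
\end{align}
```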
Specifically, introducing the momentum m = u, we show the existence of a measurable mapping satisfying the semigroup property: where [ , m = u] represents a dissipative weak solution to (1)- (6). At this stage, we would like to point out the main essential difference between the present paper and [3]. The semigroup constructed for the Euler system in [3] contains the total energy as one of the state variables. This may be seen as a kind of drawback as the energy should be determined in terms of the basic state variables [ , m]. This is, however, a delicate issue for the Euler flow as the energy contains also the defect due to possible concentrations and/or oscillations. Such a problem does not occur for the Navier-Stokes system, where the energy is indeed a function of [ , u] at least for a.a. t ∈ [0, ∞), cf. (7). The paper is organized as follows: The remaining part of this section contains the definitions of a dissipative weak solution and admissibility. In Sect. 2, we fix the topologies on the space of the initial data and the trajectory space, and we introduce the concept of a semiflow selection in terms of the three state variables: the density 0 , the momentum m 0 , and the energy E 0 . In Sect. 3, we analyze the properties (compactness, non-emptiness, the shift invariance, and continuation properties) of the solution set for a given initial data, while Sect. 4 is devoted to the proof of the existence of a semiflow selection. Finally, in Sect. 5, we study a new selection defined only in terms of the initial density 0 and the momentum m 0 . Dissipative weak solution Following [6], we can give the definition of a dissipative solution to the compressible Navier-Stokes system. Definition 1.1. The pair of functions , u is called dissipative weak solution of the Navier-Stokes system (1)-(6) with the total energy E and initial data if the following holds: (i) regularity class: with ≥ 0; (ii) weak formulation of the renormalized continuity equation: for any τ > 0 and any functions holds for any ϕ ∈ C 1 c ([0, ∞) × ); (iii) weak formulation of the balance of momentum: for any τ > 0 the integral identity holds for any ϕ ∈ C 1 c ([0, ∞) × ; R N ), where ( u)(0, ·) = ( u) 0 ; (iv) energy inequality: for a.e. τ ≥ 0 we have where the pressure potential P is chosen as a solution of we also require E = E(τ ) to be a non-increasing function of τ : Remark 1.2. Condition (ii) can be considered as a simple rescaling of the state variables in the continuity equation (1); it is necessary in order to prove the weak sequential stability and the existence of dissipative weak solutions. In particular, choosing B(z) = z we get the standard weak formulation of the continuity equation. Remark 1.3. At this stage, similarly to [3], the total energy is considered as an additional phase variable-a non-increasing function of time possessing one sided limits at any time. In contrast with [3], the energy can be determined in terms of and u, see (10), with the exception of a zero measure set of times. Remark 1.4. The condition E(0−) = E 0 comes naturally from the assumption that the total energy is bounded at the initial time t = 0, specifically Admissible solution From now on, it is more convenient to work with the momentum m = u. Following [3], we introduce a subclass of dissipative solutions that reflect the physical principle of minimization of the total energy. At the present state, we retain the total energy E as an integral part of the solution so we work with the triples [ , m, E]. Finally, in Sect. 
5, we pass to the natural state variables [ , m]. To this end, let [ i , m i , E i ], i = 1, 2, be two dissipative solutions starting from the same initial data [ 0 , m 0 , E 0 ]. We introduce the relation In particular, such selection criterion guarantees that equilibrium states belong to the class of dissipative weak solutions (see [3], Section 6.3). Setup We start by introducing suitable topologies on the space of initial data and the space of dissipative weak solutions. Fix > N 2 + 1 and for simplicity consider the Hilbert space together with its subset containing the initial data Here, the convex function [ , m] → |m| 2 is defined for ≥ 0, m ∈ R N as applying Hölder inequality, we can also deduce that 0 ∈ L γ ( ) and m 0 ∈ L 2γ γ +1 ( ; R N ); accordingly, the set of the data can be seen as a closed so that it coincides with the epigraph of the function f : Since f is lower semi-continuous and convex, we obtain that its epigraph is closed and convex. We consider the trajectory space which is a separable space. Dissipative weak solutions [ , m, E], as defined in Definition 1.1, belong to this class-since > N 2 , we have that L p with p ≥ 1 is compactly embedded in W − ,2 so in particular it holds for p = γ and p = 2γ γ +1 -while Equations (8) and (9) give an information on the time regularity of the density and the momentum. Moreover, for any initial data [ 0 , m 0 , E 0 ] ∈ D, it follows that a dissipative weak solution [ , m, E] evaluated at every t ≥ 0 also belongs to the set D, due to the energy inequality. Finally, for initial data [ 0 , m 0 , E 0 ] ∈ D, we introduce the solution set Semiflow selection: main result We are now ready to define a semiflow selection to (1)-(6). Definition 2.1. A semiflow selection in the class of dissipative weak solutions for the compressible Navier-Stokes system (1)-(6) is a mapping enjoying the following properties: We are now ready to state our main result; the proof is postponed to Sect. 4.1. Existence and sequential stability First of all, we aim to show: 2. weak sequential stability of the solution set, meaning has closed graph; in particular, this implies that it is continuous with respect to the Hausdorff complementary topology and thus Borel measurable, see [3] for details. Both results can be found in [6], see Theorems 6.1, 6.2 and 7.1 for details. Moreover, we assume that the initial densities converge strongly Then, at least for suitable subsequences, Shift and continuation operations In order to construct the semiflow, two main ingredients are needed: the shift invariance property and the continuation. For ω ∈ Q, we define the positive shift operator Since the energy is non-increasing, we can choose every E ≥ E(T +) as initial energy; indeed, everything will be well defined For ω 1 , ω 2 ∈ Q we define the continuation operator ω 1 ∪ T ω 2 by for some E ≤ E 1 (T −). Then, Proof. We are simply pasting two solutions together at the time T , letting the second start from the point reached by the first one at the time T ; thus, the integral identities remain satisfied. Choosing the initial energy for [ 2 , m 2 , E 2 ] less or equal E 1 (T −), the energy of the solution [ 1 , m 1 , E 1 ] ∪ T [ 2 , m 2 , E 2 ] remains non-increasing on (0, ∞). General ansatz Summarizing the results achieved in this section, we have shown the existence of a set-valued mapping enjoying the following properties. We remark that the value E(T −) in (A3) and (A4) can be replaced by where η ∈ [0, 1] is given, since in this case E(T +) ≤ E ≤ E(T −). 
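Several of the displayed formulas in this setup section were lost in extraction. A minimal sketch of the standard objects they refer to, assuming the conventions of [3] and [6] (so the exact function spaces and constants are assumptions), is:

```latex
% Convex extension of the kinetic-energy density to vanishing density
\frac{|m|^2}{\varrho} :=
\begin{cases}
  |m|^2/\varrho & \text{if } \varrho > 0,\\
  0             & \text{if } \varrho = 0,\ m = 0,\\
  +\infty       & \text{otherwise;}
\end{cases}
% Pressure potential: a solution of  P'(\varrho)\varrho - P(\varrho) = p(\varrho)
P(\varrho) = \varrho \int_1^{\varrho} \frac{p(z)}{z^{2}}\,dz ;
% Total energy as a function of the state variables, cf. (10)
E(\tau) = \int_{\Omega}\Big(\tfrac12 \frac{|m|^2}{\varrho} + P(\varrho)\Big)(\tau,\cdot)\,dx .
```

In this notation the energy inequality of Definition 1.1 takes the usual form E(τ) + ∫_0^τ ∫_Ω S(∇_x u) : ∇_x u dx dt ≤ E_0 for a.e. τ ≥ 0, with E non-increasing in time.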
Semiflow selection The arguments presented below are the same as in [3]. We repeat them here in detail for reader's convenience. We consider the family of functionals Given I λ,F and a set-valued mapping U we define a selection mapping I λ,F • U, by In other words, the selection is choosing minima of the functional I λ,F . Note that a minimum exists since I λ,F is continuous on Q and the set U[ 0 , ( u) 0 , E 0 ] is compact in Q. We obtain the following result for the set I λ,F • U. In other words, let d H be the Hausdorff metric on the subspace K ⊂ 2 Q of all the compact subsets of Q: where V ε (A) is the ε-neighborhood of the set A in the topology of Q; then, it is enough to show that the mapping defined for all K ∈ K as is continuous as a mapping on K endowed with the Hausdorff metric d H . In particular, we want to show that if K n d H −→ K with K n , K ∈ K then for n → ∞. More precisely, it is enough to show that for every ε > 0 there exists n 0 = n 0 (ε) such that for all n ≥ n 0 . First of all, notice that by the continuity of I λ,F we have We start proving the first inclusion of (12). By contradiction, suppose that exists a sequence {z n } n∈N such that in particular, I λ,F (z) > min K I λ,F . By the continuity of I λ,F we have but this contradicts (13). Interchanging the roles of K n and K we get the opposite inclusion in (12). We get the claim. (A3) We want to prove the shift invariance: for and since U satisfies property (A4), we get From the choice of [ , m, E], which minimize I λ,F on U[ 0 , m 0 , E 0 ], we obtain Hence, using (14) Using the shift invariance for U, we obtain Hence, using (15) in the fourth line Using the continuation property for U, we have that and since [ 1 , m 1 , E 1 ] is a minimum of I λ,F , we must have Vol. 21 (2021) Semiflow selection for the compressible Navier-Stokes system 289 Selection sequence First of all, we will select only those solutions that are admissible, meaning minimal with respect to the relation ≺ introduced in Definition 1.5. To this end we consider the functional I 1,β with β( , m, E) = β(E), β : R → R smooth, bounded, and strictly increasing. Proof. We proceed by contradiction. Using the monotonicity of the integral, we obtain The only possibility is to have the equality in both the integral relations above and thus β(E(t)) = β(Ẽ(t)) for a.e. t ∈ (0, ∞); since β is strictly increasing, this implies E =Ẽ a.e. in (0, ∞). We will need this topological result, which is a variation of the Cantor's intersection theorem. Theorem 4.3. Let S be a Hausdorff space. A decreasing nested sequence of non-empty compact subsets of S is non-empty. In other words, supposing {C k } k∈N is a sequence of non-empty compact subsets of S satisfying Proof. By contradiction, assume k∈N C k = ∅. For each n, let U n = C 0 \ C n ; since and n∈N C n = ∅, we obtain n∈N U n = C 0 . Since C 0 ⊂ S is compact and {U n } n∈N is an open cover (on C 0 ) of C 0 , we can extract a finite cover {U n 1 , . . . , U n m }. Let U k be the largest set of this cover (C k the correspondent smallest set), which exists by the ordering hypothesis on the collection {C n } n∈N . Then, Then, C k = C 0 \ U k = ∅, a contradiction since every set of the sequence {C n } n∈N is non-empty by hypothesis. We are now ready to prove Theorem 2.2. Proof of Theorem 2.2. Selecting I 1,β • U from U we know from Lemma 4.2 that the new selection contains only admissible solutions for any [ 0 , m 0 , E 0 ] ∈ D. 
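The family of functionals I_{λ,F} is referred to above but not typeset in the extracted text. In the Krylov-type selection scheme of [3], on which this argument is modeled, it has the following form; the precise class of admissible F is an assumption here:

```latex
I_{\lambda,F}[\varrho, m, E] \;=\; \int_0^{\infty} e^{-\lambda t}\,
   F\big(\varrho(t),\, m(t),\, E(t)\big)\, dt ,
\qquad \lambda > 0, \quad F : X \to \mathbb{R} \ \text{bounded and continuous.}
```

The selection I_{λ,F} ∘ U then retains, for each initial datum, exactly those trajectories in the compact set U[ϱ_0, m_0, E_0] that minimize I_{λ,F}, which is the minimization step described above.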
Next, we choose a countable basis {e n } n∈N in L 2 ( ; R N ), and a countable set {λ k } k∈N which is dense in (0, ∞). We consider a countable family of functionals, We define The set-valued mapping enjoys the properties (A1)-(A4). Indeed: (A1) first, notice that for every fixed initial data [ 0 , m 0 , E 0 ] ∈ D the sets U j [ 0 , m 0 , E 0 ] are nested: By Proposition 4.1 we can deduce that I 1,β • U[ 0 , m 0 , E 0 ] is compact, and iterating this procedure we obtain that all U j [ 0 , m 0 , E 0 ] are compact. Since Q is a Hausdorff space, every compact set is also closed and a countable intersection of closed set is closed. for every j. By Proposition 4.1, we can deduce that I 1,β • U satisfies the continuation property, and iterating this procedure we obtain that this holds for every U j . This implies for all j and all T > 0. Thus, We claim that for every [ 0 , m 0 , E 0 ] ∈ D the set U ∞ is a singleton, meaning To verify this, we observe that of the functions We can apply Lerch's theorem: if a function F has the inverse Laplace transform f , then f is uniquely determined (considering functions which differ from each other only on a point set having Lebesgue measure zero as the same). Then, we get that for all n ∈ N and for a.e. t ∈ (0, ∞). As β is strictly increasing, we must in particular have E 1 (t) = E 2 (t), m 1 (t, ·); e n L 2 ( ;R N ) = m 2 (t, ·); e n L 2 ( ;R N ) , for all n ∈ N and for a.e. t ∈ (0, ∞). Since {e n } n∈N form a basis in L 2 ( ; R N ), we conclude m 1 = m 2 , and E 1 = E 2 a.e. on (0, ∞). It remains to prove that U is a semiflow selection: measurability follows from (A2), while the semigroup property follows from (A3): for t 1 , t 2 ≥ 0 it holds This completes the proof. Remark 4.4. In comparison with [3], it is worth noticing that the family of functionals defined in (17) do not depend on the density : they are not necessary since, once the uniqueness of the momenta is achieved, the one of the densities easily follows from the continuity equation (1). Restriction to semigroup acting only on the initial data As a matter of fact, the semiflow selection U = U { 0 , m 0 , E 0 } is determined in terms of the three state variables: the density 0 , the momentum m 0 , and the energy E 0 . Introduction of the energy might be superfluous; indeed, as pointed out in (10) E(τ ) = 1 2 |m| 2 + P( ) (τ, ·)dx for a.e. τ ≥ 0. The point is that the equality holds with the exception of a zero measure set of times. More specifically, the energy E(τ ) is a non-increasing function with well-defined right and left limits E(τ ±), while where equality holds with the exception of a set of times of measure zero. We may introduce a new selection defined only in terms of the initial data 0 , m 0 ; however, the price to pay is that the semigroup property will hold almost everywhere in time. More specifically, we can state this final result.
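The appeal to Lerch's theorem in the singleton argument can be written out as the following uniqueness statement for Laplace transforms (a standard fact, stated here for convenience):

```latex
\int_0^{\infty} e^{-\lambda_k t} f_1(t)\, dt \;=\; \int_0^{\infty} e^{-\lambda_k t} f_2(t)\, dt
\quad \text{for all } k \in \mathbb{N}
\;\Longrightarrow\;
f_1 = f_2 \ \text{a.e. in } (0,\infty),
```

valid for bounded measurable f_1, f_2 whenever {λ_k} is dense in (0, ∞), since the Laplace transform is analytic in λ. It is applied above with f_i built from β, the energies E_i and the projections ⟨m_i, e_n⟩, so that the strict monotonicity of β forces E_1 = E_2 and m_1 = m_2 a.e.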
4,641
2020-04-30T00:00:00.000
[ "Mathematics" ]
Valley–spin Seebeck effect in heavy group-IV monolayers Akin to electron spin, the valley has become another highly valued degree of freedom in modern electronics, specifically after tremendous studies on monolayers of group-IV materials, i.e. graphene, silicene, germanene and stanene. Except for graphene, the other heavy group-IV monolayers have observable intrinsic spin–orbit interactions due to their buckled structures. Distinct from the usual electric or optical control of valley and spin, we here employ a temperature difference to drive electron motion in ferromagnetic heavy group-IV monolayers via designing a caloritronic device locally modulated by an interlayer electric (Ez) field. A unique valley–spin Seebeck (VSS) effect is discovered, with the current contributed only by one (the other) valley and one (the other) spin moving along one (the opposite) direction. This effect is suggested to be detected below the critical temperature about 18 K for silicene, 200 K for germanene and 400 K for stanene, arising from the characteristic valley–spin nondegenerate band structures tuned by the Ez field, but cannot be driven in graphene without spin–orbit interaction. Above the critical temperature, the VSS effect is broken by overlarge temperature broadening. Besides the temperature, it is also found that the Ez field can drive a transition between the VSS effect and the normal spin Seebeck effect. Further calculations indicate that the VSS effect is robust against many realistic perturbations. Our research represents a conceptually but substantially major step towards the study of the Seebeck effect. These findings provide a platform for encoding information simultaneously by the valley and spin quantum numbers of electrons in future thermal-logic circuits and energy-saving devices. Introduction The study of Dirac electrons, as inspired by the rise of graphene since its fabrication in 2004, has made fruitful achievements in the most recent decade [1][2][3][4][5]. Particularly, two-dimensional (2D) heavy group-IV materials as graphene-like materials, including silicene, germanene and stanene, have attracted considerable research in the last five years owing to their controllable Dirac nature [6][7][8][9][10][11][12][13][14]. Distinct from graphene, electrons in the other 2D heavy group-IV monolayers have an observable intrinsic spin-orbit interaction because of their buckled structures [9][10][11]15]. Significantly, the low-energy Dirac electrons in these materials can feel multiple degrees of freedom, involving charge, spin, valley and sublattice-pseudospin. Here, the 'valley' benefits from the fact that the Dirac electrons locate around two inequivalent points in the first Brillouin region. Up to now, many spinvalley related phenomena have been explored theoretically in heavy group-IV monolayers, such as the hightemperature quantum spin Hall effect [6], valley-polarized quantum anomalous Hall effect [11], topological superconducting effect [13], spin-valley filtering effect [16][17][18] and bipolar spin-valley diode effect [19]. Any of these predictions, if verified in experiment, can provide giant opportunities not only for the development of physics and material science but also for future technology applications. In spite of many previous findings related to spin and valley, we notice that no one has ever proved whether the spin Seebeck effect and valley Seebeck effect could coexist in a real condensed-matter system. 
By studying the temperature-driven valley-spin transport in a designed ferromagnetic device locally modulated by an interlayer electric (E z ) field, we here discover a unique phenomenon we call the valley-spin Seebeck (VSS) effect, for which the current is contributed only by one (the other) valley and one (the other) spin moving along one (the opposite) direction. This effect surely reflects the coexistence of the spin Seebeck effect and valley Seebeck effect, and is suggested to be detected below the critical temperature 18K for silicene, 200K for germanene and 400K for stanene, attributed to the specific valley-spin nondegenerate band structures, but cannot be driven in graphene without spin-orbit interaction. Above the critical temperature, the VSS effect disappears because of overlarge source-temperature broadening, independent of temperature difference. Besides the temperature, the VSS effect can also be turned on and off by the E z field. It is further demonstrated that the VSS effect is robust against many realistic perturbations. Our findings pave the way for 2D heavy group-IV materials applied in future thermal-logic circuits and energy-saving devices. Design principle for the valley-spin Seebeck effect For any temperature-driven caloritronic device, two heat reservoirs are requisite: one is a high-temperature source (S), the other is a low-temperature drain (D). We here use ( ) T S D to describe the temperature of S (D). We define the direction of the current from S to D as positive. Because electrons in heavy group-IV monolayers have valley and spin degrees of freedom, the current should be valley-spin dependent. Below we use h ( ) s to denote the valley (spin) index. According to the generalized Laudauer-Büttiker approach [18,[29][30][31][32] .  h s represents the number of valley-spin dependent transmission modes, depending on the detailed device structure. Because silicene, germanene and stanene are semiconductors with small spin-orbit band gaps less than 0.2eV, the temperature primarily plays the role of exciting thermoelectrons here [31][32][33], different from that of exciting magnons in ferromagnetic insulators [41]. From the distribution differencef f S D in equation (1), we judge that electrons with energy higher than the Fermi energy e F flow from S to D on account of - . This happens in undoped pristine graphene and its derivatives with electron-hole symmetry. To create electron-hole asymmetry near the Fermi energy it is a necessary condition to generate nonzero current, but needs to far from enough to effectively drive the VSS effect in heavy group-IV monolayers. Based on band theory, we conclude that the detailed form of  h s in equation (1) depends on the band-to-band tunneling mechanism, and dictates whether the valley or spin Seebeck effect happens. On the one hand, to drive the normal spin Seebeck (NSS) effect, the transmission should be electron-dominated for one spin but holedominated for the other spin. This requires that the band structure near the Fermi energy must be nondegenerate for two spins besides electron-hole asymmetry. This is relatively easy to realize in the ferromagnetic state of semiconductors [10,19,31]. On the other hand, to generate the valley Seebeck effect, one must ensure that the transmission near the Fermi energy is electron-dominated for one valley but holedominated for the other valley [29,30]. 
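Equation (1) is only partially legible after extraction; the generalized Landauer-Büttiker expression it refers to has the schematic form I_{ησ} ∝ ∫ M_{ησ}(E) [f_S(E) − f_D(E)] dE, with Fermi-Dirac occupations at the source and drain temperatures. The sketch below illustrates the sign argument made in the text numerically; the step-like transmission, the gap edge, and the temperatures are placeholder assumptions rather than values from the paper, and the e/h prefactor is omitted.

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant in eV/K

def fermi_dirac(E, mu, T):
    """Fermi-Dirac occupation at energy E (eV), chemical potential mu (eV), temperature T (K)."""
    return 1.0 / (np.exp((E - mu) / (KB * T)) + 1.0)

def thermal_current(modes, T_source, T_drain, mu=0.0, E_min=-0.5, E_max=0.5, n=20001):
    """Schematic temperature-driven Landauer-Buettiker current (arbitrary units).

    `modes` is a callable M(E) giving the (valley- and spin-resolved) number of
    transmission modes; the current is the integral of M(E) * (f_S - f_D)."""
    E = np.linspace(E_min, E_max, n)
    integrand = modes(E) * (fermi_dirac(E, mu, T_source) - fermi_dirac(E, mu, T_drain))
    return np.trapz(integrand, E)

# Electron-dominated channel: conducting only above a 0.1 eV gap edge
M_electron = lambda E: np.where(E > 0.1, 1.0, 0.0)
# Hole-dominated channel: conducting only below -0.1 eV
M_hole = lambda E: np.where(E < -0.1, 1.0, 0.0)

# A hotter source pushes electrons above the Fermi energy from S to D (positive current),
# while the hole-dominated channel carries a current of the opposite sign.
print(thermal_current(M_electron, T_source=300.0, T_drain=280.0))
print(thermal_current(M_hole, T_source=300.0, T_drain=280.0))
```

Whether such valley-selective electron- or hole-dominated transmission can actually be arranged is the question taken up next.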
This condition cannot be satisfied in a sample of silicene, germanene or stanene only in the presence of the E z field, although the band structure is both valley-polarized and spinpolarized [10]. Conceivably, to realize the VSS effect, the transmission should be electron-dominated for one spin in one valley but hole-dominated for the other spin in the other valley. In this case, there might exist a particular energy region, inside which one valley is insulating for two spins but the other valley is conductive for only one spin. Luckily, we notice that, for heavy group-IV monolayers, the coexistence of ferromagnetic field and E z field could satisfy this criterion [29]. Additionally, to observe the VSS effect, the energy broadening induced by the source temperature cannot exceed the particular insulating region for two spins, or otherwise the deep-energy bands outside the band gap can break the transport feature of the VSS effect. Here, we need to make clear how to obtain the transmission modes  h s in equation (1). For simplicity, the reduced Planck constant ÿ is set as 1, and the Fermi velocity u F (5.5 10 5 m s −1 for silicene,4.6 10 5 m s −1 for germanene,4.9 10 5 m s −1 for stanene [10,15]) is set as 1. For a sample of width W, the total transmission modes are generally obtained by [19,29]  where k, f are the modulus and angle of the wave-vector, represents the reduced constant with the dimension of density of states, and e f h ( ) T , s describes the angle-dependent transmission for each mode. Then the spin-valley dependent current is calculated from equation (1) as Realization of the valley-spin Seebeck effect At the top of figure 1(a), we show our proposed caloritronic field effect transistor device, including S, D and G (the central gate region controlled by an E z field). A ferromagnetic field with strength M is uniformly applied in the whole sample. From previous first-principles calculations, it is known that the low-energy electrons in heavy group-IV monolayers can be described by an effective Hamiltonian which reads [20][21][22][23], where λ (3.9 meV for silicene, 43 meV for germanene, 100 meV for stanene) represents the spin-orbit coupling, h = + - describe the Pauli matrices of sublattice pseudospin and electron real-spin respectively. Although Rashba-type spin-orbit couplings as perturbations in these heavy group-IV monolayers exist in principle, they play little role even under external fields in studying the band structures and transport properties [6,10]. Thus the low-energy effective Hamiltonian here can be expressed as where the second term indicates the ferromagnetic field, the third term represents the E z field with = Q -Q - Herein, Q( ) x is the Heaviside function, and ℓ denotes the interlayer distance. After matrix diagonalization, the energy bands are solved as The energies at valleys . In regions S and D, it is found that the transport-forbidden regions for spin-up and spin-down electronic states read, respectively, , identical for two different valleys. In region G, the valley degeneracy is destroyed by locally applying ℓE z , and the band gap regions for spin-up and spin-down bands are obtained, respectively, as l h l l h l -- To obtain information on h I s , it is still needed to solve h T s in equation (3). Based on quantum mechanics, we derive the transmission amplitude by matching the wave functions at the interfaces = x L 0, as The mutual insulating region for two spins is . The wave-vector modulus in regions S, D and G are obtained respectively as . 
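The low-energy Hamiltonian and the band expressions in this paragraph lost their symbols in extraction. A form commonly used for ferromagnetic buckled group-IV monolayers, consistent with the quantities named here (Fermi velocity v_F, spin-orbit coupling λ, exchange strength M, and a local field ℓE_z applied only in region G of length L), is sketched below; the sign conventions and the precise coupling of the exchange term are assumptions, not recovered from the source.

```latex
% tau = sublattice pseudospin, eta = valley index, s = spin index
H_{\eta s} = \hbar v_F\,(\eta k_x \tau_x + k_y \tau_y)
           + \big(\ell E_z(x) - \eta s \lambda\big)\,\tau_z \;-\; s M ,
\qquad
\ell E_z(x) = \ell E_z\,\big[\Theta(x) - \Theta(x-L)\big],
% valley- and spin-resolved dispersion
E^{\pm}_{\eta s}(k) = -\,s M \;\pm\; \sqrt{\hbar^2 v_F^2 k^2
  + \big(\ell E_z - \eta s \lambda\big)^2} .
```

In regions S and D (ℓE_z = 0) the gap parameter reduces to λ for every valley-spin species, matching the statement that the forbidden regions are identical for the two valleys, while in region G it becomes |ℓE_z − ηsλ|; reading the garbled condition as ℓE_z = 2λ, this gives λ for ηs = +1 and 3λ for ηs = −1, the valley-spin asymmetry that the VSS effect exploits.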
7 s s 2 Note that, for uniformity, the parameters l = M and L=50nm are used below to carry out the calculations and plot figures without special instructions. In figures 1(b)-(d), we plot the current h I s versus the source temperature T S for silicene, germanene and stanene, respectively, at l = ℓE 2 z . The temperature difference is fixed at = T 20 SD K, except for the inset in figure 1 Two different phases are found, one is marked as I-VSS and the other is marked as II-NSS. It can be seen that the VSS effect can be observed below the critical temperature 18K for silicene, 200K for germanene and 400K for stanene. The increasing critical temperature from silicene to stanene is consistent with the increasing spin-orbit interaction. The critical temperature T c could be approximately determined by , where k B is the Boltzmann constant. Above the critical temperature, the NSS effect is detectable. As is known, the index-change relationship of the Fermi-Dirac distribution function determines that only electronic states near the Fermi energy could contribute to the current. Consequently, the thermodynamical transition from VSS phase to NSS phase is attributed to the fact that, only if the energy broadening k T 5 S B induced by the source temperature does not exceed the gap size l 2 , high-energy subbands cannot break the transport feature of the VSS effect. For the phase I-VSS, it is seen that only spin-up (down) states from valley ¢ ( ) K K contribute to the positive (negative) current, in agreement with the band structures in figure 1(a), where hole (electron) states contribute most to  ¢  ( ) I I K K . As T S increases, the hole-dominated current ¢  I K and the electron-dominated current  I K become nonzero due to enhanced temperature broadening. As T S becomes much larger than 400K, all the values of h | | I s in figures 1(c) and (d) tend to stabilize due to convergence of the integral in equation (3), just as calculated in figure 1(b). In the driving force tends to zero and thus all the currents h I s tend to zero. Owing to  T T S SD , except for the inset in figure 2(a), the nonzero h I s curves basically increase linearly as T SD increases. Also, such linear relationships can be judged from equation (3) In the low-T S case, the linear relationships disappear although the curves h ( ) I T s SD vary monotonically, as seen from the inset in figure 2(a). In comparison with figures 1(b)-(d), T S and ℓE z here do not change, and hence we have reason to conclude that the presence of the VSS effect is basically independent of T SD , because no new phase is induced by increasing T SD . For germanene, the VSS effect could also be detected if T S is fixed at a small value less than the critical temperature 200K, just as shown by the inset in figure 2(a). In The main result here is that the VSS and NSS effects could be switched to each other by tuning the E z field. To understand this phenomenon, we need to analyze the variation of band structures in region G induced by the E z field. As ℓE z varies, the mutual insulating region for two spins l h l l h l - I s cross due to the valley degeneracy. Now one may wonder whether inelastic scattering at higher temperatures affects our ballistic results in figures 1-3. It is true that electron-phonon interactions should be enhanced as temperature increases due to more excited phonon modes, and thus electron mobility naturally decreases. 
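As a quick arithmetic check of the critical-temperature estimate quoted earlier in this section (about 18 K, 200 K and 400 K): requiring the source-temperature broadening, of order 5 k_B T_S, not to exceed the spin-orbit gap 2λ gives T_c ≈ 2λ/(5 k_B). The factor 5 is read from the garbled text and is therefore an assumption.

```python
KB = 8.617333e-5  # Boltzmann constant in eV/K

# Spin-orbit coupling lambda quoted in the text, in eV
lam = {"silicene": 3.9e-3, "germanene": 43e-3, "stanene": 100e-3}

for name, l in lam.items():
    Tc = 2.0 * l / (5.0 * KB)   # broadening 5*kB*T_S must stay below the gap 2*lambda
    print(f"{name}: T_c ~ {Tc:.0f} K")
# silicene: ~18 K, germanene: ~200 K, stanene: ~460 K (quoted as roughly 400 K)
```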
For electron-phonon scattering, we divide it into two types: intravalley scattering and intervalley scattering. For intravalley scattering, increasing temperature cannot drive the jump between different states labeled by different valley-spin indexes due to the absence of the spin-valley flipping mechanism, but only reduces the magnitude of h I s . This indicates that the qualitative conclusions in the ballistic regime remain the same. For intervalley scattering, recent first-principles calculations have demonstrated that only electrons with energy more than several tens of meV or larger in silicene and germanene could feel this scattering induced by the stronger electron-phonon coupling [48], even near room temperature. Consequently, it is reasonable to believe that inelastic scattering has little effect on the detectable properties of the VSS effect in our considered range of temperatures. Additionally, note that the net current contributed by spin-up and spin-down electrons always flows in opposite directions (pure spin current) in figures 1-3 and indeed the spin polarizability is 100%, benefiting from the completely spin-polarized subbands near the Fermi energy (see figure 1(a)). Thus it is not necessary to define the degree of spin polarization here. Nevertheless, one can notice that the NSS in figure 2(b) for gemanene is different from that in figure 2(a) for silicene, resulting from different degrees of valley polarization r v . Further considering that the total current contributed by two opposite directions is always zero in figures 1-3 due to symmetry, it is difficult to define an overall effective r v for bidirectional transport, but one can still treat r v separately for positive and negative currents via a definition r = å Robustness of the valley-spin Seebeck effect Since we have obtained information for how to realize the VSS effect in silicene, germanene and stanene, it is still necessary to discuss the robustness of the VSS effect in some real complex experimental conditions. Below, we take stanene for instance and study the influence of the ferromagnetic field and the Fermi energy on the VSS effect near room temperature. The fixed parameters read = T 340 Note that the conclusions for stanene here are universally valid for germanene and silicene under lower T S . Firstly, we show the influence of ferromagnetic field M on the VSS effect in figure 4(a). Our results indicate that the VSS effect is always robust at  l M . This result depends on such a fact that the ferromagnetic field does not essentially change the electron-hole asymmetry in figure 1(a) in spite of band variations. Further considering that the ferromagnetic field has been successfully introduced in graphene by the proximity effect of the ferromagnetic insulating substrate [49], it is reasonable to believe that the ferromagnetic field could also be applied in silicene, germanene and stanene. In this sense, one can easily observe the VSS effect in a large range of M-strength. In addition, provably, the length L of region G does also not change the realization of the VSS effect. Secondly, we consider a stanene sample that is uniformly doped e ¹ 0 , and thus the NSF effect is observed. Lastly, we discuss the influence of some complex conditions on the VSS effect. Firstly, in terms of band changes, the Rashba interactions do not change the valley-spin dependent band gap although there is spin splitting outside the valley points [13][14][15][16][17]. 
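The definition of the valley polarization r_v is cut off in the extracted text. A common convention, assumed here rather than recovered from the source, applies it separately to the positive-going and negative-going currents:

```latex
r_v^{\pm} \;=\; \frac{\sum_{s} I^{\pm}_{K s} - \sum_{s} I^{\pm}_{K' s}}
                     {\sum_{s} I^{\pm}_{K s} + \sum_{s} I^{\pm}_{K' s}} ,
```

where the superscript ± restricts the sums to currents flowing in the positive or negative direction; with this convention the fully valley-polarized VSS phase would give |r_v^{\pm}| = 1.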
Actually, for silicene, germanene and stanene, the spin splitting induced by Rashba perturbations plays little role in affecting the spin-valley transport, as proved by previous first-principles calculations [10]. Secondly, note that our concerned VSS effect is based on bulk states of the materials but not edge states [47] and thus could be detected in experiments more easily due to the stronger signal, although this VSS effect has some common transport features to valley-polarized quantum spin Hall edge states [28]. Thirdly, a real sample of heavy group-IV monolayers also inevitably contains atomic defects in the bulk, such as bits of vacancies [12,49], which induce variations of supercell bands. To realize the VSS effect, the defect ratio cannot exceed 8%, otherwise the VSS effect is broken by defect states that contribute to the current [50]. Fourthly, to avoid intervalley scattering, local gates are suggested to deplete edge carriers and form gatedefined sample edges, near which the variations of edge potential do not change the delectability of the VSS effect owing to the negligible proportion of edge atoms [19]. In short, although there has been little experimental study about the temperature-driven electron motion in heavy group-IV monolayers so far, we still have sufficient reason to believe that the VSS effect discussed here is detectable by developing the latest experimental detection techniques on valley or spin in graphene [44][45][46][47]. To be sure, our results here advance our previous work about the realization of NSS or valley Seebeck effects as well as confirm the rough prediction of the coexistence of these two effects [51]. Conclusion In summary, based on monolayers of heavy group-IV materials, we have designed a caloritronic device locally modulated by an interlayer electric field to realize a unique valley-spin Seebeck (VSS) effect, which supports bidirectional transport with each net current direction locked by both valley and spin. Associated with the strength of the spin-orbit band gap, such a VSS effect is suggested to be detectable using current experimental technology below the critical temperature 18K for silicene, 200K for germanene and 400K for stanene, arising from the specific valley-spin band structures. We have also found that above the critical temperature, the VSS effect may be broken by overlarge temperature broadening, basically independent of temperature difference. Besides the temperature, the local electric field can also drive a transition between the VSS and normal spin Seepbeck effects by tuning the band structures. Furthermore, we have proved that the VSS effect found here is robust against many realistic perturbations. These findings provide new insights to apply two-dimensional heavy group-IV materials in future information procession (encoding information simultaneously by valley and spin) in thermal logic circuits and low-dissipation devices.
4,545.8
2017-06-05T00:00:00.000
[ "Physics" ]
Whole-transcriptome analysis of rat cavernosum and identification of circRNA-miRNA-mRNA networks to investigate nerve injury erectile dysfunction pathogenesis ABSTRACT There is growing evidence that circular RNAs (circRNAs) play a vital role in many kinds of diseases, including erectile dysfunction (ED). Nevertheless, the role of circRNAs in cavernous nerve injury ED (CNI-ED) is unknown. Here, we aimed to discover novel circRNAs, probe their potential role in CNI-ED, and construct a ceRNA network of circRNAs. Twelve male Sprague Dawley rats were randomly divided into two groups: a bilateral cavernous nerve crush (BCNC) group and a control group. Four weeks after surgery, the cavernous smooth muscle tissue of the rat penis was analyzed by high-throughput whole-transcriptome sequencing. We compared the expression of circRNAs, miRNAs, and mRNAs between the two groups. Twenty circRNAs with significantly different expression were selected for RT-qPCR validation. The ceRNA network of circRNAs was established using Cytoscape, and GO and KEGG analyses were performed with R packages. Sequencing showed that 4,587 circRNAs, 762 miRNAs, and 21,661 mRNAs were dysregulated in the BCNC group. The top 20 differentially expressed circRNAs were further verified via RT-qPCR. The ceRNA network contained ten circRNAs, six miRNAs, and 227 mRNAs, comprising 23 circRNA-miRNA pairs and 227 miRNA-mRNA pairs. GO and KEGG analyses suggested that these ten circRNAs may mainly regulate energy metabolism processes. A protein-protein interaction network was constructed with the mRNAs in the ceRNA network, and five hub genes were identified. Our study revealed a potential link between circRNAs, miRNAs, and mRNAs in CNI-ED, suggesting that circRNAs may contribute to the occurrence of ED by regulating cellular energy metabolism. Introduction Prostate cancer is one of the most common malignancies in men worldwide. Recent studies indicate that prostate cancer accounts for more than one in five new cancer cases in the United States [1]. Radical prostatectomy (RP) is the major treatment modality to reduce cancer mortality in patients with localized prostate cancer [2]. Unfortunately, owing to the location of the prostate, men undergoing RP may suffer side effects that can have a long-term impact. Even though various nerve-sparing techniques have been promoted in recent years, the incidence of erectile dysfunction (ED) after RP remains high, at about 11-87% [3,4]. The main cause of ED in patients with prostate cancer is surgical damage to the cavernosal nerve. After nerve injury, nerve control of the corpus cavernosum of the penis is lost and a variety of pathological and physiological changes occur, such as impaired relaxation of cavernosal smooth muscle and penile fibrosis. As a result, conventional treatments such as oral PDE5 inhibitors, vacuum negative-pressure devices, and local injection of vasoactive drugs into the corpus cavernosum have poor efficacy, so ED after RP remains a long-term problem for most patients [2,[5][6][7]. Therefore, identifying the underlying mechanism is critical for the treatment of post-RP ED. Circular RNAs (circRNAs) are evolutionarily conserved transcripts, making up a large class of non-coding RNAs, which are produced by backsplicing [8].
For the past few years, mounting evidence has demonstrated the critical role of circRNAs in biological processes: they modulate protein function, act as miRNA or protein inhibitors (sponges), or encode proteins themselves [9][10][11]. In addition, circRNAs are closely related to a variety of diseases, such as diabetes mellitus and its complications, cardiovascular disease, neurological disorders, osteoporosis, and cancers [12][13][14]. Recently, the role of circRNAs as miRNA sponges regulating biological functions has become better understood, and a large number of studies have shown that miRNAs play a vital role in the pathogenesis and treatment of ED [15,16]. However, how circRNAs regulate miRNAs and thereby contribute to cavernous nerve injury erectile dysfunction (CNI-ED) remains unknown [17]. These findings suggest that a comprehensive study of the competing endogenous RNA (ceRNA) network of circRNAs and of the ability of circRNAs to encode proteins is an effective strategy for understanding CNI-ED. In this study, we set out to build a ceRNA network of circRNAs and to explore the protein-coding potential of circRNAs. Illuminating the potential link between CNI-ED and the ceRNA network of circRNAs could suggest new strategies for treating this disease. The study was designed to identify differentially expressed circRNAs, miRNAs, and mRNAs by whole-transcriptome sequencing of CNI-ED and normal rats. A ceRNA network was subsequently constructed to reveal the potential role of circRNAs in CNI-ED. We found that circRNAs may contribute to CNI-ED by regulating energy metabolism, in which Ccna2, Cxcl10, Pld1, Mapk11, and Mboat2 may play an important role. The workflow of our current research is shown in Figure 1. Animals The experimental animal protocol was approved by the Laboratory Animal Center of Zhejiang University of Traditional Chinese Medicine. All rats were raised in the Experimental Animal Center of Zhejiang Chinese Medicine University in a sterile environment with water and food freely available, a 12-hour light/dark cycle, and a temperature of 20-24°C. Twelve male SD rats, weighing 300-350 g and 8 weeks old (SIPPER-BK Laboratory Animals, Shanghai, China), were randomly divided into an experimental group and a control group as follows. All surgical procedures and operations followed the animal ethics guidelines of the National Health and Research Institutes. The rats were anesthetized with 3% pentobarbital sodium (1.5 × 10−5 L/100 g). In the control group (n = 6), the bilateral cavernous nerves (CN) were only isolated but not damaged. In the bilateral cavernous nerve crush injury (BCNC) group (n = 6), the bilateral cavernous nerves were pinched with vascular forceps twice for 60 seconds [18,19]. Collection of sequencing tissues Twenty-eight days after the operation, the rats were anesthetized with 3% pentobarbital sodium (1.5 × 10−5 L/100 g); once the anesthesia took effect, the rat penis was carefully separated, the cavernous tissue was removed and rinsed with pre-cooled PBS, and other tissues were carefully removed from the penis. The rats were then euthanized with carbon dioxide. Finally, all cavernous tissue samples were labeled and placed in a −80°C freezer for further histologic studies. RNA library construction and sequencing Transcriptome sequencing was performed on 3 normal-group rats and 3 model-group rats. The whole-transcriptome sequencing work was completed by Hangzhou Lianchuan Biotechnology Co., Ltd.
The specific operations are as follows: total RNA in the cavernous tissue of the rat was isolated and purified using Trizol reagent (Invitrogen, Carlsbad, CA, USA). The RNA concentration and purity we extracted were tested by using NanoDrop ND-1000 (NanoDrop, Wilmington, DE, USA). The RNA integrity was assessed by Agilent 2100 with RIN number >7.0. The ribosomal RNA was depleted by approximately 5ug of total RNA through the Ribo-Zero™ rRNA Removal Kit (Illumina, San Diego, USA). When eliminating ribosomal RNAs the left RNAs were sliced up at high temperatures. The cleaved RNAs were then reverse transcribed into cDNA, which were labeled with U-labeled to synthesize 2-strand DNA. An A-base was added to the blunt ends of each strand, preparing them for ligation to the indexed adapters. Single-or dual-index adapters are ligated to the fragments, and size selection was performed with AMPureXP beads. The two strands of DNA labeled by U were treated with the refractory UDG enzyme for PCR operation under the following operating conditions: 95°C for 3 min; 8 cycles at 98°C for 15 seconds, 60°C for 15 sec, and 72°C for 30 seconds; and then 72°C for 5 min. The final PCR cDNA library was about 250-350 bp in size. In the end, The paired-end sequencing was performed by us according to the protocol from Illumina Hiseq 4000 (LC Bio, China) [20]. RNA extraction and RT-qPCR validation Twenty significantly different circrnas were selected for qPCR validation. Total RNA was extracted using the TRlzol method. RT-qPCR primers (Table 1) were designed by primer 5.0. The RNA concentration and purity we extracted were tested by using NanoDrop ND-1000 (NanoDrop, Wilmington, DE, USA). The RNA integrity was assessed by Agilent 2100 with RIN number >7.0. The OD260/280 absorbance ratios of all samples ranged from 1.8 to 2.0. Reverse transcription of cDNA was completed with the HiScript II Q Select RT SuperMix for qPCR (R233) (Vazyme, China), and the qPCR procedure was performed according to the StepOne Plus Real-Time PCR system (Applied Biosystems, USA). GADPH was used as an endogenous control to normalize each sample. PCR amplifification effificiencies were established by means of calibration curves. Construction of circRNA-miRNA-mRNA ceRNA network The differential expression between the normal and model groups was further analyzed using the lima package. The DE circRNAs and mRNAs were screened according to the following criteria: Log2 (fold change) > 1 and P ≤ 0.05 [21], The miRNAs screening was as follows: P ≤ 0.1. Utilizing the Circular RNA Interactome (http://circinterac tome.nia.nih.gov) database to predict miRNAs that circular RNA may play a spongy role. We obtained what we needed miRNAs by intersecting the meaningfully differentially expressed miRNAs(DEmiRNAs) with the predicted miRNAs. The downstream target genes of miRNA were predicted from the miRanda and TargetScan. Differentially expressed mRNAs (DEmRs) were obtained by intersecting predicted mRNAs and sequenced mRNAs. Finally, the network was constructed by Cytoscape 3.71. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses CircRNAs act as miRNA sponges because of containing corresponding miRNA binding sites [9]. Translation of mRNA was inhibited by miRNA binding to the 3 non-coding region of mRNAs [22]. Subsequently, ceRNA networks with significantly differentially expressed circRNAs were constructed by us. Moreover, We used Cytoscape 3.7.1 (http://cytoscape.org/) and the edgeR package to visualize the network. 
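The screening and intersection steps described above (|log2 fold change| > 1 with P ≤ 0.05 for circRNAs and mRNAs, P ≤ 0.1 for miRNAs, followed by intersection with the database predictions) can be sketched as follows. The original analysis was carried out with R packages and Cytoscape, so the pandas-based workflow, column names, and example entries below are illustrative assumptions only.

```python
import pandas as pd

def filter_de(table, p_cutoff, lfc_cutoff=None):
    """Return the set of IDs passing the differential-expression thresholds quoted in the text."""
    keep = table["p_value"] <= p_cutoff
    if lfc_cutoff is not None:
        keep &= table["log2_fold_change"].abs() > lfc_cutoff
    return set(table.loc[keep, "id"])

# Tiny stand-ins for the sequencing output (id, log2 fold change, p-value)
circ_table = pd.DataFrame({"id": ["circRNA2981", "circRNA0001"],
                           "log2_fold_change": [2.3, 0.4],
                           "p_value": [0.01, 0.20]})
mirna_table = pd.DataFrame({"id": ["rno-miR-27b-5p", "rno-miR-000"],
                            "log2_fold_change": [-1.1, 0.2],
                            "p_value": [0.08, 0.50]})

circ_de = filter_de(circ_table, p_cutoff=0.05, lfc_cutoff=1.0)   # |log2FC| > 1, P <= 0.05
mirna_de = filter_de(mirna_table, p_cutoff=0.1)                  # P <= 0.1 for miRNAs

# Intersection with sponge-miRNA predictions (e.g. from CircInteractome; hypothetical set)
predicted_sponge_mirnas = {"rno-miR-27b-5p", "rno-let-7a-2-3p"}
network_mirnas = mirna_de & predicted_sponge_mirnas

print(circ_de, network_mirnas)  # candidate circRNAs and miRNAs for the ceRNA network edges
```

The same intersection logic applies to the miRNA-mRNA step with TargetScan and miRanda predictions, and the resulting pairs form the edges drawn in Cytoscape.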
Then, we performed functional enrichment analysis of RNAs in the circular RNAs network. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses was performed on the online analysis platform of Lianchuan Biotechnology Co., Ltd(https://www. omicstudio.cn/). GO and KEGG analyses were performed using the ClusterProfler package that identified biological processes and pathways. Subsequently we used ClueGO to analyze the path correlation of the gene enrichment pathways in the ceRNA network. We used STRING to analyze mRNAs in the network and establish a protein-protein interaction (PPI) network to look for potential connections. Finally, CircAtlas 2.0 was used to predict circRNA conservation and protein-coding ability 22. Statistical analysis SPSS 17.0 statistical software was used for data analysis. T test was used for measurement data between two groups. A value of P < 0.05 indicated that the difference was statistically significant, P > 0.05 indicates that the differences are not statistically significant. Results In this study, we aimed to identify the differentially expressed circRNAs, miRNAs, and mRNAs by sequencing the CNI-ED rats and normal rats, and then constructed a ceRNA network centered on circRNAs. Through GO and KEGG analyses of related genes in the ceRNA network revealed that circRNAs regulate CNI-ED through energy metabolism process, suggesting that circRNAs can serve as treatment targets for CNI-ED. Validation of DEcircRNA by RT-qPCR To further ascertain circRNAs associated with CNI-ED, the 20 dysregulated circRNA were The two-tailed Student t-test was used to assess statistical significance: *P < 0.05 and **P < 0.01 represent differential expression between the model group and normal group. Prediction of interactions between circRNAs, miRNAs, and mRNAs To determine whether the previously selected differential circRNAs play a similar role in CNI-ED and predict their target miRNAs, the online database CircInteractome was used. We then obtained overlapping miRNAsby intersecting predicted and detected miRNAs (Figure 4a). Ultimately, 23 circRNA-miRNA interactions were established, including ten circRNAs (seven upregulated and three downregulated) and six miRNAs (five upregulated and one downregulated miRNAs). TargetScan and miRWalk 3.0 were used to predict the mRNAs that miRNA might bind to, and the overlapping mRNAs were obtained by intersecting these DEmRNAs. These 227 mRNAs (Figure 4b) may play an important role in CNI-ED. Construction of ceRNA networks of circRNAs In this study, we established a ceRNA networks of circRNAs showing the upregulated (Figure 5a) and downregulated (Figure 5b) circRNAs, to reveal the underlying relationship between between circRNAs, miRNAs, and mRNAs. This network contained ten circRNAs, six miRNAs, and 227 mRNAs, including 23 circRNA-miRNA pairs and 227 miRNA-mRNA pairs. Go and KEGG analysis of mRNAs in the ceRNA network We performed KEGG and GO analysis of 227 DEmRNAs obtained. The top 20, 15, and ten most meaningful GO terms in biological Figure 4. The connections between circRNA, miRNA and mRNA (a) Blue represents miRNAs that are capable of spongy circRNAs predicted by CircInteractome, red represents the differential miRNAs obtained by RNA sequencing (p ≤ 0.1), and the overlaps represent the differentially expressed miRNAs that we ultimately need. 
(b) The red circle represents the predicted mRNA that can target all miRNAs, and the other colored circles represent the intersection of each miRNA's predicted and actual sequencing results. Finally, we obtained 50 target genes of rno-let-7b-5p, 45 target genes of rno-miR-129-2-3p, 37 target genes of rno-let-7a-2-3p, 34 target genes of rno-miR-27b-5p_R-2, 12 target genes of rno-miR-99b-3p_R + 1, and 12 target genes of rno-miR-345-5p_R + 1. Figure 6a. GO analysis results showed that DEmRNAs mainly participated in oxidation-reduction processes, multicellular organism development, signal transduction, lipid metabolis, apoptosis, proteolysis, intracellular signal transduction, protein binding, metal ion binding, ATP binding. processes(BP), cellular component(CC), and molecular function(MF) are shown in According to the KEGG analysis, the main KEGG categories included genetic information processing metabolism, hippo signaling pathway (ID:rno04390), glycerophospholipid metabolism (ID: rno00564), sphingolipid metabolism (ID: rno00600), tight junctions (ID: rno04530), signaling pathways regulating stem cell pluripotency (ID:rno04550), fatty acid biosynthesis (ID: rno00061), and cellular senescence (ID: rno04218), which were significantly (p ≤ 0.01) enriched (Figure 6b). The pathways correlation in the network is shown in Figure 6c. The number of genes contained in each pathway of interest are shown in Figure 6d. PPI network analysis of mRNAs in ceRNA network A total of 134 mRNAs were found to interact and 332 protein pairs were included in the PPI network (Figure 7a). The number of relative pairs per protein are shown in Figure 7b. The results suggested that Ccna2, Cxcl10, Pld1, Mapk11, and Mboat2 might be the core genes of the regulatory network. The ability of DE circRNAs encoding protein and the conservation of DE circRNAs Significant differentially expressed circRNAs were screened using circRNADb [23], revealing 12 circRNAs with internal ribosomal entry sites and open reading frames, suggesting that they might code proteins, and we also found that three circRNAs were conserved in humans and rats ( Table 2). Discussion For the current study, we aimed to explore the roles of circRNAs of CNI-ED. Therefore, the expression level of RNA in CNI-ED was determined by second-generation high-throughput sequencing. Whole transcriptome sequencing showed that 4,587 circRNAs, 762 miRNA, and 21,661 mRNAs were dysregulated in the BCNC group. Twenty circRNAs with significant differences were verified by PT- qPCR. We identified eleven circRNAs that were significantly different between the normal control group and the model group by qPCR. Finally, circRNA945, ciRNA109, ciRNA171, circRNA2981, circRNA2984, circRNA2986, circRNA3009, ciRNA351, circRNA2982, and circRNA2983 were selected to construct the ceRNA network of circRNAs to predict their functions. ED is the most common long-term complication after RP [4]. Currently, PDE-5 inhibitors are mainly used in the treatment of CNI-ED, but clinical studies have shown that sildenafil is not efficient in the treatment of CNI-ED, being effective in only 35% of patients [24]. The possible reason is that a series of complex pathophyseologic changes such as cavernosum fibrosis and cavernosum smooth muscle atrophy occurred after the loss of nerve control in the penis [2,[5][6][7]. Therefore, it is of great practical significance to find a new treatment regime for CNI-ED. 
Erectile function is regulated by the diastolic and contractile functions of corpus cavernous smooth muscle cells (CCSMCs). After the loss of innervation of the cavernous body in patients with CNI-ED, CCSMCs gradually deteriorate, mainly manifested as increased intracellular muscle protein degradation, significantly reduced muscle content, inhibited cell proliferation, and apoptosis [25][26][27], which eventually leads to ED. CircRNAs are a new kind of endogenous noncoding RNAs, produced via head-to-tail backsplicing of one or more exons [11]. There is growing evidence that circRNAs play an important role in biological functions by acting as miRNA sponges [10]. A growing body of research shows that circRNAs are rich in species, stable in structure and conserved in sequence. Also, an increasing number of reports have shown that circRNAs affected RNA translation and expression by competitively binding miRNAs. CircRNAs can also regulate mRNA expression. In recent years, with the development of second-generation highthroughput RNA sequencing technology, many new circRNAs and their new functions have been discovered, such as adsorbing and regulating miRNA activity as miRNA sponges, binding to transcriptional regulatory elements or interacting with proteins to regulate gene transcription and translation [28][29][30]. For example, circRNA-HRCR regulates the progression of heart failure by targeting miRNA-233 [31], and the down-regulation of circRNA-DMNT3B leads to the occurrence of diabetic eye disease by targeting miRNA-20b-5p [14]. In addition, Guo et al. found the protective effect of RNA3503 on osteoarthritis by overexpressing the circRNA3503 in the extracellular vesicles of synovival mesmesymal stem cells [32].Existing research has also shown that circRNAs play a crucial role in muscle atrophy [33]; however, direct evidence about circRNAs in CNI-ED is still absent. We performed GO analysis for mRNAs in the network, and the results showed that they were enriched mainly in oxidation-reduction processes, multicellular organism development, signal transduction, lipid metabolism, apoptosis, and proteolysis. These results suggested that energy metabolism, proteolysis, and apoptosis play important roles in the network of circRNAs regulating CNI-ED. Furthermore, KEGG pathway enrichment analysis showed that the mRNA in the ceRNA network of circRNAs was mainly concentrated in the hippo signaling pathway, glycerophospholipid metabolism, sphingolipid metabolism, tight junctions, signaling pathways regulating stem cell pluripotency, fatty acid biosynthesis, and cellular senescence. After cavernous nerve injury, a series of pathophysiological reactions, such as penile ischemia and hypoxia, cell phenotype transformation, increased apoptosis, and cavernous tissue fibrosis occur gradually due to the weakened stimulation of the muscle [5]. Furthermore, circRNAs may cause abnormalities in the energy metabolism through miRNA sponging and eventually lead to the occurrence of ED. This could be a potential treatment for CNI-ED. Abnormal energy metabolism is closely related to the occurrence of various diseases. Mitochondria are the energy factories of cells, but their role in ED nerve injury is still unclear. We previously found that the PI3K/Akt pathway is affected in CCSMCs induced by hypoxia [34]. Hence, whether circRNAs affect the mitochondrial function in CCSMCs through the regulation of PI3K/Akt pathway and lead to the occurrence of ED still warrants further research. 
Cxcl10 is one of the hub genes in the ceRNA network of circRNAs established here, and KEGG analysis showed that Cxcl10 regulates a variety of energy metabolism processes, such as glycerophospholipid metabolism, Ras signaling pathway, etc. Interestingly, we found that Cxcl10 induces mitochondrial dysfunction of pancreatic cells, then leads to energy metabolism disorders, finally leading to apoptosis [35]. Recent studies have shown that miRNA-27b-5p targets Cxcl10 to regulate its effects [36]. Another study suggested that miRNA-27b regulates mitochondrial production in the myocardium [37], while in the present study, miRNA-27b-5p expression was downregulated and Cxcl10 expression was upregulated; therefore, Cxcl10 may be regulated by miRNA-27b-5p in CNI-ED. The analysis using the Circular RNA Interactome database showed that the upregulated circRNA2981, circRNA2984, circRNA3986, and circ3009 were likely to be sponges of miRNA-27b-5p. Therefore, circRNAs may cause mitochondrial dysfunction of CCSMCs by targeting Cxcl10 through sponging miRNA-27b-5p, and ultimately leading to ED. The mTOR signaling pathway plays a crucial role in muscle atrophy [38]. Existing evidence proves that phospholipase D (PLD), through its product phosphatidic acid combined with mTOR, plays the most useful role and ultimately stimulates the mTOR signaling pathway [39]. PLD1, as the well-known member of the PLD family, has been widely studied. Rami Jaafar found that PLD1 can simultaneously activate mTORC1 and mTORC2 which can reverse muscle atrophy and have a positive effect on muscle nutrition. In our sequencing results, the expression of Pld1 was increased, which may indicate that PLD1 also plays a protective role in CNI-ED. We used TargetScan and miRanda to conduct miRNA-mRNA interaction analysis, looking for miRNAs that might competitively bind to PLD1, and the results showed miRNA-let-7a and miRNA-129-2-3p as candidates. We then used the Circular RNA Interactome database to look for circRNAs that were likely to sponge miRNA-let-7a and miRNA-129-2-3p. Finally we found that ciRNA351, circRNA2981, circRNA2982, circRNA2983, circRNA2986, and circRNA3009 may be involved in the regulation of Pld1. There are still several shortcomings in our study. Although circRNAs are highly conserved among species, the differentially expressed circRNAs discovered via sequencing still need to be confirmed in human tissues. In addition, the relationship between circRNAs/miRNAs/mRNAs predicted by the software has not been further verified via experiments. Besides, the model we used was a CNI-ED rat model; therefore, the expression of circRNAs in patients with ED is still unknown. Conclusions In a nutshell, the circRNAs with significantly different expressions were comprehensively analyzed and a circRNAs centered CERNA network was constructed. Three circRNAs with the ability to encode proteins were discovered. We identified Ccna2, Cxcl10, Pld1, Mapk11 and Mboat2 as potential therapeutic targets for CNI-ED. Our research on circular RNA represents a new insight into ED caused by nerve injury. However, further experiments are still needed to verify the relationships among circRNAs, miRNAs and mRNAs in the circRNA network. Research highlights: 1. The role of circRNAs in cavernous nerve-damaging ED was assessed. 2. A ceRNA network centered on circRNA about CNI-ED is constructed. 3. The occurrence of CNI-ED is related to the regulation of energy metabolism by circRNA. 4. 
We identified five hub genes in the ceRNA network that play important roles in CNI-ED.
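As a sketch of how the ceRNA axes above can be assembled computationally, predicted circRNA-miRNA sponge pairs (for example, from the Circular RNA Interactome) and miRNA-mRNA pairs (for example, from TargetScan or miRanda) can be joined on the shared miRNA. The tables below are illustrative stand-ins for real prediction output, not data from this study:

```python
import pandas as pd

# Hypothetical prediction tables standing in for database/tool output
circ_mirna = pd.DataFrame({
    "circRNA": ["circRNA2981", "circRNA3009", "circRNA2981"],
    "miRNA":   ["miR-27b-5p",  "miR-27b-5p",  "let-7a"],
})
mirna_mrna = pd.DataFrame({
    "miRNA": ["miR-27b-5p", "let-7a", "miR-129-2-3p"],
    "mRNA":  ["Cxcl10",     "Pld1",   "Pld1"],
})

# Join on the shared miRNA to list candidate circRNA-miRNA-mRNA axes,
# i.e. circRNAs that might de-repress an mRNA by sponging its targeting miRNA.
axes = circ_mirna.merge(mirna_mrna, on="miRNA").drop_duplicates()
print(axes)
```

A real analysis would additionally require that the circRNA and mRNA change in the same direction while the sponged miRNA changes in the opposite direction, as was checked for miRNA-27b-5p and Cxcl10 above.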
5,003.6
2021-01-01T00:00:00.000
[ "Medicine", "Biology" ]
Epigenetic Regulation of MicroRNA Clusters and Families during Tumor Development Simple Summary In this review, the history of RNA interference discovery and current knowledge about microRNA biogenesis and post-transcriptional regulation of gene expression is summarized, with a special focus on microRNA clusters and families. Further, the strong interplay between microRNAs and basic epigenetic mechanisms, such as DNA methylation and histone modifications, is introduced and associated with deregulated expression of microRNAs during tumor development. Finally, novel strategies for epigenetic-based therapies are discussed. Abstract MicroRNAs are small non-coding single-stranded RNA molecules regulating gene expression at the post-transcriptional level based on seed sequence similarity. They are frequently clustered; clustered microRNAs are either simultaneously transcribed into a single polycistronic transcript or transcribed independently. Importantly, microRNA families that contain the same seed region, and thus target related signaling proteins, may be localized in one or more clusters, which are in a close relationship. MicroRNAs are involved in basic physiological processes, and their deregulation is associated with the origin of various pathologies, including solid tumors or hematologic malignancies. Recently, the interplay between the expression of microRNA clusters and families and epigenetic machinery was described, indicating aberrant DNA methylation or histone modifications as major mechanisms responsible for microRNA deregulation during cancerogenesis. In this review, the most studied microRNA clusters and families affected by hyper- or hypomethylation as well as by histone modifications are presented with the focus on particular mechanisms. Finally, the diagnostic and prognostic potential of microRNA clusters and families is discussed together with technologies currently used for epigenetic-based cancer therapies. History of microRNA Discovery Twenty years ago, the classical dogma of molecular biology that DNA is transcribed into RNA, which is subsequently translated into proteins, was well accepted. However, with the development of whole-genome and transcriptome sequencing technologies, it was determined that more than 90% of the genome is actively transcribed. Surprisingly, less than 2% of the genome encodes proteins, while the rest of the transcriptome shows extensive non-coding RNA (ncRNA) expression [1]. Previously, these molecules were considered to be "junk", but it was found that most of them are involved in the majority of cellular processes and functions [2]. Today, microRNAs (miRNAs) belong to the well-documented small ncRNAs that are currently the subject of intensive translational research. The first described miRNA, lin-4, was discovered in Caenorhabditis elegans by Ambros et al. [3] and Ruvkun et al. [4] in 1993 as an essential regulator of developmental timing. In the December issue of Cell, both Ambros and Ruvkun reported that the small, non-protein-coding transcript lin-4 regulates lin-14 mRNA through its 3′ UTR region. Downregulation of LIN-14 protein was dependent on the transcription of lin-4, which is not translated into a protein [3,4]. However, it was only in 2000 that the second miRNA was identified by Ruvkun et al. [5]; it was let-7, a heterochronic gene of C. elegans that controls the transition of larval development through binding to two closely spaced sites in the lin-41 3′ UTR.
Importantly, the let-7 sequence is highly conserved across species from flies to humans [6]; the human let-7 family comprises 12 miRNAs that share complete sequence identity at the seed region, thus regulating the same targets [7]. These facts were significant for the study of miRNAs in other organisms. Biogenesis of microRNAs Genes for miRNAs are distributed on all chromosomes. Interestingly, the Y chromosome carries the fewest (or no) miRNA precursors, while there are several chromosomes with extremely high numbers of miRNA genes in all species, probably due to clustering of miRNAs. In humans, chromosomes 1, 19, X, and 2 have the highest number of miRNA genes and constitute almost 30% of all miRNAs [19]. Previously, it was shown that miRNA genes are frequently located at fragile sites of chromosomes, close to human papillomavirus integration sites, inside or near homeobox clusters, and in cancer-associated genomic regions commonly affected by deletions, amplifications, or breakpoints [20]. Generally, miRNA genes may be divided into four subgroups: intergenic, intronic, exonic, and others (3′ UTR, 5′ UTR, and combinations of any two from intron, exon, 3′ UTR, and 5′ UTR) depending on their location in the genome [21]. It seems that while the intronic, exonic, and other miRNAs are the byproduct of transcription and post-transcriptional processing of corresponding host protein-coding genes, intergenic miRNAs have their own promoters and may be transcribed independently of protein-coding genes by RNA polymerase II [22]. They are transcribed as long primary transcripts (pri-miRNAs) that are capped and polyadenylated. The typical pri-miRNA consists of an imperfectly paired stem of about 33 base pairs (bp), with a terminal loop [23]. The pri-miRNA is subsequently recognized and cleaved by the so-called Microprocessor complex, which consists of the double-stranded RNase III enzyme Drosha and its essential cofactor, the double-stranded RNA-binding protein Di-George syndrome critical region 8 (DGCR8) [24]. This cleavage results in precursor miRNA (pre-miRNA) with a typical stem-loop structure, which is about 70 nt long. The pre-miRNA is exported to the cytoplasm by the exportin 5 (XPO5)/RanGTP complex and further processed by Dicer (RNase III endonuclease). Dicer, together with its partners TRBP (HIV-1 trans-activating response RNA-binding protein) and PACT (protein kinase RNA activator), binds to the end of pre-miRNA and cuts both strands of the duplex, resulting in an 18-25 nt long miRNA duplex with 2-nucleotide 3′ overhangs [25]. One of the strands (guide strand) is bound by the Argonaute proteins 1-4 (AGO1-4) and retained in the miRISC (miRNA-induced silencing complex) to guide the complex to complementary target mRNAs for post-transcriptional gene silencing. The second strand (passenger strand) is cleaved by AGO2 (Figure 1) [26]. (Figure 1 summarizes miRNA biogenesis: panel (A) shows the canonical Drosha- and Dicer-dependent pathway described above, and panels (B-E) show the non-canonical pathways described in the following paragraphs; the figure was created with BioRender.com.)
While the biogenesis of most miRNAs is dependent on Drosha and Dicer, several non-canonical pathways have been described thanks to the analysis of next-generation sequencing (NGS) data. Non-canonical miRNAs have diverse origins, including introns, endogenous short hairpin RNAs (endo-shRNAs), endogenous short interfering RNAs (endo-siRNAs), tRNA precursors, and small nucleolar RNAs (snoRNAs) (Figure 1) [27]. The Drosha-independent pathway is a non-canonical pathway where the Drosha-mediated processing step is bypassed. This pathway produces so-called mirtrons (short hairpin introns) that are generated through mRNA splicing; the entire process was first described in 2007 by Okamura et al. [28] in Caenorhabditis elegans, Drosophila melanogaster, and mammals. Other types of miRNAs produced by the Drosha-independent pathway are miRNAs derived from snoRNAs. The snoRNAs are a conserved group of small non-coding RNAs; they associate with specific small nucleolar ribonucleoproteins (snoRNPs), the complexes that guide the enzymatic modification of selected ribosomal RNA (rRNA) nucleotides. Some snoRNAs are both a component of snoRNPs and a source of miRNAs [29]. Another type of non-canonical pathway is the Dicer-independent pathway. Currently, only one miRNA is known to undergo this pathway: the vertebrate-specific miR-451, which regulates erythroid development [30]. Biogenesis of this miRNA does not require Dicer and instead involves the catalytic activity of AGO2. Drosha cleavage produces pre-miR-451, which is loaded directly into AGO2 and cleaved by it; poly(A)-specific ribonuclease (PARN) then trims the 3′ end of the AGO2-cleaved pre-miR-451 to produce the mature miR-451 [31]. Finally, XPO5-independent transport of pre-miRNAs from the nucleus to the cytoplasm has been described in the case of the miR-320 family, as these miRNAs use the complex of XPO1 and PHAX (phosphorylated adaptor for RNA export) proteins for their transport [32]. These results indicate high flexibility of the cells, enabling them to produce mature miRNAs from a broad spectrum of primary transcripts using diverse mechanisms independent of the key proteins of the canonical biogenesis pathway (Figure 1). Regulation of Gene Expression by microRNAs The regulation of gene expression at the post-transcriptional level was first described in 1998 by Andrew Z. Fire, Craig C. Mello, and their colleagues in Caenorhabditis elegans, when they proved the potential of short double-stranded RNAs (dsRNAs) to suppress mRNA translation based on sequence complementarity [33]. This process was termed RNA interference (RNAi); in 2006, both scientists were awarded the Nobel Prize in Physiology or Medicine for their discovery [34]. Today, several different short ncRNAs are known to be involved in RNAi processes, including miRNAs. These molecules are able to modulate the expression of 30-60% of protein-coding genes based on their complementarity to mRNA [35].
Importantly, a seed region represented by a conserved heptameric sequence mostly situated at positions 2-7 from the miRNA 5′ end is essential for this binding and commonly used for target predictions [36]. In addition, an average miRNA has the potential to regulate approximately 100 target sites, while the expression of a particular gene may be affected by the binding of several different miRNAs [37]. Recent studies proved the potential of these molecules to bind not only the 3′ UTR regions but also the 5′ UTR regions [38] or coding sequences of target mRNAs [39]. Based on the degree of complementarity, the transcript undergoes degradation or its translation is silenced [40]. In 2010, Guo et al. [41] used ribosome profiling to measure the overall effects of miRNAs on protein production and mRNA levels. Their results showed that changes in mRNA levels closely reflected the impact of miRNAs on gene expression and indicated that destabilization of target mRNAs is the major mechanism responsible for reduced protein levels. Four years later, these results were confirmed by an independent study concluding that although translational repression is rapid, its effect is weak, while the effect of mRNA destabilization dominates [42]. Recently, Djuranovic et al. [43] proved that miRNA-mediated gene silencing happens during the initiation or early elongation phase of protein synthesis, and this inhibition is followed by mRNA deadenylation and decay. These findings could simplify the future of in vitro and in vivo studies analyzing the effects of miRNAs on mRNA and protein level changes. In addition to mRNA repression, miRNAs have also been reported to activate gene expression, especially during cell starvation or other stress conditions. Vasudevan et al. [44] identified two proteins, AGO2 and FXR1 (Fragile X mental retardation gene 1), that activate mRNA translation during cell starvation in vitro. They bind to AU-rich elements in the 3′ UTR region of TNF-α (tumor necrosis factor α). Importantly, this binding is facilitated by miR-369-3. Similarly, let-7a and synthetic miRcxcr4 are able to activate the translation of specific mRNA in senescent cells, while the mRNA-miR-206 complex functions as a positive regulator of KLF4 (Krüppel-like factor 4) translation through its binding to the translational control element (TCE) in RK3E epithelial cells [45]. In 2008, miR-373 was found to target the promoter of CDH1 (E-cadherin) and CSDC2 (cold-shock domain-containing protein C2), thus inducing their expression [46]. Two years later, decoy activity of miRNAs was described, when miR-328 was found to bind the hnRNP (heterogeneous ribonucleoprotein) E2 independently of the seed region and thus prevent its interaction with CEBPA (CCAAT enhancer-binding protein alpha) mRNA [47]. These findings reveal new mechanisms by which miRNAs may regulate gene expression. Nevertheless, post-transcriptional upregulation by miRNAs seems to be selective, specific to the RNA sequence context, cell type, and condition and associated with miRNP (miRNA ribonucleoprotein) factors or other RNA-binding proteins [48]. MicroRNA Clusters and Families While protein-coding polycistronic transcripts in humans are rare, miRNAs are frequently clustered in genomes and transcribed as a single unit. Hence, multiple miRNAs may be produced from the same primary transcript, and their expression is regulated by the same factors, including epigenetic processes. Two or more miRNAs located in close physical proximity (less than 10 kb apart) are then called a miRNA gene cluster [49].
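As a purely illustrative sketch of the seed-matching rule described above, a perfect 6-mer match to miRNA positions 2-7 can be located by scanning a 3′ UTR for the reverse complement of the seed. The miRNA below is the mature let-7a-5p sequence, while the UTR is a made-up string used only to demonstrate the search:

```python
def seed_sites(mirna, utr, seed_start=1, seed_end=7):
    """Return 0-based positions in a 3' UTR that match the miRNA seed.

    The seed is taken from miRNA positions 2-7 (0-based slice [1:7]), and the
    UTR is searched for its reverse complement (a perfect 6-mer seed match).
    """
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[seed_start:seed_end]
    site = "".join(comp[b] for b in reversed(seed))   # target site on the mRNA
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

# let-7a-5p mature sequence and an illustrative (invented) 3' UTR fragment
mirna = "UGAGGUAGUAGGUUGUAUAGUU"
utr = "AAACUACCUCAGGGAAGCUACCUCA"
print(seed_sites(mirna, utr))   # two seed-match sites in this toy UTR
```

Real prediction tools such as TargetScan additionally weight site context, conservation, and supplementary pairing, so a bare seed scan like this only enumerates candidate sites.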
Usually, two or three miRNAs are involved in a cluster, but larger clusters have also been described. According to the latest miRBase version (release 22.1, October 2018) [8], approximately 25% of human miRNAs (481 sequences) are located in 159 different clusters (GRCh38). The study of Guo et al. [49] from 2014 (135 clusters, 422 sequences) revealed that most miRNAs are prone to cluster on chromosomes 8, 17, and X, while the largest clusters may be found on chromosomes 19 (hsa-mir-512-1~1283-1 cluster, 46 members) and 14 (hsa-mir-379~656 cluster, 42 members). However, most of the clusters had 2-8 members, and 68% of them were composed of two miRNA genes (73% according to the newest data [8]). Interestingly, 19% of clusters were composed of multicopy miRNA genes. The multicopy pre-miRNAs could yield the same mature miRNAs, but they might be located in different genomic regions [49]. Importantly, clusters frequently contain representatives from different miRNA families; it has been proposed that mRNA transcripts coding for proteins that interact with each other are typically targeted by miRNAs from the same cluster [50]. As a large proportion of clustered miRNAs reside in close proximity to fragile sites or cancer-associated genomic hotspots, their changed expression is commonly observed in various tumor types [51]. A miRNA gene family is defined as two or more miRNAs with high sequence similarity. It is supposed that miRNAs of the same family are derived from a common ancestor in the phylogenetic tree [52]; at least a third of these families are highly conserved across species [53]. A miRNA family may be located in one or more clusters, and members of different families tend to be located in related clusters with close functional relationships [49]. In addition, miRNAs with a high degree of sequence homology differing in one or two nucleotides are annotated by adding a lowercase letter [54]. Currently, there are two major sources for miRNA family classification: miRBase [8] and the Rfam database [11]. While miRBase classifies miRNAs into families based on sequence similarities in seed regions using BLAST [55] with manual adjustment, Rfam families are represented by multiple sequence alignment of known RNA sequences and a covariance model that can be used to search for additional family members [11]. Importantly, miRBase families have higher coverage but lower quality than Rfam miRNA families. In the current release, Rfam (14.4, December 2020) contains 526 human miRNA families [11], while miRBase (v.22.1, October 2018) annotated 2064 miRNA families [8]. Recently, several other classification approaches have emerged, including miRClassify [52], miRFam [56], or miROrtho [57]. Since mature miRNAs within one family share the same or very similar seed sequences, they may regulate nearly the same targets. It was also shown that miRNA families are involved in tumor development through the targeting of genes in cancer-related signaling pathways. Interestingly, Wuchty et al. [58] observed that miRNA families miR-27-3p, miR-181-5p, miR-124-3p.2/506-3p, and miR-200bc-3p/429 share similar pathway interaction patterns, whereas their seed sequences differ extensively. Importantly, as many predicted and literature-curated families are quite large, it is supposed that specific miRNA subsets sharing seed sequence similarity, rather than whole miRNA families, contribute to the regulation of important signaling pathways associated with cancer development.
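The <10 kb clustering criterion discussed above can be applied directly to genomic coordinates. The following sketch groups loci on the same chromosome and strand whose starts lie within 10 kb of the previous member; the coordinates are rough placeholders used only to illustrate the grouping, not curated miRBase positions:

```python
def cluster_mirnas(loci, max_gap=10_000):
    """Group miRNA loci into clusters: same chromosome/strand, start-to-start gap < max_gap.

    `loci` is a list of (name, chromosome, strand, start) tuples.
    """
    clusters = []
    # sort by chromosome, strand, then genomic start position
    for name, chrom, strand, start in sorted(loci, key=lambda x: (x[1], x[2], x[3])):
        last = clusters[-1] if clusters else None
        if last and last["chrom"] == chrom and last["strand"] == strand \
                and start - last["last_start"] < max_gap:
            last["members"].append(name)
            last["last_start"] = start
        else:
            clusters.append({"chrom": chrom, "strand": strand,
                             "members": [name], "last_start": start})
    return [c["members"] for c in clusters]

# Placeholder coordinates for illustration only
loci = [("mir-99b",  "chr19", "+", 51_692_600),
        ("let-7e",   "chr19", "+", 51_692_800),
        ("mir-125a", "chr19", "+", 51_693_250),
        ("mir-98",   "chrX",  "-", 53_556_200)]
print(cluster_mirnas(loci))   # -> [['mir-99b', 'let-7e', 'mir-125a'], ['mir-98']]
```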
The Interconnection between microRNAs and Epigenetics Epigenetics is the study of processes that alter gene activity without changing the DNA sequence; they may be heritable. Today, several different epigenetic processes are known, including DNA methylation, histone modifications, and regulation by ncRNAs, especially miRNAs. DNA methylation is one of the most described epigenetic processes in tumor development; it usually occurs at cytosine bases in gene promoter regions, where cytosines are converted to 5-methylcytosine by different members of the DNA methyltransferase (DNMT) family. In mammals, methylation was observed to be globally distributed throughout the whole genome except for CpG islands, where high CpG contents are found. Importantly, inappropriate methylation of these sequences may lead to the silenced expression of key tumor suppressor genes in cancer cells [59]. Once established, DNA methylation patterns are maintained through cell divisions. However, recent studies suggest that ten-eleven translocation methylcytosine dioxygenases (TET1-TET3) may remove these methylation marks [60]. Concerning histones, post-translational modifications of amino-terminal tails including acetylation, methylation, phosphorylation, ADP ribosylation, or ubiquitination are commonly observed. Histone acetylation is catalyzed by histone acetyltransferases (HATs) that act as transcriptional co-activators. On the contrary, histone deacetylases (HDACs), which remove acetyl groups from acetyl-lysine, function as transcriptional co-repressors [61]. Besides that, polycomb repressive complexes 1 and 2 (PRC1 and PRC2) have been identified in mammals to regulate gene expression. While PRC2 trimethylates histone H3 on lysine 27 (H3K27me3) using its Enhancer of Zeste homolog 2 (EZH2) catalytic subunit, PRC1 ubiquitylates histone H2A and compacts polynucleosomes via RING1A/B and a Polycomb group ring finger protein, such as BMI-1 [62]. Interestingly, long non-coding RNAs (lncRNAs), such as XIST [63], HOTAIR [64], or ANRIL [65], have been found to be involved in PRC recruitment and transcriptional repression. Finally, the SWI/SNF (SWItch/Sucrose Non-Fermentable) complexes that use the energy of ATP hydrolysis to remodel nucleosomes play an essential role in chromatin remodeling. Their catalytic activity is associated with BRG1 (SMARCA4) or BRM (SMARCA2) proteins; they have been shown to interact with other chromatin regulators, including HDACs [66] and PRCs [67]. Importantly, mutations in members of SWI/SNF complexes were detected in nearly 25% of human cancers [68]. Genes for miRNAs may be epigenetically regulated by DNA methylation and/or histone modifications, which results in their overexpression or downregulation in numerous pathologies, including cancer. According to recent studies, almost half of miRNA genes are associated with CpG islands and almost 12% of them were prone to epigenetic inactivation by methylation in 23 different types of tumors. In addition, the long arms of chromosomes 1, 7, 11, 14, and 19 were found to be enriched in such genes [69]. Further, methylation was found to be involved in the regulation of expression of miRNA biogenesis genes [70], while multiple miRNAs can also be deregulated as a result of aberrant expression of specific epigenetic regulators, such as HDACs or PRCs. Scott et al. [71] proved that inhibition of HDACs results in transcription changes in 40% of miRNAs in SKBr3 cells.
In turn, several miRNAs called epi-miRNAs were recognized to target enzymes involved in epigenetic modulation. It was observed that epigenetic regulators are strongly enriched among predicted miRNA targets, indicating the importance of these small molecules in pluripotency, development, and cell reprogramming [72]. Recent studies also suggest that miRNAs may have direct epigenetic functions by recruiting specific protein complexes to genomic DNA [73]. These observations indicate the existence of a regulatory circuit between miRNAs and epigenetic modulation. Its disruption contributes to various diseases, including cancer. Epigenetic Regulation of microRNA Clusters and Families during Tumor Development Global hypomethylation is a well-established epigenetic modification observed in various tumors. Researchers have reported hypomethylation of several important miRNA clusters, independently of non-specific global hypomethylation, associated with the reactivation of corresponding miRNAs [74][75][76]. On the contrary, site-specific DNA hypermethylation of promoter-associated CpG islands that leads to silencing of tumor-suppressive miRNA clusters is a hallmark of many human cancers [77]. Sometimes, an entire miRNA family is located in the same cluster; a changed methylation pattern may then deregulate all family members at once and, consequently, all target genes carrying binding sites for the shared seed region. Thus, it is highly probable that aberrant DNA methylation is a major mechanism responsible for miRNA deregulation during tumor development. In addition, histone modifications have been previously described to be involved in epigenetic regulation of miRNA expression (Figure 2) [78][79][80]. In the following paragraphs, the most studied miRNA clusters and families affected by hyper- or hypomethylation as well as by histone modifications are presented with the focus on particular mechanisms. One of the largest miRNA families, let-7, including let-7a-g-5p, let-7i-5p, miR-98-5p, miR-4458, and miR-4500, is localized in six different clusters on chromosomes 9, 11, 19, 21, 22, and X in the human genome together with miR-99a, miR-99b, miR-100, miR-125a, miR-4763, and miR-10526. Interestingly, miR-99a, miR-99b, and miR-100 constitute another family as they contain the same seed region [8]. The first papers describing the role of DNA methylation in let-7 family expression were published in 2007. Lu et al. [81] found the association between the downregulated levels of tumor-suppressive let-7a-3 and its elevated methylation. This methylation further affected the expression of insulin-like growth factor-II (IGF-II) and the survival of ovarian cancer (OC) patients. Similarly, hypermethylation of the promoter region of the let-7a-3/let-7b cluster was confirmed in acute lymphoblastic leukemia (ALL) [82,83]. Treatment with 5-aza-2'-deoxycytidine (5-Aza-CdR) significantly increased let-7b levels, inhibited the proliferation of MOLT-4 cells, and arrested the cells in the G1 phase [83]. In contrast, hypomethylation of the same promoter was observed in lung adenocarcinomas [84] and myelodysplastic syndrome (MDS). Reactivation of let-7a-3 expression was connected to oncogenic changes in lung transcription profiles [84] and shorter overall survival (OS) of MDS patients [85]. Concerning the let-7 family, let-7e is expressed from chromosome 19 in a cluster together with miR-99b and miR-125a.
Hypermethylation of let-7e was detected in breast cancer (BC); decreased levels of this miRNA were associated with elevated cell proliferation, lower apoptosis, and poor prognosis [86]. Similarly, hypermethylation-associated silencing of miR-125a was observed in multiple myeloma (MM) [87], colorectal cancer (CRC) [88], or gastric cancer (GC) [89]. Interestingly, the histone methyltransferase SUV39H1 was identified as a target gene of this miRNA in GC; it was further confirmed that epigenetically silenced miR-125a-5p can be self-activated through targeting of this methyltransferase [89]. Finally, decreased levels of miR-98 in glioma tissues and cell lines were associated with increased DNA methylation and subsequently with a more aggressive tumor phenotype, increased invasion, and shorter survival of patients [90]. miR-34-5p/449-5p Family, miR-34b-5p/449c-5p Family Although the seed sequences of miR-34s and miR-449s differ by only one nucleotide, these miRNAs are categorized into two different families: one consisting of miR-34a, miR-34c, miR-449a, and miR-449b, while the second one includes miR-34b, miR-449c, and miR-2682. Interestingly, miR-449a-c are localized together in one cluster on chromosome 5, while miR-34b and miR-34c are clustered on chromosome 11 [8]. MiR-34s are considered to be important tumor-suppressive miRNAs; they are among the most studied miRNAs with respect to epigenetic regulation in various tumors [96]. In 2008, inactivation of miR-34a by CpG methylation was described in prostate, breast, lung, colon, kidney, and pancreatic carcinoma cell lines; re-expression of this miRNA induced senescence and cell cycle arrest by targeting cyclin-dependent kinase 6 (CDK6) [97]. Two years later, Chim et al. [98] studied the role of methylation of this miRNA in a large panel of hematologic malignancies, including acute myeloid leukemia (AML), ALL, chronic lymphocytic leukemia (CLL), chronic myeloid leukemia (CML), MM, and non-Hodgkin's lymphoma (NHL) using methylation-specific polymerase chain reaction (PCR). They found high methylation of the miR-34a promoter in lymphoma (75%) and myeloma (37%) cell lines compared to normal controls and confirmed that hypomethylating treatment leads to re-expression of the pri-miR-34a transcript in lymphoma cells with homozygous methylation. In primary samples at diagnosis, miR-34a methylation was detected in 4% of CLL, 5.5% of MM, and 18.8% of NHL samples, but no methylation was found in ALL, AML, and CML samples. Further, frequent concomitant inactivation of miR-34a and miR-34b/c by CpG methylation was observed in CRC, pancreatic cancer (PAC), BC, OC, urothelial carcinoma (UCA), and renal cell carcinoma (RCC). Interestingly, a statistically significant correlation of miR-34a methylation and the absence of p53 mutation in CRC was confirmed, suggesting that miR-34a inactivation may substitute for loss of p53 function in cancer [99]. To this day, the downregulation of miR-34s expression due to elevated promoter methylation has also been observed in other tumors, including non-small cell lung carcinoma (NSCLC) [100], esophageal squamous cell carcinoma (ESSC) [101], malignant pleural mesothelioma (MPM) [102], HCC [103], laryngeal squamous cell carcinoma (LSCC) [104], bladder cancer (BLC) [105], or GC [106]. Importantly, 3,6-dihydroxyflavone was found to regulate the imbalance between DNA methylation and demethylation in BC by inhibiting DNMT1 activity and increasing TET1 expression.
This effect led to demethylation of the miR-34a promoter and increased levels of this tumor-suppressive miRNA [107]. Expression of miR-449a was found to be significantly silenced by DNA methylation in medulloblastoma (MB) [108] and NSCLC [109]. In addition, a negative correlation was observed between miR-449a levels and nicotinamide N-methyltransferase (NNMT), and knock-down of NNMT led to re-expression of miR-449a, inhibition of tumor growth, and activation of phosphatase and tensin homolog (PTEN) [109]. Concerning histone modifications, Lin et al. [110] proved that depletion of HDAC1 inhibits the metastatic abilities of GC cells by regulating the miR-34a/CD44 pathway. Similar results were also observed in cisplatin-resistant OC cell lines, where HDAC1 knockdown suppressed cell proliferation, increased apoptosis, and chemosensitivity by downregulating c-MYC and upregulating miR-34a [111]. Interestingly, expression of this miRNA may also be negatively regulated by lncRNA Linc-ROR, which inhibits histone H3 acetylation in the miR-34a promoter [112]. Expression of miR-449a/b in tumors is epigenetically repressed through histone H3 Lys27 trimethylation; drug treatment targeting histone methylation resulted in strong induction of these miRNAs [113]. You et al. [114] indicated that this methylation is mediated through the zinc finger protein SUZ12, which is a part of the PRC2 complex. In HCC cells, upregulation of HDAC1-3 was detected and associated with the reduced levels of miR-449 [115]. Further, histone H3 lysine 27 acetylation (H3K27ac) was found to be altered during acquisition of resistance to anaplastic lymphoma kinase inhibitors in patients with ALK (anaplastic lymphoma receptor tyrosine kinase) fusion-positive LC; its decreased levels correlated with downregulation of tumor-suppressive miR-34a and miR-449a and increased proliferation [116] (Table 2). Roy et al. [127] investigated tobacco-specific methylation patterns in miRNA loci associated with the prognosis of oral cancer patients. They found a significant correlation between hypermethylation of miR-200a/b and worse 5-year survival. Recently, flap endonuclease 1 (FEN1) was proven to interact with DNMT3A through proliferating cell nuclear antigen (PCNA) to suppress miR-200a-5p expression mediated by methylation, which resulted in increased levels of hepatocyte growth factor receptor (MET) and epidermal growth factor receptor (EGFR) and thus elevated proliferation of BC cells [128]. Similarly, Kindlin 2 [129] and MYC protein [130] may form a complex with DNMT3A in the cell nucleus to induce CpG methylation of the mir-200b~429 cluster promoter in BC. On the contrary, DNA demethylase TET family members may activate transcription of epigenetically silenced miR-200s in BC and thus inhibit stemness, EMT, and metastasis formation [131]. Similarly, TET-dependent DNA demethylation was found to be essential for miR-200s, miR-141, and miR-429 reactivation and subsequent mesenchymal-to-epithelial transition in somatic cell reprogramming [132]. Interestingly, Choi et al. [133] suggested the direct involvement of H. pylori infection in epigenetic silencing of miR-200a/b through CpG methylation in gastric carcinogenesis. Finally, hypomethylation of miR-200a/b was found in localized/locally advanced prostate cancer (PC) patients, while hypermethylation was detected in patients with metastatic disease [117]. In addition, an unmethylated promoter of the mir-200c~141 cluster was found in LNCaP, 22RV1, and DU145 PC cell lines, while its hypermethylation was observed in PC3 cells [134].
These data indicate that epigenetic regulation of miRNA expression through CpG promoter methylation is context-dependent. The miR-200 family also controls the transition between cancer stem-cell-like and non-stem-cell-like phenotypes of BC cells. While the mir-200c~141 cluster was silenced primarily by DNA methylation, the mir-200b~429 cluster was repressed through polycomb group-mediated histone modifications [135]. Today, EZH2 is a well-known catalytic subunit of the PRC2 complex responsible for H3K27 trimethylation. This enzyme is frequently upregulated in various human cancers, such as HCC [95], GC, or glioblastoma multiforme (GBM) [136], and is associated with epigenetic silencing of miRNA expression, including the miR-200 families. Importantly, EZH2 appeared to be essential for DNMT1 recruitment to the promoter region of the mir-200b~429 cluster [136]. The mir-17~92a-1 cluster was found to be aberrantly overexpressed in mixed-lineage leukemia-rearranged acute leukemias due to the elevated acetylation of histone H3 and H3K4 trimethylation. It plays an important role in the development of the disease through inhibiting cell differentiation and apoptosis while promoting cell proliferation [155]. Zhang et al. [156] demonstrated that acetylation of AGO2 specifically increases the binding of miR-19b into the miRISC complex, thus enhancing its maturation. In addition, high levels of both miR-19b and acetylated AGO2 were associated with the poor prognosis of LC patients. The HDAC inhibitor Vorinostat was shown to reduce levels of BRCA1-associated RING domain 1 (BARD1) by increasing the expression of miR-19a/b in AML [157] (Table 4). Recently, HDAC3 inhibition by the class I-specific HDAC inhibitor entinostat was described to decrease the activity of the chromatin remodeling enzyme SMARCA4, which in turn de-repressed miR-27a. Elevated levels of this miRNA led to PAX3:FOXO1 mRNA destabilization and sensitization to chemotherapy in rhabdomyosarcoma cells [183]. Similarly, inhibition of HDAC6 in diffuse large B-cell lymphoma resulted in increased miR-27b levels, inhibition of the MET signaling pathway, and decreased tumor growth [184] (Table 6). miR-130-3p/301-3p/454-3p Family Hypermethylation of miR-130-3p was described in PC tissues as well as in drug-resistant cell lines; its expression correlated with the level of methylation. Importantly, colony-stimulating factor 1 (CSF-1) was confirmed as a direct target of miR-130-3p responsible for decreased sensitivity to anticancer drugs. Thus, demethylation with 5-Aza-CdR led to reactivation of miRNA expression, downregulation of CSF-1, and decreased multidrug resistance [185]. Interestingly, miR-130b belongs to the same cluster as miR-301b, which is located on chromosome 22; these two miRNAs share target genes. Downregulation of this cluster mediated by elevated promoter methylation was detected in PC and associated with cell proliferation [186]. On the contrary, Fort et al. [187] observed upregulation of these miRNAs in neoplastic and metastatic prostate tissue and described an oncogenic role of this cluster. However, the elevated levels were not due to a decrease in DNA methylation, as the promoter of these genes was found to have low methylation in both normal and neoplastic tissue. Concerning miR-454-3p, Bao et al. [188] analyzed the expression of this miRNA in chondrosarcoma tissues. They found downregulated levels of miR-454-3p and upregulated levels of lncRNA HOTAIR.
Subsequently, they observed that HOTAIR is able to induce methylation of miR-454-3p by recruiting EZH2 and DNMT1 to promoter regions, which resulted in lower apoptosis and increased proliferation of cells. Thus, HOTAIR could serve as a promising therapeutic target for chondrosarcoma. Deregulated expression of miR-130a/b was also observed in endometrial cancer. Detailed analyses proved that 5-Aza-CdR and HDAC inhibitors may increase the levels of these miRNAs and inhibit the malignant behavior of cells [189] (Table 7). miR-29-3p Family The miR-29-3p family consists of three miRNAs: miR-29a-3p, miR-29b-3p, and miR-29c-3p, which are clustered together on two different chromosomes, chromosome 1 (mir-29b-2~29c cluster) and chromosome 7 (mir-29b-1~29a cluster) [8]. These miRNAs are usually downregulated in various cancers, thus serving as tumor suppressors and important epi-miRNAs. However, little is known about the epigenetic regulation of their expression. Mazzoccoli et al. [190] identified methylation of CpG sequences in promoter regions of both clusters; this methylation was decreased after decitabine treatment or DNMT3B inhibition by siRNA in Burkitt lymphoma cells. Interestingly, it was suggested that the expression of the mir-29b-1~29a cluster may be suppressed by DNMT3B in a DNA-methylation-dependent manner, and in turn, miR-29a/b may suppress DNMT3B by binding to its 3'-UTR region. Deregulation of miRNAs as well as DNMT3A led to the epigenetic silencing of CDH1 and contributed to metastasis formation in GC [191]. In addition, LASP1 was confirmed as a direct target of epigenetically silenced miR-29b responsible for the invasive potential of GC cells [192]. These results were subsequently also confirmed in OC [193], and similar data were achieved for DNMT1 and miR-29b in PAC [194]. Interestingly, the expression of this miRNA may be downregulated through methylation by lncRNA DCST-AS1 in GBM [195]. Clinical Utility of microRNA Clusters and Families and Epigenetic-Based Therapeutics A growing number of studies have indicated deregulated expression of miRNAs in cancers and their clinical potential to serve as promising diagnostic, prognostic, and predictive biomarkers as well as novel therapeutic targets. Aberrant DNA hypermethylation of miRNA genes is commonly observed during tumor development and results in downregulation of tumor-suppressive miRNAs. Kunej et al. [69] revealed that some miRNAs are regulated by DNA methylation in only one cancer type, indicating that such methylation events are cancer type-specific and may be used for better classification of carcinomas with an unknown primary tissue of origin. On the other hand, miR-34b was reported to be silenced by DNA methylation in 24 cancer types [96]. Thus, this miRNA could serve as a general cancer biomarker. Importantly, it was shown that a digital methylation-specific PCR assay [200] as well as droplet digital PCR [201] may be used to quantify miR-34b/c methylation in serum-circulating DNA of MPM patients, indicating that these approaches could be used for early detection of the disease. Similarly, the feasibility of using miR-34b/c methylation detection in bowel lavage fluid for CRC screening was analyzed. Unfortunately, sensitivity and specificity were not high enough for clinical use [202]. Recently, Ohtsubo et al. [203] analyzed the methylation status of 16 tumor-suppressive miRNAs in bile from patients with pancreaticobiliary diseases.
They found significantly higher methylation rates of miR-200a/b in patients with PAC compared to benign diseases, indicating the clinical utility of this approach for distinguishing between malignant and benign disease. Further, the methylation status of two functional promoters of miR-200b was associated with hormone receptor status in BC patients. While the P1 was hypermethylated in metastatic lymph nodes compared to matched primary tumors, P2 hypermethylation was found in patients with loss of estrogen or progesterone receptor, and its hypomethylation was associated with gain of HER2 and androgen receptor expression [121]. These data indicate a potential use of DNA methylation of miRNA promoters for better classification of tumor diseases. Since the epigenetic status of a tumor may influence its behavior, epigenetic-based therapeutics are tested in several clinical trials with the aim of reprogramming cancer cells back to a normal state and overcoming chemoresistance, which is one of the major challenges in cancer therapy. Currently, there are six epigenetic drugs approved for clinical use by the FDA, especially in hematologic malignancies [204]. In addition, small molecules inhibiting key enzymes of the epigenetic machinery are widely studied, including DNMT and HDAC inhibitors. Patients with chronic myelomonocytic leukemia (CMML) are frequently treated with the hypomethylating agents (HMAs) azacitidine or decitabine. Using these drugs, Berg et al. [205] observed significant upregulation of miR-125a associated with anti-leukemic effects. Importantly, the data were validated in a clinical context, as miR-125a levels increased in CMML patients treated with HMAs, especially in cases showing clinical response to these drugs. Similarly, Chen et al. [168] found that the HDAC inhibitors trichostatin A and sodium butyrate significantly upregulated the expression of miR-15a and miR-16 through increased histone acetylation in the DLEU2/miR-15a/16-1 promoter region in LC cells. Subsequently, the expression of BCL-2 decreased, and cell proliferation was reduced. These results indicate that patients with low levels of miR-15a/16 or high levels of HDACs could benefit from HDAC inhibitor-based therapy. Concerning the association with chemoresistance, it was shown that HDAC1 upregulation is a crucial event in drug resistance development in OC. Inhibition of this enzyme by siRNA reduced c-Myc expression, increased miR-34a levels, and sensitized cells to cisplatin-induced apoptosis [111]. Unfortunately, targeting of DNMTs/HDACs is still very non-specific and may lead to severe changes across the whole genome. Since miRNA clusters contain multiple miRNA genes, their targeting may provide better therapeutic outcomes as multiple signaling pathways are affected. However, some miRNA clusters are known to contain miRNAs with dual functions, and activation of the transcription of such clusters may result in deregulation of several different proteins involved in tumor-suppressive as well as oncogenic signaling pathways. Thus, further research in this area is needed. Another strategy for epigenetic therapy is the miRNA-based approach. Firstly, miRNA mimics may be used to re-establish the levels of tumor-suppressive miRNAs. Recently, interesting results were published indicating the promising therapeutic potential of miR-489 in triple-negative BC.
The authors chemically modified the guide strand of this miRNA by replacing uracil with 5-FU so that tumor-suppressive and DNA-damaging components were combined into a novel therapeutic agent. This drug showed superior effects over miR-489 or 5-FU in inhibition of tumor progression, suggesting its therapeutic efficacy [206]. Conversely, antagomiRs or miRNA sponges are applied to block the function of oncogenic miRNAs. Importantly, binding sites of the sponge are specific to the miRNA seed region, which allows them to block the whole family of related miRNAs [207]. Finally, 22-nt-long antisense oligonucleotides called miRNA masks are used to compete with miRNAs of interest for 3′-UTR binding sites of target genes. While antagomiRs or miRNA sponges are essential for studying the overall function of a particular miRNA, miRNA masks are preferentially used for revealing the specific outcome of regulation of the target gene by the miRNA [208]. Currently, several companies are trying to develop successful miRNA-based cancer therapeutics. In 2013, MiRNA Therapeutics introduced MRX34, a miR-34a mimic, delivered by the liposomal agent Smarticles, and tested its potential in patients with various advanced solid tumors. However, this first-in-human clinical trial of a miRNA-based therapy was halted by the FDA due to severe immune-mediated toxicities in four patients [209]. Encouraging results were published in the case of Mesomir-1, a TargomiR drug comprising a miR-16 mimic, bacterially derived minicells, and an antibody to EGFR. Recently, a phase 1 clinical trial was completed in MPM and NSCLC patients with an acceptable safety profile; however, additional studies are needed [210]. Importantly, a subclass of miRNAs, epi-miRNAs, has been identified. They play an important role in the modulation of the epigenome by regulating the expression of key enzymes of the epigenetic machinery; thus, they are considered promising therapeutic targets in cancer. Amodio et al. [198] proved that miR-29b targets HDAC4 and highlighted that both molecules are involved in a functional loop. Firstly, silencing of HDAC4 by shRNAs inhibited cell survival and migration and induced apoptosis and autophagy of MM cells due to the downregulation of SP1 and MCL-1, direct targets of miR-29b. Subsequently, hyperacetylation at the miR-29b locus, accompanied by elevated levels of this miRNA, was observed, just as in the case of SAHA (Vorinostat) treatment. Importantly, overexpression of miR-29b potentiated SAHA activity in a murine xenograft model of human MM. These results indicate that miR-29b could serve as a promising epi-therapeutic approach in the treatment of this disease. Over the last several years, advanced methods for studying the role of miRNA clusters and families in tumor development have been established. Using a hierarchical cloning method, Wang et al. [211] constructed a synthetic miRNA cluster, which accommodated 18 different miRNA precursors, and demonstrated that the maturation and function of individual precursors are independent of their position in the cluster. Further, genome editing methods, such as CRISPR/Cas9, are being introduced in miRNA research as a powerful tool to delineate the function and regulation of miRNA clusters and families. In 2018, a study by González-Vallinas et al.
[212] was published showing that simultaneous overexpression of miRNAs located on chromosome 14q32 using CRISPR activation technology promoted migration and invasion of lung adenocarcinoma cells, similarly to individual miRNA mimics, including miR-323b-3p, miR-487a-3p, and miR-539-5p. These results indicate that using CRISPR-based strategies, we will be able to further elucidate miRNA clusters' functionality, which will facilitate the development of novel targeted therapies. Last but not least, computational methods, as well as various databases, are being established to complement costly and time-consuming biological experiments. Although clustered and homologous miRNAs are expressed at various levels due to maturation and degradation processes, they are prone to present similar deregulation patterns in particular tumor types [213]. A new computational method, termed FCMDAP, was developed to improve the prediction accuracy of disease-related miRNAs; it uses multiple types of data to calculate miRNA and disease similarity based on mutual information and adds miRNA family and cluster information to predict human disease-related miRNAs [214]. Finally, the EpimiR database, which collects regulatory relationships between 19 kinds of epigenetic modifications and 617 miRNAs across seven species, may be used to search for coordinated regulation between miRNAs and epigenetic mechanisms [215]. Conclusions Deregulated expression of miRNAs is a hallmark of a number of solid tumors as well as hematologic malignancies. Recently, the reciprocal regulation between miRNA expression and the epigenetic machinery has been indicated, and crucial epigenetic enzymes, such as DNMTs, HDACs, HATs, or TETs, were found to be strongly enriched among epi-miRNA targets, indicating the involvement of miRNAs in key cellular processes, including cell differentiation, pluripotency, or chemoresistance. In this review, we have highlighted the association between aberrant DNA methylation, histone methylation or histone acetylation, and altered miRNA expression. As miRNAs are frequently clustered in the genome, multiple miRNAs may be produced from the same primary transcript, and their expression is affected by the same epigenetic changes. In addition, miRNAs from the same family may be clustered together or expressed from different clusters with a close functional relationship. Importantly, aberrant methylation of miRNA genes may be detected in serum-circulating DNA with modern, highly sensitive methods; thus, epigenetically regulated miRNAs could serve as promising diagnostic, prognostic, and predictive biomarkers as well as novel therapeutic targets. Currently, different strategies for epigenetic-based therapies are being developed, including DNMT/HDAC inhibitors or miRNA-based approaches. However, more work needs to be done to improve specificity and reduce the side effects of these molecules. Further, improved systems for miRNA delivery to the target site must be designed. Finally, a combination of computational applications and laboratory-based experimental data will provide more detailed knowledge of the complicated feedback networks between miRNAs and epigenetic mechanisms, enabling further development of epigenetic anticancer drugs. Conflicts of Interest: The authors declare no conflict of interest.
9,608.2
2021-03-01T00:00:00.000
[ "Medicine", "Biology" ]
Feasibility Analyses of Real-Time Detection of Wildlife Using UAV-Derived Thermal and RGB Images : Wildlife monitoring is carried out for diverse reasons, and monitoring methods have gradually advanced through technological development. Direct field investigations have been replaced by remote monitoring methods, and unmanned aerial vehicles (UAVs) have recently become the most important tool for wildlife monitoring. Many previous studies on detecting wild animals have used RGB images acquired from UAVs, with most of the analyses depending on machine learning–deep learning (ML–DL) methods. These methods provide relatively accurate results, and when thermal sensors are used as a supplement, even more accurate detection results can be obtained through complementation with RGB images. However, because most previous analyses were based on ML–DL methods, a lot of time was required to generate training data and train detection models. This drawback makes ML–DL methods unsuitable for real-time detection in the field. To compensate for the disadvantages of the previous methods, this paper proposes a real-time animal detection method that generates a total of six applicable input images depending on the context and uses them for detection. The proposed method is based on the Sobel edge algorithm, which is simple but can detect edges quickly based on changes in pixel values. The method can detect animals in a single image without training data. The fastest detection time per image was 0.033 s, and all frames of a thermal video could be analyzed. Furthermore, because of the synchronization of the properties of the thermal and RGB images, the performance of the method was above average in comparison with previous studies. With target images acquired at heights below 100 m, the maximum detection precision and detection recall of the most accurate input image were 0.804 and 0.699, respectively. However, the low resolution of the thermal sensor and its shooting height limitation were hindrances to wildlife detection. The aim of future research will be to develop a detection method that can address these shortcomings. Introduction For wildlife detection and monitoring, traditional methods such as direct observation [1] and capture-recapture have been carried out for diverse purposes [2]. However, these methods require a large amount of time, considerable expense, and field-skilled experts [3,4] to obtain reliable results. Furthermore, performing a traditional field survey can result in dangerous situations, such as an encounter with wild animals. Remote monitoring methods, such as those based on camera trapping [5], GPS collars [6], and environmental DNA sampling [7], have been used more frequently, mostly replacing traditional survey methods, as the technologies have developed. Camera-trapping methods can track the life cycle of animals at the nest level. Camera networks can be created by installing multiple cameras, and high-quality data can be acquired across the region of interest [8]. This paper proposes a new method for detecting animals. There were three main objectives, addressing the limitations of previous research: (1) Reduce the animal detection time. The main limitation of previous animal detection methods is that they cannot be applied in the field in real time. ML-DL-based methods need an enormous number of training images, and it takes a long time to train the detection model. Methods using thermal images require preprocessing to detect animals.
To address these limitations, the proposed method can detect animals based on single images, and image preprocessing is simplified. (2) Enable detection in more environments. ML-DL-based methods are only suitable for certain species and land cover types or environments. To improve detection versatility, the proposed method considers target size and surface temperature when detecting animals. Theoretically, the method can be adapted to all homeothermic animals if the body size and surface temperature are known. Here, we focused on detecting mid-sized animals (alpaca). (3) Use thermal and RGB images acquired from the same thermal camera. When a detection method needs both thermal and RGB images, separate thermal and RGB cameras are typically used. However, a dual-sensor thermal camera can save thermal and RGB images simultaneously, and their image centers coincide because the shooting time is the same. Therefore, by correcting for the differences in lens distortion, coverage area, and spatial resolution, thermal and RGB images can be used simultaneously for research without the requirement of two cameras [35]. The main goal of this study was to develop an automated method for detecting animals using a thermal image dataset, to apply it under in situ conditions in real time, and to achieve similar detection ability to previous methods. The fastest detection time was 0.033 s, the maximum detection precision was 0.804, and the detection recall rate was 0.699. Study Site An animal farm (37.827° N, 127.882° E) in the middle of a natural forest in Hongcheon, Republic of Korea, was used as the study site for data collection (Figure 1). To determine the animal species and their locations, the UAV was flown over the entire farm. Through this process, the distribution of land cover was also confirmed. The major species on the farm is Vicugna pacos (alpaca), so these animals were mainly used to develop the detection and analysis method. The farm also has a few Cervus nippon (sika deer), Struthio camelus (ostrich), and Camelus bactrianus (camel). The barns for each species are located on grassland or bare land, and the animals mainly move across those land-cover types. The area of the farm is approximately 12.02 ha, and the main cover type is forest (50%), followed by grassland (35%). The remaining contributors to land cover comprise artificial structures such as roads and buildings, and bare land. The minimum and maximum elevations on the farm are 450.56 and 512.00 m, respectively. Data Acquisition UAV flights were conducted using a MATRICE 210 UAV (DJI, Shenzhen, China), and the thermal camera was a FLIR ZENMUSE XT2 (DJI). The thermal camera has both an RGB sensor and a thermal sensor, and images are captured by both sensors at different resolutions. Each RGB image contains 4000 × 3000 pixels, and each thermal image contains 640 × 512 pixels. The spatial resolution of each RGB image at 25 m above the ground is 0.59 cm/pixel, whereas the resolution of each thermal image is 2.24 cm/pixel. Due to the longer focal length of the thermal sensor, each thermal image covers a narrower region [36]. The data were acquired on 25 November 2020. In Korea, November is considered to fall within the winter season, and snow typically falls from the middle of November.
Although snow cover provides advantages, in the sense that a lower land-surface temperature is beneficial in automated animal detection and photographs can show not only the animals but also their tracks, thereby improving detection rates [37], a lack of adequate snow cover can inhibit animal detection, requiring the images to be filmed again [38]. Therefore, the shooting date was selected to occur when the air and land surface temperatures were low and there was no snow cover. This decision maximized the temperature difference between the targets and land cover types and facilitated more accurate detection of animals. Furthermore, by shooting images around noon, the shadow size of individual targets was minimized, which reduced the possibility of error from shadows. After a programmed drone flight over the entire study site, the drone was controlled manually to capture the locations of the main target animals (alpaca). After finding a spot, 26 images were acquired from heights of 25-275 m above the ground at 10-m intervals to aid the development of a method to be used under various circumstances. The body lengths of the main target animals range from 80 to 100 cm when fully grown, and they have various fur colors, including black, gray, white, dark brown, and light brown. Based on the UAV results, the targets were sorted into four categories according to their visible condition. The category "isolated" indicated that the target stood alone, not touching any other target or obstacle. "Bordering" meant that two targets were touching each other, and "overlapping" meant that the targets' body parts were crossing each other's. "Partial" indicated that the target was partly visible at the edge of the image (Figure 2). Data Preprocessing As the outputs of the XT2 sensors have different pixel sizes, spatial resolutions, and coverage areas (Figure 3), they need to be modified to have the same properties. Furthermore, to acquire accurate results, temperature correction of the thermal images and masking of non-target regions are required. RGB Lens Distortion Correction and Clipping Due to the difference in focal length, the distortion in the images also differs [39]. The RGB sensor of XT2 has a focal length of 8 mm, but the thermal sensor has a focal length of 19 mm. When the focal length is shorter, the image is subject to barrel distortion compared with an image taken at a longer focal length [40]. Therefore, to use the thermal and RGB images together, we had to correct the distortion in the RGB images. Python and the OpenCV2 library [41] were used for this purpose. After correction, the corrected RGB images were clipped and rescaled to have the same coverage as the thermal images (Figure 4). Thermal Image Correction by Fur Color Although the body temperature of the target animals is the same across individuals, the surface temperature can differ because of the fur color. The surface temperatures of animals with brighter fur were lower [42] because of higher reflectance [43]. Surface temperature differences can cause errors in the detection process and must be corrected for. The pixel value of each RGB channel is needed to identify bright targets. Based on our measurements, we found that the surface temperature of white animals was approximately 25% lower than that of animals with darker fur. Therefore, the pixels of thermal images located at the same locations as white pixels from RGB images were adjusted to have higher values (Figure 5).
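The geometric part of this preprocessing (undistorting the RGB frame, clipping it to the thermal footprint, and resampling to the thermal pixel grid) and the fur-color adjustment map naturally onto OpenCV operations. In the sketch below, the camera matrix, distortion coefficients, white-pixel threshold, and boost factor are placeholders rather than the calibration values used in the study; the 0.61 crop fraction only approximates the footprint ratio implied by the stated per-pixel resolutions (640 × 2.24 cm versus 4000 × 0.59 cm):

```python
import cv2
import numpy as np

# Placeholder calibration values; real values would come from checkerboard calibration.
CAMERA_MATRIX = np.array([[2400.0, 0.0, 2000.0],
                          [0.0, 2400.0, 1500.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3
CROP_FRACTION = 0.61   # assumed thermal-to-RGB footprint ratio

def align_rgb_to_thermal(rgb, thermal_shape=(512, 640)):
    """Correct barrel distortion, then clip and rescale the RGB frame so it
    covers the same area and pixel grid as the thermal frame."""
    undistorted = cv2.undistort(rgb, CAMERA_MATRIX, DIST_COEFFS)
    h, w = undistorted.shape[:2]
    ch, cw = int(h * CROP_FRACTION), int(w * CROP_FRACTION)
    y0, x0 = (h - ch) // 2, (w - cw) // 2          # centre crop
    clipped = undistorted[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(clipped, (thermal_shape[1], thermal_shape[0]))

def correct_white_fur(thermal, rgb_aligned, boost=1.33, white_thresh=200):
    """Raise thermal values where the aligned RGB pixel is near-white; the boost
    factor is a placeholder that roughly undoes a ~25% lower apparent temperature."""
    gray = cv2.cvtColor(rgb_aligned, cv2.COLOR_BGR2GRAY)
    mask = gray > white_thresh
    corrected = thermal.astype(np.float32).copy()
    corrected[mask] *= boost
    return corrected
```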
Unnatural Object Removal The principle of animal detection using thermal images is to locate spots where the temperature differs from the surroundings, because homeothermic animals maintain a constant body temperature, and this consistency creates a temperature gap between the animals and their surrounding environment. However, artificial structures, e.g., buildings and roads, have a much higher surface temperature than animals or natural surfaces. Therefore, when these types of artificial land cover are included in a thermal image, numerous errors in animal detection occur [31]. To eliminate this error, artificial structures should be masked. However, it is difficult to tell which parts of the image should be removed, since one of the main purposes of this study was to develop a method that can be used for instant detection under in situ conditions, and pursuing this objective limited the time available to analyze images and locate artificial structures. Therefore, as an alternative to artificial cover detection, an unnatural color masking method was used. Fortunately, more than half of the artificial structures at the study site have unnatural colors, such as vivid red, vivid blue, and vivid orange (Figure 6). As in the fur-color temperature correction, in this step temperature values were removed according to pixel color. Many possible errors can be prevented by removing these high-temperature artificial structures. Methods Our method requires both thermal and RGB images, but it relies especially on the thermal images, as these contain more of the useful information. The open-source programming language Python was used in the Google Colab [44] environment to develop the proposed method. Google Colab, a cloud service based on Jupyter Notebooks, executes Python code using both CPU and GPU resources, thus enabling quantitative analysis on a scale that exceeds the limitations of personal computers. The main functions of the proposed method are Sobel edge creation [45] and contour drawing. OpenCV2, an optimized computer vision library, was used for image processing. The automated detection results obtained using the proposed method were categorized based on shooting height and target shape, i.e., isolated, bordering, overlapping, or partial. Sobel Edge Detection and Contour Drawing Sobel edge detection is a simple gradient-based method for finding edges; the operator is applied both vertically and horizontally [46]. Where the difference in pixel values is larger, the Sobel edge has a higher value. By combining the vertical and horizontal Sobel edges, a biaxial Sobel edge can be made. This biaxial Sobel edge was used to draw the binary contours. After applying a threshold to the biaxial Sobel edge, the segmented image was used for contouring. At the same time as the contours were drawn, the centroid point of each contour was marked on the images (Figure 7). The accuracy of the contours was high, but some were wrongly drawn around non-target objects, such as stones, wet soil, and artificial structures; therefore, to eliminate these false-positive results and obtain accurate results, the contours had to be sorted. Object Detection and Sorting To eliminate wrongly drawn contours, size-temperature filtering was used. The mean body length of the target animal was approximately 0.9 m, and the top-view area was approximately 4500 cm². However, the body of the animal is fully covered with thick, curly fur, so its body heat is not shown clearly in the thermal image, making the animal look smaller than it is.
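Before turning to the filtering step, the Sobel-and-contour stage described above can be sketched in a few lines using the same OpenCV library the authors mention; the kernel size and edge threshold are illustrative values, since the paper does not report them.

import cv2
import numpy as np

def detect_contours(gray, edge_thresh=40.0):
    # gray: single-channel 8-bit image (e.g., a thermal frame rescaled to 0-255).
    # Horizontal and vertical Sobel gradients, combined into one "biaxial" edge image.
    sx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    sy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(sx, sy)
    # Threshold the combined edges to obtain a binary (segmented) image.
    _, binary = cv2.threshold(mag, edge_thresh, 255, cv2.THRESH_BINARY)
    binary = binary.astype(np.uint8)
    # Find contours and mark the centroid of each one via image moments.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return contours, centroids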
Because the fur makes the animal appear smaller than it is, the area filter was set to detect contours smaller than 3500 cm² and larger than 100 cm². The minimum criterion was set much smaller than the typical size of the target in order to find segmented body parts, such as those of overlapping or partial targets. Additionally, the drawn contours can be small because of the animal's body shape. Therefore, to obtain a high probability of animal detection, the area filter was set with a large range. For the contours passed by the area filter, a centroid temperature filter was then applied. Measured from 25 m above the animals, the maximum and minimum surface temperatures of the targets were nearly 20 °C and 10 °C, respectively. Therefore, the filtering option was set to find contours warmer than 9 °C to ensure that every target was retained. In addition, the apparent temperature also changes with shooting height. To minimize this error, we corrected the temperature by height. The shooting height and the maximum temperature of the targets are linearly related (Figure 8), and temperature filtering was adjusted using Equation (1). The main target animal in this study was the alpaca. Therefore, size-temperature filtering was designed and adapted to this species. However, this object detection and sorting method can be adapted to other target species by changing the filter criteria. Input Image Generation As mentioned previously, six kinds of input images were used for the automated detection method (Figure 9). These input images were generated to enhance the detection ability, shorten the detection time, and determine which type of input image produces the most accurate detection performance. The six kinds of input images were corrected RGB images, original thermal images, thermal images corrected for fur color, thermal images with masked unnatural colors, corrected RGB images × original thermal images, and corrected RGB images × all-corrections-applied thermal images. The thermal and RGB images were processed using contour and centroid generation, size-temperature filtering, a target counting process after Sobel edge creation, and image binarization. The images were combined after image binarization, and each combined image could be used to generate contours corresponding to those of the two source images. Therefore, combined images allowed for more accurate detection. Results The automated detection results obtained using the proposed method were categorized based on shooting height and target shape, i.e., isolated, bordering, overlapping, or partial. The detection recall, precision, and time were also analyzed. The detection results were assessed based on the detection precision and detection recall rate (Figure 10). The detection precision was calculated as the number of real animals among the automatic detections divided by the total number of detections. The detection recall rate was the number of real animals among the automatic detections divided by the number of animals in the image. These two values have the same range, from 0 to 1, and higher values indicate higher detection ability. To compare the detection precision and detection recall rates of the six kinds of images, the number of targets in each image was counted manually (Figure 11), and targets were labeled according to their shape category (i.e., isolated, bordering, overlapping, or partial). The number of targets in each of the 26 individual original images was about 40.
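Stepping back to the sorting stage for a moment, the size-temperature filter described above can be summarized in a short, hedged sketch. The conversion from pixel area to ground area via the ground sampling distance and the linear height-correction coefficient are assumptions standing in for Equation (1), which is not reproduced in this text.

import cv2

def filter_targets(contours, centroids, thermal_c, gsd_cm,
                   min_area_cm2=100.0, max_area_cm2=3500.0,
                   temp_thresh_25m=9.0, height_m=25.0, temp_slope_c_per_m=0.0):
    # gsd_cm: ground sampling distance of the thermal image in cm/pixel.
    # temp_slope_c_per_m: illustrative linear correction (deg C per metre above 25 m).
    kept = []
    threshold_c = temp_thresh_25m - temp_slope_c_per_m * (height_m - 25.0)
    for contour, (cx, cy) in zip(contours, centroids):
        area_cm2 = cv2.contourArea(contour) * gsd_cm ** 2   # pixel area -> ground area
        if not (min_area_cm2 <= area_cm2 <= max_area_cm2):
            continue                                        # size filter
        if thermal_c[cy, cx] < threshold_c:
            continue                                        # centroid temperature filter
        kept.append(contour)
    return kept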
Regardless of the type of input image, however, the number of targets detected tended to decrease with increased shooting height. At shooting heights greater than 100 m, fewer than 10 targets could be detected in each type of image, and at heights greater than 125 m, fewer than five targets could be detected. Based on the results of manual and automatic counting, the detection precision and recall rate were evaluated. As the detection precision decreased dramatically above a height of 100 m, we focused on detection results at shooting heights lower than 100 m (Table 1). The total number of targets was 316, consisting of 56 isolated targets, 243 bordering targets, 17 overlapping targets, and three partial targets. Of the 316 targets, 5 were ostriches. When only RGB images were used for detection, the detection recall rate was 0.367, and the detection precision was 0.013. RGB images cannot be subjected to temperature filtering. Therefore, false-positive detection results such as soil, rocks, roofs, and roads could not be eliminated. This uncertainty in sorting led to the poor detection result. When the input image contained thermal information, the detection precision and recall rate were higher. In particular, compared with the RGB-only detection results, the precision increased by at least 50-fold. The original thermal images and the two types of corrected thermal images produced similar precision results of approximately 0.8. However, detection recall increased by approximately 20% when images corrected for fur color and temperature were used. Moreover, there were two types of combined thermal and RGB images. The first type was created by multiplying a corrected RGB image with the original thermal image, and the second type was obtained by multiplying a corrected RGB image with the all-corrections-applied thermal image. The detection recall rates using these images exceeded 0.6, and the detection precisions were 0.200 and 0.804, respectively. When corrected thermal images were used, the detection precision was approximately four-fold higher. Use of the six types of input images resulted in different detection times. To calculate the detection time of an individual image by image type, the total detection times of the 26 images shot for each height range were summed, and then the sum was divided by 26. After repeating this process 50 times, the average detection time was calculated (Table 2). Of the four processing methods used in Google Colab, parallel processing had the fastest detection time for all input image types. The image type associated with the fastest detection time was the thermal image with unnatural colors removed, which had a detection time of 0.033 s. The image type associated with the slowest detection time was the corrected RGB image × corrected thermal image combined image; its detection time was three times slower than the fastest time. In general, when the input image had RGB channels, more detection time was required. This occurred because the detection method had to consider more channels and because the large number of errors associated with RGB images prolonged the true-false decision-making time of the method. Converting the detection times to frames per second (FPS), the input images including RGB channels were processed at 9 FPS, and the other input images were processed at 25-30 FPS. Detection Precision and Recall For wildlife detection, detection precision and recall are fundamentally important.
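As a concrete illustration of the two metrics, using the definitions given earlier (real animals among the automatic detections, divided by the total number of detections and by the number of animals actually present, respectively), a minimal sketch with hypothetical counts is:

def precision_recall(true_positives, detections, animals_present):
    # true_positives: automatic detections that are real animals
    # detections: total number of automatic detections
    # animals_present: number of animals in the image (manual count)
    precision = true_positives / detections if detections else 0.0
    recall = true_positives / animals_present if animals_present else 0.0
    return precision, recall

print(precision_recall(40, 45, 50))   # hypothetical counts -> (0.889, 0.8)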
The 26 images shot at each height range were used to generate six kinds of input images. The same detection method was used for each of these input images, and it detected between one-and two-thirds of the targets. When the input image had a thermal channel, the maximum detection precision increased approximately two-fold. Additionally, a detection recall rate of 0.699 was obtained when using the corrected RGB image and corrected thermal image together. Previous studies have shown a diverse range of detection precision (Table 3). A method applied for hippopotamus detection performed best [30]. Studies of cattle [47], monkeys [48], and white-tailed deer [34] detected between 60% and 70% of their targets. Fur seal [49] and human [32] studies detected approximately 40% of their targets. Considering the differences in site environment, target size and shape, and thermal image shooting conditions, the detection method proposed here has above-average performance. Among the previous studies, only the white-tailed deer study [34] provided detection precision and recall results. As was found here, the previous study found very large numbers of false-positive detection results when RGB images were used as input images. This previous study used unsupervised pixel-based and object-based methods. When the unsupervised pixel-based classification method was used with RGB images, the detection precision was 0.046; when the object-based method was used, it was 1.0. However, the detection recall of the two methods had the same value of 0.484. Thus, according to this result, the object-based method did not detect more targets compared with the unsupervised pixel-based method. However, the method proposed here increased both the detection recall rate and detection precision by using different kinds of input images. Instant Detection Detection time is also a major factor in wildlife detection. To detect animals in realtime, detection time is a more important factor than detection precision or recall. The government of the United States limits the capture rate of thermal video equipment for export to 9 FPS, and most products have this capture rate [50] including the thermal camera used here. To apply our method to 9 FPS videos, the detection time should be less than 0.12 s. The full frame rate is 30 FPS. To be able to detect animals in real-time, the detection time should be less than 0.034 s. The methods of previous studies based on machine learning and deep learning are difficult to use in real-time, and the authors have discussed these limitations [51]. The studies listed in Table 3 did not provide detection times, and their methods require a preprocessing step so cannot be used in real-time. A study of koalas [52] provided detectiontime results. When shooting from altitudes of 20, 30, and 60 m, the detection times were 1.3, 1.6, and 2.1 s, respectively, but these times are insufficiently fast for real-time use. With parallel processing, the fastest detection time for a single input image using the method presented here was 0.033 s, and the slowest was 0.111 s. Converted to FPS, these times correspond to 30 and 9 FPS, respectively. When the input image had only a thermal channel, the FPS range was 25-30; when the input image had RGB channels, the rate was 9 FPS. Thus, all input image types can be used to analyze exported thermal videos. Single thermal channel images can detect almost every frame during real-time shooting. 
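The frame-rate arithmetic behind these real-time statements is straightforward; the following sketch simply restates the timings reported above as a feasibility check.

def fps_equivalent(detection_time_s):
    # Frame rate that a given per-image detection time can keep up with.
    return 1.0 / detection_time_s

def keeps_up(detection_time_s, video_fps):
    # True if every frame of a video at video_fps can be processed in time.
    return detection_time_s <= 1.0 / video_fps

print(round(fps_equivalent(0.033)))                # ~30 FPS for the fastest thermal-only input
print(keeps_up(0.033, 30), keeps_up(0.111, 30))    # True, False
print(keeps_up(0.111, 9))                          # True: fast enough for 9 FPS exported video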
Furthermore, the sensor always shoots thermal and RGB images simultaneously, so both types of input image can be used according to preference. The best way to use the method developed here is to check for the presence of wildlife in a thermal image with unnatural colors removed. For the frames with a confirmed presence of wildlife, the corrected RGB image combined with the corrected thermal image can be used to clearly determine numbers and locations. Using the Proposed Method to Supplement Previous Methods The proposed method can detect animals regardless of color, shape, or size and does not need to generate a training dataset. This advantage reduces the total time needed for detection, and, at the same time, the method can be used to generate the training dataset itself. As a result of the automated detection process, our method marks the outline and centroid of each target. Then, instant target sorting can be used to form sets of images of detected animals, and this stacked result can be employed for ML-DL training, even while simultaneously conducting UAV surveys in the field. This quick and in-field detection method can be used to supplement the relatively precise and advanced existing methods. Not only is our method useful for creating a training dataset but also, when a trained model is used for detection, the region of interest in the RGB image can be minimized. This areal reduction can lead to time saving. Utility of Thermal Sensors The use of thermal sensors provides several benefits for wildlife detection, especially time saving in the detection process [53], enhanced detection performance [32,34], and wider application across many species. However, thermal sensors still have limitations and drawbacks. An important limitation is that thermal sensors cannot sense through obstacles such as tree canopies, hideouts, and bushes. RGB image-based detection methods also have this limitation. However, thermal images can be used to detect the body temperature of a camouflaged target and may have an advantage over RGB images. Another more critical drawback is the sparse resolution of the thermal camera. Compared with an RGB image, the image generated from a thermal sensor has approximately 40-fold fewer pixels and one-quarter of the coverage area. Furthermore, the spatial resolution at the same shooting height is approximately four times lower. This drawback limits the shooting height for obtaining images for use in detecting wildlife. At a height of 100 m, the pixel resolution is approximately 9 cm, and at a height of 200 m, the resolution is approximately 18 cm (Figure 12). If the shooting height is higher than 100 m, a target of the size in this paper will be represented by only a few pixels, and if multiple targets are in contact with each other or overlapped, their edges become more difficult to distinguish, and blurred targets are not detected properly. In this study, a height of 100 m seemed to be the maximum height for significant detection of wildlife. If the target size or shape was different or a high-quality thermal sensor could be used, the maximum height would be higher. Method Overview Our method overcomes three limitations of previous studies: it can detect target animals in real-time with minimal data preprocessing, use two types of images for advanced detection ability, and be applied in diverse situations. Size-temperature filtering enables our method to be applied to different species and land cover types. 
However, a lack of data means that further validation of its applicability to different land cover types is necessary. In addition, shooting altitude remains a limitation, as it does for previous methods. Another drawback of the proposed method is that it does not merge the acquired images; forgoing merging minimizes preprocessing and reduces detection time, but partial targets at the edges of images might not be detected accurately. This could be overcome by using slower UAV flight speeds and higher frame rates. Conclusions This paper developed a new method for detecting animals using thermal and RGB images. The maximum detection precision was 0.804, and the recall rate was 0.699. The major improvement in detection time enables real-time usage. This method has two limitations. First, the environments and conditions, such as the detection target, shooting time, and land cover, were not diverse when the raw data were acquired, so the method was developed using only limited data. Nevertheless, the method might be applicable to many species and circumstances. Second, the sparse resolution of the thermal sensor restricts the shooting height. Nonetheless, if high-resolution thermal images are used, the method may be able to detect smaller targets, and it may be possible to fly the UAV at a higher altitude. The focus of future work will be to diversify the target species and shooting conditions to clarify how versatile a thermal sensor-mounted UAV system would be in conducting wildlife surveys. Using these data, an advanced method that can detect targets at greater heights will be proposed.
6,497.4
2021-06-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Contact interactions in complex fibrous metamaterials In this work, an extension of the strain energy for fibrous metamaterials composed of two families of parallel fibers lying on parallel planes and joined by connective elements is proposed. The suggested extension concerns the possibility that the constituent fibers come into contact and eventually scroll one with respect to the other, with consequent dissipation due to friction. The fibers interact with each other in at least three different ways: indirectly, through microstructural connections that could allow a relative sliding between the two families of fibers; directly, as the fibers of a family can touch each other and can scroll, introducing dissipation. From a mathematical point of view, these effects are modeled first by introducing two placement fields for the two fiber families and adding a coupling term to the strain energy, and secondly by adding two other terms that take into account the interdistance between the parallel fibers and the Rayleigh dissipation potential (to account for friction). Introduction An important modeling issue in the design of metamaterial micro-architectures concerns the need to describe the phenomena that occur when parts of the aforementioned microstructure, initially distant, come into contact because of deformation. This kind of phenomenon occurs at the micro-level, especially in lightweight metamaterials whose microstructure occupies a small volume fraction and includes parts undergoing large relative displacements. Starting from the basics outlined in [1-3], we want to present a first approach that intends to give some research perspectives in the study of contact phenomena occurring in the micro-architecture of fibrous metamaterials. To this end, we will present the results already available in the literature following a criterion of simplicity, so that the new concepts discussed here for the first time can be introduced more easily. For the sake of simplicity, we will start by addressing the problem of modeling contact interactions in a particular class of complex fibrous metamaterials, namely the pantographic architectured metamaterial. As will become clear in the following, the topological arrangement of fibers in a metamaterial is of fundamental importance. Interesting results on this topic can be found in [4,5]. First of all, before we can define what a pantographic microstructure or metamaterial is (and there is a remarkable length-scale difference between these concepts), it is necessary to clarify the terminology used in the present paper. This terminology is needed to underline the role of the different scales that are relevant in the modeling process and to account for the hierarchical structural ordering that is the main reason for most of the problems we will present (as in any problem of metamaterial synthesis, hierarchy plays a fundamental role). The first question that needs to be answered, if we want to deal effectively with the complexity of microstructural elements such as the hinges, also known as pivots in the literature, is: what is the microstructure of a pantographic metamaterial? We can answer this question by describing the geometrical properties of the object or by showing a real sample like the one in Fig. 1b and c. From a geometrical point of view, as is also clear from Fig.
1a, a pantographic structure is constituted by two fiber layers that are on two parallel planes held together by mechanical hinges. The directions of the fibers, in the initial configuration, are orthogonal, and the hinges are located in correspondence of the points of intersection between the fiber directions. But to fully characterize the topic, it is better to adopt the typical approach of those who want to conceive a new metamaterial. This approach consists in specifying a priori the equations governing the metamaterial and from these to find backwards the microstructure that, once homogenized, produces a continuum governed by these equations. In this context, the real question to be answered to describe the pantographic metamaterial is: what are the mechanical properties of this metamaterial and which equations govern it? To answer this question, we can refer to [3]. In fact, in [3] the basic characteristics of the pantographic metamaterial are outlined. We can distinguish two basic aspects: the primary request is technological and implies, producing a secondary request, a property related to mathematical modeling. The primary demand is to obtain a material that can be stretched without any energy expense (or with a very low expense of it). The mathematical implication, as we will see below, is that the governing equations can be obtained considering a higher gradient continuum, specifically a second gradient one. In this context, the role of the pivots, we will see, is merely to ensure that the two fiber layers that make up the pantographic metamaterial can exhibit relative rotations at their points of contact. Section 2 deals with this problem. The role of the pivots can be generalized, and one can use them, for example, as actuators of deformation in case they are involved in flexoelectric or piezoelectric processes. This has been attempted in [6,7]. The role of perfect pivots in pantographic micro-architectures The earliest conception of a pantographic structure as a micro-model coupled with a second gradient macroscopic continuum model occurs in [1,2]. The key insight consisted in finding the micro-model that would lead, in the sense of some kind of homogenization procedure, to the simplest model characterized by a second gradient continuum. In fact, it was shown there how looks like a microstructure for which elongations require no strain energy and gradient of elongations are associated with a strain energy. The above-mentioned procedure can be qualified as a multiscale approach with levels organized according to a hierarchical classification. Mechanics provides several examples of multiscale approaches, mainly designed to define the interrelation between macro-models and micro-models. Highly important examples include those due to Maxwell [8] and Saint-Venant [9][10][11]. In the domain of multiscale approaches, a high efficiency approach involves asymptotic identification. When the micro-and macro-models have been postulated a priori, a kinematic matching among them can be conjectured and, successively, equivalence between the powers in the corresponding motions (micro and macro) is applied. If the kinematic matching has been sufficiently judicious, the range of applicability of macro-model covers a large class of micro-dynamic processes. In practice, the kinematical matching determines the range of applicability of calculated continuum limit. 
A pristine discussion of this point is due to Piola [12][13][14][15][16] so that, sometimes, this micro-macro identification is called "Piola identification." Using Piola identification, we can estimate the constitutive parameters of the macro-model equations with respect to the parameters of the fundamental micro-architecture elements. Besides, before presenting some peculiar features of the model used in [1,2], we would like to underline once more the main characteristic of Piola identification. In it, at first we postulate the macro-model as a second gradient continuum, and only then we look for a possible micro-model that produces the specified macro-model through homogenization. Such macro-theory-driven strategy has a very strong potential: we do not search for random microstructures trying to identify one that is adequate for our objectives, but instead, keeping the macro-theory in mind, we aim to design the appropriate microstructure in order to comply with the theoretical expectations. When pantographic structures have been introduced in the conception of new metamaterials, second gradient models were already available in the literature [17][18][19][20][21][22][23][24][25][26][27][28][29][30]. In this regard, we can refer to the theories of elastica studied by Euler, Bernoulli and Navier as the very first example of a second gradient model. The model presented by Euler is a 1D prototype. Almost a century later, it was proposed the first model of microstructured continuum by the Cosserat brothers [31]. As shown in [32], when a suitable constraint is introduced, Cosserat continua can be regarded as a generalization of an incomplete second gradient continuum model. It is, however, noteworthy that the genesis of the higher gradient 3D models-and also of the peridynamic generalized continua-can be attributed to Piola [12,13] or maybe to earlier authors (Lagrange, D'Alembert). With Germain [22][23][24]33], we refer to second gradient continua those media whose Lagrangian volumetric deformation energy depends both on the first and second gradient of the placement field (of course, these dependencies must be independent from the observer). We refer to an incomplete linear second gradient material if its strain energy only depends on ∇u and ∇ω(u), where ω(u) is the skew-symmetric part of the gradient of the displacement, ∇u, ω(u) = ∇u − ε(u) with ε(u) the symmetric part of ∇u. Complete 2D and 3D second gradient models can be found in the capillarity models or even in the damage and plasticity theory [34][35][36][37][38][39][40][41][42]. Some microstructure can be described by a second gradient model In the sixties and seventies, the problem of a microstructured medium was approached by Mindlin, Toupin and Germain in several papers. In their research, they discussed different perspectives of higher order models to treat materials with a microstructure, and pointed out how the mere existence of the microstructure, in some cases, can induce higher order terms in the equilibrium equations for the material under investigation. Unlike classical homogenization techniques, in this case equations are obtained that contain terms dependent on second or higher order derivatives of displacement, thus inducing higher gradient theories. According to Germain's work [22][23][24], we can demonstrate how the kinematics due to the presence of the microstructure generates a second gradient continuum at macroscopic level. 
In the classical representation, a continuum is composed of a continuous distribution of particles, geometrically represented by a point M and characterized by a velocity field, defined by its components U_i. However, in a theory that takes into account the presence of a microstructure, each particle represented by a point M must be characterized by a more sophisticated kinematics. So, how can we describe the presence of the microstructure from a kinematic point of view? For this purpose, it is necessary to consider the continuum at the microscopic level: each particle must itself be considered as a continuum of small extension. This small continuum, replacing the dimensionless point, is denoted by Germain as P(M). Germain [22-24] demonstrates in detail how the previous hypothesis implies that the velocity field U_i associated with the continuum P(M) and the field φ_ij of relative velocity gradients (a second-order tensor) must be considered. This result, in turn, makes it necessary to introduce the second relative velocity gradient v_ijk, which is a third-order tensor. Therefore, in [22-24] it is clearly proven that the presence of a microstructure can be naturally described by including higher order terms in the considered continuum theory. This is the starting point in the study of pantographic structures. A microstructure is chosen so as to obtain the simplest possible second gradient continuum, and homogenization techniques are then used to determine an appropriate continuum model. A clear and elegant comparative analysis of the advantages obtained using the principle of virtual work (or the principle of minimum action) can be found in Hellinger [43,44]. The pantographic microstructure The pantographic architecture, see a planar 2D array design in Fig. 1a and an example of its practical realization in Fig. 1b and c, was first introduced in the context of homogenized generalized media in [45-60]. In order to obtain the associated homogenized macro-model, it is required to examine a structure consisting of n pantographic elements and to analyze the behavior of the limit material as n tends to infinity. Certain formal asymptotic expansion techniques, formerly employed in [3,61-64], are consistently explored in [65-68] for establishing the effective properties of periodically assembled structures of linear elastic rods. Notably, when the flexural stiffness of the homogeneous elastic bars forming the pantographic arrays and the torsional stiffness of the interconnecting cylinders are lower than the beams' elongation rigidity [3], one obtains attractive macro-models. In searching for macro-models for micro-architectured metamaterials, there is generally a delicate evaluation to be carried out, i.e., we have to establish whether the energy associated with the second gradient terms is negligible with respect to the first gradient term. For a long time, it was assumed that the second gradient energetic terms were always irrelevant [16]. This assumption was shown to be inadequate at the very beginning of modern continuum mechanics by Gabrio Piola [12,13], although some scholars have long ignored Piola's opinion in this context while praising some of his results [51].
In order to prove that the synthesis of second gradient metamaterials with non-negligible second gradient energetic terms is not only physically conceivable but can also be approached mathematically, it has been proven that [1,2]: i. pantographic architectures provide for the synthesis of Casal-type beams [69,70]; ii. it is possible to synthesize second gradient plates with two families of pantographic substructures, i.e., plates having strain energy depending on the in-plane second gradients of displacement [71]; iii. when perfect hinges are provided, the macro-strain energy of short-beam pantographic structures does not incorporate the first gradient shear terms at all, and in pantographic fabrics the so-called floppy modes can be observed, i.e., homogeneous deformations that correspond to a null strain energy. A second gradient continuum proto-model for the pantographic metamaterial Consider, now, a reference configuration of a pantographic lattice (see Fig. 1a). As should be clear by now, it is constituted by two families of mutually orthogonal fibers which intersect one by one in a bidimensional array of regularly distributed material points. These points model the hinges, or pivots, we have already introduced. Let (D_1, D_2) be an orthogonal basis for the reference configuration. For simplicity, (D_1, D_2) can be taken along the directions of the two fiber families, which are orthogonal in the reference configuration. So, we will refer to the reference configurations of the two families of fibers by using the two unit vectors D_α with α = 1, 2. We now consider a 2D continuum whose reference shape is given by a rectangular domain Ω = [0, L] × [0, ℓ] ⊂ R², where L and ℓ can be identified as the lengths of the sides of the pantographic structure in Fig. 1a; very often, it is assumed that L = 3ℓ. If we want to study only planar motions, then the current shape of the rectangle is mathematically described by a regular placement function χ : Ω → R². In [3], it is shown how, by employing the so-called Piola's Ansatz and assuming that χ(·) is at least twice differentiable, one obtains the macroscopic homogenized strain energy for the pantographic metamaterial given in Eq. 1, in which F denotes the deformation gradient ∇χ. The first integral in Eq. 1 models the fiber elongation energy, while the second integral represents the fiber bending energy. We want to remark that the bending energy is written in terms of the gradient of the deformation tensor, ∇F, corresponding to a second gradient of the placement function, ∇²χ. This is coherent with what we had previously stated, as the second gradient term is representative of the bending of the fibers. A mechanical diode As we have remarked, the continuum energy in Eq. 1 takes into account only the bending and elongation of fibers: no deformation of the connective microstructural elements is considered. If this is the case, one can recall as an analogy an electric circuit component: the diode. In many applications of technological relevance, in fact, the voltage-current response of an ideal diode under static conditions can be approximated by a piecewise linear function. In this approximation, the current can be considered null if the voltage between anode and cathode is less than or equal to a certain threshold value V_T; if, on the contrary, the voltage is higher, the diode can be approximated by a voltage generator, whose current is imposed by the circuit to which it is connected.
In mechanics, the pantographic metamaterial exhibits a behavior formally identical to the one exhibited by the diode in electrical circuits (this analogy has been studied in detail in [72,73]). As already explained in previous sections, the macroscopic deformation of this metamaterial results, at the microscopic level, in the deformation of its basic constituents, i.e., fibers and pivots. If the pivots are perfect, and their torsional stiffness is therefore zero or negligible, the overall deformation of the structure results in the deformation of the fibers alone. A bias extension test of a pantographic structure (with perfect pivots or not, see Fig. 2) consists of two successive stages: (i) the bending of the fibers (coupled with a negligible elongation of the fibers), and then (ii) an elongation of the fibers after the fibers in the central part of the structure have come into contact. Moreover, in the second stage of deformation, when the fibers are in contact, one should also take this fiber contact into account. We will deal with the modeling of the fiber contact interaction in the following sections. When the connective microstructural elements are not perfect hinges, Eq. 1 has to be completed by adding a first gradient shear term. We will deal with this further term in the next Section. Deformation mechanisms of the connecting microstructural elements When the microstructural connections (we have called them 'hinges' or 'pivots') cannot be considered perfect hinges, one has to deal with their deformation mechanisms, and it is necessary to add two further terms to the strain energy in Eq. 1. As shown in Fig. 3, if we model the microstructural connections as elastic cylinders, one must take into account the torsion and the shear of the cylinders. The torsion of the cylinders is, from a macroscopic point of view, related to the (macro) shear deformation, and it provides a new energy term. At this point, it is fundamental to underline that the shape of this term depends heavily on the microscopic mechanics of the cylinder used to mimic the hinges. So, we can have a perfect hinge, an elastic hinge, a plastic hinge and all the intermediate possibilities. A simple form of this energy term, which has given successful results in identification procedures with both elastic and plastic hinges, has been assumed (as shown in [3]) to depend on the variation of the angle between the fibers interconnected by the pivot, raised to a certain power β; the latter parameter can be obtained by fitting experimental data and, in general, can also depend on the material from which the structure is manufactured. In the proposed form of the shear energy, K_s is the shear stiffness, which can be related to the torsional stiffness of the cylinders modeling the connective microstructural elements, and the function arccos((F D_1/‖F D_1‖) · (F D_2/‖F D_2‖)) − π/2 represents the variation of the angle between two fibers at a chosen pivot. Obviously, the shear energy can also be modeled in other ways. For example, a model that captures more accurately the phenomenology of a torqued cylinder is due to Ogden [74]. A version of Ogden's shear energy adapted to pantographic structures is the one shown in [75]. (Fig. 4 caption: sliding of two fibers; the pivot is modeled not as a rigid connection, as in previous models, but as a linear spring.)
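The displayed shear-energy expressions did not survive extraction; a hedged LaTeX sketch of the power-law form just described is given below, with the prefactor convention left as an assumption since it varies between papers. The Ogden-type variant mentioned next replaces the power of the angle variation with a function of ϑ and J.

W_{\mathrm{shear}}[\chi] = \int_{\Omega} \frac{K_s}{2}\,\bigl|\gamma\bigr|^{\beta}\,\mathrm{d}\Omega,
\qquad
\gamma = \arccos\!\left(\frac{F D_1}{\lVert F D_1\rVert}\cdot\frac{F D_2}{\lVert F D_2\rVert}\right)-\frac{\pi}{2}.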
In the Ogden-type expression, ϑ is the variation of the angle between the fibers at the interconnecting pivot and J is related to the determinant of F, while ϑ_0 and J_0 are the values of these quantities at particular points of the deformation history. These values can be obtained by fitting the experimental data. Some problems related to the shear energy are discussed in [76]. An attempt to find a further form of the shear energy, able to reproduce the observed phenomenology well, has recently been undertaken. The other deformation mechanism which has to be taken into account for properly modeling the connective elements is related to the (micro) shear deformation of the cylinders (see Fig. 3). From a macroscopic point of view, it is responsible for the relative displacement between the fibers of the two families which are connected by the considered pivot. From a microscopic point of view, it is clear that this mechanism is strongly related to the ratio between the height and the radius of the cylinder: the more slender the cylinder, the more visible this (micro) shear effect will be. The resulting energy term will be analyzed in the next Section. Sliding of fibers of one family with respect to the other one In Sect. 2, the strain energy for the pantographic structure and its principal implications have been presented. The main assumption is that the two families of fibers are described by using a single placement function χ. As briefly discussed in the previous Section, in order to properly take into account all the deformation mechanisms of the connective microstructural elements, it is necessary to introduce and analyze the possibility of allowing the fibers to slide one with respect to the other. To this aim, we have to modify the homogenized strain energy of the system as presented in Eq. 1 by using two independent placement functions χ_α (one for each fiber family) and introducing an interaction term in the energy [77]. We could also imagine a different interaction energy between the fiber families, for example by adding some suitably conceived extensional springs linking the pivots of different fibers. In this case, we would have a more complex interaction energy, depending on the first gradient of displacement. The energy term that we have now presented does not complete the discussion of the interaction between the fibers. In fact, we are completely neglecting the interaction between the fibers of the same family (at this level, nothing prevents the fibers from interpenetrating each other when we deform the structure). We will then have to introduce a new term which takes this kind of interaction into account, and our simplifying assumption is to write it using only the two placement fields χ_α. Contact interaction between fibers of the same family We want now to account for the interactions between fibers of the same family. As is well explained in Fig. 5, depending on the direction of the relative motion between two parallel fibers, we expect two possible interactions: frictional interaction (Fig. 5a) and contact interaction (Fig. 5b). The friction interaction can be mathematically modeled by exploiting a Rayleigh potential, and we will deal with it at the end of this Section. The contact interaction can be dealt with as described in the following.
Without loss of generality, consider now just one of the two families of fibers (as shown in Fig. 6); in what follows we will drop the family index α in the placement function. In the contact energy term, we have to introduce the micro-level constraint of impenetrability between two fibers of the same family. Let us consider two adjacent fibers, for example in the family oriented along D_1 in the reference configuration. Referring to Fig. 6, we calculate the change in distance between the fibers of the same family. In the deformed configuration, once the point χ(x̄_1, x̄_2) has been chosen, the distance between two adjacent fibers can be defined as the minimum over x_1 of the squared norm ‖χ(x_1, x̄_2 + ε) − χ(x̄_1, x̄_2)‖². The problem consists in solving the corresponding minimization equation, Eq. 7. Having set v_1 = χ(x_1, x̄_2 + ε) − χ(x̄_1, x̄_2), we can rewrite Eq. 7 as Eq. 8 or, using the definition of F, in an equivalent form. To solve this equation, we must expand the function χ(x_1, x̄_2 + ε). Finally, we can rewrite Eq. 7 accordingly. Now, by considering that x_1 − x̄_1 is of the same order as ε and neglecting terms of order ε², we obtain an expression which, substituted into Eq. 11, yields an estimate for v_1 (recalling the family index α). A similar discussion can be carried out for the other family of fibers, defining v_2 in the same way. From a phenomenological point of view, we expect the contact interaction energy term to behave as in Fig. 7, where δ is the interdistance between the centerlines of two adjacent fibers (see Fig. 6) and εδ_0 is the contact interdistance between two fibers. This interdistance is a parameter that has to be set depending on the geometrical features of the considered pantographic structure, and it can simply be computed as the thickness of the fiber. A mathematical expression of the behavior in Fig. 7 can be obtained by considering a piecewise continuous function, and the energy terms accounting for the contact of fibers can then be written in terms of a generic monotonically increasing function f(u_α) of u_α. Complete form of the strain energy We can now formally write the complete strain energy of the pantographic structure. It is composed of Eq. 1 and of the new terms we wrote to take into account the effects of interaction between the fibers: (i) of different families (sliding mechanism) and (ii) of the same family (contact). Finally, we want to deal with dissipation. In fact, as pictorially explained in Fig. 5a, one of the possible relative motions which can arise between two parallel fibers consists in a parallel shifting. This shifting motion produces friction, and it can be modeled by using a Rayleigh dissipation potential [78]. The discussion of how we can include dissipation in the modeling is presented in the next Section. Introducing dissipation: the Hamilton-Rayleigh Principle As should be clear by now, the approach to Mechanics employed in this paper is variational. The origins of this formulation can be found in Lagrange [79,80] and Piola [12,13], and it has been extended to continuum mechanics by Paul Germain and others (we have previously discussed this point). A big question, which is still debated, concerns the possibility of including dissipation in the variational framework. A possibility is to employ the so-called Hamilton-Rayleigh Principle [48]. It can be summarized as follows: i. we separate the problem into two parts, one conservative and the other dissipative, and we describe only the conservative part using an action functional; ii.
we calculate the first variation of the action functional; iii. we add suitable linear functionals to model dissipative phenomena (Rayleigh dissipation functionals); iv. the Hamilton-Rayleigh Principle can be formulated by summing the Lagrangian and the dissipative virtual works and postulating that this sum vanishes for every admissible variation of the motion. This is an energy balance; the share of energy that is not conserved is indeed dissipated. Before discussing the mathematical formulation of the above procedure, we want to remark that the real nature of dissipative systems is still debated (see, for example, [81]). In fact, it has been conjectured that a dissipative system can always be considered a system that approximates or simplifies a conservative one. This conjecture is supported by the fact that one can obtain a non-Lagrangian system from a Lagrangian one in at least two ways: (i) neglecting some terms in the Lagrangian action functional and therefore obtaining an approximation that is not Lagrangian; (ii) considering that a non-Lagrangian system is in effect a subsystem of a Lagrangian system with many more degrees of freedom. In the latter case, the dissipative phenomena would be only apparent: in fact, the energy which is "lost" by dissipation would simply transit to the hidden degrees of freedom and could even return to the visible ones, but only after the Poincaré time, which can be very long. In the following, we give the main points of the variational approach to dissipation, and we eventually apply these concepts to the case studied in the present work in order to include dissipation in the study of pantographic structures. Consider a Lagrangian action functional in which φ_α(X_A) is any set of n tensor fields on R^m (α = 1, ..., n and A = 1, ..., m); T ⊂ R^m; dV is the Lebesgue measure; and the coordinates X_A are used to refer to the generic point X in T. The least action principle states that a real motion of the considered mechanical system minimizes the Lagrangian action functional of Eq. 19. Moreover, such a motion has to be searched for among the motions for which the first variation of the action vanishes. To search for an extremal of the action functional, it is necessary to compute its first variation. This computation leads to Eq. 20, in which a certain quantity is generally identified as the virtual work, so that the stationarity condition of the action functional can be written accordingly. The main result of this principle is a system of differential equations (and some boundary conditions) called the Euler-Lagrange equations (here in a multi-dimensional space of dimension m, which can be 4 for space-time), where ∂T \ ∂_d T is the difference between ∂T and ∂_d T (∂_d T ⊂ ∂T is the part of the boundary of T where the functions constituting the action functional have assigned values) and N_A is the outward unit normal to ∂T \ ∂_d T. We remark that the least action principle, when formulated for action functionals admitting first differentials, can be considered as a particular form of the principle of virtual work (see [48,78] for all the details). Moreover, it can be shown that the principle of least action implies the principle of virtual work.
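The displayed equations of this subsection (the action of Eq. 19 and the Euler-Lagrange system) did not survive extraction. A hedged LaTeX sketch of the standard result being described, for a first-gradient Lagrangian density with summation over repeated indices, is the following; the form actually used in the source may differ (for instance, if higher gradients appear in the Lagrangian), so this should be read only as the textbook first-gradient case.

\mathcal{A}[\varphi] = \int_{T} L\bigl(\varphi^{\alpha}, \varphi^{\alpha}_{,A}, X\bigr)\,\mathrm{d}V,
\qquad
\delta\mathcal{A} = 0 \;\Longrightarrow\;
\frac{\partial L}{\partial \varphi^{\alpha}}
- \frac{\partial}{\partial X^{A}}\!\left(\frac{\partial L}{\partial \varphi^{\alpha}_{,A}}\right) = 0 \ \ \text{in } T,
\qquad
\frac{\partial L}{\partial \varphi^{\alpha}_{,A}}\, N_{A} = 0 \ \ \text{on } \partial T \setminus \partial_{d} T .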
The principle of virtual work As recalled in the previous subsection, the principle of virtual work states that the motion of a system is characterized by imposing the condition in Eq. 25. Following Lagrange and Piola, and being inspired by some particular cases, we assume in general that the virtual work functional can be separated into three distinct functionals: inertial work (W_iner(φ_α; t)); internal work (W_int(φ_α; t)); external work (W_ext(φ_α; t)). In the case of Lagrangian systems, i.e., when the expression of the virtual work derives from the action functional expressed in terms of a Lagrangian density L, identifications of the virtual work functionals in terms of the Lagrangian density can be found (see [48] for more details). Using the above introduced inertial, external and internal virtual work functionals, the principle of virtual work stated in Eq. 25 can be rewritten accordingly. Hamilton-Rayleigh principle The main point of this approach consists in reformulating the Principle of Virtual Work in terms of an Action functional and a Rayleigh dissipation functional. The Rayleigh dissipation functional R is defined as a linear functional on the set of velocities, not on the set of motions as the action A is. The Rayleigh dissipation functional is a map that associates, at every instant t, a positive real number with the ordered set of fields φ_α, ∂φ_α/∂X_A, ∂φ_α/∂t, ..., where X_A are Lagrangian coordinates. A general formulation of the Hamilton-Rayleigh principle can then be written by reformulating the principle of virtual work of Eq. 25. So the principle of virtual work, reformulated following Hamilton-Rayleigh in the case of the pantographic metamaterial, has the form given in Eq. 29, in which the first variation δW with respect to χ_α is balanced by the corresponding term ∂R/∂∇χ_α contracted with δχ_α. As the Rayleigh dissipation functional is defined on the set of velocities, in order to compute it we have to consider the velocities of the fibers in the family direction. From a mathematical point of view, the resulting functional involves a suitably chosen coefficient μ_α, which has to be calibrated on the physical system. Conclusion We have proposed an extension to the strain energy for the pantographic metamaterial previously introduced in [3]. The constitutive micro-structural elements have been studied in detail in order to fully understand their contribution to the global macroscopic deformation behavior of the structure. Specifically, we have examined the mechanics of the pivots (as already presented in [76,77]) and the interactions between the constituent fibers. In fact, in the available literature on pantographic structures the fibers of the two families have always been considered as non-interacting. We have shown, instead, that the fibers interact with each other in at least three different ways: (i) indirectly, through the microstructural connections (the pivots), which could allow for relative sliding between the two fiber families; (ii) directly, as the fibers of one family can touch each other and (iii) can scroll against one another, introducing dissipation. From a mathematical point of view, these effects have been modeled firstly by introducing two placement fields for the two fiber families and adding a coupling term to the strain energy and, secondly, by adding two further terms taking into account the interdistance between parallel fibers and a Rayleigh dissipation potential (for modeling the friction). The present work proposes, as said, a modification to the continuum model of pantographic structures. A further study must be carried out to provide verifications.
More precisely, a first step will consist in implementing the new energy terms in the numerical codes available in the study of such structures. Preliminary results for the sliding mechanisms have been obtained in [77] using the continuum model and in [82,83] employing a "mesoscopic" description in which the pantographic structure is modeled as an assembly of Euler-Bernoulli nonlinear beams. For the complexity of the topic, it will be necessary to rely on powerful numerical tools. Many examples can be found in the literature [84][85][86][87][88][89][90][91][92][93][94][95][96], and some numerical strategies have been already successfully employed in the case of pantographic structures [5,[97][98][99][100][101][102][103][104]. A second stage will concern the experimental validation. Many experiments have been performed for testing the pantographic metamaterial [105][106][107][108][109][110][111]. Clearly, the model validation will be the more reliable the more accurate the experimental data and their analysis will be. A recent data analysis procedure that is very promising for its accuracy and benefits is Digital Image Correlation. Through this technique (whose basic skills can be found in [112][113][114]), it is possible to obtain an estimate of the displacement and deformation fields. Moreover, this technique can be applied in the calibration of the constitutive parameters of the model to be tested. Some results already available in the field of pantographic structures can be found in [101,115]. From the interaction between experiments and theory, mediated by numerical simulations, it is possible to design a pantographic structure with optimal mechanical properties. This has been performed in a preliminary way in [116].
8,083.6
2021-05-10T00:00:00.000
[ "Physics" ]
Rapid Detection of the Strawberry Foliar Nematode Aphelenchoides fragariae Using Recombinase Polymerase Amplification Assay with Lateral Flow Dipsticks Rapid and reliable diagnostic methods for plant-parasitic nematodes are critical for facilitating the selection of effective control measures. A diagnostic recombinase polymerase amplification (RPA) assay for Aphelenchoides fragariae using a TwistAmp® Basic Kit (TwistDx, Cambridge, UK) and AmplifyRP® Acceler8® Discovery Kit (Agdia, Elkhart, IN, USA) combined with lateral flow dipsticks (LF) has been developed. In this study, a LF-RPA assay was designed that targets the ITS rRNA gene of A. fragariae. This assay enables the specific detection of A. fragariae from crude nematode extracts without a DNA extraction step, and from DNA extracts of plant tissues infected with this nematode species. The LF-RPA assay showed reliable detection within 18–25 min with a sensitivity of 0.03 nematode per reaction tube for crude nematode extracts or 0.3 nematode per reaction tube using plant DNA extracts from 0.1 g of fresh leaves. The LF-RPA assay was developed and validated with a wide range of nematode and plant samples. Aphelenchoides fragariae was identified from seed samples in California. The LF-RPA assay has great potential for nematode diagnostics in the laboratory with minimal available equipment. Introduction The strawberry foliar nematode Aphelenchoides fragariae is an important pest of strawberry plants; it causes 'strawberry crimp' disease [1].Aphelenchoides fragariae was described by Ritzema Bos [2] from specimens extracted from strawberry plants sent to him from England.The neotype of this species proposed and described by Allen [3] came from strawberries in Escalon, CA, USA.The nematode attacks above-ground parts of the plant and causes malformations, including twisting of the shoots and puckered undersized leaves with crinkled edges, reddened petioles and discolored areas that have hard, rough surfaces.The nematode damage symptoms are easily confused with symptoms of powdery mildew or infections with plant pathogenic bacteria.Aphelenchoides fragariae can severely impact the yield of strawberries, as heavily infected plants do not grow normally or produce fruit [4,5].Over 250 plants in 47 families, including ornamental ferns and other ornamental plants, are also considered to be hosts of A. fragariae [3][4][5][6][7][8][9][10][11][12][13].In California, A. fragariae has been recorded on 27 plant species [14].This nematode is widely distributed in the USA, Canada, Japan and Europe and currently reported in 37 countries [7,15].Interestingly, the strawberry foliar nematode may be endo-or ectoparasitic, but it can also be mycetophagous [16], attributes that enhance its survival in the absence of a host crop and which limit and complicate management options. The two nematode species most associated with damage in California strawberries are the foliar nematode A. fragariae and the northern root-knot nematode Meloidogyne hapla.Strawberries are an important crop in the United States with about 90% of their national production occurring in California.Strawberries are California's third highest grossing crop, valued at USD 3.02 billion in 2021.Strawberry production in California occurs primarily along 500 miles of coastal regions between San Diego in the south to Monterey Bay in the central coastal region [17].Besides the survival of A. 
fragariae in the mycetophagous state, the western sword-fern Polystichum munitum, native to western North America from Alaska to Mexico, is often heavily infected by A. fragariae in California coastal forests. The high humidity of coastal areas provides very favorable conditions for a rapid increase in the foliar nematode population and its dispersal from infected natural areas to strawberry plantations. The strawberry foliar nematode is the subject of a regulatory program in California. Strawberry nursery stock should be inspected and tested for nematodes through the California Strawberry Registration and Certification Program run by the California Department of Food and Agriculture and the Foundation Plant Services of the University of California, Davis [18]. The precise and rapid detection of regulated nematodes is the first and most essential step in the regulatory program. The morphological identification of A. fragariae is complex and requires the microscopic examination of adult nematodes. This species can be distinguished from all other Aphelenchoides species by its slender body, lateral field with two incisures and tail terminus with a single mucro [15,19]. Distinguishing between foliar nematodes and fungal-feeding Aphelenchoides species based on morphological characteristics is quite problematic. Only molecular methods can provide reliable and rapid diagnostic tools for their differentiation. Sequences of the nuclear ribosomal genes 18S rRNA, ITS rRNA and D2-D3 of 28S rRNA clearly differentiate A. fragariae from other nematodes [20][21][22][23][24][25][26][27]. Ibrahim et al. [28] were the first to provide PCR-ITS-RFLP profiles for A. fragariae and other Aphelenchoides spp. McCuiston et al. [21] were the first to develop conventional PCR with species-specific primers designed on the ITS rRNA gene polymorphism for nematode detection in plant materials. Rybarczyk-Mydłowska et al. [22] designed a real-time PCR SYBR Green assay using species-specific primers, based on 18S rRNA gene polymorphism, for the quantitative detection of A. fragariae and other Aphelenchoides spp. Although these PCR assays are very reliable, new DNA amplification techniques may provide easier and more rapid nematode detection. Recombinase polymerase amplification (RPA) is a relatively new isothermal in vitro nucleic acid amplification technique. It has been adopted as a novel molecular technology for simple, robust, rapid, reliable and low-resource diagnostics for different organisms [29]. RPA has several advantages over PCR-based methods for plant-parasitic nematode detection: (i) it does not require thermal cycling and can be used in areas with minimal laboratory infrastructure and run by personnel with minimal technical experience; (ii) it is more sensitive than PCR; (iii) sample processing does not require DNA extraction; (iv) amplicons may be detected, at the endpoint or in real time, within 8 to 30 min. RPA assays have shown high sensitivity and specificity for detecting various agriculturally important plant-parasitic nematodes, including Meloidogyne hapla [30], but have not previously been developed for diagnostics of A. fragariae. In this study, an LF-RPA assay was developed for the detection of A. fragariae from plant and nematode DNA samples and from crude nematode extracts. Species-specific primers and a probe were designed based on the polymorphism of the ITS ribosomal RNA gene sequences. Nematode Samples Twenty-seven isolates of A.
fragariae were used to develop and validate the LF-RPA diagnostic assay. Nematodes of this species were collected from various plants in the USA (California, Connecticut, North Carolina, Washington), Russia, Germany and New Zealand (Figure 1; Table 1). Aphelenchoides fragariae was identified in rice seeds from California, and A. smolae was found in a strawberry sample from OR, USA. RPA Primers and Probe Design Several ITS rRNA gene sequences of A. fragariae and other Aphelenchoides species were downloaded from GenBank and aligned with ClustalX. Regions with high sequence dissimilarity between A. fragariae and other Aphelenchoides spp. were assessed, and four species-specific A. fragariae candidate primers and one probe were manually designed. The Blastn search of these species-specific candidate primer sequences and probe sequences showed high identity (100% similarity) only with the ITS rRNA fragments of A. fragariae deposited in GenBank. RPA Detection Four primer combinations were screened for the best performance under the same RPA conditions. The species-specific forward AfragF2-ITS and reverse AfragR1-ITS primers were found to be optimal with a clearly visible band. This primer set reliably and specifically amplified the target gene fragment on a gel, approximately 194 bp in length (Figure 2). The final sequences of the primers and probe used for the assays are listed in Table 2 and are indicated in the ITS rRNA gene alignment in Figure 3. Sometimes, non-specific weak bands of other sizes with this primer set were observed in experiments with other nematode samples or a negative control. LF-RPA Assay The workflows of the LF-RPA detection assay for the strawberry foliar nematode Aphelenchoides fragariae are given in Figure 4. Specificity Testing The RPA assay was tested for specificity using DNA extracted from eight populations of A.
fragariae, six Aphelenchoides spp., six species belonging to other genera from the order Aphelenchida and an unidentified anguinid nematode parasitizing fern, P. munitum (Table 1). The RPA results showed high specificity to A. fragariae only, and no positive reactions were observed against any other nematodes. Positive test lines on the LF strips were observed for all A. fragariae samples, whereas the samples with other nematode species showed only a control line (Figure 5A). Sensitivity Testing The sensitivity assay evaluated the specimen number detection limit for a crude nematode extract and for DNA extracted from strawberry leaves and nematodes. Two-fold serial dilutions of crude nematode extract were prepared with a range between 0.5 and 0.001 nematode per reaction tube. The reliable detection level of A. fragariae was estimated to be 0.03 nematode per reaction tube, although weak test bands were also visible in some replicates at lower dilutions (Figure 5B). DNA extracted from 0.1 g of healthy strawberry leaves with 1, 5, 10, 15 and 20 nematodes was used in other sensitivity studies. RPA assays reliably detected this nematode species with plant DNA extracted from 0.1 g of healthy strawberry leaves containing 5 or more A. fragariae specimens (Figure 6A). Thus, reliable sensitivity was estimated at 0.3 nematode per reaction tube using plant DNA extracts. No positive results were obtained from the testing of crude plant extracts without target nematodes. Testing of Plant Herbarium Materials Aphelenchoides fragariae detection was confirmed using DNA extracts obtained from 20 dried leaf samples previously identified as infected by this nematode. Nematode-infected plants were collected and identified by Dr. D.
Sturhan in Germany and New Zealand and preserved as herbarium specimens (Table 1). Positive test lines on the LF strips were observed for all DNA samples extracted from infected plant leaves only, whereas DNA samples without nematodes showed only a control line (Figure 6B). Testing of Field Samples Detection of the strawberry foliar nematode was confirmed in all strawberry samples with leaves and stolons to which fern leaves infected with A. fragariae were added. All extracts from the strawberry samples using Baermann funnels contained more than 30 specimens of non-target nematodes (cephalobids, rhabditids and tylenchids), and strawberry samples with target nematodes contained more than 5 moving A. fragariae specimens, which were visible under a binocular microscope. The samples were used to prepare crude nematode extracts. All samples containing A. fragariae showed strong positive test lines on the LF strips in RPA assays, whereas extracts from strawberry samples containing only non-target nematodes gave only a control line on the LF strips (Figure 7). No false negative or false positive reactions were observed in this experiment. Discussion The LF-RPA assay developed and tested in this study is a simple, fast and sensitive method for the detection of the strawberry foliar nematode, A.
fragariae. The assay allows for the detection of this species from crude nematode extracts, nematode DNA and DNA extracts from infected plant tissues. The LF-RPA assay has some important advantages over our previously developed PCR method of A. fragariae detection [21], the first being that it uses crude nematode extract for the analysis instead of the DNA extracts required for PCR assays. The second advantage is that results are available within 25 min for RPA vs. more than 3.0 h for the PCR assay, including DNA extraction, PCR and electrophoresis. The present A. fragariae diagnostic assay and our recently published Meloidogyne hapla diagnostic assay [30] can be used to detect these plant-parasitic nematodes in support of the strawberry certification program. These LF-RPA nematode diagnostic assays could also be performed without any special equipment. The LF-RPA assay developed in this study is for the detection of A. fragariae only. The testing did not reveal any false positive results with other nematode samples, including samples with an unidentified anguinid nematode parasitizing the fern P. munitum. This putative new anguinid species, found in rainforests in Washington state, causes necrotic symptoms on leaves similar to those induced by the strawberry foliar nematode. Diagnostic specificity of an assay facilitates the detection of a target organism even in the presence of non-target species that are potentially cross-reactive. The selection of appropriate species-specific primer and probe sequences that match the genomic region of the target species is critical for assay design. For example, recent extensive testing of primers and a probe proposed by Wang et al. [31] for an RPA diagnostic assay for Globodera rostochiensis with a wider range of Globodera and other cyst nematode species showed that the assay is not specific. The results of this testing showed that the proposed ITS rRNA gene putatively specific primers and probe gave positive reactions not only with G. rostochiensis, but also with other Globodera species and representatives of the genus Punctodera parasitising grasses [32]. It is known that complementarity between primers and templates is often crucial for DNA amplification. It is likely that lower reaction temperatures during RPA might also lead to less specific DNA hybridization, compromise primer specificity, and have a significant effect on the occurrence of false positive results despite mismatches [32]. In the present assay, the A. fragariae-specific primers have many mismatch differences with non-target Aphelenchoides, and testing did not reveal any false positive reactions with other non-target nematodes. However, the in silico and laboratory tests did not include two species, A. blastophthorus and A. saprophilus, that have close phylogenetic relationships with A. fragariae. Unfortunately, the ITS rRNA gene sequences of these two species are not currently available in GenBank, and their DNA was also not available for the present study. Further testing with other Aphelenchoides species is needed for confirmation of the high specificity of the described LF-RPA assay. Several Aphelenchoides species are known to parasitize or to be associated with strawberry plants: A. besseyi, A. bicaudatus, A. blastophthorus, A. fragariae, A. pseudobesseyi, A. pseudogoodeyi, A. ritzemabosi and A. rutgersi [33][34][35][36][37][38][39]. In this study, A.
smolae was reported to be a nematode species associated with strawberry for the first time. This species was recently isolated and described from medium soil and tissues of Lilium orientalis bulbs imported into China from the Netherlands [40]. In the results of our study, we also identified A. fragariae from the fern Polystichum munitum and from rice seeds. This is the first molecular identification of this nematode in this fern species in California. Previously, A. fragariae on P. munitum was found in several western states [41] and in California. To the best of our knowledge, it is also the first report of A. fragariae from rice. Presently, only A. oryzae and A. pseudogoodeyi, belonging to the A. besseyi species complex, are known to be parasites of rice [37,39]. In California, nematodes from the A. besseyi species complex were detected in a fungal culture of Sclerotium oryzae, which caused stem rot of rice in 1963. The fungus was collected from a rice field in Butte County that was used by a research facility that exchanged seeds with areas in the southeastern USA. In the last 20 years, there have been very occasional Aphelenchoides detections in Butte, Colusa, Sutter, Yolo and Yuba counties during phytosanitary inspections of rice for export [42]. The finding of A. fragariae in rice seeds will alert plant pathologists and nematologists to the necessity for further surveys of this pest in rice fields in California. The Aphelenchoides fragariae RPA-LFA assay, together with another novel A. besseyi species complex assay that combines the RPA and CRISPR/Cas12a methods [43], could be used in the rapid diagnostics of these Aphelenchoides pests in plant and soil samples. Nematode Samples Aphelenchoides fragariae isolates and other nematodes used in the present study were obtained from various sources (Table 1). Juvenile and adult stages of the strawberry foliar nematode were extracted from fresh leaf samples of different plants using a standard Baermann funnel method [44]. Extracted nematodes were morphologically and molecularly identified. A herbarium collection of A. fragariae-infected plant leaves collected in Germany, New Zealand and other locations during 1960-2000 was obtained from Dr. D. Sturhan (BBA, Münster, Germany), who identified the nematode morphologically. The herbarium materials were kept at room temperature until this study, and then used for plant DNA extraction. Plant materials were also provided by Dr. J.L. McCuiston and identified as nematode-infected in our previous study [21]. DNA of several other nematodes (A. besseyi, A. oryzae, A. pseudobesseyi, A. ritzemabosi, A. smolae, Aphelenchoides sp., Aphelenchus sp., Bursaphelenchus fraudulentus, B. mucronatus, B. cocophilus and Laimaphelenchus hyrcanus) was also used in assay specificity experiments (Table 1). These species were also identified by molecular methods [37][38][39].
For assay validation with field samples, twelve healthy strawberry plants were provided to the CDFA Nematology lab by California growers. Approximately 20 g of plant tissues from each strawberry sample were placed in Baermann funnels in mist chambers. One gram of infected fern leaves was added to the funnels of half of these samples (Figure 8). The water gradually filled the collection tubes and overflowed slowly enough that nematodes remained in the bottom of the tube. All samples were incubated under the mist for 2 days, and then each collection tube was carefully removed from its funnel without disturbing the contents. A large pipette was used to draw off the water carefully from each tube to avoid stirring up nematodes that had settled to the bottom, as described by Ayoub [44]. Extracts from all samples were visually inspected under a binocular microscope to reveal the presence of moving Aphelenchoides specimens and non-target nematodes. The samples were used to prepare crude nematode extracts.
Preparation of Nematode Crude Extract and DNA Extraction Nematode crude extracts, nematode DNA and DNA from plant tissues were used for the development and validation of the LF-RPA diagnostic assay. For nematode crude extracts, live nematode specimens were placed into a drop of distilled water on a glass slide and cut with a stainless-steel dental needle under a stereo microscope. Cut nematodes were transferred in water suspension into a 0.2 mL PCR tube. Extracts from 20 adult or fourth juvenile stage nematode specimens in 40 µL of water were used to make a series of two-fold sequential dilutions to test the sensitivity of the assay. Nematode DNA was extracted from several specimens. Nematodes were placed in 20 µL ddH2O on a glass slide and cut with a stainless-steel dental needle under a stereo microscope. Cut nematodes in water suspension were transferred into a 0.2 mL Eppendorf tube, and then 3 µL of proteinase K (600 µg/mL) (Promega, Madison, WI, USA) and 2 µL of 10× PCR buffer (Taq PCR Core Kit, Qiagen, Germantown, MD, USA) were added to each tube. The tubes were incubated at 65 °C (1 h) and 95 °C (15 min) consecutively. After incubation, the tubes were centrifuged and kept at −20 °C until use. DNA from infected and healthy control plant leaves was extracted using the Qiagen DNeasy Plant Mini Kit following the manufacturer's protocol. Total genomic DNA was eluted in a final volume of 30 µL of elution buffer and stored at −20 °C. Leaf tissues (0.1 g) were used for each extraction. In total, 1, 5, 10, 15 and 20 nematodes were also added into tubes with uninfected strawberry leaves (0.1 g), and then used for DNA extraction as described above. These preparations were used to test the sensitivity of the assay for the detection of nematodes in plant tissues. For validation of the assay with field samples, live nematodes in 300 mL of water obtained from each sample were added into individual tubes and homogenized with a laboratory homogenizer, Mini Bead Beater 1 (BioSpec Products, Bartlesville, OK, USA), and one glass bead (5 mm) for 30-60 s to obtain nematode extract. Nematode Molecular Identification The ITS rRNA and D2-D3 expansion segments of the 28S rRNA gene were amplified and sequenced from nematode isolates to confirm their species identity. PCR protocols were used as described by McCuiston et al.
[21]. The following primer sets were used for PCR: (i) the forward D2A (5′-ACA AGT ACC GTG AGG GAA AGT TG-3′) and the reverse D3B (5′-TCG GAA GGA ACC AGC TAC TA-3′) primers for amplification of the D2-D3 expansion segments of the 28S rRNA gene [45] and (ii) the forward TW81 (5′-GTT TCC GTA GGT GAA CCT GC-3′) and the reverse AB28 (5′-ATA TGC TTA AGT TCA GCG GGT-3′) primers for amplification of the ITS1-5.8S-ITS2 rRNA gene [46]. PCR products were purified using a QIAquick PCR Purification Kit (Qiagen, USA) and directly sequenced with the primers mentioned above or cloned using a pGEM-T Vector System II kit (Promega, Fitchburg, WI, USA), and then the clones were sequenced. Sequencing was performed by Genewiz Inc. (Berkeley, CA, USA). Obtained sequences were compared with those deposited in GenBank using a Blastn search [47]. New sequences were deposited in the GenBank database under accession numbers OR685296-OR685297 (ITS rRNA gene) and OR691588-OR691594 (28S rRNA gene). RPA Primer and Probe Design and Testing Two forward and two reverse RPA primers specific to A. fragariae were manually designed based on species sequence polymorphisms in the ITS rRNA gene. Primers were synthesized by Integrated DNA Technologies, Inc. (Redwood City, CA, USA). Primers were screened in different combinations using the TwistAmp® Basic kit (TwistDx, Cambridge, UK). Reactions were prepared according to the manufacturer's instructions. The lyophilized reaction pellets were suspended in 29.5 µL of rehydration buffer, 2.4 µL each of forward and reverse primers (10 µM) (Table 2), 1 µL of DNA template or nematode extract and 12.2 µL of distilled water. For each sample, 2.5 µL of 280 mM magnesium acetate was added to the lid of the tube and the lids were closed carefully. The tubes were inverted 10-15 times and briefly centrifuged to initiate the reactions simultaneously. Tubes were incubated at 39 °C (4 min) in a MyBlock Mini Dry Bath (Benchmark Scientific, Sayreville, NJ, USA), and then they were inverted 10-15 times, briefly centrifuged and returned to the incubator block (39 °C) for 20 min. Sample tubes were then placed in a freezer to stop the reaction. Amplification products were purified with a QIAquick PCR Purification Kit (Qiagen, Germantown, MD, USA). Five microliters of purified product were run in a 1% TAE buffered agarose gel (100 V, 60 min) and visualized with a Gel Green stain. The primer set was selected based on amplification performance. The probe was designed based on species sequence polymorphisms in the ITS rRNA gene. RPA primers and probe were synthesized at Biosearch Technologies (Novato, CA, USA).
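As a practical aside (not part of the published protocol), a small Python helper can scale the per-reaction volumes listed above to a batch of reactions; the reagent names, the 10% pipetting excess and the function itself are illustrative assumptions only.

```python
# Hypothetical helper: scale the per-reaction TwistAmp Basic volumes listed
# above to a batch of n reactions (with ~10% pipetting excess).
PER_REACTION_UL = {
    "rehydration buffer": 29.5,
    "forward primer (10 uM)": 2.4,
    "reverse primer (10 uM)": 2.4,
    "distilled water": 12.2,
    "MgOAc 280 mM (added to lid)": 2.5,
    "template (DNA or crude extract)": 1.0,
}

def master_mix(n_reactions: int, excess: float = 0.10) -> dict:
    """Return total volumes (uL) for n reactions; template and MgOAc are
    normally added per tube, but are listed here for completeness."""
    factor = n_reactions * (1.0 + excess)
    return {k: round(v * factor, 1) for k, v in PER_REACTION_UL.items()}

if __name__ == "__main__":
    for reagent, volume in master_mix(8).items():
        print(f"{reagent}: {volume} uL")
```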
LF-RPA Assay The LF-RPA assay was carried out using an AmplifyRP® Acceler8® Discovery Kit (Agdia, Elkhart, IN, USA). The reaction mixture for each RPA assay was prepared according to the manufacturer's instructions: the lyophilized reaction pellet was suspended with a mixture containing 6 µL of the rehydration buffer, 2 µL of distilled water, 0.45 µL each of forward and reverse primers (10 µM), 0.15 µL of the probe (10 µM) and 0.5 µL of magnesium acetate. One microliter of the DNA template or extract was added to a reaction tube. The reaction tubes were incubated at 39 °C in a MyBlock Mini Dry Bath (Benchmark Scientific, Edison, NJ, USA) for 20 min. For visual analysis with Milenia® Genline Hybridetect-1 strips (Milenia Biotec GmbH, Giessen, Germany), 120 µL of HybriDetect assay buffer was added to a reaction tube, and then a dipstick was placed in this mixture. Visual results were observed within 3-5 min and then photographed. The amplification product was indicated by the development of an intensively colored test line (lower) and/or a separate control line (upper) to confirm that the system worked properly. Three replicates of each variant were performed for the sensitivity and specificity experiments. Figure 3. The fragment of the ITS rRNA gene sequence alignment for A. fragariae and several Aphelenchoides species with positions of RPA primers (yellow) and probe (blue) used in the present assay. Figure 4. Workflows of the LF-RPA detection assay for the strawberry foliar nematode Aphelenchoides fragariae used in this study. Figure 6. Lateral flow recombinase polymerase amplification (LF-RPA) assay with examples of lateral flow strips. (A) Sensitivity assay with DNA extracted from samples containing strawberry. Figure 8. Baermann funnel technique in the mist system for nematode extraction. (A) Glass funnels and tubes with strawberry samples. (B) Mist extraction system in the Nematology lab, Plant Pest Diagnostic Centre of the California Department of Food and Agriculture. Table 1. Samples of Aphelenchoides fragariae and other nematodes tested in the present study. * ND-DNA extracted from nematodes; PD-DNA extracted from infected plants; Ex-crude nematode extract. Table 2. RPA primers and probe for amplification of DNA of Aphelenchoides fragariae.
7,337.4
2024-01-01T00:00:00.000
[ "Environmental Science", "Biology", "Agricultural and Food Sciences" ]
Analysis and performance evaluation of a three-phase sparse neutral point clamped converter for industrial variable speed drives This paper analyzes the operation and characterizes the performance of a three-phase three-level (3-L) Sparse Neutral Point Clamped converter (SNPCC) for industrial variable speed drives (VSDs). The operating principle of the SNPCC, which advantageously employs a lower number of power transistors than a conventional 3-L inverter, is described in detail, focusing on the AC-side differential-mode and common-mode voltage formation and on the DC-side mid-point current generation processes. The degrees of freedom in the SNPCC modulation scheme are defined and several switching sequences are investigated. Afterwards, the stresses on the active and passive components (e.g. semiconductor losses, machine phase current ripple, DC-link capacitor RMS current, etc.) are calculated by analytical and/or numerical means, enabling a straightforward performance comparison among the identified switching sequences. The most suited modulation strategy for VSD applications is then selected and a chip area sizing procedure, aimed at minimizing the total semiconductor chip size, is applied to an 800 V 7.5 kW three-phase system. The performance limits of the designed SNPCC are evaluated and finally compared to the ones of conventional 2-L and 3-L solutions, highlighting the promising cost/performance trade-off of the analyzed topology. Multilevel converters represent excellent candidates for higher voltage drives, as they employ devices with reduced voltage ratings ensuring superior overall performance [3][4][5]. Moreover, taking advantage of the increased number of output voltage levels, they reduce the high-frequency harmonic current stress on the driven machine [6]. The Sparse Neutral Point Clamped converter (SNPCC) [7] illustrated in Fig. 1 has been introduced in [8] and represents a promising alternative to traditional 3-L inverters. The main advantage of the SNPCC resides in its lower number of active devices, i.e. 10 power transistors, compared to NPC and T-type 3-L converters, which both require at least 12 power transistors. On the other hand, the simpler converter structure translates into a lower number of switching states at high modulation indices [9], leading to slightly larger switching frequency output voltage harmonics. Due to its cascaded structure, composed of a 3-L switching matrix (SM) connected to a conventional three-phase 2-L inverter, the SNPCC lends itself to hybrid implementations. In particular, the 3-L SM and the 2-L inverter can conveniently adopt different semiconductor technologies (e.g. MOSFETs and IGBTs), since the former stage should maximize the switching performance, whereas the latter should minimize the conduction losses. Moreover, the complete converter can be integrated in a single power module, minimizing the commutation loop of the 2-L inverter devices, which includes two of the 3-L SM switches [10]. The operating principle and space vector modulation of the SNPCC have been analyzed in [8,9,11,12], while several control strategies for VSD applications are presented in [7,[13][14][15][16][17]. Furthermore, the design of a hybrid GaN-Si SNPCC is reported in [10]. Nevertheless, according to the authors' best knowledge, a complete analysis of the converter component stresses and overall performance is not available in the literature, especially considering the degrees of freedom associated with the converter modulation strategy.
For instance, [9] identifies and compares several PWM pulse patterns with a different number of switching transitions; however, only the effect on the low-frequency DC-link mid-point voltage oscillation is investigated, disregarding the induced switching frequency harmonic current stress on the driven machine and the switching losses in the semiconductor devices. Moreover, the analysis of [9] is limited to symmetric pulse patterns, hence excluding asymmetric ones, which can trade a higher sampling/control complexity for improved converter performance. Accordingly, the main goals of this work are to provide a detailed analysis of the operation of the SNPCC, identify the most suitable symmetric and asymmetric modulation strategies for VSD applications and derive the analytical and/or numerical expressions describing the stresses on the major active and passive components. These expressions quantitatively support the converter design, facilitating the sizing of DC-link capacitors, semiconductor devices and heatsink, and provide an indication of the spectrum of the converter three-phase output voltage, i.e. of the high-frequency harmonic stress on the driven machine. In particular, a complete performance comparison between modulation strategies over the full converter operating region is provided, taking into account the current ripple stress on the driven machine and the semiconductor switching losses. Moreover, a conclusive semiconductor chip area investigation aims to identify the cost/performance trade-off of the SNPCC in comparison to traditional 2-L and 3-L solutions. This paper is structured as follows. The operating principle of the SNPCC is described in Sect. 2. The converter switching states are identified and both the AC-side differential-mode (DM) and common-mode (CM) voltage formation and the DC-side mid-point current generation processes are explained. In Sect. 3, several suitable modulation strategies are defined, according to a set of rules constraining the switching sequence. In Sect. 4, the major component stresses, such as the machine phase current ripple, the DC-link capacitor RMS current and the semiconductor losses, are investigated. In Sect. 5, a chip area minimization procedure is applied to three different 800 V 7.5 kW three-phase VSD systems, adopting a 2-L, a 3-L NPC and a 3-L SNPC converter, respectively. The performance limits of each topology are identified and compared, highlighting the promising cost/performance trade-off of the SNPCC. Finally, in Sect. 6, a brief summary of the main contributions of this work is provided. In the Appendix, further clarifications on the adopted analytical methods for the derivation of the machine phase current ripple and the converter switching losses are given. Operating principle The SNPCC is composed of a capacitively split DC input and a 3-L SM, i.e. a 3-L DC voltage source, feeding a three-phase 2-L inverter, as illustrated in Fig. 1. The role of the 3-L SM is to control the 2-L inverter rail-to-rail voltage v_hl in order to provide the desired three-phase output voltage in combination with the 2-L inverter. Due to the split DC-link, the 3-L SM semiconductor devices advantageously require only half the voltage rating of the 2-L inverter devices, resulting in lower on-state losses and higher switching speed. Converter states Because of its cascaded structure, the SNPCC offers a total of 2² · 2³ = 32 conduction states [9].
While 18 of these combinations effectively apply a nonzero three-phase line-to-line output voltage (denominated active states), the remaining 14 combinations force a short-circuit of the three-phase output (denominated zero states). To avoid DC-link short circuits, only the combinations reported in Fig. 2a, b and c are allowed for the 3-L SM. Additionally, for reasons that will be clarified in Sect. 3, the 2-L inverter zero states are not considered, thus the total count of zero states reduces to 6. (Fig. 2: Schematic of the 3-L switching matrix (SM) conduction states and respective SNPCC output voltage space vectors: (a) zero vector states, (b) small vector states and (c) large vector states. The complete SNPCC space vector hexagon is illustrated in (d), together with a reference voltage vector V* and its decomposition. Sector 1 and its subdivision into area I and area II are highlighted in (e) in combination with the associated zero, small and large vectors.) The converter states can be unequivocally identified by the 5 bridge-leg switching functions. By leveraging the relation between switching functions and bridge-leg voltages, the space vector representation of Fig. 2d can finally be obtained [9]. The main role of the 3-L SM is to synthesize the amplitude of the reference voltage vector V*, while the role of the 2-L inverter is to establish its direction. Three kinds of space vectors are identified and categorized according to their amplitude: zero vectors, small vectors with amplitude V_dc/3 and large vectors with amplitude 2V_dc/3. The main difference between the SNPCC and standard 3-L inverters is the absence of medium vectors [18], which translates into a lower number of active states (18 against 24) and slightly higher switching frequency output voltage distortion. For symmetry reasons, the converter operation can be completely analyzed inside a 60°-wide interval of a three-phase output period, i.e. a sector; sector 1 is illustrated in Fig. 2e. Each sector can be divided into two main areas: area I, delimited by zero and small vectors, and area II, delimited by small and large vectors. The switching functions corresponding to the space vectors of sector 1 are reported in Table 1 (bridge-leg switching functions, space vector amplitudes and converter rail currents corresponding to the space vectors of sector 1). It can be observed that small vectors are redundant, since they can all be obtained with either of the two complementary states represented in Fig. 2b, both ensuring v_hl = V_dc/2. Space vectors dwell-time calculation [9] Defining the converter modulation index M = 2V*/V_dc, reporting the reference space vector angle ϑ in a [0°, 60°] window and leveraging simple geometrical relations, the dwell-time expressions for area I and area II, as well as the modulation index boundary between area I and area II, can be derived. The dwell-time calculation in area II assumes that no small or large vector can be avoided in a switching sequence. Even though area II could be divided into sub-regions to reduce the minimum number of space vector transitions in a switching period [12], this would increase the degrees of freedom in the modulation strategy definition and considerably complicate the PWM and control processes, hence it is not considered herein.
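For illustration only (these are not the paper's own expressions, which are not reproduced in this text), a minimal Python sketch can classify a reference vector (M, ϑ) inside sector 1 into area I or area II, assuming the standard geometry implied above: small vectors of amplitude V_dc/3 on the sector edges and large vectors of 2V_dc/3. The resulting boundary is consistent with the transition interval 1/√3 ≤ M ≤ 2/3 mentioned in Sect. 4.

```python
import numpy as np

SQRT3 = np.sqrt(3.0)

def region(M: float, theta_deg: float) -> str:
    """Classify the reference vector inside sector 1 (theta in [0, 60] deg).

    Area I  : triangle spanned by the zero and small vectors
    Area II : region between the small and the large vectors
    The boundary is the chord joining the two small vectors (amplitude V_dc/3),
    i.e. M * cos(theta - 30 deg) = 1/sqrt(3), with M = 2*V_ref/V_dc.
    """
    theta = np.deg2rad(theta_deg)
    if M * np.cos(theta - np.pi / 6) <= 1.0 / SQRT3:
        return "area I"
    return "area II"

# Example: at M = 0.85 (nominal operating point) the vector lies in area II
# over the whole sector, whereas for 1/sqrt(3) <= M <= 2/3 both areas are crossed.
for M in (0.5, 0.62, 0.85):
    print(M, [region(M, th) for th in (0, 30, 60)])
```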
DC-link mid-point current generation In order to ensure a symmetric 3-L characteristic and a limitation of the blocking voltage stress on the 3-L SM switches to half of the total DC input voltage, the voltages V_pm and V_mn across the two series-connected DC-link capacitors must be balanced [19], as described in the following. The 2-L inverter rail current allows the 3-L SM rail currents to be derived, as reported in Table 1 (for sector 1). Only the small vectors affect the mid-point current i_m. In particular, the redundant vectors (V_S1,P/V_S1,N and V_S2,P/V_S2,N) have an equal and opposite effect on i_m, allowing the DC-link capacitor voltages V_pm and V_mn to be balanced. If both redundant small vectors are used, the mid-point current local average (i.e. mean value over a switching period) in sector 1 can be expressed in terms of α ∈ [0, 1], a control parameter which defines the dwell-time of the positive small vectors, i.e. V_S1,P and V_S2,P, relative to the total small vector dwell-times δ_S1 and δ_S2, respectively. In nominal operating conditions, α = 0.5 yields i_m,AVG = 0. Output voltage formation The converter three-phase output voltage derives from the superposition of both the 3-L SM and the 2-L inverter switching functions. Considering a balanced mid-point voltage V_pm = V_mn = V_dc/2, the phase-to-mid-point voltage v_xm is obtained. The DM component of v_xm defines the output voltage fundamental applied to the driven machine and the switching frequency voltage harmonics resulting in the phase current ripple (see Sect. 4), therefore it must be separated from the CM contribution. A straightforward approach considers the 3-L SM and the 2-L inverter separately. The 3-L SM equivalent circuit is illustrated in Fig. 3a, where the virtual point m' is defined for the purpose of separating the DM and CM voltage sources. Finally, the total DM and CM voltages can be derived from (14)-(17). A summary of the DM and CM output voltages in sector 1 is provided in Table 2 (2-L inverter rail-to-rail voltage and DM and CM output voltage components corresponding to the space vectors of sector 1). The DM voltage waveform v_DM,x is composed of 5 levels in area I and 6 levels in area II, while the CM waveform v_CM shows 5 levels in area I and 4 levels in area II. An example of the converter switching functions and voltage waveforms in area II is provided in Fig. 4, where M = 1 is considered. These waveforms are obtained by ordering the space vectors according to a generic switching sequence (i.e. sequence U, see Sect. 3) and translating them into a pulse pattern in the time domain. It can be visualized that the 2-L inverter switching functions (s_a, s_b and s_c) are alternately clamped either to 0 or 1 for two-thirds of the fundamental period, as in the 1/3 Modulation [20]. This behavior derives from avoiding the 2-L inverter zero states, which allows only one inverter bridge-leg to be switched in each sector. To achieve three-phase sinusoidal (in local average) output voltages, the 3-L SM operates such that the local average of v_hl follows the rectified three-phase line-to-line output voltage fundamentals. In other words, in each sector, the 3-L SM sets the desired voltage between the two clamped phases, leaving to the third inverter bridge-leg the regulation of the two remaining line-to-line output voltages [20]. The local averages of v_CM and v_xm are the same as for the conventional space vector modulation, independently of the selected modulation strategy. This is because switching sequence U and all sequences described in Sect.
3 must include every available non-zero state and equally distribute the dwell-time between redundant small space vectors, thus yielding the same three-phase output voltage local average as a conventional 2-L inverter. Differently from 2-L and other 3-L inverters, the dwell-time allocation of the zero states does not influence the CM voltage, since both V_Z1 and V_Z2 result in the same CM voltage level. The focus over a switching period in Fig. 4b provides insight into the selected switching sequence and allows the DM and CM voltage levels listed in Table 2 to be verified. To conclude, the generated three-phase sinusoidal output and mid-point current waveforms with unity power factor (cos ϕ = 1), i.e. ohmic load behavior, are illustrated in Fig. 5. The value of the machine phase inductance (acting as filtering element) is selected to achieve a 30% maximum peak-to-peak current ripple during a three-phase output period, while i_m continuously [...]. Modulation strategies The reference space vector V* (see Fig. 2d) can be synthesized in different ways, which gives rise to different modulation strategies. Both the switching sequence and the redundant small vector dwell-time allocation can be varied, yielding different results in terms of DM and CM voltage waveforms, semiconductor losses and mid-point current local average. To limit the degrees of freedom in the modulation strategy definition, only switching sequences that
• have a single switching function change per vector transition,
• have less than 10 transitions per switching period,
• begin and end with the same small vector in order to avoid additional transitions between area I and area II,
• consider all redundant small vectors to have full control of the mid-point current local average (α = 0.5 in steady state) and
• avoid the 2-L inverter zero states, since these would force additional switching transitions,
are considered throughout this work. According to these hypotheses, the converter states can be displayed in a grid arrangement as in Fig. 6, with two different grids for area I and area II, where V_Z1 and V_Z2 in the former are replaced by V_L1 and V_L2 in the latter. This representation, firstly proposed in [9], easily illustrates the admitted switching transitions for the 3-L SM (blue arrows) and the 2-L inverter (pink arrows). By means of simple geometrical considerations and aiming for the minimum number of switching transitions, the suitable switching sequences summarized in Table 3 can be identified. The equivalent switching frequencies of the 3-L SM f_sw,M and the 2-L inverter f_sw,I in Table 3 (normalized with respect to the sampling/control frequency f_s) are found by averaging the switching frequencies of the respective transistors, where f_sw,j and f_sw,k are the switching frequencies of the j-th transistor in the 3-L SM and the k-th transistor in the 2-L inverter, respectively. A distinction must be made between switching sequences with symmetric and asymmetric pulse patterns. Symmetric sequences are mirrored with respect to their center, meaning that the first half of the sequence is repeated in inverse order in the second half of the switching period. This is not the case for asymmetric pulse patterns: as a consequence, there is no fixed point in time when the instantaneous current value equals its local average, thus regular sampling cannot be adopted. Nevertheless, this issue can be easily overcome by modern microcontrollers with built-in oversampling and averaging functions. Therefore, both symmetric (C, U, S, G) and asymmetric (O, 8, B, 6, A, H, 3) sequences are considered in the present analysis. Since the switching sequence starting vector (i.e.
one of the four small vectors) can determine different switching losses and affect their distribution among the semiconductor devices, all four sequence variants are investigated. In general, it can be advantageous to adopt different variants of a sequence in consecutive sectors. In order to minimize the number of switching actions at the sector transitions, only sequences starting with V_S1,P/V_S2,P or V_S1,N/V_S2,N can be alternated, i.e. mirrored. This can reduce or eliminate uncontrollable current ripple spikes at the sector transitions, typically encountered when asymmetric pulse patterns are adopted. Although this measure does not eliminate the low-frequency harmonics in the output voltage waveform, inherent to asymmetric switching sequences, the resulting current distortion can be reduced with a properly tuned converter closed-loop control, comprising a disturbance feed-forward correction. In addition, it can be demonstrated that alternating complementary switching sequences between odd and even sectors allows symmetric switching loss characteristics with respect to the load power factor to be obtained for all modulation strategies (see Fig. 14). For the mentioned reasons, the mirroring of switching sequences is considered throughout this work, yielding results which are independent of the sequence starting vector and thus simplifying the analysis. All the identified symmetric switching sequences (i.e. C, U, S, G) are also reported in [9], which additionally considers modulation strategies that do not ensure a zero mid-point current local average and that adopt the 2-L inverter zero states in area I. For instance, sequence U is referred to as the Maximum Neutral Point Balancing (MNPB) sequence in [9] and is tested experimentally on a SNPCC prototype in [10]. Finally, for completeness, an equivalent circuit representation of all converter conduction states in area II of sector 1, ordered according to sequence U, is reported in Fig. 8. Component stresses The voltage and/or current stresses on the active and passive components have a direct impact on the converter design. Analytical and/or numerical expressions are derived in this section for all major component stresses, in relation to the adopted modulation strategy. Sinusoidal output phase currents, where ϕ is the load power factor angle, are considered. Figure 9 highlights the transition region between area I and area II in sector 1, i.e. the modulation index interval 1/√3 ≤ M ≤ 2/3 in which the reference space vector V* crosses both areas. Since separate analytical expressions of the component stresses are derived for area I and area II, a merging procedure must be performed in the transition region, in order to ensure the continuity of the results. Leveraging simple geometrical relations, the merged expression of a generic quantity X can be obtained by combining X_I and X_II, the expressions in area I and area II respectively, according to the angle reported in Fig. 9.
Machine phase current ripple The DM voltage-time area applied to the machine phases generates a switching frequency flux linkage ripple, which translates into a current ripple inversely proportional to the machine phase inductance L. This ripple is directly responsible for the high-frequency winding and iron losses in the machine itself [21], thus representing a first performance indicator of the adopted modulation strategy. The current ripple of phase x is found by integrating the high-frequency component of the DM voltage v_DM,x,hf, i.e. the DM voltage v_DM,x minus its local average. This modeling approach has been adopted and experimentally validated in [22], demonstrating a high level of accuracy. A highlight of the output current ripple waveforms is provided in Fig. 10, assuming M = 0.85. To generalize the results, a normalization factor is introduced [23] and the current ripple waveforms are expressed in normalized form. It is worth observing that some asymmetric switching sequences (e.g. 8, 6, A) yield a current ripple envelope that lacks half-wave symmetry, thus featuring an output current spectrum with even-order switching harmonics; nevertheless, these harmonics do not affect the low-frequency sinusoidal output current shape. To take into account the complete three-phase output period, the total RMS current ripple is considered as performance index. Its value can be derived by averaging the RMS current ripple contributions of all three phases over one sector. It is easily proven that I_RMS is independent of the starting vector of the switching sequence. Separate I_RMS analytical expressions can be derived for area I and area II, as described in Appendix 1. Therefore, to ensure the continuity of the results over the complete modulation index M range, the merging procedure previously described is applied. The normalized total RMS current ripples generated by the different modulation strategies are reported in Fig. 11. A best performing sequence cannot be identified over the full operating range, since I_RMS depends on M. Nevertheless, an overall satisfactory trend is offered by sequence 8, which yields the lowest I_RMS value for high modulation indices. DC-link capacitor RMS current Disregarding the switching frequency current ripple, the RMS current flowing into each DC-link capacitor in balanced conditions is not affected by the modulation strategy, since it only depends on the space vector dwell-times, as discussed in the following. Leveraging the sector periodicity, the expressions for the DC-link upper-rail current i_p and the resulting capacitor RMS current I_Cdc,RMS can be derived. Even though I_Cdc,RMS is independent of the modulation strategy, the voltage ripple on C_dc generally depends on the switching sequence itself. Nevertheless, since all the strategies considered in this work force a zero mid-point current local average, no low-frequency mid-point voltage ripple is present, making a comparison between switching sequences unnecessary in this regard. Conduction losses Silicon IGBTs and diodes are considered throughout this work. Their conduction characteristics can be approximated with a constant forward voltage drop term V_th and a differential resistance term R, as illustrated in Fig. 12a. Therefore, their conduction losses can be expressed as P_cond = V_th · I_AVG + R · I_RMS², where I_AVG and I_RMS are the average and RMS currents flowing through each device, respectively. Disregarding the switching frequency current ripple, both I_AVG and I_RMS can be analytically derived for every device.
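To make the conduction-loss model concrete, the following minimal Python sketch evaluates P_cond numerically from a sampled device current waveform under the piecewise-linear V_th + R model described above; it is an illustration, not the analytical derivation used in the paper.

```python
import numpy as np

def conduction_losses(i_device, v_th, r_diff):
    """Average conduction losses of an IGBT or diode from its sampled
    conducted current over one fundamental period (sketch only, assuming
    the V_th + R approximation described above).

    i_device : array of instantaneous device current samples (A), zero when
               the device is not conducting
    v_th     : constant forward voltage drop (V)
    r_diff   : differential resistance (Ohm)
    """
    i_avg = np.mean(i_device)
    i_rms = np.sqrt(np.mean(i_device**2))
    return v_th * i_avg + r_diff * i_rms**2  # P_cond in W
```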
This approach has been widely adopted in the literature and its accuracy has been experimentally proven, e.g. in [26]. Different analytical expressions are found depending on whether the reference voltage vector lies in area I or area II, therefore the merging of the results is applied in the transition region. Due to the complex nature of the derived expressions, these are not reported herein; nevertheless, the IGBT current stresses are illustrated in Fig. 13. It can be demonstrated that none of the expressions of I_AVG and I_RMS depends on the utilized switching sequence. Only if the 2-L inverter zero states were adopted would the conduction losses of the 3-L SM depend on the modulation strategy, since during the 2-L inverter zero states none of the 3-L SM devices would be conducting. Switching losses The converter switching losses and their distribution between the 2-L inverter and the 3-L SM strongly depend on the selected switching sequence, since transitioning from one converter state to another can result in a more or less lossy commutation depending on the switched voltage and switched current values, as shown in Appendix 2. It can be easily demonstrated that the total converter switching losses have sector periodicity. Additionally, sequences G, C, 8, 6, A and 3 show different switching performance depending on the sequence starting vector. Nevertheless, by varying the starting vector between odd and even sectors as described in Sect. 3, this difference averages out over one three-phase output period, resulting in a symmetrical switching loss characteristic with respect to the load power factor angle ϕ. To gain a quantitative insight into the switching performance of the SNPCC, the simplified energy loss model illustrated in Fig. 12b, based on a linear dependence with respect to both the switched voltage V_sw and current i_sw, is adopted. Two different coefficients are identified for the turn-on (k_on) and the turn-off (k_off) transitions, leading to the loss components E_on = k_on · V_sw · i_sw and E_off = k_off · V_sw · i_sw, where k_on includes the diode reverse-recovery loss contribution. If both transitions occur with the same V_sw and i_sw values, the total switching losses can be summarized as E_sw = k · V_sw · i_sw, where k = k_on + k_off and its value is assumed to only depend on the semiconductor voltage rating, as explained in Sect. 5. The main goal of the considered simplified loss model is to enable a straightforward comparison among switching sequences. Nevertheless, the adopted linear switching loss approximation is met for most commercial IGBT/diode pairs, as attested by the switching loss data provided by the main manufacturers [27,28]. Disregarding the switching frequency current ripple and analyzing all vector transitions in each switching sequence, it is possible to express the 3-L SM and 2-L inverter switching losses analytically in closed form, as described in Appendix 2. The results of this analysis are independent of the semiconductor chip area (see Sect. 5) and are shown in normalized form in Fig. 14 as a function of ϕ, considering the values of k_on, k_off and k reported in Table 5. The switching loss subdivision between the 3-L SM and the 2-L inverter is derived for sequence 8 for demonstration purposes and is graphically illustrated in Fig. 21. Since separate analytical expressions are found for area I and area II, the switching losses in the transition region can be calculated with the previously reported merging procedure.
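A compact numerical counterpart of the linear switching-loss model is sketched below; the bookkeeping of hard commutations is an assumption of this illustration (the paper derives the losses analytically in Appendix 2).

```python
def switching_losses(transitions, k_on, k_off, f_fund):
    """Average switching losses of one device (sketch of the linear
    E = k * V_sw * i_sw model described above).

    transitions : list of (kind, v_sw, i_sw) tuples with kind in {"on", "off"},
                  i.e. all hard commutations of the device collected over one
                  fundamental output period T = 1/f_fund
    k_on, k_off : turn-on / turn-off loss coefficients (k_on including the
                  diode reverse-recovery contribution)
    """
    e_total = sum((k_on if kind == "on" else k_off) * v_sw * abs(i_sw)
                  for kind, v_sw, i_sw in transitions)
    return e_total * f_fund  # average power in W
```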
Even though no single best solution is identified over the full operating range, sequence O proves to be the best candidate for high M values, i.e. in area II , since it avoids the most lossy switching transitions between large vectors. All sequences which include these transitions are therefore strongly favored or penalized depending on whether the SNPCC is operating in area I or area II , respectively. The switching loss maxima in correspondence of ϕ = ±π/2 are directly related to the 2-L inverter devices, since the two 30 • switching windows of each bridge-leg (see Fig. 4a) become aligned with the respective phase current positive and negative peaks, thus maximizing the averaged switched current. Converter sizing and performance evaluation The sizing and the performance evaluation of a 7.5 kW three-phase SNPCC for VSD applications are described in the following, leveraging the analysis presented in the previous sections. The converter specifications and nominal operating conditions are summarized in Table 4. Optimal modulation strategy As discussed in Sect. 4, the performance of the switching sequences is summarized by I RMS in Fig. 11 and E sw in Fig. 14. To compare them, an appropriate switching frequency f sw is selected for each sequence to equalize the switching losses in nominal operating condition. Since I RMS is inversely proportional to f sw , this process allows a single normalized performance index to be obtained, providing a clear relative comparison between modulation strategies. This comparison is performed assuming nominal operating conditions (i.e. M = 0.85, cos ϕ = 1) and the results are illustrated in Fig. 15a, where lower index values translate into better performance. Figure 15b and c report the adjusted switching frequency values of both converter stages and the switching loss distribution between the 3-L SM and the 2-L inverter, respectively. It is shown that the relation between the switching frequency of a converter stage and its switching losses is not straightforward, as the results are also affected by the switched current and voltage values (see Appendix 2). Overall, the best performing switching sequence in the considered working point is the asymmetric sequence 8. Even though sequence S shows similar performance, i.e. achieving the best results among symmetric sequences, it features wider variations of I RMS (M) (see Fig. 11) and E sw (ϕ) (see Fig. 14) with respect to sequence 8, thus leading to a larger converter oversizing if a wide range of operating points needs to be covered. Minimum semiconductor chip area Depending on the operating point of the SNPCC, the current flowing through the semiconductor devices can be unevenly distributed, as shown in Fig. 13. This, together with the different conduction and switching characteristics of IGBTs and diodes, can lead to the unbalanced loading of certain devices. Therefore, the required chip area for each semiconductor device is investigated herein. The basic concept behind the described chip area minimization procedure is to adapt the chip size of each semiconductor device based on its power losses, aiming to comply with a predefined maximum operating junction temperature. For instance, devices which are subject to higher current stresses require a larger chip size to reduce both the differential electrical resistance (lower conduction losses) and the thermal resistance (improved heat dissipation).
Therefore, to quantitatively determine the minimum required chip area, accurate semiconductor loss and thermal models need to be defined. The conduction characteristics of IGBTs and diodes are approximated as in Fig. 12. Considering R inversely proportional to the chip area A leads to the instantaneous conduction losses p cond = V th i + (R * /A) i 2 , where i is the conducted current and R * is the IGBT/diode specific differential resistance (the actual resistance multiplied by the chip area, which results in an area-independent characteristic figure, as known from unipolar power transistors [29]). By substituting in (35) the average and RMS current values derived in Sect. 4, the average conduction losses P cond are obtained. The selected switching loss model is described by (33) and (34), which assume that all losses occur inside the IGBT. The proportionality terms k on , k off and k are considered to be independent of A, as assumed in [6] and [30]. The average switching losses of a single device can be calculated numerically by identifying all hard turn-on and turn-off transitions within a three-phase output period T = 1/ f , together with the corresponding switched voltage V sw and current i sw values; N on and N off denote respectively the number of hard turn-on and turn-off commutations of the considered device within T . It is important to separately calculate the switching losses for each device, since asymmetrical switching sequences can lead to an uneven loss distribution between devices of the same bridge-leg. The junction-to-heatsink thermal resistance R th is finally modelled accounting for heat-spreading, as proposed in [31]. Each semiconductor junction temperature can thus be calculated as T j = T hs + R th P tot , where T hs = 80 • C is the heatsink temperature and P tot = P cond + P sw is the average power loss in each device. Leveraging these chip-area-dependent loss and thermal models, the algorithm illustrated in Fig. 16 is adopted for determining the minimum required chip size (Fig. 16 flow-chart: Step 0 start; Step 1 calculate the electrical and thermal resistances (R, R th ); Step 2 calculate the conduction and switching losses (P cond , P sw ); Step 3 calculate the junction temperature T j ; Step 4 verify whether T j is below the maximum limit T j,max ; Step 5 determine the minimum chip area A min ). The semiconductor loss parameters V th , R * and k provided in Table 5 are obtained by statistical fitting of the conduction and switching characteristics of the latest generation 600/1200 V Trench and Field-Stop IGBTs and fast-recovery emitter-controlled diodes from Infineon [28]. Since these parameters are temperature dependent, the maximum admitted semiconductor junction temperature T j,max = 125 • C is assumed as operating temperature from the beginning of the procedure, so that no additional iterative loop is required. A starting chip area value A 0 = 4 mm 2 is considered due to practical manufacturing and wire-bonding limitations. Then, the chip-size-dependent electrical and thermal resistances are calculated, and the average conduction and switching losses in the design operating point are derived, leveraging the current stress and switching energy expressions obtained in Sect. 4. Finally, the chip junction temperature is calculated and compared to the maximum admitted value T j,max . If the maximum temperature limit is fulfilled, the chip area value is saved and the algorithm is ended; otherwise the chip size is increased and the procedure is repeated.
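The iterative sizing procedure just described can be summarized in a few lines of code. The sketch below follows the same loop structure (grow the chip until T j ≤ T j,max); the heat-spreading thermal model, the area-independent switching loss and all numerical values are simplified placeholders rather than the paper's exact models.

```python
def junction_temperature(area_mm2, v_th, r_spec, i_avg, i_rms, p_sw,
                         r_th_of_area, t_hs=80.0):
    """Junction temperature [degC] for a given chip area: conduction losses use
    P_cond = V_th*I_avg + (R*/A)*I_rms^2, switching losses p_sw are taken as
    area-independent, and T_j = T_hs + R_th(A)*(P_cond + P_sw)."""
    p_cond = v_th * i_avg + (r_spec / area_mm2) * i_rms ** 2
    return t_hs + r_th_of_area(area_mm2) * (p_cond + p_sw)

def minimum_chip_area(v_th, r_spec, i_avg, i_rms, p_sw, r_th_of_area,
                      t_j_max=125.0, a_0=4.0, a_step=0.25, a_max=1000.0):
    """Smallest chip area [mm^2] keeping T_j below t_j_max, starting from a_0."""
    area = a_0
    while area <= a_max:
        if junction_temperature(area, v_th, r_spec, i_avg, i_rms, p_sw,
                                r_th_of_area) <= t_j_max:
            return area
        area += a_step
    raise ValueError("no feasible chip area below a_max")

# Hypothetical thermal model and device stresses (illustrative only):
r_th = lambda a: 2.0 / a ** 0.7   # K/W, simplified heat-spreading dependence
a_min = minimum_chip_area(v_th=0.9, r_spec=0.6, i_avg=10.0, i_rms=18.0,
                          p_sw=6.0, r_th_of_area=r_th)
print(f"Minimum chip area: {a_min:.2f} mm^2")
```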
The chip area minimization algorithm is run for the nominal operating point and different values of f sw , selecting the minimum required chip size for each device to operate the converter at M = 0.85 and cos ϕ = 1. By summing all IGBT and diode chip sizes, the total converter chip area is obtained. Performance comparison Following the described chip area minimization procedure, the SNPCC is compared to the most adopted converter topologies in industrial VSDs, namely the conventional 2-L converter and the 3-L NPC converter. Figure 17a and b show the trends of the total RMS current ripple I RMS and the total semiconductor chip area A S as functions of f sw , assuming conventional space vector modulation for the 2-L and the 3-L NPC converters, while considering switching sequence 8 for the SNPCC. The results obtained for the traditional 2-L and 3-L converter topologies are in good agreement with [6]. As a further confirmation of the adopted methodology, the minimum chip sizes obtained for all IGBTs result in device RMS current densities in the range of 80−120 A/mm 2 , depending on the semiconductor breakdown voltage (i.e., 600 V, 1200 V) and the switching frequency (which determines the switching losses). This is in close agreement with the typical value of around 100 A/mm 2 at nominal current for IGBTs of these voltage classes [32]. It is observed that the 3-L NPC yields both the minimum I RMS and the lowest chip area increase with f sw . Nevertheless, the SNPCC requires a lower A S at low frequencies, since it features fewer semiconductor devices. If a conventional 16 kHz operating switching frequency is assumed for the 2-L converter, the switching frequencies of the 3-L NPC and 3-L SNPC converters can be adjusted to ensure the same I RMS stress on the driven machine. This calculation process is graphically illustrated in Fig. 17 and the results are reported in Table 6. The 2-L converter shows the worst overall performance, as already expected from previous analyses [3,5,6]. The 3-L SNPCC instead requires the lowest total semiconductor chip area and offers an efficiency comparable to the one of the 3-L NPC converter. Moreover, due to its lower number of transistors, it requires 2 fewer gate driver circuits, as well as 6 fewer diodes, further reducing the total part count and overall complexity. (Table 6: Chip area and performance comparison between the 2-L, 3-L NPC and 3-L SNPC (switching sequence 8 is selected) converters, considering P = 7.5 kW, V dc = 800 V, M = 0.85 and cos ϕ = 1.) (Fig. 17: a Normalized RMS phase current ripple I RMS / I n multiplied by f sw,0 / f sw (f sw,0 = 16 kHz is a reference switching frequency purely defined for normalization purposes) and b total semiconductor chip area A S as functions of the converter switching frequency f sw for the 2-L, 3-L NPC and 3-L SNPC converters, considering P = 7.5 kW, V dc = 800 V, M = 0.85 and cos ϕ = 1. Conventional space vector modulation is assumed for the 2-L and 3-L NPC converters, while switching sequence 8 is considered for the SNPCC. A cross-over between the three A S curves is observed in (b) around f sw = 16 kHz, clearly defining the topology with the lowest chip area requirement over the complete frequency range. The design point for each converter is obtained by adjusting f sw to yield the same I RMS , assuming the 2-L converter operated at 16 kHz as reference design. This procedure is illustrated graphically, resulting in f sw = 7 kHz for the 3-L NPC converter and in f sw = 9 kHz for the 3-L SNPCC (i.e. f sw = f s , cf. Table 3).)
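The design-point selection in Fig. 17a amounts to solving I RMS (f sw ) = I RMS,2L (16 kHz) for each topology. A small sketch of this matching step, using the inverse proportionality between ripple and switching frequency with hypothetical per-topology ripple coefficients (not the paper's measured curves), is given below.

```python
# Hypothetical normalized ripple coefficients c_topo such that I_RMS(f_sw) = c_topo / f_sw.
ripple_coeff = {"2L": 1.00, "3L-NPC": 0.42, "3L-SNPC": 0.55}  # illustrative values only

f_ref = 16e3                                   # 2-L reference switching frequency [Hz]
target = ripple_coeff["2L"] / f_ref            # ripple level to be matched

for topo in ("3L-NPC", "3L-SNPC"):
    f_sw = ripple_coeff[topo] / target         # frequency giving the same I_RMS
    print(f"{topo}: f_sw = {f_sw/1e3:.1f} kHz")
```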
A complete comparison of the performance results is reported in Table 6. The SNPCC total semiconductor area is divided between the 2-L inverter (A S,I ) and the 3-L SM (A S,M ), as highlighted in the pie chart. Therefore, the SNPCC represents a promising alternative to traditional industrial VSD solutions, particularly in those applications which require a relatively low switching frequency. The results of this analysis substantiate and supplement the findings reported in [9,10]. To further enhance the performance comparison among the 3-L SNPC switching sequences, the chip area minimization procedure illustrated in Fig. 16 is carried out for all modulation strategies, considering nominal operating conditions. The same approach described in Fig. 17 is adopted, adjusting the switching frequency of each modulation strategy to achieve the same I RMS as the 2-L converter operated at 16 kHz. Once the switching frequency is selected, the minimum required chip area for each semiconductor device is identified and the converter losses are calculated. It is worth noting that this comparative evaluation is no longer independent of the converter power level and switching frequency (i.e., as opposed to the normalized comparison reported in Fig. 15), as the converter power level affects the conduction losses and the required chip size, while the selected switching frequency modifies the relative contribution of the switching losses to the total converter loss. Therefore, the results of this comparison do not have general validity but are specific to the considered converter specifications. Figure 18a, b and c show the adjusted switching frequency f sw , the required semiconductor chip area A S , and the generated semiconductor loss P semi for all modulation strategies, respectively. The figures also indicate the distribution between the 3-L SM and the 2-L inverter stages. The overall semiconductor efficiency η semi is shown in Fig. 18d. As expected from the previous analysis (see Fig. 15), it is found that switching sequence 8 achieves the best performance in terms of converter losses and efficiency, while requiring the minimum semiconductor chip area. Again, switching sequence S achieves similar results to sequence 8 but remains less attractive, as it features larger variations in terms of I RMS (M) (see Fig. 11) and E sw (ϕ) (see Fig. 14). (Fig. 18, cf. Fig. 17: a switching frequency f sw , b total semiconductor chip area A S and c total semiconductor losses P semi , divided between the 3-L SM and the 2-L inverter stages; d overall semiconductor efficiency η semi . Sequence C is not present since M > 1/ √ 3, while sequence 8 proves the best performing in terms of minimum semiconductor chip area, minimum semiconductor loss and maximum converter efficiency.) Conclusion The SNPCC has not received much attention in the literature so far; nevertheless, it appears as a promising candidate for robust and cost-sensitive industrial drive applications, e.g. fans, pumps, etc., since it is able to generate a multi-level output voltage waveform adopting fewer active devices with respect to traditional 3-L converters. This paper provides a complete investigation of the SNPCC operation, including a novel analysis of the DM and CM voltage formation process. Several modulation strategies are introduced according to a specific set of rules, including both symmetric and asymmetric pulse patterns.
For each of these strategies, a complete analytical and/or numerical evaluation of the major component stresses is carried out, including the RMS ripple of the current supplied to the driven machine, the RMS current in the DC-link capacitors and the conduction/switching losses in the semiconductor devices. Based on the calculated stresses, the best performing modulation strategy for the application at hand is selected. Finally, considering a 7.5 kW three-phase system, the minimum semiconductor chip areas required by the 2-L, 3-L NPC and 3-L SNPC converters are investigated. It is found that the SNPCC requires the least total chip area below a defined switching frequency and offers an efficiency comparable to that of the conventional 3-L NPC converter for the same harmonic current stress on the driven machine, making it an excellent candidate for cost-sensitive low-frequency industrial drive systems. Funding Open access funding provided by Politecnico di Torino within the CRUI-CARE Agreement. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Appendix 1 (Fig. 19: Instantaneous waveforms considering a switching period in sector 1 (ϑ = 15 • , cf. Fig. 2d), area II (M = 0.85) and sequence 8 (cf. Table 3). a αβ voltage waveforms and b αβ current ripple waveforms. The distinctive behavior of the current ripple for asymmetric pulse patterns is shown, i.e. the ripple value is nonzero at the beginning, middle and end of the switching period.) In (39), i α and i β are the current ripple components with respect to the αβ-axes defined in space vector theory. It is worth mentioning that symmetric pulse patterns are completely identified within half a switching period, however asymmetric sequences require the total period to be fully described. Therefore, the integration interval of (39) has been defined in the most general way. The expressions of the instantaneous i α and i β can be obtained by integration of the high-frequency DM voltage component at the converter three-phase output, resulting in a piecewise linear function. With N being the total number of transitions in the switching sequence (i.e. 6, 8 or 10), the current ripple values at each state transition are defined as i k,αβ = i k−1,αβ + (V k,αβ − V * αβ ) δ k T sw /L (40), where V k,αβ is the applied space vector, V * αβ is the reference voltage vector, δ k T sw is the state dwell-time and L is the machine phase inductance. The main terms of (40) are illustrated in Fig. 19, considering sequence 8. In order to obtain a current ripple waveform with zero average over a switching period, the starting value i 0,αβ must be equal to the negative of the ripple average, which can be derived in a first iteration considering i 0,αβ = 0, as in (42). A zero current ripple average is already ensured by symmetric pulse patterns, but it is not guaranteed for asymmetric ones.
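A brief numerical sketch of this construction is given below: the αβ ripple is integrated piecewise over the sequence states following (40), and the starting value is then shifted so that the average over the period is zero, as required for asymmetric patterns. The space vectors, dwell-times and parameter values are hypothetical placeholders.

```python
import numpy as np

def ripple_trajectory(vectors, dwell, v_ref, L, T_sw):
    """Piecewise-linear alpha-beta current ripple over one switching period:
    i_k = i_{k-1} + (V_k - V_ref) * delta_k * T_sw / L, then offset so that the
    time average of the ripple over the period is zero (cf. (41)-(42))."""
    vectors = np.asarray(vectors, dtype=float)   # shape (N, 2), applied vectors [V]
    dwell = np.asarray(dwell, dtype=float)       # shape (N,), duty ratios, sum = 1
    steps = (vectors - v_ref) * dwell[:, None] * T_sw / L
    nodes = np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])  # ripple at transitions
    # average of a piecewise-linear curve: dwell-weighted mean of segment midpoints
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    avg = (mids * dwell[:, None]).sum(axis=0)
    return nodes - avg

# Hypothetical 4-state sequence in the alpha-beta plane (illustrative values only):
v_ref = np.array([200.0, 60.0])
vectors = [[0.0, 0.0], [266.7, 0.0], [133.3, 230.9], [266.7, 0.0]]
dwell = [0.2, 0.35, 0.25, 0.2]
print(ripple_trajectory(vectors, dwell, v_ref, L=1e-3, T_sw=1/16e3))
```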
Finally, substituting (42) in (39), the total RMS current ripple over one three-phase output period is obtained by averaging i 2 RMS over a single sector (i.e. 60 • ). The presented procedure is here applied to sequence 8 for demonstration purposes, resulting in two different normalized expressions for area I and area II , respectively, where I n is defined in (25). Appendix 2 Analytical switching losses To calculate the converter switching losses, the switched voltage V sw and current i sw values at each space vector transition (i.e. switching state change) must be known. It can be demonstrated that the total switching losses of both the 2-L inverter and the 3-L SM stages depend on the sequence starting vector and have sector periodicity, therefore the following analysis focuses only on sector 1. The current direction is of primary importance in determining whether the switching transition of a bridge-leg causes a hard turn-on or turn-off. Neglecting the switching frequency current ripple and adopting the simplified linear model reported in (33), the average converter switching energy loss (46) is calculated by summing k j V sw,j i sw,j contributions over the transitions, where N is the number of transitions in the switching sequence (i.e. 6, 8 or 10) and k j depends on the semiconductor technology of the device involved in the transition (i.e. different between the 2-L inverter and the 3-L SM) and on whether the transition causes a hard turn-on or turn-off (i.e. depending on the switched current direction). Since the instantaneous switched current can be either positive or negative, (46) can be expressed as E sw = Σ j V sw,j (k j,+ I sw,j,+ + k j,− I sw,j,− ), where I sw,j,+ and I sw,j,− are respectively the sector-averaged positive and negative (changed in sign) switched currents involved in the j-th transition, while k j,+ and k j,− are the related switching loss coefficients (i.e. indicating either a turn-on or a turn-off commutation). As the switched current can assume any of the three sinusoidal output phase current values, 6 different I sw,j terms are derived, i.e. the positive and negative averages of each phase current inside sector 1. The current averaging procedure and its results are illustrated in Fig. 20; the analytical expressions are not reported for conciseness reasons. Moreover, Table 7 summarizes the switched current and voltage values involved in each space vector transition, together with the kind of hard commutation (i.e. turn-on or turn-off) and the semiconductor technology of the device being switched, i.e. k on,M /k off,M for the 3-L SM devices and k on,I /k off,I for the 2-L inverter devices. (Fig. 21: switching losses of the 3-L SM and the 2-L inverter for sequence 8 (cf. Table 3) in (a) area I and (b) area II . The loss expressions P sw,M and P sw,I are normalized by f sw k M I V dc and only depend on the k I /k M ratio, see Table 5.) Considering the switching sequence mirroring process between odd and even sectors described in Sect. 3, it is found that each converter bridge-leg transition is repeated in the same direction in the diametrically opposite sector (i.e. at ϑ + 180 • ), however the switched current value is changed in sign. Therefore, simplified expressions of the 3-L SM and 2-L inverter switching losses are derived by averaging the loss contributions of both sequence combinations, obtaining (48) and (49), where N M and N I are respectively the number of switching transitions involving the 3-L SM and the 2-L inverter, while k M = k on,M + k off,M and k I = k on,I + k off,I are the total switching loss coefficients.
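The sector-averaging step behind I sw,j,+ and I sw,j,− can be illustrated numerically: for each phase current, the positive part and the sign-changed negative part are averaged over sector 1 (ϑ ∈ [0°, 60°]) for a given load angle ϕ. The sketch below does this by simple numerical integration; the phase conventions and numerical values are illustrative assumptions.

```python
import numpy as np

def sector_averaged_switched_currents(i_peak, phi, n=2000):
    """Average positive and negative (sign-changed) values of the three phase
    currents i_x = i_peak*cos(theta - phi + {0, -120deg, +120deg}) over sector 1,
    i.e. theta in [0, 60deg]. Returns a dict {phase: (I_plus, I_minus)}."""
    theta = np.linspace(0.0, np.pi / 3, n)
    out = {}
    for name, shift in zip("abc", (0.0, -2 * np.pi / 3, 2 * np.pi / 3)):
        i = i_peak * np.cos(theta - phi + shift)
        i_plus = np.trapz(np.maximum(i, 0.0), theta) / (np.pi / 3)
        i_minus = np.trapz(np.maximum(-i, 0.0), theta) / (np.pi / 3)
        out[name] = (i_plus, i_minus)
    return out

# Illustrative evaluation at unity power factor (phi = 0), 10 A peak current:
for ph, (ip, im) in sector_averaged_switched_currents(10.0, phi=0.0).items():
    print(f"phase {ph}: I+ = {ip:.2f} A, I- = {im:.2f} A")
```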
Leveraging (48) and (49), together with the semiconductor device parameters reported in Table 5, it is possible to analytically derive the switching losses for all modulation strategies. The resulting expressions for sequence 8 are here reported for demonstration purposes, divided between the 3-L SM contribution, which is independent of the operating area, and the 2-L inverter contribution in area I and area II , respectively; for |ϕ| ≤ π/6 the latter evaluates, in normalized form, to 3 sin|ϕ|/(2π) in area I and to 9 sin|ϕ|/(2π) in area II . The obtained expressions are characterized by a 180 • periodicity with respect to the power factor angle ϕ and are graphically reported in normalized form in Fig. 21.
11,121.6
2021-06-07T00:00:00.000
[ "Engineering" ]
Towards smart scanning probe lithography: a framework accelerating nano-fabrication process with in-situ characterization via machine learning Scanning probe lithography (SPL) is a promising technology to fabricate high-resolution, customized and cost-effective features at the nanoscale. However, the quality of nano-fabrication, particularly the critical dimension, is significantly influenced by the various SPL fabrication techniques and their corresponding process parameters. Meanwhile, the identification and measurement of nano-fabrication features are very time-consuming and subjective. To tackle these challenges, we propose a novel framework for process parameter optimization and feature segmentation of SPL via machine learning (ML). Different from traditional SPL techniques that rely on manual labeling-based experimental methods, the proposed framework intelligently extracts reliable and global information for statistical analysis to fine-tune and optimize process parameters. Based on the proposed framework, we realized the processing of smaller critical dimensions through the optimization of process parameters, and performed direct-write nano-lithography on a large scale. Furthermore, data-driven feature extraction and analysis could potentially provide guidance for other characterization methods and fabrication quality optimization. Supplementary Note 1 Design of the XY compliant nano-manipulator Compliant nano-manipulators (CNMs) are well suited for applications requiring nanometric precision thanks to features such as being friction-free, maintenance-free and of compact desktop size. The desired compliant nano stages should have the following characteristics: nanometric motion quality (< 100 nm) [S1], large stroke (> 1 mm) [S1-S3], high speed [S3, S4], small parasitic rotation [S2, S3], small cross-axis coupling [S5, S6], and high linearity [S3]. It is very challenging for existing compliant XY motion stages to achieve nanometric precision, large range and the other desirable features simultaneously. Although considerable advances have been made, it is still difficult for a planar-layout, leaf-spring based motion stage to achieve both nanometric precision and macro-scale range while comprehensively accounting for parasitic rotation, cross-axis coupling and appropriate natural frequencies. A conceptual design of the XY CNM is provided to achieve nanometric precision and large range. As shown in Fig. S1a, the manipulator consists of the guiding mechanism, the redundant constraint mechanism, the decoupling mechanism and the motion stage. The dimensions of each leaf spring are labeled in Fig. S1b. Based on the design and modelling, the size parameters of the CNM are determined. The overall size of the compliant manipulator is 300 mm × 300 mm × 14 mm, which keeps the manipulator at desktop level. The geometric parameters and values are shown in Tab. S1. With the above design, we need to model the proposed compliant manipulator to determine its static and dynamic features. By design, the linearity of the load-displacement curve is usually good within the travel range, and hence the stiffness matrix model can be adopted in the static analysis [S7].
Fig. S2 shows a flexure beam with its coordinates. Forces and moments along/around the beam axes are exerted at given points, and the relationship between displacements and loads is formulated as a linear relation in which the vectors of forces/moments and of translational displacements/rotational angles are related by the stiffness matrix K; the compliance matrix is C = K −1 . The compliance matrix C j expressed in a global coordinate frame can be obtained by transforming the compliance matrix C i expressed in the local coordinate frame x o i y, where matrix P(r i ) is the transformation matrix from the local coordinate to the global coordinate and matrix R(θ i ) is the direction transformation matrix. Matrices P(r i ) and R(θ i ) are expressed in terms of the vector r i = [x i , y i , z i ] T from the local coordinate origin o i to the global coordinate origin o j and the corresponding rotation angle θ i . The compliance matrix method is used to model the CNM. The redundant constraint mechanism is a quadruple parallelogram module; the minimum unit of the module is modeled first, and then the compliance matrix of the whole module is obtained by transformation and superposition of the compliance matrices. Fig. S3a shows the minimum unit, the parallelogram module; Fig. S3b presents the right half of the whole module; and Fig. S3c displays the whole module. Based on the compliance matrix of the flexure beam, the stiffness matrix of the parallelogram module is obtained from C p1 and C p2 , the compliance matrices of its two beams. Hence the compliance matrix of the double parallelogram module can be calculated by transforming and combining the compliance matrices of the two parallelogram modules, where θ di and r di are the corresponding transformation angles and transformation vectors. Thus, the compliance matrix of the quadruple parallelogram module is obtained from C d1 and C d2 , the compliance matrices of the two double parallelogram modules. Similarly, the stiffness matrix of the guiding mechanism K g , the stiffness matrix of the parallelogram mechanism K p , and the stiffness matrix of the double parallelogram mechanism K dp can be obtained, and the compliance matrix of a quarter of the CNM follows. Since the manipulator contains four identical parts, the total stiffness matrix of the manipulator K S is assembled from the four quarters, where θ i are the corresponding transformation angles. By simplifying the elements of matrix K S , it is found that K S1,1 = K S2,2 , i.e. the axial stiffness in the X direction is the same as that in the Y direction, and the axial stiffness of the CNM follows accordingly. The CNM is simplified to a single-degree-of-freedom mass-stiffness system as shown in Eq. (17), whose natural frequency is calculated by Eq. (18), where M S is the equivalent mass of the stage. Supplementary Note 3 Finite element analysis of the CNM To validate the analytical model and the comprehensive performance of the CNM, a finite element analysis (FEA) is implemented using the software ANSYS ® . According to the static model shown in Eq. (16), the theoretical relationship between the axial force and displacement is plotted in Fig. S4. The errors between the simulated axial stiffness and the theoretical model along the X axis and Y axis are 1.17 % and 1.13 %, respectively. The simulation results further verify the validity of the linear stiffness matrix model. The dynamic features of the compliant manipulator are investigated with a modal analysis.
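A minimal numerical sketch of the lumped model behind Eqs. (17)-(18) is given below: given an axial stiffness K and an equivalent stage mass M S , the natural frequency is f n = (1/2π)·sqrt(K/M S ). The stiffness and mass values used here are hypothetical placeholders, not the paper's design values.

```python
import math

def natural_frequency(k_axial, m_equiv):
    """Natural frequency [Hz] of a single-DOF mass-stiffness model, f = sqrt(K/M)/(2*pi)."""
    return math.sqrt(k_axial / m_equiv) / (2.0 * math.pi)

# Hypothetical values (illustrative only): 30 N/mm axial stiffness, 0.2 kg equivalent mass.
k = 30e3   # N/m
m = 0.2    # kg
print(f"f_n = {natural_frequency(k, m):.1f} Hz")
```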
The results are shown in Fig. S5, where the first and second modes represent the translation of the motion stage along the X and Y axes, respectively. The first and second natural frequencies are 60.57 Hz and 60.68 Hz, respectively, which are significantly higher than those of existing leaf-spring based compliant manipulators. Compared with the natural frequency calculated by Eq. (18), the errors between the theoretical dynamic model and the FEA simulation are less than 1%. As shown in Fig. S7, the real-time control is implemented using the dSPACE-R1103 rapid prototyping system. The sampling rate applied in the servo loop is 10 kHz. A laser interferometer (attocube IDS3010) is utilized to measure the displacement difference between the two sides of the motion stage in order to calibrate the two linear optical encoders. To evaluate the effectiveness of the semantic segmentation network, we compute seven metrics for evaluation purposes. These metrics include pixel accuracy (PA), precision, average precision (AP), intersection over union (IoU), mean intersection over union (MIoU), frequency weighted intersection over union (FWIoU), and F 1 -score (F 1 ). The PA metric indicates the percentage of pixels that are classified correctly in the image. Precision provides information about the purity of positive detections with respect to the ground truth. AP is the average precision across all classes. The IoU metric measures the overlap between the predicted and target masks, i.e. the number of pixels in their intersection divided by the number of pixels in their union. The MIoU is the average IoU across all classes. FWIoU is the IoU averaged across classes, weighted by the frequency of each class. The F 1 score represents the harmonic mean of precision and recall. The definitions of these metrics are given in Eqs. (19)-(26), where k is the number of categories of the segmentation target and TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively. To assess the quality of the segmentation results for each case presented in the test set, we compute the evaluation metrics outlined in Eqs. (19)-(26) and report the results in Tab. S2. The evaluation demonstrates that the proposed method successfully achieves precise segmentation of nano-structures, and the trained model can be readily utilized to optimize the conditions and parameters of the nano-lithography process.
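As a concrete companion to these definitions, the sketch below computes PA, per-class IoU, MIoU, FWIoU and F 1 from a per-pixel confusion matrix; the class labels and array shapes are hypothetical, and the formulas simply follow the standard TP/FP/FN definitions referenced above.

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, num_classes):
    """Pixel-level metrics from integer label maps of equal shape."""
    # Confusion matrix: rows = ground-truth class, columns = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)

    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class c but belonging elsewhere
    fn = cm.sum(axis=1) - tp          # belonging to class c but predicted elsewhere

    pa = tp.sum() / cm.sum()                          # pixel accuracy
    iou = tp / np.maximum(tp + fp + fn, 1)            # per-class IoU
    miou = iou.mean()
    freq = cm.sum(axis=1) / cm.sum()                  # class frequencies
    fwiou = (freq * iou).sum()
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return {"PA": pa, "IoU": iou, "MIoU": miou, "FWIoU": fwiou, "F1": f1}

# Tiny illustrative example with 2 classes (0 = background, 1 = nano-structure):
gt = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
pr = np.array([[0, 0, 1, 0], [0, 1, 1, 1]])
print(segmentation_metrics(gt, pr, num_classes=2))
```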
We detail the traditional SPL process parameter optimization method and the proposed ML-based one to provide sufficient evidence of the benefits of the proposed framework. Fig. S8 shows the flow chart of the traditional SPL process parameter optimization method. First, the initial process parameters need to be given. In this example, we select two typical parameters, scan speed (v i ) and setpoint (s i ), for demonstration. After nano-lithography and in situ characterization, AFM images are obtained. Nano-fabrication results often exhibit variations across different locations, even when subjected to identical process parameters. We present the height distribution of two cross-sectional lines of the "Manual measurement" part in Fig. S8. It is evident that the width and depth of the same groove in nano-lithography vary at different locations. To describe the width of the nano-grooves, the parameter of full width at half maximum (FWHM) is usually adopted. The conventional approach is to select one or several cross-sectional lines for manual measurement of the FWHM of the nano-structures. However, it is very error-prone, time-consuming and usually difficult to fully characterize and measure the entire nano-structures manually. As a result, it remains challenging to optimize the nano-lithography process without an accurate and reasonable metric. With the results of the manual measurement, the process parameters are adjusted for optimization. The inaccuracy associated with manual measurement and the lack of clear direction in adjusting process parameters significantly hinder the efficiency and results of the optimization process. To tackle the above challenges, we propose a framework to accelerate the nano-fabrication process with in-situ characterization via machine learning. Fig. S9 shows the flow chart of the proposed SPL process parameter optimization method. In contrast to the conventional approach, which involves specifying two typical parameters, the proposed method employs a matrix of parameters. This facilitates rapid experimentation with multiple sets of parameters. The proposed framework allows for the automatic processing of AFM topological images using ML to extract a multitude of information. Here we show a nano-lithography result as an example to demonstrate the credible and comprehensive statistical and analytical results automatically obtained based on the framework, as shown in the "Automatic measurements for global information" part in Fig. S9. For each set of parameters, the information (e.g., FWHM and pitch) can be obtained automatically for each position in each etched area. Different from traditional SPL techniques that rely on manual labeling-based experimental methods, the proposed framework intelligently extracts reliable and global information for statistical analysis to fine-tune and optimize process parameters. The matrix of parameters is further refined to find the optimal parameters (coarse-to-fine nano-lithography process method). Regarding additional process parameters, more experiments are usually required to complete the optimization process; therefore, additional variables will complicate the optimization. The proposed ML-based framework can still accelerate multi-parameter optimization through automatic and accurate measurement of nano-fabrication results (e.g., FWHM and pitch). The proposed framework thus has significant advantages in terms of efficiency and accuracy compared with process parameter optimization based on manual measurement. The proposed framework offers essential advantages and benefits, which can be summarized as follows: ⋆ The proposed ML-based framework focuses on the extraction of nano-structures and the automated calculation of important nano-fabrication results (e.g., FWHM and pitch). Specifically, global nano-fabrication results can be automatically obtained, which provides an accurate and fast measurement method for process parameter optimization.
⋆ We also propose a coarse-to-fine nano-lithography process method for the rapid determination of optimal process parameters. The process of reducing the pitch size relies on the FWHM and pitch information automatically obtained by the ML-based framework, which speeds up the optimization compared with manual methods. In the coarse-to-fine nano-lithography process method, we conduct experiments to obtain optimized process parameters. Specifically, the tapping mode of m-SPL is chosen for the nano-lithography experiments. The driving amplitude of the probe and the scanning speed are identified as the critical factors that affect the machining results, while other conditions are kept constant. For each of the aforementioned factors, 11 values are chosen, resulting in 121 parameter pairs for the experiment, which are illustrated in Fig. S10. The area of the nano-lithography pattern is 17 × 17 µm 2 . During the coarse step, the setpoint range is set from 0.5 to 5.5 nA, increasing at an interval of 0.5 nA. The scanning speed values are selected as 0.5, 1, 5, 10, 50, 100, 300, 500, 1,000, 2,000, and 3,560 µm/s, respectively. To minimize error, nine lines are generated under each parameter pair, with each parameter set being processed within a range of 1 × 1 µm 2 and a parallel line spacing of 100 nm. After processing, we use the same probe to acquire surface topography data of the scanned sample. Fig. S11 presents the AFM topology image obtained during the coarse step of the process method. Fig. S11a showcases the topography of the sample surface after being spin-coated with PMMA before nano-lithography. Following nano-lithography and in-situ characterization, the resulting nano-structures are depicted in Fig. S11b. It can be observed that the critical dimension and depth of the nano-structures vary in a gradient with the change of the nano-lithography parameters. In this regard, we select the region of interest and fine-tune its parameters to determine the optimized process parameters; the red box in Fig. S11b marks this region of interest. In the fine step of the process method, the setpoint value ranges from 3.0 to 4.0 nA at an interval of 0.1 nA, while the scanning speed ranges from 5 to 55 µm/s at an interval of 5 µm/s. Fig. S12 illustrates the AFM topology image obtained during the fine step of the process method. Fig. S12a presents the topography of the sample surface after being spin-coated with PMMA prior to nano-lithography. Following nano-lithography and in-situ characterization, the nano-structures are obtained, as depicted in Fig. S12b. In the fine step of the process method, the gradient variation of the critical dimension and depth decreases. By utilizing this process method, the process parameters associated with the desired nano-structure can be attained. The large-area pattern has a pitch of 500 nm. The proposed SPL system supports millimeter-level stroke, thus enabling direct processing of this pattern without stitching. However, conventional AFMs typically have a scanning range of less than 100 × 100 µm 2 , and the existing approaches face challenges when directly applied to large-area fabrication and characterization. In order to showcase the SPL results in a stitched manner, the aforementioned large-area pattern is subdivided into 25 sub-patterns, each spanning an area of 200 × 200 µm 2 . These sub-patterns are sequentially processed and subsequently stitched together to form a complete large-area pattern.
Fig. S18a and Fig. S18b show the SEM images of the large-area patterns. The speed of nano-lithography is 1 mm/s. The stitchless method takes a total of 86 minutes and 43 seconds, and the stitched method takes a total of 166 minutes and 50 seconds. Compared with the stitched method, the stitchless nano-lithography result demonstrates an absence of stitching errors, and the processing time is reduced by 48%. Fig. S21 shows the SEM image of the junction of the four sub-patterns in a stitched manner, where the four corners belong to four different sub-patterns. It can be seen that there are obvious stitching errors at the junction of each sub-pattern when using the stitched manner. Fig. S22a and Fig. S22b show the partial enlarged SEM images of the above two large-area patterns processed by the different methods using the same probe. Following several hours of utilization, both patterns produced through SPL exhibit critical dimensions of approximately 20 nm. Experiments verify that SPM can support large-area and long-time etching, and the accuracy is almost unaffected. To study the effect of substrate hardness on SPL results, we further choose polished silicon wafers as substrates. The Tap300DLC probe is chosen, which is the same as the probe used in the experiments on copper substrates. The lithography speed is 0.3 µm/s. Fig. S28 shows the SPL results on the silicon substrate. The pattern is an array of parallel lines with a pitch of 500 nm. Fig. S29 shows the height information of the A-A cross-section line in Fig. S28. The critical dimension of the lines is 109.6 nm and the depth is 1.107 nm, so the aspect ratio is calculated to be 0.010. The experimental aspect ratio of the silicon substrate is only 27.8% of that of the copper substrate (0.036). Compared with the soft material (PMMA, aspect ratio: 0.373), as shown in Figs. S30-S31, the harder the substrate, the smaller the aspect ratio. We compare the SPL results with other lithography methods, as shown in Tab. S3 [S8]. Note that the processing results (resolution and speed) are not standardized; we list typical data of different nano-lithography methods in Tab. S3. It is evident that SPL exhibits substantially higher throughput compared to EBL and FIB. Furthermore, SPL achieves a smaller critical dimension than common photolithography. Additionally, SPL offers advantages in terms of environmental conditions and cost considerations.
Figure S1: A conceptual design of an XY CNM. a Top view. b Top view with labeled geometric parameters.
Figure S2: Coordinates of the flexure beam.
Figure S3: Schematic diagram and coordinates of compliant mechanisms. a Parallelogram module. b Double parallelogram module. c Quadruple parallelogram module.
Figure S8: Flow chart of the traditional SPL process parameter optimization method.
Figure S10: Lithography pattern in the coarse-to-fine process method.
Figure S12: The AFM topology images in the fine step of the process method. a Before nano-lithography. b After nano-lithography.
Figure S13: AFM probe in the SPL system.
Fig. S13 presents a cross-sectional view of the nano-lithography results for the parameter set with a setpoint of 3.3 nA and a speed of 25 µm/s. The FWHM of the structure obtained by nano-lithography is approximately 25 nm.
Figure S18: SEM images of the large-area patterns. a SEM image of the large-area pattern in a stitchless manner (1 × 1 mm 2 ). b SEM image of the large-area pattern in a stitched manner (1 × 1 mm 2 ).
Fig. S19 and Fig. S20 show the partial enlarged views of the upper left corner and the upper right corner of the large-area patterns, respectively. One can clearly see the SPL results.
Figure S21: SEM image of the junction of the four sub-patterns in a stitched manner.
Figure S22: Partial enlarged views of the large-area patterns. a SEM image of the large-area pattern in a stitchless manner. b SEM image of the large-area pattern in a stitched manner.
Figure S27: Height information of the A-A cross-section line in Fig. S26.
Figure S28: SPL results on silicon substrate.
Figure S29: Height information of the A-A cross-section line in Fig. S28.
Figure S30: SPL results on PMMA substrate.
Figure S31: Height information of the A-A cross-section line in Fig. S30.
Table S1: Geometric dimensions of the XY CNM.
Supplementary Note 2 Modelling of the CNM
Table S2: Evaluation metrics for the segmentation results of nano-structures (classes: 1 Background, 2 Nano-structures).
Supplementary Note 6 Comparison of traditional SPL process parameter optimization method and the proposed one
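Since the automated FWHM measurement is central to the workflow described in this note, a minimal sketch of extracting the FWHM of a single groove from a cross-sectional height profile is given below; the profile here is synthetic and the simple interpolation scheme is an assumption, not the paper's exact measurement routine.

```python
import numpy as np

def groove_fwhm(x, z):
    """Full width at half maximum of a single groove (a dip) in a height profile.
    x: lateral position [nm], z: height [nm]. Depth is measured from the local
    baseline (median height) down to the groove minimum."""
    baseline = np.median(z)
    i_min = np.argmin(z)
    depth = baseline - z[i_min]
    half = baseline - 0.5 * depth                 # height level at half depth
    # walk outwards from the minimum to the half-depth crossings (linear interpolation)
    def crossing(indices):
        for a, b in zip(indices[:-1], indices[1:]):
            if (z[a] - half) * (z[b] - half) <= 0:
                t = (half - z[a]) / (z[b] - z[a])
                return x[a] + t * (x[b] - x[a])
        return None
    left = crossing(np.arange(i_min, -1, -1))
    right = crossing(np.arange(i_min, len(z)))
    width = right - left if left is not None and right is not None else np.nan
    return depth, width

# Synthetic Gaussian-shaped groove with 25 nm nominal FWHM and 2 nm depth:
x = np.linspace(-100, 100, 801)
z = -2.0 * np.exp(-4 * np.log(2) * (x / 25.0) ** 2)
depth, fwhm = groove_fwhm(x, z)
print(f"depth = {depth:.2f} nm, FWHM = {fwhm:.1f} nm")
```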
4,438
2023-10-10T00:00:00.000
[ "Computer Science" ]
A Computational Method for the Restoration of Images with an Unknown, Spatially-varying Blur In this paper, we present an algorithm for the restoration of images with an unknown, spatially-varying blur. Existing computational methods for image restoration require the assumption that the blur is known and/or spatially-invariant. Our algorithm uses a combination of techniques. First, we section the image, and then treat the sections as a sequence of frames whose unknown PSFs are correlated and approximately spatially-invariant. To estimate the PSFs in each section, phase diversity is used. With the PSF estimates in hand, we then use a technique by Nagy and O'Leary for the restoration of images with a known, spatially-varying blur to restore the image globally. Test results on star cluster data are presented. Introduction The mathematical model for image formation is given by the linear operator equation d = S f + η, (1.1) where d is the blurred, noisy image, f is the unknown true image, η is additive noise, and S is the blurring operator. In the case of spatially invariant blurs, S f can be written as a convolution of the associated point spread function (PSF) s and the object f ; that is, S f = s ∗ f. However, if the blur is spatially variant, S f cannot be written as a simple convolution operation. Space variant blur can occur from distortions due to telescope optics, such as was the case for the original Hubble Space Telescope Wide-Field/Planetary Camera, which had a large amount of spatial variation because of errors in the shaping mirrors [1]. Other important examples include objects moving at different velocities [2], and turbulence outside the telescope pupil [3]. Accurately modeling the spatially variant PSF in these applications can lead to substantially better reconstructions of the image. On the other hand, allowing for a fully spatially-varying PSF results in a computationally intractable problem [4]. Thus one typically makes the assumption that s is spatially-invariant on subregions of the image. That is, that the image d can be broken up into regions {Ω n } p n=1 such that the blur in each region is spatially-invariant. Then, if we let c n be the indicator function on Ω n , i.e. c n (u, v) = 1 for (u, v) ∈ Ω n and 0 otherwise, and we define s n to be the (approximately) spatially-invariant PSF on Ω n , the PSF s will be given by s(u, v; ξ , η) = Σ n c n (ξ , η) s n (u − ξ , v − η). (1.2) Substituting (1.2) into (1.1), we obtain d = Σ n s n ∗ f n + η, (1.3) where f n (ξ , η) := c n (ξ , η) f (ξ , η). Computational methods for the restoration of images arising from (1.3) have been sought by several researchers, though only in the case that the s n 's are known. Faisal et al. [5] apply the Richardson-Lucy algorithm, and discuss a parallel implementation. Boden et al. [6] also describe a parallel implementation of the Richardson-Lucy algorithm, and consider a model that allows for smooth transitions in the PSF between regions. Nagy and O'Leary [7] use a conjugate gradient (CG) algorithm with piecewise constant and linear interpolation, and also suggest a preconditioning scheme (for both interpolation methods) that can substantially improve rates of convergence. In this paper, we present an algorithm for restoring images arising from model (1.3) when the s n 's are unknown. In our approach, the PSF in each region is estimated using the technique of phase diversity [8]. Once the PSF estimates have been obtained, we use the global restoration scheme of Nagy and O'Leary mentioned in the previous paragraph to obtain the restored image.
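To make model (1.3) concrete, the sketch below applies a piecewise-constant spatially-variant blur by masking the object with the region indicators c n and convolving each masked piece with its own PSF. The FFT-based circular convolution, the 2×2 partition and the Gaussian PSFs are illustrative assumptions rather than the paper's test setup.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized Gaussian PSF on a size x size grid (illustrative stand-in)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def fft_convolve(img, psf):
    """Circular convolution via FFT, with the PSF centered at the origin."""
    pad = np.zeros_like(img)
    s = psf.shape[0]
    pad[:s, :s] = psf
    pad = np.roll(pad, (-(s // 2), -(s // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def space_variant_blur(f, psfs, masks):
    """d = sum_n s_n * (c_n f): each region's masked object is convolved with
    that region's (approximately spatially-invariant) PSF, cf. (1.2)-(1.3)."""
    return sum(fft_convolve(f * c, s) for s, c in zip(psfs, masks))

# Illustrative 2x2 partition of a 128x128 object with four different PSF widths.
f = np.zeros((128, 128)); f[32, 32] = f[32, 96] = f[96, 32] = f[96, 96] = 1.0
masks = []
for i in (0, 1):
    for j in (0, 1):
        c = np.zeros_like(f)
        c[i*64:(i+1)*64, j*64:(j+1)*64] = 1.0
        masks.append(c)
psfs = [gaussian_psf(31, s) for s in (1.0, 2.0, 3.0, 4.0)]
d = space_variant_blur(f, psfs, masks)
print(d.shape, float(d.sum()))
```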
The paper is organized as follows. We begin in Section 2 with a description of the technique of phase diversity. Our computational method is then presented in Section 3. Results on some tests using simulated star field images taken through atmospheric turbulence are provided in Section 4, while conclusions and comments on the spatially-varying blur removal problem are given in Section 5. PSF estimation and phase diversity As discussed in, e.g., [9], with atmospheric turbulence the phase varies both with time and position in space. Adaptive optics systems use deformable mirrors to correct for these phase variations. Errors in this correction process arise from a variety of sources, e.g., errors in the measurement of phase, inability of the mirror to conform exactly to the phase shape, and lag time between phase measurement and mirror deformation. Thus, a spatially variant model can potentially produce better restorations. In this section we describe a phase diversity-based scheme to approximate the PSF associated with each section of a segmented image. For a sufficiently fine segmentation, the PSF can be assumed to be essentially spatially invariant, and thus a blind deconvolution scheme can be applied. The mathematics of this phase recovery process was first described by Gonsalves [8], and has been applied extensively for imaging through atmospheric turbulence [4]. Assuming that light emanating from the object is incoherent, the dependence of the PSF on the phase is given by s = |F(p e ıφ )| 2 , (2.4) where p denotes the pupil, or aperture, function, ı = √ −1, and F denotes the 2-D Fourier transform (2.5). The pupil function p = p(x 1 , x 2 ) is determined by the extent of the telescope's primary mirror. In phase diversity-based blind deconvolution, the k th diversity image is given by d k = f ∗ s[φ + θ k ] + η k , (2.6) where η k represents noise in the data, f is the unknown object, s is the point spread function (PSF) generated via (2.4) from the indicated phase, φ is the unknown phase function, and θ k is the k th phase diversity function. In atmospheric optics [4], the phase φ (x 1 , x 2 ) quantifies the deviation of the wave front from a reference planar wave front. This deviation is caused by variations in the index of refraction (wave speed) along light ray paths, and is strongly dependent on air temperature. Because of turbulence, the phase varies with time and position in space and is often modelled as a stochastic process. Additional changes in the phase φ can occur after the light is collected by the primary mirror, e.g., when adaptive optics are applied. This involves mechanical corrections obtained with a deformable mirror to restore φ to planarity. By placing beam splitters in the light path and modifying the phase differently in each of the resulting paths, one can obtain more independent data. The phase diversity functions θ k represent these deliberate phase modifications applied after light is collected by the primary mirror. The easiest to implement is defocus blur, modelled by a quadratic function of the pupil coordinates, where the parameters b k are determined by defocus lengths. In practice, the number of diversity images is often quite small, e.g., K = 2 in the numerical simulations to follow. In addition, one of the images, which we will denote using index k = 1, is obtained with no deliberate phase distortion, i.e., θ 1 = 0 in (2.6).
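The mapping (2.4) from pupil function and phase to PSF is easy to prototype with an FFT. The sketch below builds a circular pupil, applies a phase screen plus a quadratic defocus diversity, and returns the corresponding incoherent PSF; the grid size, aperture radius, random phase screen and defocus strength are arbitrary illustrative choices.

```python
import numpy as np

def diversity_psf(phase, pupil, defocus_b=0.0):
    """Incoherent PSF s = |F[p * exp(i*(phi + theta))]|^2, cf. (2.4), with a
    quadratic defocus diversity theta = b*(x1^2 + x2^2) in normalized pupil
    coordinates (one common way to model the defocus channel)."""
    n = pupil.shape[0]
    x = np.linspace(-1.0, 1.0, n)
    x1, x2 = np.meshgrid(x, x)
    theta = defocus_b * (x1 ** 2 + x2 ** 2)
    field = pupil * np.exp(1j * (phase + theta))
    amp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    psf = np.abs(amp) ** 2
    return psf / psf.sum()

# Illustrative setup: 64x64 grid, circular aperture, smooth random phase screen.
n = 64
x = np.linspace(-1.0, 1.0, n)
x1, x2 = np.meshgrid(x, x)
pupil = ((x1 ** 2 + x2 ** 2) <= 0.8 ** 2).astype(float)
rng = np.random.default_rng(0)
k = np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k)
lowpass = np.exp(-(kx ** 2 + ky ** 2) / (2 * 0.05 ** 2))     # keep only low frequencies
phase = 20.0 * np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * lowpass))
s_focused = diversity_psf(phase, pupil, defocus_b=0.0)       # theta_1 = 0 channel
s_defocus = diversity_psf(phase, pupil, defocus_b=3.0)       # diversity channel
print(s_focused.shape, float(s_focused.max()), float(s_defocus.max()))
```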
The minimization problem To estimate the phase φ and the object f from data (2.6), we consider the least squares fit-to-data functional J data (2.8). Here || • || denotes the standard L 2 norm. By the convolution theorem and the fact that Fourier transforms preserve the L 2 norm, one can express this in terms of Fourier transforms (2.9). Since deconvolution and phase retrieval are both ill-posed problems, any minimizer of J data is unstable with respect to noise in the data. Hence we add regularization terms to obtain the full cost functional (2.10). Here the regularization parameters γ and α are positive real numbers, and the regularization functionals J object and J phase provide stability and incorporate prior information. Because of atmospheric turbulence, variations in the refractive index, and hence the phase itself, can be modelled as a random process [4]. We apply the von Karman turbulence model, which assumes this process is second order, wide sense stationary, and isotropic with zero mean; see, e.g., [4]. It can be characterized by its power spectral density (2.11), where ω = (ω x , ω y ) represents spatial frequency. Corresponding to this stochastic model for phase, we take the phase regularization functional to be a weighted integral over spatial frequency ω, where the superscript * denotes complex conjugate. For regularization of the object, we take the "minimal information prior" (2.13). Note that the object regularization functional (2.13) is quadratic and the dependence of the fit-to-data functional (2.8) on the object f is quadratic. Moreover, the Hessian with respect to the object of the full cost functional (2.10) is symmetric and positive definite with eigenvalues bounded below by γ. By setting the gradient with respect to the object equal to zero, one obtains a linear equation whose solution yields the object at a minimizer of J full [10]. From (2.9)-(2.10) and (2.13) one obtains the Fourier representation (2.14) for the minimizing object, with auxiliary quantities defined in (2.15). Substituting (2.14) back into (2.9)-(2.10), one obtains the reduced cost functional (2.16) that we will minimize. See Appendix B of [9] for a detailed derivation. Our objective is to minimize (2.16) in order to obtain a reconstruction of the phase φ . In order to do that, we need an effective computational method. Such a method is discussed in the next section. The minimization algorithm To minimize J in (2.16) we will use, as in [11], a quasi-Newton, or secant, method known as limited memory BFGS (L-BFGS) [12]. A generic quasi-Newton algorithm with line search globalization is presented below. We denote the gradient of J at φ by grad J(φ ) and the Hessian of J at φ by Hess J(φ ). The BFGS method [12] is one of the most popular quasi-Newton techniques. Given B ν , this method generates a new Hessian approximation B ν+1 in terms of the differences in the successive approximations to the solution and its gradient, s ν = φ ν+1 − φ ν and y ν = grad J(φ ν+1 ) − grad J(φ ν ). L-BFGS is based on a recursion for the inverse of the B ν 's. Given v ∈ R n , computation of B −1 ν+1 v requires a sequence of inner products involving v and the s ν 's and y ν 's, together with the application of B −1 0 . If B 0 is symmetric positive definite (SPD) and the "curvature condition" y T ν s ν > 0 holds for each ν, then each of the B ν 's is also SPD, thereby guaranteeing that −B −1 ν grad J(φ ν ) is a descent direction. The curvature condition can be maintained by implementing the line search correctly [12]. "Limited memory" means that at most N vector pairs
{(s ν , y ν ), . . ., (s ν−N+1 , y ν−N+1 )} are stored and at most N steps of the recursion are taken, i.e., if ν ≥ N, the recursion (2.20) is applied for ν, ν − 1, . . ., ν − N, and B −1 ν−N is set equal to an SPD matrix M −1 ν . We will refer to M ν as the preconditioning matrix. In standard implementations, M ν is taken to be a multiple of the identity [12]. For our application we choose an M ν which has an operator representation, acting on an operand ψ through the power spectral density Φ defined in (2.11). Under mild assumptions, the local convergence rate for the BFGS method is superlinear [12]. In the limited memory case, this rate can become linear. Before continuing, we make the observation that the above computational approach provides estimates of both the phase φ and the object f (via (2.14)). In our application of removing spatially-varying blur, the main interest is in using this approach to obtain the PSF as a function of the phase, as given in (2.4). The object f is then reconstructed in a second stage, which we now discuss. Sectioning the image and applying a global restoration scheme For many spatially variant blurs, in small regions of the image the blur can be well approximated by a spatially invariant PSF. This property has motivated several types of sectioning methods [2,13,14,15] that partition the image, restoring each local region using its corresponding spatially invariant PSF. The results are then sewn together to obtain the restored image. To reduce blocking artifacts at the region boundaries, larger, overlapping regions are used, and the restored sections are then extracted from their centers. Trussell and Hunt [2] proposed using the Landweber iteration for the local deblurring, and suggested a complicated stopping criterion based on a combination of local and global convergence constraints. Fish, Grochmalicki and Pike [14] use a truncated singular value decomposition (TSVD) to obtain the local restorations. An alternative scheme can be derived by partitioning the image into subregions on which the blur is assumed to be spatially invariant; but, rather than deblurring the individual subregions locally and then sewing the individual results together, we first sew (interpolate) the individual PSFs, and restore the image globally. A global reconstruction avoids the problem of boundary artifacts that can occur in traditional sectioning methods [2,13,14,15]. In algebraic terms, the blurring matrix S, given in (1.1), can be written as S = Σ i Σ j D i j S i j , (3.21) where S i j is a matrix representing the spatially invariant PSF in region (i, j), and D i j is a nonnegative diagonal matrix satisfying Σ Σ D i j = I. For example, if piecewise constant interpolation is used, then the lth diagonal entry of D i j is 1 if the lth point is in region (i, j), and 0 otherwise. In principle, any partitioning scheme can be used. The most obvious approach is to partition the image into rectangular subregions, or to use concentric annular regions in the case that the PSFs vary radially from the on-axis PSF. Faisal et al. [5] use this formulation of the spatially variant PSF, apply the Richardson-Lucy algorithm with piecewise constant interpolation of the PSFs, and discuss a parallel implementation. Boden et al.
[6] also describe a parallel implementation of the Richardson-Lucy algorithm, and consider piecewise constant as well as piecewise linear interpolation. Nagy and O'Leary [7] use a conjugate gradient algorithm with piecewise constant and linear interpolation, and also suggest a preconditioning scheme (for both interpolation methods) that can substantially improve the rate of convergence. Furthermore, it is shown in [16] that by generalizing overlap-add and overlap-save convolution techniques, very efficient FFT-based algorithms can be obtained when the image is partitioned into rectangular subregions. Although other partitioning schemes can be used, the computational cost will increase. If the PSFs vary smoothly, then a rectangular partitioning should provide a good approximation of the spatially variant blur. For these reasons, we assume throughout the paper that a rectangular partitioning of the image is used. Note that, using (3.21), the image formation model can be written explicitly in terms of the regional PSFs, where s i j is the PSF for region (i, j). This model assumes the PSFs are known, so the question is how to use this for blind deconvolution. The algorithm proceeds as follows: 1. First determine a region (i, j) for which a good estimate of the PSF can be obtained. Use the L-BFGS method discussed in Section 2.2 on an extended region of (i, j) to obtain a reconstruction of the phase φ in that region. From φ an approximation of the spatially invariant PSF for that region is obtained via (2.4). A reconstruction of the image contained in that region, if desired, could be computed by applying the inverse Fourier transform to F given in (2.14). 2. Now, assuming that the overall space varying PSF varies slowly across the image, the PSFs for the regions neighboring region (i, j) should be similar to the one in region (i, j). That is, PSF s i j should be a good estimate of the PSFs s i, j+1 , s i, j−1 , s i+1, j and s i−1, j . Thus, one can use φ i j as the initial guess φ 0 in the L-BFGS iterations, obtaining reconstructions of the phase, and of the image if desired. Once again, via (2.4), a reconstruction of the PSF is obtained. 3. Execute steps 1 and 2 for each region. When done, one hopefully has a set of good PSFs, and restored regions of the image. A global image restoration algorithm, with the individual (good) PSFs, can be used to restore the blurred image (as done by Nagy and O'Leary [7]). Note that the restored images in the regions can be pieced together, and used as an initial guess for the global restoration scheme. Numerical examples In this section we describe some numerical experiments to illustrate the potential advantages of the space variant approach discussed in this paper. All tests were done on a simulated star field image, shown in Fig. 1, which was obtained from the Space Telescope Science Institute (www.stsci.edu). To simulate an image blurred by a spatially variant point spread function, we begin by generating 1024 different PSFs. The PSFs were created by generating a single pupil function and a moving phase screen, each on a 32 × 32 grid. Figure 2 shows the pupil function, two selected phase screens, and their corresponding PSFs. The blurred image, shown in Fig. 3, was then created by convolving each of the 1024 PSFs with successive 32 × 32 pixel regions of the true image. By overlapping these regions, we obtain successive 8 × 8 pixel regions of the blurred image with virtually no blocking (boundary) artifacts; see Fig. 3.
Advantages of space variant model To illustrate the potential advantages of using a space variant model, we construct certain average PSFs as follows: • By averaging all of the 1024 true PSFs, we obtain a single PSF which represents a spatially invariant approximation of the spatially variant blurring operator. • Next we section the image into 4 equally sized regions (that is, we use a 2 × 2 partitioning of the image). Note that, because of the way we constructed the blurred image, there are 1024/4 = 256 true PSFs corresponding to each region. We obtain a single approximate PSF for each region by averaging the 256 true PSFs in their respective regions. We then use these 4 average PSFs to construct an approximation of the space variant blur, as described in Section 3. • Using an analogous approach on a 4 × 4 partitioning of the image, we construct 16 PSFs to approximate the space variant blurring operator, each obtained by averaging the 1024/16 = 64 true PSFs corresponding to each region. • Finally, we use a 16 × 16 partitioning of the image. In this case 1024/256 = 4 of the true PSFs are averaged to obtain a single PSF for each region. The conjugate gradient (CG) method was used to reconstruct the image with the various approximations of the PSF as described above. Efficient implementation of the matrix-vector multiplications for the space variant cases (4 PSFs, 16 PSFs, and 64 PSFs) was done using the approach outlined in Section 3. Goodness of fit is measured using the relative error || f_k − f_true || / || f_true ||, where f_true is the (diffraction limited) true image and f_k is the (diffraction limited) solution at the kth CG iteration. A plot of the relative errors for the special case of noise free data is shown in Fig. 4. Note that when more PSFs are used, a better approximation of the space variant blur is obtained. This is confirmed by the results in Fig. 4. The computed reconstructions at the iterations corresponding to the minimum relative errors for each case are shown in Fig. 5. Similar results occur if noise is added to the blurred image; however, the ill-conditioning of the problem means that the reconstructions will be more contaminated with noise. For example, with Poisson and Gaussian noise (mean 0, standard deviation 2) added to the blurred image, the relative errors for CG are shown in Fig. 6 and the corresponding reconstructions are shown in Fig. 7. Fig. 4. Relative errors at each iteration of the conjugate gradient method, using increasingly more accurate approximations of the spatially variant blur. These results were computed using a noise free blurred image.

Practical results The results from the previous section clearly show that if we are able to get accurate approximations of the PSFs, then the space variant model will provide better reconstructions than using a space invariant model. Fig. 6. Relative errors at each iteration of the conjugate gradient method, using increasingly more accurate approximations of the spatially variant blur. These results were computed using a noisy (Poisson and Gaussian) blurred image. In this section we show results when the PSFs are reconstructed using a blind deconvolution algorithm. In particular, we use phase diversity, with one frame and two channels. The blind deconvolution (BD) algorithm we use is described in Section 2.2. Our regularization parameters were taken to be α = 1 × 10^−1 and γ = 1 × 10^−4. We remark that the problem of choosing appropriate regularization parameters is one of the most difficult issues in the numerical solution of ill-posed problems. There are techniques for estimating parameters for simple noise models; see, for example, [17,18,19]. In many cases good heuristics based on computational experience with the algorithm and the particular application can be useful. The parameters we use are similar to those reported in the literature; see [11]. In the phase estimation step, regularization is applied before computing a minimum of the least squares functional. Thus it is reasonable to seek estimates of the phase and object that yield a gradient norm near zero. We therefore stop the L-BFGS iterations once the norm of the gradient has been reduced by six orders of magnitude in all cases. Note that no noise amplification occurs provided the regularization parameters are properly chosen, and, as previously mentioned, the choice of these parameters is dependent on the data. Once the PSFs are computed from the BD algorithm, we then use the CG algorithm to reconstruct the image, as was done in the previous subsection. Figure 9 shows plots of the relative errors when no noise is added to the data. At first glance these appear to be disappointing results; 4 PSFs produce lower relative errors than a single PSF, but additional PSFs actually increase the relative error. It is important to note, however, that to obtain more PSFs, the blind deconvolution algorithm must be implemented on small regions of the image. If the small regions do not contain significant object information, then we cannot hope to obtain good approximations of the PSFs. Consequently, given that the object used in our experiment has a significant black background, it is not surprising that several such small regions occur; this explains the poor results when using 64 PSFs. On the other hand, some of the small regions may contain enough significant object information so that the blind deconvolution algorithm can reconstruct good PSFs. This motivates us to consider an approach where we refine the partitioning in each subregion only if further refinement produces good reconstructions of the PSFs. For example: • Suppose we have the partitioning shown on the left in Fig. 8, and that we have obtained reconstructed PSFs on each of the four regions. • Refine the partitioning of the image as shown in the middle of Fig. 8, and reconstruct PSFs for each of the (now smaller) subregions. • Determine whether each of these reconstructed PSFs is sufficiently accurate. - If a reconstructed PSF is not sufficiently accurate, then reject it and use a PSF from the previous step for this subregion. In this case, further refinement and reconstruction of PSFs for these regions is not needed. - If a reconstructed PSF is sufficiently accurate, then accept this PSF.
• Repartition all of the subregions corresponding to accepted PSFs, and repeat the above process. The right plot in Fig. 8 shows a possible repartitioning of selected subregions. Thus, this adaptive approach uses a "mix" of PSFs computed by the various partitionings of the image (a schematic sketch of this refinement loop is given below, following the Conclusions). To automate the process of determining when a PSF is reconstructed sufficiently accurately, an appropriate measure must be used. Such a measure will depend on the application and on the type of images being reconstructed. In order to test the potential of this adaptive scheme, we used a systematic approach for choosing a good PSF by comparing the computed PSFs with the average PSFs described in the previous subsection. Of course, in a realistic problem the average PSFs will not be available, and some other mechanism must be used to decide on the quality of the computed PSFs. In any case, our systematic approach shows that if the best computed PSFs can be found, then substantially better reconstructions can be obtained; see the bottom curve in Fig. 9. The reconstructed images are shown in Fig. 10. Similar results are obtained with noisy data. For example, with Poisson and Gaussian noise (mean 0, standard deviation 2) added to the blurred image, the relative errors using the CG iterative method are shown in Fig. 11. The corresponding reconstructions are shown in Fig. 12, where we also include arrows to indicate regions in which the space variant approach produces significantly better reconstructions (compared to the diffraction limited true image) than the space invariant (1 PSF) model. Conclusions We have presented a computational approach for solving image restoration problems in which the blur in the image is both unknown and spatially varying. The approach has three stages. The first stage involves sectioning the image into regions in which the blur is believed to be approximately spatially invariant. In the second stage, phase diversity-based blind deconvolution via the L-BFGS optimization algorithm is implemented in order to obtain an estimate of the phase in each region. From these reconstructed phases, the corresponding PSF in each region can be computed. In the final stage, with these PSFs in hand, the object can be reconstructed globally via the algorithm of Nagy and O'Leary. Our numerical experiments show, first, that using a spatially varying model when a spatially varying blur is present does indeed provide more accurate results. Secondly, we find that in regions with little object information, the phase, and hence the PSF, reconstructions can be inaccurate. This motivates a "PSF mixing" scheme, in which the object is divided into further subregions only in areas in which there is enough object information. A conclusion of particular importance that follows from our numerical experiments is that, in the presence of an unknown, spatially varying blur, our approach is much more effective than standard one-PSF phase diversity.
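As a rough illustration of the adaptive "PSF mixing" loop referred to above, the sketch below (Python; our own schematic, not the authors' implementation) refines a region only when the locally reconstructed PSF passes an acceptance test. The callables reconstruct_psf and psf_is_acceptable stand in for the phase-diversity reconstruction and the application-dependent quality measure, and the recursion depth limit is an assumption.

```python
def split_into_quadrants(region):
    """Split (r0, r1, c0, c1) pixel bounds into four equal quadrants."""
    r0, r1, c0, c1 = region
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    return [(r0, rm, c0, cm), (r0, rm, cm, c1), (rm, r1, c0, cm), (rm, r1, cm, c1)]

def refine_region(region, parent_psf, reconstruct_psf, psf_is_acceptable,
                  max_depth=2, depth=0):
    """Return a list of (region, psf) pairs covering `region`, refining adaptively.

    reconstruct_psf(region, init_psf) -> candidate PSF for that subregion
    psf_is_acceptable(psf, region)    -> bool, application-dependent quality test
    """
    if depth >= max_depth:
        return [(region, parent_psf)]
    mixed = []
    for sub in split_into_quadrants(region):
        candidate = reconstruct_psf(sub, parent_psf)
        if psf_is_acceptable(candidate, sub):
            # Good local reconstruction: accept it and try refining further.
            mixed += refine_region(sub, candidate, reconstruct_psf,
                                   psf_is_acceptable, max_depth, depth + 1)
        else:
            # Poor reconstruction (e.g. little object information): fall back
            # to the coarser-level PSF and stop refining this subregion.
            mixed.append((sub, parent_psf))
    return mixed
```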
We remark that, in principle, our space variant approach should be applicable with other blind deconvolution methods, but as evidenced from our numerical results, the quality of the reconstructed PSFs is important. Thus we would expect that additional diversity information should improve the results. Similarly, we would expect a multi-frame scheme, coupled with our space variant technique, to outperform its single frame counterpart. Fig. 3. Simulated space variant blur of star field image; a linear scale is used to display the image on the left, and a log scale is used to display the image on the right. Fig. 5. Reconstructions computed by the conjugate gradient algorithm, using accurate approximations of the PSF, and with a noise free blurred image. For comparison, we also show the true image and the true image convolved with the diffraction-limited PSF. Fig. 7. Reconstructions computed by the conjugate gradient algorithm, using accurate approximations of the PSF, with Poisson and Gaussian noise added to the blurred image. For comparison, we also show the true image and the true image convolved with the diffraction-limited PSF. Fig. 8. Example of adaptive refinement of region partitions. The left plot shows an initial partitioning of an image, the middle shows a refinement of the partitioning, and the right plot shows what might happen in an adaptive approach where only certain subregions are refined further. Fig. 9. Relative errors at each iteration of the conjugate gradient method, using increasingly more accurate approximations of the spatially variant blur. The PSFs used to approximate the space variant blur were computed using phase diversity-based blind deconvolution. These results were computed using noise free blurred image data.
6,389.6
2006-03-06T00:00:00.000
[ "Mathematics" ]
Observation of $W\gamma\gamma$ triboson production in proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector This letter reports the observation of $W(\ell\nu)\gamma\gamma$ production in proton-proton collisions. This measurement uses the full Run 2 sample of events recorded at a center-of-mass energy of $\sqrt{s} = 13$ TeV by the ATLAS detector at the LHC, corresponding to an integrated luminosity of 140 fb$^{-1}$. Events with a leptonically-decaying $W$ boson and at least two photons are considered. The background-only hypothesis is rejected with an observed and expected significance of $5.6$ standard deviations. The inclusive fiducial production cross section of $W(e\nu)\gamma\gamma$ and $W(\mu\nu)\gamma\gamma$ events is measured to be $\sigma_{\mathrm{fid}} = 13.8 \pm 1.1 (\mathrm{stat}) \substack{+2.1 \\ -2.0} (\mathrm{syst}) \pm 0.1 (\mathrm{lumi})$ fb, in agreement with the Standard Model prediction. Introduction In the Standard Model (SM) of particle physics, interactions amongst electroweak gauge bosons (, , ) are entirely determined by the non-Abelian SU(2) × U(1) structure of the electroweak sector.In particular, in proton-proton collisions, the production of a boson in association with two photons is sensitive to triple and quartic gauge boson couplings that could be modified by the presence of new physics phenomena [1][2][3].The study of this process therefore provides sensitivity to new physics that is complementary to direct searches as it can constrain new physics at energy scales that are beyond the reach of the LHC.In addition, due to the small production cross section of the final state in proton-proton collisions, it is only now becoming accessible with the data collected during Run 2 of the LHC.Therefore, it remains one of the least studied processes in the electroweak sector of the SM.The production of a boson in association with two photons is also an important background in a number of other measurements, such as the production of the SM Higgs boson in association with a boson, followed by a → decay [4]. The triboson production is studied here through final states compatible with a leptonic decay of the boson.A representative selection of leading-order (LO) Feynman diagrams, and a loop-induced SM Higgs boson Feynman diagram, of → ℓ production are shown in Figure 1.These Feynman diagrams illustrate four of the many possible production modes, and include processes where the photons are produced via: (a) a quartic gauge coupling; (b) two triple gauge couplings; (c) initial (ISR) and final (FSR) state radiation; and (d) as the decay products of a Higgs boson.Production of via a SM Higgs boson is treated as background in this analysis to isolate the signal processes to those with only electroweak gauge boson interactions. Although there are contributions from processes with one or more FSR photons (see diagram (c)), the process will nevertheless be referred to as throughout this letter for simplicity. 
The largest sources of background in this analysis consist of events in which at least one of the reconstructed objects in the final state is misidentified. Data-driven techniques, described in Section 5, are used to estimate these sources of reducible background, which include photons from misidentified jets or neutral hadron decays, electrons misidentified as photons, leptons from misidentified jets or heavy-flavored hadron decays, and events in which one or both photons do not originate from the primary vertex. In addition, a small fraction of background events originates from multiboson and top-quark production. Monte Carlo (MC) simulated samples, described in Section 3, are used to estimate the yield of these sources of irreducible background. To maximize the analysis sensitivity, the uncertainty on the background yield from top-quark production is constrained from data in a control region (TopCR) that does not overlap with the signal region of interest. Previous measurements of this process were performed at the LHC using proton-proton collisions at a center-of-mass energy of √s = 8 TeV with the ATLAS [5] and CMS [6] detectors, and at √s = 13 TeV with the CMS [7] detector, resulting in a maximum observed statistical significance of 3.1. This letter presents the observation of the Wγγ process and a measurement of its fiducial cross section in the W→eν and W→μν decay channels. In order to obtain a precise background estimate, the electron and muon channels are combined for both the observation and the fiducial cross section measurement. The Wγγ signal strength, defined as the ratio of the observed signal yield to the expected yield, is measured to assess the compatibility between data and the SM prediction. Results are obtained based on the analysis of L = 140 fb−1 of proton-proton collision data collected with the ATLAS detector at √s = 13 TeV, allowing for improvement over the previous ATLAS result due to both the increase in integrated luminosity and the increase in the production cross section, in addition to improvements to the data-driven background estimates. (Figure 1, panels (c) and (d): production where the photons are produced via ISR and FSR, and production where the photons are produced via the decay of a SM Higgs boson.) ATLAS Detector The ATLAS experiment [8] at the LHC is a multipurpose particle detector with a cylindrical geometry, forward-backward symmetric, and a near 4π coverage in solid angle.
1 It consists of an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadron calorimeters, and a muon spectrometer (MS).The inner tracking detector covers the pseudorapidity range || < 2.5.It consists of silicon pixel, silicon microstrip, and transition radiation tracking detectors.Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic (EM) energy measurements with high granularity.A steel/scintillator-tile hadron calorimeter covers the central pseudorapidity range (|| < 1.7).The endcap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to || = 4.9.The muon spectrometer surrounds the calorimeters and is based on three large superconducting air-core toroidal magnets with eight coils each.The magnetic field line integral of the toroidal magnets ranges between 2.0 and 6.0 T m across most of the detector.The muon spectrometer includes a system of precision tracking chambers and fast detectors for triggering.A two-level trigger system is used to select events.The first-level trigger is implemented in hardware and uses a subset of the detector information to accept events at a rate below 100 kHz.This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average depending on the data-taking conditions.An extensive software suite [9] is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment. Data and Simulation The measurement presented in this letter is based on proton-proton collision data at a center-of-mass energy of 13 TeV recorded by the ATLAS detector during Run 2 of the LHC (2015-2018).During this data-taking period, the number of interactions per proton bunch crossing (pileup) averaged between 13 and 38 interactions, depending on the year [10].After applying ATLAS data quality requirements [11], the dataset corresponds to an integrated luminosity of L = 140 fb −1 .The uncertainty in the combined integrated luminosity for 2015-2018 is 0.83% [12], obtained using the LUCID-2 detector [13] for the primary luminosity measurements, complemented by measurements using the ID and calorimeters. Simulated samples are used to model the expected signal and irreducible background yields, while reducible backgrounds from misidentified objects are estimated using data-driven techniques described in Section 5. Some of the irreducible backgrounds, as listed in Section 1, contribute to the analysis only when one lepton is not reconstructed or additional photons are present due to FSR. 
Signal , , and processes are generated with Sherpa 2.2.10 [14] generator using next-toleading-order (NLO) matrix elements (ME) with zero partons, and leading-order (LO) matrix elements for up to two partons calculated with the Comix [15] and OpenLoop [16][17][18] libraries.They were matched with the Sherpa parton shower [19] using the MEPS@NLO prescription [20][21][22][23] using the set of tuned parameters developed by the Sherpa authors.The NNPDF3.0nnlo next-to-next-to-leading order (NNLO) 1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the center of the detector and the -axis along the beam pipe.The -axis points from the IP to the center of the LHC ring, and the -axis points upwards.Cylindrical coordinates (, ) are used in the transverse plane, being the azimuthal angle around the -axis. The pseudorapidity is defined in terms of the polar angle as = − ln tan(/2).Angular distance is measured in units of parton distribution function (PDF) set from the NNPDF Collaboration [24] was used.Approximate NLO electroweak corrections are included in these samples [25] and result in a negligible effect in the phase space used in this measurement. The () background is estimated from events generated with the POWHEG-BOX v2 [26] generator interfaced with Pythia 8.212 [27,28] using the AZNLO tune [29] for parton showering modeling and NLO PDFs from NNPDF3.0.Background contributions from (Sherpa 2.2.11), (Sherpa 2.2.10), and (Sherpa 2.2.8) processes are estimated from samples generated to NLO accuracy in perturbative QCD with up to 1 additional parton emission, and merged with samples to LO accuracy in perturbative QCD with 2 to 3 parton emissions; like the signal samples, these are generated with the NNLO PDF set from NNPDF3.0nnlo.Double counting between and is removed at the event generation stage.Contributions from t events, events where the photon is produced in the decay chain, and events are generated with MadGraph5_aMC@NLO 2.3.3 [30].The NNPDF2.3lo [31] PDF sets and parton shower modeling from Pythia 8.212 with the A14 tune [32] are used in the generation of these event samples.Contributions from events where the photon is produced at matrix-element level are generated with MadGraph5_aMC@NLO 2.6.7,NNPDF2.3loPDF sets, and parton shower modeling from Pythia 8.244 with the A14 tune.In all simulated samples where the production of one photon is generated in the matrix element, a second prompt photon can be produced as FSR in the parton shower. Both signal and background MC events are processed through the full ATLAS detector simulation [33] based on GEANT4 [34].The effects of multiple interactions in the same and neighboring bunch crossings are modeled by overlaying the simulated hard-scattering event with inelastic proton-proton events generated with Pythia 8.186 [27] using the NNPDF2.3loset of PDFs [31] and the A3 set of tuned parameters [35].Simulated events are weighted such that the pileup distribution reproduces the pileup distribution of the dataset used in this measurement. 
Event Selection The process is investigated using the leptonic decays of the boson.While events with a leptonic decay to an electron or a muon are considered as signal events, those with hadronic decays are considered as a background.Candidate (ℓ) events therefore contain two isolated photons, an isolated electron or muon, and missing transverse momentum, with magnitude referred to as missing transverse energy ( miss T ), from the undetected neutrino(s) originating from the leptonic boson decays.The following paragraphs describe the selection requirements used to define the signal region (SR) of the measurement. Events used for this measurement are selected using a suite of triggers that require the presence of at least two photons with T > 10 GeV and at least one electron or muon with T > 20 GeV [36,37].For the 2017-2018 data-taking period, the T thresholds used to select events with at least two photons and at least one electron were increased to 12 GeV (photons) and 24 GeV (electron).In addition to these triggers, single and di-lepton triggers with T thresholds between 14 and 26 GeV are used to select events for the data-driven background estimates.The overall efficiencies for these triggers to select simulated signal events in the signal region are 95% in the electron channel and 82% in the muon channel.In all cases, trigger objects must be matched to reconstructed objects selected for analysis. Events are required to have a primary vertex associated with at least two charged-particle tracks with T > 0.5 GeV in the proton-proton interaction region.If multiple vertices satisfy this criteria, the vertex with the highest 2 T sum is selected. Photon candidates are reconstructed from clusters of energy deposits in the EM calorimeter, calibrated at the EM scale, and tracking information from the ID, which is used to classify candidates as either converted or unconverted photons.Candidate photons are required to have a transverse momentum T > 20 GeV and a pseudorapidity of || < 2.37, excluding the transition region between the electromagnetic barrel and endcap regions of the calorimeter, 1.37 < || < 1.52.Photons must also satisfy the cut-based Tight identification requirement defined using EM shower shape variables [38].To reject non-prompt photons originating from jets, photons must satisfy an isolation requirement based on topological clusters [39] of energy deposits in the EM calorimeter.The isolation energy of a photon, iso, T , is determined by first calculating the scalar sum of the transverse energy of topological clusters within Δ = 0.4 of a photon ( cone T ), corrected for the energy of the photon itself, and then subtracting off a value that depends on the transverse photon energy ( Photons are required to pass the Calorimeter-Only Tight isolation working point [38], which requires iso, T < 2.45 GeV.In addition, the two photons must be separated from each other by requiring Δ > 0.4. 
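Purely for illustration (this is not ATLAS software), the photon requirements listed above can be collected into a simple filter. The event-record layout is hypothetical, and the numerical thresholds are the ones quoted in the text.

```python
import math

def delta_r(a, b):
    """Angular distance between two objects, with the phi difference wrapped into (-pi, pi]."""
    dphi = math.atan2(math.sin(a["phi"] - b["phi"]), math.cos(a["phi"] - b["phi"]))
    return math.hypot(a["eta"] - b["eta"], dphi)

def good_photon(ph):
    """pT, eta (with the barrel/endcap crack excluded), Tight ID, and calorimeter isolation."""
    in_crack = 1.37 < abs(ph["eta"]) < 1.52
    return (ph["pt"] > 20.0                      # GeV
            and abs(ph["eta"]) < 2.37
            and not in_crack
            and ph["is_tight"]                   # cut-based Tight identification
            and ph["iso_et"] < 2.45)             # GeV, isolation threshold as quoted above

def diphoton_candidates(photons):
    """Pairs of selected photons separated by dR > 0.4."""
    sel = [p for p in photons if good_photon(p)]
    return [(a, b) for i, a in enumerate(sel) for b in sel[i + 1:] if delta_r(a, b) > 0.4]
```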
Electron candidates are reconstructed from energy deposits in the EM calorimeter that can be matched to ID tracks.These tracks must be consistent with originating from the primary vertex by requiring that | 0 / 0 | < 5 and | 0 • sin()| < 0.5 mm, where 0 is the transverse impact parameter relative to the beam line and 0 is its uncertainty, 0 is the longitudinal impact parameter, and is the polar angle of the track with respect to the beamline.Electron candidates are required to have T > 25 GeV and || < 2.47, excluding the region 1.37 < || < 1.52.Additionally, they must satisfy the likelihood-based Medium identification requirement defined using inputs from the calorimetry and tracking systems [40].To further distinguish signal leptons from background, isolation variables for calorimeter energy deposits ( iso T ) and tracks ( iso T ) are constructed.The calorimeter isolation iso T is defined as the scalar sum of the transverse energy of topological clusters within Δ = 0.2 of the lepton, which is corrected for both the energy of the lepton itself and the average pileup energy density measured in this region of the detector.For electrons (muons), the track-based isolation iso T, ( iso T, ) is defined as the scalar sum of tracks with T > 1 GeV within a T -dependent cone up to Δ = 0.2 (Δ = 0.3) of the lepton, with the lepton candidate removed.Electrons must satisfy iso T / T < 0.06 and iso T, / T < 0.06 [40], and muons must satisfy iso T / T < 0.15 and iso T, / T < 0.04 [42].Hadronic jets are used in the SR definition to veto events with jets containing -hadrons.Jets are reconstructed using the anti- algorithm [43] with a distance parameter Δ = 0.4.The inputs to the jet algorithm are particle-flow objects [44], which make use of both the calorimeter and the ID information to precisely determine the momenta of the input particles.Reconstructed jets are required to have T > 20 GeV and || < 4.5.Jets satisfying 20 < T < 60 GeV and || < 2.4 must pass the Tight requirement on the jet vertex tagger variable [45] in order to suppress jets not originating from the primary vertex.Kinematic properties of -flavored hadrons are used as input to a multivariate jet classification algorithm [46,47].This multivariate classification has a 77% efficiency and is used to identify jets with || < 2.5 containing -flavored hadrons.Events with jets containing -flavored hadrons are rejected. It is possible for tracks and energy deposits to be associated with more than one type of reconstructed object.To remove the overlap between different reconstructed objects in an event, the following selection criteria are applied in the order in which they are described: electrons are removed if they share an ID track with a muon; photons are removed if they are within Δ = 0.4 of an electron or a muon; jets are removed if they are within Δ = 0.2 of an electron; electrons are removed if they are within Δ = 0.4 of a jet; jets are removed if they are within Δ = 0.2 of a muon; and finally photons and muons are removed if they are within Δ = 0.4 of the remaining jets in the event. 
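Because the overlap-removal steps are strictly ordered, a procedural sketch may be clearer than prose. The following is a schematic rendering of the sequence listed above (object records and the helper functions delta_r and shares_id_track are assumed), not the actual ATLAS implementation.

```python
def remove_overlaps(electrons, muons, photons, jets, delta_r, shares_id_track):
    """Apply the overlap-removal steps in the order described in the text."""
    # 1. Electrons sharing an ID track with a muon are removed.
    electrons = [e for e in electrons if not any(shares_id_track(e, m) for m in muons)]
    # 2. Photons within dR = 0.4 of an electron or a muon are removed.
    photons = [g for g in photons if all(delta_r(g, l) >= 0.4 for l in electrons + muons)]
    # 3. Jets within dR = 0.2 of an electron are removed.
    jets = [j for j in jets if all(delta_r(j, e) >= 0.2 for e in electrons)]
    # 4. Electrons within dR = 0.4 of a remaining jet are removed.
    electrons = [e for e in electrons if all(delta_r(e, j) >= 0.4 for j in jets)]
    # 5. Jets within dR = 0.2 of a muon are removed.
    jets = [j for j in jets if all(delta_r(j, m) >= 0.2 for m in muons)]
    # 6. Photons and muons within dR = 0.4 of the remaining jets are removed.
    photons = [g for g in photons if all(delta_r(g, j) >= 0.4 for j in jets)]
    muons = [m for m in muons if all(delta_r(m, j) >= 0.4 for j in jets)]
    return electrons, muons, photons, jets
```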
The magnitude and direction of the missing transverse momentum are reconstructed using calibrated photons, electrons, muons, jets, and tracks from charged particles not associated to any object found in the event [48].An ambiguity resolution procedure is performed as part of the calculation to ensure that energy deposits reconstructed as different objects are not double-counted.Events in the SR are required to satisfy miss T > 25 GeV.Additionally, the transverse mass of the boson T = √︃ 2 ℓ T miss T (1 − cos Δ) is required to be greater than 40 GeV, where Δ is defined as the difference in azimuthal angles between the lepton momentum and missing transverse momentum. A set of veto requirements are implemented to greatly reduce the number of events passing the signal selection, which can occur when an electron is misidentified as a photon.All SR events must have T,ℓ > 30 GeV and ℓ , ℓ 1 , and ℓ 2 ∉ [82, 100] GeV, where 1 and 2 are the leading and sub-leading photons ordered by T .The veto is applied to both the electron and muon channels to ensure a consistent event selection. To reduce contributions from background events with a second lepton originating from processes with two bosons or a boson, two additional selection criteria are applied.Events are rejected if they contain a second lepton, selected without the | 0 / 0 | or isolation requirements, of a different lepton flavor to the primary lepton that passes all SR selection criteria.A similar veto is enforced for events containing same-flavor leptons with the secondary lepton only required to satisfy T > 6 GeV and pass Loose (Medium) identification for electrons [40] (muons [41]), where the lepton identification requirement is loosened to remove a significant fraction of the prompt background from the process. Differences in the reconstruction, trigger, and selection efficiencies for leptons and photons between data and simulation are corrected for with scale factors [36,37,42,49].In addition to the SR defined in this section, other data samples are used to estimate backgrounds coming from misidentified objects using data-driven techniques, as described in Section 5. 
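Reading the partially garbled expression earlier in this section as the usual W-boson transverse mass, m_T = sqrt(2 pT(ℓ) ET_miss (1 − cos Δφ)), the corresponding signal-region requirement can be evaluated as in this small sketch (our own illustration; inputs in GeV and radians).

```python
import math

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    """W transverse mass from the lepton pT and the missing transverse momentum."""
    dphi = lep_phi - met_phi
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(dphi)))

# Example: a 35 GeV lepton roughly back-to-back with 40 GeV of ETmiss
met = 40.0
mt = transverse_mass(lep_pt=35.0, lep_phi=0.1, met=met, met_phi=3.0)
passes_sr = met > 25.0 and mt > 40.0   # ETmiss and mT requirements quoted in the text
```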
Background Estimation The largest background in the SR consists of events in which one or both signal photons originate from a misidentified jet or neutral hadron decay.This hadronic fake photon background, denoted as → , is estimated using a data-driven method by performing a two-dimensional template fit to the isolation distributions of the leading and subleading photons in a procedure similar to those discussed in Refs.[5] and [50].The three isolation distribution templates for the cases in which either the leading, subleading, or both photons are → fakes are obtained from data in regions formed by loosening and inverting some of the leading, subleading, or both photon isolation requirements, respectively, in order to enhance the contributions from misidentified jets.This is done by selecting events in which at least one photon candidate passes the Loose photon identification requirement but fails one or more of the four EM shower-shape requirements used in the Tight (T) photon identification [38]; these are denoted as L ′ photons [49].For the estimation of this source of background, events are still required to satisfy all other SR criteria except the photon isolation requirement.The electron and muon channels are combined to ensure a sufficient number of events pass selection requirements for the data samples.These events are categorized into four non-overlapping data samples, TT, TL ′ , L ′ T, and L ′ L ′ , depending whether the T or L ′ photon identification criteria is satisfied by the leading and subleading photons, respectively.Templates for non-prompt leading and subleading photons are built using one-dimensional Bukin functions [51], and their shape parameters are determined from fits to photon isolation energy distributions of data events in the L ′ T and TL ′ regions, respectively.The templates for leading (subleading) prompt photons are formed from double-sided crystal ball functions, whose shape parameters are fit to simulated tight leading (subleading) prompt photons in simulated events; these correspond to leading photons from events in the TT and TL ′ regions and subleading photons from events in the TT and L ′ T regions.Two-dimensional templates for prompt , ( → ), and ( → ) events are formed by taking the product of the two functions used to individually describe the isolation energy of the leading and subleading photons.Due to non-negligible correlations between the two photon candidates in the L ′ L ′ data sample, the two-dimensional template for ( → ) ( → ) events is instead formed by fitting a superposition of Gaussian kernels [52].Finally, coefficients corresponding to numbers of events for each of the four, two-dimensional templates are fit using an extended maximum likelihood fit to data in the TT region that has simulated events from all other background processes with two prompt photons and one prompt lepton subtracted.The coefficients for the TL ′ , L ′ T, and L ′ L ′ regions are further corrected for signal leakage using MC simulation.In order to account for the photon isolation energy requirements that are part of the SR definition, the contribution from → fake events in the SR is obtained by integrating the 2D photon isolation energy distributions fitted to data in the TT region up to the cut value that defines the SR, iso, T = 2.45 GeV.The total expected number of → background events is determined by computing the sum of the integrated coefficients of the ( → ), ( → ), and ( → ) ( → ) templates.A systematic uncertainty due to the choice of photon ′ 
identification is estimated by forming → templates with alternative identification working points and parameterizing the shape differences as uncertainties on the nominal Bukin template parameters.Statistical and systematic uncertainties on the templates are propagated through to the background estimates using a multivariate Gaussian constraint on the two-dimensional fit, resulting in an overall 11% systematic uncertainty on this background. Events in which one or both photons are misidentified electrons constitute the → fake background.These misidentifications are caused mainly by tracking inefficiencies and the mismatching of tracks in the ID to energy clusters in the EM calorimeter.The background is estimated using a data-driven "fake-factor" method similar to the one described in Ref. [38].The → fake rate is calculated with a tag-and-probe approach using both → and → events, where symbolizes a misreconstructed electron identified as a photon.Probe electrons are selected with T > 20 GeV, the likelihood-based Tight identification [40], and the same isolation requirement as SR electrons, such that their kinematics selection is close to the one for the photons used in the SR.The data sample used to calculate the → fake rate consists of events selected with single electron triggers that have a reconstructed / invariant mass within 20 GeV of the boson mass, 91.2 GeV.Non-resonant backgrounds in this data sample are modeled using an exponential function, and the boson resonance is modeled by a Gaussian with double-sided exponential tails [53].The number of ( ) and ( ) events are extracted using a combined signal and background fit to the invariant mass distributions in bins of T and , and the fake factor is computed as → = / .To estimate the → background in the SR, this fake factor is applied to a sample of (ℓ) events obtained by selecting data events with di-lepton triggers and substituting one SR photon requirement for that of a probe electron.Systematic uncertainties relating to the fitting and integration ranges around the -boson mass, the photon energy calibration, and the exponential background are propagated to the → background estimate.Statistical and systematic uncertainties are 2% and 7%, respectively, on the final SR → background estimate.A validation region dominated by events with → fakes is obtained by inverting the -veto in the SR and in the (ℓ) region.The background estimation method is shown to reproduce data in both of these validation regions. 
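As a schematic of the electron-to-photon fake-factor idea (our own simplification, not the analysis code): the factor is the ratio of Z-peak yields in eγ and ee events, measured in bins of pT and η, and is then applied as a per-event weight to the sample in which one signal-photon requirement is replaced by a probe electron. The yield dictionaries, the binning callable, and the probe-electron records are assumed structures.

```python
def make_fake_factor(n_e_gamma, n_ee):
    """Binned fake factor F(pt, eta) = N(Z -> "e gamma") / N(Z -> ee).

    Both inputs map (pt_bin, eta_bin) -> background-subtracted yield from the Z-peak fit.
    """
    return {b: n_e_gamma[b] / n_ee[b] for b in n_ee if n_ee[b] > 0}

def egamma_background(probe_electrons, fake_factor, binning):
    """Weight each (lepton + probe electron + photon) event by the probe's fake factor."""
    total = 0.0
    for probe in probe_electrons:
        b = binning(probe["pt"], probe["eta"])   # map probe kinematics to a measurement bin
        total += fake_factor.get(b, 0.0)
    return total
```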
The hadronic fake lepton background, → ℓ, is comprised of events in which the signal lepton is either a misidentified jet or from the decay of a heavy-flavored hadron (non-prompt).This background is also estimated using a data-driven fake-factor [54] using an event sample that is enriched in non-prompt leptons.This data sample is obtained by selecting (ℓℓ) + ℓ events with single lepton triggers where the third lepton, ℓ , is a misreconstructed jet.Two leptons must have an invariant mass within 10% of the -boson mass and be of the same flavor but opposite charge, while the third (probe) lepton must be of a different lepton flavor in order to avoid ambiguity.Additional requirements of miss T < 40 GeV and T < 40 GeV are imposed to reduce prompt leptons from events, and remaining events are subtracted from the data, relying on simulated predictions.The fake factor is defined as the ratio of the number of probe leptons satisfying the SR lepton criteria ( SR ), to the number of probe leptons satisfying a Loose set of criteria ( ).This Loose criteria selects leptons more likely to be non-prompt by inverting the lepton | 0 / 0 | and isolation requirements.The fake factor is estimated in bins of probe lepton T and || for electrons, and only in bins of T for muons due to statistical limitations.The → ℓ fake background in the SR is estimated by applying the fake factor to a region kinematically adjacent to the SR.This region is defined with the same selection requirements as for the SR with the exception of the lepton selection, which uses the Loose selection criteria.Statistical and systematic uncertainties account for a 26% (27%) and 18% (50%) uncertainty in the electron (muon) channel, respectively.Systematic uncertainties relating to a bias in the control region due to the miss T selection are computed by varying the requirement by ±10 GeV [54], and theoretical uncertainties on the subtracted events are propagated through to the fake factors.The method is validated in a region enriched in fake leptons obtained by inverting the miss T and T requirements used in the SR and comparing the estimate to data.The pileup background consists of events in which one or both photons do not originate from the primary vertex, mainly due to a limited photon pointing resolution.The fraction of photons originating from a pileup vertex is calculated in a subset of SR data where at least one photon is converted.Since the fraction of photons that convert is independent of their production vertex, the relative fractions of signal and pileup photons in the converted sample is representative of the fractional number of signal and pileup photons in the full SR.Converted photons that are required to have at least one ID track with silicon hits [49] and a conversion radius, defined as the radial distance of the conversion vertex, of less than 400 mm are used for this estimate because the presence of an ID track allows for the calculation of a longitudinal impact parameter.The difference between the longitudinal impact parameters of the converted photon and the primary vertex, Δ, is Gaussian-distributed and expected to be close to zero for photons from the hard scatter, while pileup photons are expected to have a much broader distribution [55].The |Δ| > 55 mm tails of the distribution are used to estimate the fraction of pileup photons in the SR.The statistical uncertainty on the pileup background is 56%, due to the limited number of events in the estimation region. 
Event yields in the SR from irreducible sources of background such as (), , and as well as , , and are estimated using MC simulated samples.To further reduce uncertainties from the estimated event yield in the SR, a control region enhanced in events (TopCR) is defined by inverting the -jet veto in the SR selection requirements in order to constrain a normalization factor that is left floating in the likelihood fit described in Section 7. The fitted normalization factor is cross-checked in a validation region (TopVR) formed by inverting the -jet veto, the miss T , and T in the SR selection requirements in order to select events with at least one -jet, miss T < 25 GeV, and T < 40 GeV.The → and → data-driven backgrounds are also computed in the TopCR and TopVR following the same methods outlined for the SR.Due to the reduced number of events in the L ′ regions, the photon identification systematic uncertainty is estimated using a dedicated procedure in the SR, and is +18%/−13%.The → ℓ and pileup backgrounds are negligible in both of these TopCR and TopVR regions. Uncertainties The background uncertainties described in Section 5 are the dominant uncertainties of this measurement described in Section 7. In addition, several other important sources of uncertainty are assessed.These include instrumental uncertainties such as the energy scale and resolution of electrons and photons [49]; photon and lepton trigger, reconstruction, identification, and isolation efficiencies [36, 37, 41, 49]; jet energy scale and resolution [56]; jet vertex tagging [57,58]; -jet identification [46]; missing transverse energy reconstruction [59]; and the luminosity of the dataset [12].These are evaluated for both backgrounds and signal processes. Additionally, theoretical uncertainties associated with the simulation of the signal and background processes are evaluated and propagated through to the measured fiducial cross section.Theoretical uncertainties on the background processes, but not signal processes, are propagated through to the measured signal strength.These include parton distribution function uncertainties [60]; the uncertainty on the strong coupling constant, s [61]; and missing higher-order terms in the cross section calculations [62].The last is evaluated by varying the renormalization and factorization scales independently by factors of 0.5 and 2, avoiding variations where the two scales differ by more than a factor of two. Statistical uncertainties on the data, and signal and background MC samples are also taken into account.All of the previously described uncertainties are accounted for in the detector-to-fiducial region correction factor used for the unfolding procedure detailed in Section 7. Results The → signal strength is extracted from the data using a binned maximum likelihood fit [63,64] including the TopCR and the signal region.All uncertainties considered in the analysis are treated as nuisance parameters in the fit.Systematic uncertainties are constrained by Gaussian functions, and correlations between sources of systematic uncertainties are taken into account.Statistical uncertainties are also treated as nuisance parameters but are constrained by assigning a Poisson function to each analysis bin.These constraints penalize the likelihood fit if the estimated nuisance parameters pull from their measured values. 
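For intuition about how Gaussian-constrained nuisance parameters enter such a fit, here is a deliberately minimal single-bin likelihood in Python/SciPy (our own toy, unrelated to the actual ATLAS fit): the signal strength mu scales the expected signal, while a nuisance parameter theta shifts the background by its fractional uncertainty and is penalized by a unit-Gaussian term. All numbers are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, n_obs, s_exp, b_exp, b_rel_unc):
    """Poisson term for the observed count plus a unit-Gaussian constraint on theta."""
    mu, theta = params
    expected = mu * s_exp + b_exp * (1.0 + b_rel_unc * theta)
    if expected <= 0:
        return np.inf
    nll_poisson = expected - n_obs * np.log(expected)   # Poisson NLL, dropping constants
    nll_constraint = 0.5 * theta ** 2                   # Gaussian-constrained nuisance
    return nll_poisson + nll_constraint

# Toy inputs: 120 observed events, 40 expected signal, 75 expected background (+- 10%)
result = minimize(neg_log_likelihood, x0=[1.0, 0.0],
                  args=(120, 40.0, 75.0, 0.10), method="Nelder-Mead")
mu_hat, theta_hat = result.x
print(f"fitted signal strength mu = {mu_hat:.2f}, nuisance pull = {theta_hat:.2f}")
```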
The TopCR is used to determine a background2 normalization factor .The normalization factor is allowed to float via a likelihood scan done simultaneously with the signal-strength extraction in the SR.The fit value of is then applied in the TopVR and the resulting total estimated yield is compared to data. Figure 2 illustrates the yields for the three regions TopCR, TopVR, and SR.The estimated yield in the TopVR region shows agreement with data. Table 1 shows the post-fit yields of the signal and estimated backgrounds in the SR and TopCR, along with their sum and the number of selected data events.The signal strength and normalization are determined to be = 1.01 +0.17In order to obtain an unfolded production cross section measurement, a fiducial phase space is defined to be as close as possible to the SR event sample selected at detector-level.Fiducial requirements are applied to dressed leptons, which are particle-level electrons and muons recombined with radiated photons within a cone of Δ = 0.1.Events are required to have a dressed electron or muon with T > 25 GeV and || < 2.47 while the two particle-level photons must satisfy T > 20 GeV and || < 2.37.Additionally, photons must satisfy the isolation requirement ( cone, gen.T − 0.032 × T ) < 6.53 GeV, where cone, gen.T is computed from the vector momentum sum of all stable, generator-level particles within Δ = 0.4 of the photon.This isolation requirement is derived to vary with photon T to mimic the detector-level isolation requirement.Additionally, two separation requirements are applied to the two photons and between the lepton and each photon: Δ > 0.4 and Δ ℓ > 0.4.Finally, fiducial events must satisfy miss T > 25 GeV, T > 40 GeV, and a veto on -jets with T > 20 GeV and || < 2.5.The unfolding is performed into a fiducial phase space with → and → decays; → decays that pass these requirements, including events in which the tau decays leptonically, are not considered as part of the fiducial phase space. 
Unfolding is performed on the measurement using a similar maximum likelihood method to the one used to perform the signal-strength extraction, where the effects of statistical, experimental, and theoretical uncertainties on the modeling of the correction from detector-level signal events to the fiducial phase space are taken into account.A correction factor is calculated as the ratio of the number of signal MC events reconstructed in the signal region to the predicted in the fiducial phase-space.The number of detector-level events is defined as the sum of simulated MC events with two photons and -boson decaying into an electron, muon, or leptonically decaying tau that pass all signal region requirements.The number of fiducial events is calculated using only simulated () and () signal MC events, where the electron or muon is prompt.The correction factor is computed to be = 0.210 ± 0.004(stat.)Table 2: Major sources of uncertainty and their impacts on the measured fiducial cross section, as calculated from the correlation matrix of the fiducial cross section fit.Squared values of impacts are determined by setting all nuisance parameters for a given uncertainty source to their best-fit value and subtracting the resulting squared value of the total uncertainty from the squared value of the total uncertainty in the nominal fit.Systematic uncertainty sources that contribute less than 0.1% are not shown.Efficiency uncertainties include, where applicable, uncertainties on data-MC agreement due to reconstruction, trigger selection, identification, isolation, and vertex-matching. Source of uncertainty Impact using the Sherpa NLO signal MC samples; a 2.9% relative difference is found when calculating with MadGraph signal MC samples, which is in statistical agreement and thus no generator choice uncertainty is added.In the likelihood fit, the total number of expected signal events is defined as sig = / pred L. The signal production cross section is measured in the fiducial phase space from the number of signal events observed in data, the integrated luminosity, and the correction factor.The measured fiducial cross section for (/) events is determined to be fid = 13.8 ± 1.1(stat) +2.1 −2.0 (syst) ± 0.1(lumi) fb and it is in close agreement with SM predictions as shown in Figure 3. 
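The arithmetic linking the fitted signal yield, the correction factor, and the fiducial cross section is simple enough to spell out. The sketch below uses the quoted C = 0.210 and L = 140 fb−1; the signal yield is an illustrative value chosen only so that the numbers roughly reproduce the quoted 13.8 fb, not a number taken from the paper.

```python
# sigma_fid = N_sig / (C * L), with C = N_reco(SR) / N_fid taken from signal MC
def fiducial_cross_section(n_sig, correction_factor, lumi_fb):
    return n_sig / (correction_factor * lumi_fb)

C = 0.210        # detector-to-fiducial correction factor quoted in the text
L = 140.0        # integrated luminosity in fb^-1
n_sig = 405.0    # illustrative fitted signal yield (assumption, not from the paper)

print(f"sigma_fid = {fiducial_cross_section(n_sig, C, L):.1f} fb")   # ~13.8 fb
```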
In Table 2, the dominant sources of uncertainty and their impact on the fiducial cross section are listed. For the purposes of this table, the uncertainties are grouped into common categories given their source. The impact of each group of systematic uncertainties is calculated by performing the likelihood fit where the individual parameters of the grouped systematics are set to their best-fit values from the nominal fit and not allowed to float. For each grouping, the squared value of the new overall fit uncertainty is subtracted from the squared value of the nominal fit uncertainty to obtain the squared value of the impact of the grouped uncertainties. The fit is performed under the assumption that the nuisance parameters for the grouped systematics that are held fixed are uncorrelated with all others that are allowed to float. This procedure is used only to estimate the impact of the individual groups of systematics, as it avoids the possibility of abnormal pulls that could occur if the fit were performed with only one group of nuisance parameters left floating at a time. The largest source of systematic uncertainty is due to the data-driven jet-to-photon background estimate, followed by the statistical uncertainty on the data. The modeling of the identification, isolation, and trigger efficiencies to select photons in simulated events also represents a substantial source of uncertainty, and together these comprise the "Photon efficiency" uncertainty source in Table 2. Conclusion This letter reports the observation and measurement of W(ℓν)γγ production by the ATLAS experiment at the LHC. Leptonic decays of the W boson to an electron or a muon accompanied by two photons are selected from the 140 fb−1 Run 2 dataset of proton-proton collisions at √s = 13 TeV produced by the LHC. A maximum likelihood fit of the signal and background yields leads to a rejection of the background-only hypothesis with an observed and expected significance of 5.6 standard deviations. The measured fiducial cross section for W(eν)γγ and W(μν)γγ events is σ_fid = 13.8 ± 1.1 (stat) +2.1 −2.0 (syst) ± 0.1 (lumi) fb, in agreement with the SM predictions for this process. The dominant sources of uncertainty come from the data-driven background estimates and the statistical uncertainty on data. Figure 1: Representative Feynman diagrams for the production of Wγγ. Muon candidates are reconstructed by matching tracks in the ID to tracks in the MS. These tracks must be consistent with originating from primary vertices by requiring |d0/σ_d0| < 3 and |z0 · sin(θ)| < 0.5 mm. Muon candidates are further required to have pT > 25 GeV and |η| < 2.4, and must satisfy the Medium identification requirement [41] based on the quality and compatibility of their tracks in the ID and MS.
Figure 2: Data, and pre- and post-fit yields for the TopCR as a function of the leading photon pT, and for the TopVR and SR each as a single bin. The error bars on data indicate its statistical uncertainty. The bottom panel shows the ratio of the data to the post-fit yield (black points) and the ratio of the pre-fit yield to the post-fit yield (solid line) for each of the regions. The uncertainty band includes both the statistical and systematic uncertainties obtained from the fit. The top-quark background is scaled by the fitted normalization factor, and the W(ℓν)γγ prediction by the signal strength. Background contributions from pileup in the TopCR and TopVR are neglected. Figure 3: The measured fiducial W(→eν/μν)γγ integrated cross section compared with both of the signal event generator predictions. Table 1: Estimated signal and background yields in the SR and TopCR, as well as their sums, are shown post-fit together with the observed number of events in data. The uncertainties quoted in the table correspond to total uncertainties. Events from the "Multiboson" and "Top" backgrounds are estimated from MC simulation and contain only prompt leptons and photons. Yields denoted with "-" correspond to backgrounds that are negligible. Owing to leptonic τ decays, events from W(τν)γγ that fall into the fit regions are included as a part of the signal in the fitting procedure and are normalized together with W(eν)γγ and W(μν)γγ. The fit results yield an expected and observed significance of 5.6 standard deviations, corresponding to the observation of the Wγγ process. No nuisance parameters are significantly pulled or constrained in the fit.
9,837.8
2023-08-06T00:00:00.000
[ "Physics" ]
A Two-Step A/D Conversion and Column Self-Calibration Technique for Low Noise CMOS Image Sensors In this paper, a 120 frames per second (fps) low noise CMOS Image Sensor (CIS) based on a Two-Step Single Slope ADC (TS SS ADC) and column self-calibration technique is proposed. The TS SS ADC is suitable for high speed video systems because its conversion speed is much faster (by more than 10 times) than that of the Single Slope ADC (SS ADC). However, there exist some mismatching errors between the coarse block and the fine block due to the 2-step operation of the TS SS ADC. In general, this makes it difficult to implement the TS SS ADC beyond a 10-bit resolution. In order to improve such errors, a new 4-input comparator is discussed and a high resolution TS SS ADC is proposed. Further, a feedback circuit that enables column self-calibration to reduce the Fixed Pattern Noise (FPN) is also described. The proposed chip has been fabricated with 0.13 μm Samsung CIS technology and the chip satisfies the VGA resolution. The pixel is based on the 4-TR Active Pixel Sensor (APS). The high frame rate of 120 fps is achieved at the VGA resolution. The measured FPN is 0.38 LSB, and measured dynamic range is about 64.6 dB. Introduction Currently, the CMOS Image Sensor (CIS) is widely used in digital cameras, digital camcorders, CCTVs, medical equipment, etc. Among many kinds of CIS studies, R&D to improve the frame rates has been considered important. The frame rates are determined by the conversion speed of the Analog to Digital Converter (ADC) that exists in every column. These days, most CIS systems in a variety of applications use a Single-Slope ADC (SS ADC) because of a simple structure and excellent linearity [1]. However, such a system has a disadvantage in that the conversion speed of the SS ADC slows down at a rate of 2 n times in proportion to an increase in the resolution (n). Furthermore, this makes it difficult to adopt the SS ADC in high resolution systems including digital camcorders, HDTVs, UDTVs, or in other applications that require the image sensor to have frame rates faster than 30 fps. Many papers have reported attempts to improve the disadvantages of SS ADC [2][3][4][5][6][7][8][9][10][11][12][13][14][15]. Among them, cyclic ADC or Successive Approximation Register (SAR) ADC are well-known techniques for the high frame rate CIS. However, those methods require large chip areas and huge power consumption compared to SS ADC, thus it is difficult to apply them for portable devices such as mobile phones, digital cameras, etc. By the way, a Two-Step Single-Slope ADC (TS SS ADC) with a conversion speed faster by more than ten times faster than the SS ADC with the similar area and power consumption has been developed [2,3]. However, due to errors between the coarse ADC and the fine ADC, the fixed pattern noise (FPN) is a serious issue. Further, the error in the slope ratio between the coarse ramp and the fine ramp is one of the main causes of image quality degradation. Therefore, in order to eliminate the FPN, this study proposes a low noise CMOS image sensor with a Two-Step Single-Slope ADC and column self-calibration technique. The contents of this paper are as follows: In Section 2, the architecture and the circuit technique for Two-Step Single-Slope ADC are discussed. In Section 3, the circuit implementation for the proposed CIS is described. Measured results are described in Section 4. Finally, the conclusions are summarized in Section 5. 
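Since the speed advantage claimed above is just a matter of counting ramp clock cycles, a small sketch may help (our own illustration; the 7 + 7 bit split used below is the one adopted in Section 2).

```python
def ss_cycles(n_bits):
    """Worst-case ramp clock cycles for a single-slope ADC."""
    return 2 ** n_bits

def ts_ss_cycles(coarse_bits, fine_bits):
    """Worst-case cycles for a two-step single-slope ADC: coarse ramp plus fine ramp."""
    return 2 ** coarse_bits + 2 ** fine_bits

print(ss_cycles(14))                          # 16384 cycles for a 14-bit SS ADC
print(ts_ss_cycles(7, 7))                     # 128 + 128 = 256 cycles for the TS SS ADC
print(ss_cycles(14) // ts_ss_cycles(7, 7))    # 64x speed-up, as stated in the text
```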
Conventional Two-Step Single-Slope ADC (TS SS ADC) Figure 1 shows the basic principle of the SS ADC and the TS SS ADC. The SS ADC shown in Figure 1a requires at most 2^14 (16,384) clock cycles in the worst case to satisfy the 14-bit resolution. Thus the operating speed is too slow to obtain a desired digital code in a high resolution ADC beyond 14-bit. On the contrary, the TS SS ADC is composed of both a coarse block which performs the conversion of the upper bits and a fine block which performs the conversion of the lower bits. For example, the 14-bit TS SS ADC shown in Figure 1b is separated into a coarse A/D conversion for the upper 7 bits and a fine A/D conversion for the lower 7 bits, respectively. Only 256 clock cycles are enough to obtain a 14-bit resolution, because there are 128 clock cycles at the coarse ADC and 128 clock cycles at the fine ADC. Hence, the operating speed of the TS SS ADC is much faster (by 64 times) than that of the SS ADC. Figure 2a shows the circuit diagram for an analog correlated double sampling (CDS) block with a conventional TS SS ADC [4]. It is composed of a comparator to compare the pixel signal with the ramp signal, four capacitors to perform the coarse A/D conversion and fine A/D conversion, a few switches, and a digital control block. From Figure 2a, the holding capacitor (C H), which stores the final coarse analog voltage, is connected to the external ramp signals in series. When the next fine ramp signal drives the comparator through the holding capacitor, the fine ramp slope is distorted by parasitic capacitances. Parasitic capacitances (C P) are inevitably formed by the switching MOS transistors, metal routing, and other side effects. Thus the real ramp slope is changed as follows: S real = S ideal × C H /(C H + C P ) (1) Figure 2b shows the conceptual diagram of the ramp slope change caused by the parasitic capacitances. It generates a difference in slope ratio between the coarse ramp and the fine ramp. Such a difference in the slope ratio degrades the linearity of the ADC, which is the most important part in the CMOS image sensor. Further, the column fixed pattern noise (CFPN) becomes serious because the parasitic capacitance is formed differently in every column. Normally, since the metal routing, device mismatching, and other side effects of each column are different, the value of the parasitic capacitance of each column is also different. Thus the gain and linearity of each column ADC are also different. For those reasons, in the case of the conventional TS SS ADC, it is very difficult to obtain a high resolution beyond 10-bit, and additionally, the linearity is much worse than that of the SS ADC. Proposed Two-Step Single-Slope ADC (TS SS ADC) Figure 3 shows the circuit diagram of the proposed TS SS ADC. From Figure 3a, the analog CDS block performs the A/D conversion with a new 4-input comparator, where the input nodes for the coarse ramp and the fine ramp are separated. It is different from that of the conventional TS SS ADC shown in Figure 2. Figure 3b shows the circuit diagram of the 4-input comparator based on the Gilbert cell structure. Normally, the ideal transfer function of Figure 3b is as follows: A v = gm × Zout (2) where gm is the input transconductance and Zout is the output impedance. From Equation (2), the most important consideration of the Gilbert cell is the device mismatching of the input MOS transistors. Assuming that there is a device mismatch, the current flows are different at the input MOS transistors. In this case, a gain mismatch occurs between the coarse block and the fine block. 
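To make the cycle-count argument above concrete, the following sketch (ours, not the authors' design) models an ideal single-slope conversion and an ideal two-step conversion and counts the clock cycles each needs for a 14-bit sample. The function names and the example sample value are illustrative assumptions only.

```python
# Illustrative sketch (not the authors' implementation): behavioral model of a
# single-slope vs. two-step single-slope conversion, assuming an ideal ramp and
# the 7-bit coarse / 7-bit fine split described in the text.

def ss_adc(sample, n_bits=14):
    """Single-slope: count ramp steps until the ramp passes the sample."""
    cycles = 0
    for code in range(2 ** n_bits):          # up to 2^14 = 16,384 cycles
        cycles += 1
        if code >= sample:
            return code, cycles
    return 2 ** n_bits - 1, cycles

def ts_ss_adc(sample, n_coarse=7, n_fine=7):
    """Two-step: coarse ramp finds the segment, fine ramp resolves within it."""
    step = 2 ** n_fine                        # one coarse LSB expressed in fine LSBs
    cycles = 0
    coarse = 0
    for c in range(2 ** n_coarse):            # at most 128 coarse cycles
        cycles += 1
        if (c + 1) * step > sample:
            coarse = c
            break
    residue = sample - coarse * step
    fine = 0
    for f in range(2 ** n_fine):               # at most 128 fine cycles
        cycles += 1
        if f >= residue:
            fine = f
            break
    return coarse * step + fine, cycles

sample = 9_123                                  # arbitrary 14-bit example value
print(ss_adc(sample))                            # (9123, 9124): up to 16,384 cycles
print(ts_ss_adc(sample))                         # (9123, 108): never more than 128 + 128 = 256 cycles
```

Both models return the same code, but the two-step version never needs more than 256 cycles, matching the factor-of-64 speed-up stated above.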
Further, Equation (2) then deviates from its ideal form. In order to solve the problems, a column self-calibration technique is proposed. It will be discussed later. Figure 4 shows the timing diagram of the TS SS ADC. It is a simple design example of a 4-bit TS SS ADC with a 2-bit coarse ADC and a 2-bit fine ADC. In order to easily understand the principle of the TS SS ADC, we use a very simple 4-bit design example. Even though this procedure is almost equal to that of the conventional TS SS ADC, the proposed TS SS ADC has some advantages. First of all, the input stage of the proposed TS SS ADC is very simple because of the 4-input comparator. Since the switches of SADC1 and SADC2 in the conventional TS SS ADC are eliminated, the switching noises are reduced by about 30%. This is because the total number of switches in Figure 3 is four, while that of Figure 2 is six. Based on the smaller number of switches for the proposed TS SS ADC shown in Figure 3, some errors such as clock feedthrough, charge injection, and other temporal noises at the switching MOS transistors are reduced by about 30%, compared to the conventional TS SS ADC shown in Figure 2. Further, in order to reduce the errors, we use a large holding capacitance, where the value of C H is about 850 fF. From simulation results, the error with the large holding capacitance is low enough to satisfy the 14-bit resolution, and the operating speed is fast enough to achieve 120 fps. Secondly, very low offset errors are generated because the voltage Vref is directly connected to the input node of the comparator. Finally, the best advantage of the proposed TS SS ADC is the elimination of the series holding capacitor shown in Figure 2. Since the holding capacitor is only used to hold the final voltage of the coarse ramp, the fine ramp slope is not affected by the parasitic capacitance. For this reason, the variation of the fine ramp slope is almost eliminated. The Structure of CMOS Image Sensor Generally, a CIS with a column parallel ADC structure consists of a pixel array, a Correlated Double Sampling (CDS) block with a column parallel ADC, and a digital control block. The pixel array converts light to voltage, and the voltage is converted into a digital code through the CDS block. The digital control block plays a role in controlling the pixel, the column, and the output interface. Figure 5 shows the block diagram of the CIS with the 14-bit TS SS ADC. The CIS is based on the column-parallel structure with the VGA resolution of 640 × 480, and the pixel uses the 4-TR APS with a size of 5.6 µm × 5.6 µm. The column TS SS ADC consists of one counter with a parallel load corresponding to the lower bits and two memories which store the digital values of the upper and lower bits. Additionally, it is designed with a memory for the self-calibration. Design of a Digital CDS The digital CDS performs normal CDS processing by a digital method, which means that noise reduction is carried out in the digital domain by converting the reset voltage and the signal voltage into digital codes. Figure 6a shows the block diagram for the conventional digital CDS. To implement a digital CDS, two up-down counters for the 7-bit coarse ADC and the 8-bit fine ADC are required as well as an analog CDS. The extra 1 bit of the fine ADC is used for the digital correction logic (DCL) [8,14]. Such an up-down counter has an advantage in that it reduces unnecessary circuits, because the counter plays the role of the subtractor and the memory itself. 
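Since the counter plays the role of the subtractor and the memory itself, the digital CDS can be summarized by a tiny behavioral model: convert the reset level while counting down, convert the signal level while counting up, and the counter is left holding the difference. The sketch below is our own illustration under ideal assumptions (an ideal 14-bit quantizer, invented example voltages), not the authors' circuit; the function and variable names are hypothetical.

```python
# Illustrative sketch of digital CDS with an up-down counter. Counting down
# during the reset conversion and up during the signal conversion leaves
# (signal_code - reset_code) in the counter, so the counter acts as both the
# subtractor and the memory.

FULL_SCALE_V = 3.0
N_BITS = 14

def quantize(v):
    """Ideal 14-bit single-slope quantization of a voltage."""
    return min(2 ** N_BITS - 1, int(v / FULL_SCALE_V * 2 ** N_BITS))

def digital_cds(v_reset, v_signal, preload=0):
    """Up-down counter model: the parallel load sets the initial value."""
    counter = preload                 # parallel load of the initial value
    counter -= quantize(v_reset)      # count down during the reset conversion
    counter += quantize(v_signal)     # count up during the signal conversion
    return counter                    # code proportional to (signal - reset)

# Example: 0.45 V reset level and 1.25 V signal level; any offset common to
# both conversions cancels in the subtraction.
print(digital_cds(0.45, 1.25))
```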
However, it is difficult to apply the up-down counter within a limited column pitch because the layout area of the counter is large. To compensate for this, a digital CDS structure for the proposed 14-bit TS SS ADC is shown in Figure 6b. The structure comprises an 8-bit up-down counter (including the DCL bit) with a parallel load, which designates the initial value of the counter, and two SRAMs. The SRAMs are designed with 7 bits and 8 bits. Since the layout area to implement 1 bit is significantly smaller for the SRAM than for the counter, the size is more than 10% smaller than that of the conventional structure. Column Self-Calibration Technique As was previously mentioned, the difference in the slope ratio between the coarse ramp and the fine ramp generates large amounts of CFPN that degrade the image quality. Figure 8 shows the errors of the digital output data that depend on the slope ratio of the two ramps. To compensate for such a problem, this paper proposes a new 4-input comparator. However, the error of the ramp slope in the comparator may not be perfectly compensated by the 4-input comparator. Generally, the ramp slope ratio of the TS SS ADC is determined by the resolutions of the coarse ADC and the fine ADC. It is given by Equation (3), where S C and S F are the ramp slopes of the coarse ADC and the fine ADC, T C and T F are the conversion times of the coarse ADC and the fine ADC, and N C and N F are the resolutions of the coarse ADC and the fine ADC, respectively. If the ramp slope ratio in Equation (3) differs from the ideal value, as shown in Figure 8, the TS SS ADC shows non-linearity because of over codes or missing codes. Therefore, a self-calibration circuit to correct the ramp slope error in the comparator must be designed. Figure 9a shows the circuit diagram of the 4-input comparator with self-calibration. Here, the currents at M coarse and M fine are not the same due to the mismatching of the two MOS transistors, causing an uneven analog gain and a difference in the slope ratio between the coarse ramp and the fine ramp. The study focuses on the analog calibration of M fine: we change its gm value by adjusting the amount of current through M fine, in order to make the analog gains at M coarse and M fine the same. Figure 9b shows the magnified circuit diagram for M fine. The gm value at M fine can be adjusted by the trim SRAM signal in the digital calibration block. The size of the trimming transistors at M fine and the bit resolution of the feedback circuit are the most important factors that decide the effective range and accuracy of the self-calibration. In order to solve the problems, we have analyzed the error range of each column ADC with Monte-Carlo simulation. From the analysis results including device mismatching errors, we found that the maximum error code was 63 LSB, namely 6-bit resolution. Thus a self-calibration circuit with a 7-bit resolution and a control circuit with a 0.5 LSB step are designed to compensate for the 6-bit code errors. Figure 10 shows the flow chart of the proposed self-calibration technique. The slope correction between the coarse ramp and the fine ramp is performed based on the flow chart. In the proposed TS SS ADC, the total number of fine ADC codes is 256, because we use the DCL technique with the extra 1 bit of the fine ADC shown in Section 3.2. Thus the fine ramp covers 2 LSBs of the coarse ADC. It means that the maximum range of the fine ramp is 256 codes, which is larger than 1 LSB of the coarse ADC. Therefore, C = A − B can be smaller or larger than 127, as shown in Figure 10. 
Figure 11 shows the block diagram for the digital calibration. First, the initial values configured as default in the trim SRAM are entered into the analog calibration block to start the self-calibration. Then, the analog CDS switch is controlled to convert the lowest fine value of the coarse 1 LSB into a digital code, and it is stored in the first memory. Then, the highest fine value of the coarse 1 LSB undergoes the A/D conversion and is saved in the second memory. The two values are subtracted by the subtractor. Here, the difference between the two values causes a change of the initial value configured in the trim SRAM, depending on whether the difference is smaller or larger than the ideal 7-bit fine code value (127). The digital code transition in the trim SRAM changes the total amount of current flowing in the analog calibration circuit and changes the gm value of M fine. Repeating this procedure finally makes the analog gains of M coarse and M fine the same. As a result, the CFPN caused by the difference in analog gain between the coarse ramp and the fine ramp due to the device mismatching error is drastically reduced. The calibration technique used in this work is a foreground method. In foreground calibration, all the column calibration is done before the beginning of CIS operation. Thus there is no lowering of the frame rate because of the foreground calibration. However, the calibration takes time. Based on the measured results in this paper, it takes 1.25 ms for the VGA pixel array. Of course, the larger the number of pixels, the longer the foreground calibration time. Nevertheless, most recent foreground calibrations finish within the CIS ready time, even in a system with a huge number of pixels. Figure 12 shows the simulated output values of the CIS with or without the self-calibration technique. We assume that the same input voltages are applied for each of the three columns with arbitrary mismatching errors. For example, we assume that the pixel voltage is 2.095 V in the left figure, and 2.075 V in the right figure. From the simulation results, the code outputs of the image sensor without the self-calibration show different outputs for the three columns, even though the same input voltages are applied. CFPN is generated if the outputs of the columns are different. However, we found that the three columns have the same output values when the self-calibration technique is used. Therefore, the column self-calibration technique drastically reduces the CFPN. Chip Microphotograph and Measurement Systems Figure 13 shows the chip microphotograph of the fabricated CIS with Samsung 0.13 μm 1P4M CIS technology. The chip size is 6.5 mm × 6.5 mm and the pixel array conforms to the VGA resolution (640 × 480). The CIS in this paper is configured so that most of the control signals are provided through an external FPGA. Such a configuration allows us to establish various test environments for the image sensor, to verify the performance of the CIS, and to check its various features. Figure 14 shows the configuration of the measurement systems, which is composed of a board which contains the Xilinx XEM 3050 FPGA and a board with the CIS mounted as a Chip On Board (COB). The FPGA plays a role in generating the control signals for measurement, receiving the output data from the image sensor, and delivering the result to the PC through the USB interface. The transmitted data are processed in the PC to produce a real image. 
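The calibration flow of Figures 10 and 11 is essentially a feedback loop: measure the digital span the fine ramp produces across one coarse LSB, compare it with the ideal value of 127, and nudge the trim code until they agree. The following sketch is our own reading of that loop under simplified assumptions (a linear gain model for M fine, a 7-bit trim with a 0.5 LSB step as stated in the text); names such as `fine_span` and `trim_code` are illustrative, not taken from the paper.

```python
# Illustrative sketch of the column self-calibration loop, assuming the fine-ramp
# span across one coarse LSB changes linearly with the trim-controlled gain of
# M_fine. The ideal span is 127 fine codes; the trim has 7-bit resolution with a
# 0.5 LSB correction per step, as described in the text.

IDEAL_SPAN = 127          # ideal 7-bit fine code difference across 1 coarse LSB
STEP_LSB = 0.5            # correction applied per trim-code increment

def fine_span(trim_code, gain_error_lsb):
    """Hypothetical column model: measured span = ideal + error - correction."""
    return IDEAL_SPAN + gain_error_lsb - trim_code * STEP_LSB

def calibrate_column(gain_error_lsb, max_iter=2 * 63):
    trim_code = 0
    for _ in range(max_iter):
        a = 0                                     # lowest fine value of the coarse LSB
        b = fine_span(trim_code, gain_error_lsb)  # highest fine value of the coarse LSB
        c = b - a                                 # C = A - B in the flow chart
        if abs(c - IDEAL_SPAN) < STEP_LSB:        # close enough to the ideal 127
            break
        trim_code += 1 if c > IDEAL_SPAN else -1  # nudge the M_fine current up or down
    return trim_code

# A column whose mismatch makes the fine span 23 LSB too large:
print(calibrate_column(gain_error_lsb=23.0))      # converges to trim_code = 46
```

The loop converges in at most 126 iterations for the 63 LSB worst-case error quoted above, which is why a 7-bit trim with a 0.5 LSB step is sufficient.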
Here, unfortunately, the resolution of a normal display is limited to 8 bits. Since normal display systems such as LCDs and LED displays are always set to 8-bit resolution, testing the 14-bit CIS is problematic. With a normal display, the full 14-bit resolution of the CIS cannot be tested. In order to overcome this problem, we divide and select the digital output codes of the TS SS ADC from the FPGA one by one. For example, if we choose 7-bit coarse and 1-bit fine, the testing resolution is 8-bit. If we choose 3-bit coarse and 5-bit fine, the testing resolution is 12-bit. Assuming an x-bit coarse ADC and a y-bit fine ADC, we can obtain the testing resolution as follows: Testing resolution = 7 + y (4) Therefore, (7 + y)-bit is the final testing resolution. From Equation (4), for example, the testing resolution is 14-bit when we choose 1-bit coarse and 7-bit fine. Figure 15 shows the measured VGA sample image with the condition of 7-bit coarse and 1-bit fine, namely 8-bit resolution. It achieves a frame rate of 120 fps at a main clock speed of 40 MHz and shows high image quality. Figure 16 shows the measured sample images with the condition of 1-bit coarse and 7-bit fine, namely 14-bit resolution. Figure 16a shows the sample image without the self-calibration technique. It shows very poor image quality due to the presence of high levels of random noise and of CFPN caused by the degraded ADC linearity resulting from the analog gain difference between the two ramps. On the contrary, Figure 16b shows the sample image with the self-calibration technique. The ADC linearity is drastically improved as a result of the self-calibration of the analog gain difference between the two ramps. Therefore, the CFPN and the random noise in the image sensor were significantly reduced and an improved image quality was achieved. Figure 17 shows the measured SNR dependent on the amount of exposure with/without the self-calibration. The measurement uses the TE241-OECF noise test chart and the analysis is performed by using the IMATEST software program. If the image data are acquired with the condition of 7-bit coarse and 1-bit fine (namely, an 8-bit ADC), the fine data are contained in only 1 bit. Thus there is not a large difference in the SNR before and after the self-calibration, despite the analog gain difference between the two ramps. If the fine data increase to a 4-bit or 5-bit level, the difference of the analog gain between the two ramps degrades the ADC linearity more, and the SNR without the self-calibration becomes very low. If we use the self-calibration technique, however, it improves the SNR by more than about 10 dB. In particular, the effect is most pronounced in the low exposure region. Figure 18 shows the measured CFPN dependent on the light intensity before and after the self-calibration at the 10-bit resolution. The average CFPN is reduced by about 0.4 LSB after the self-calibration. Figure 19 shows the measured results of the CFPN with the condition of n-bit coarse and m-bit fine. Since a normal display system like an LED display is set to the 8-bit resolution, the sum of n and m is always 8. For example, the measured condition of 7-bit coarse and 1-bit fine is 8-bit resolution, and the measured condition of 3-bit coarse and 5-bit fine is 12-bit resolution. 
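The testing-resolution relationship quoted above is easy to verify for the three cases mentioned in the text. The short check below is our own arithmetic, assuming the relationship "testing resolution = 7 + y" with x + y = 8 displayable bits.

```python
# Quick check of the testing-resolution relationship quoted in the text:
# with x coarse bits and y fine bits selected for an 8-bit display, the
# effective testing resolution is (7 + y) bits.

cases = [(7, 1), (3, 5), (1, 7)]          # (x coarse bits, y fine bits), x + y = 8
for x, y in cases:
    print(f"{x}-bit coarse + {y}-bit fine -> testing resolution = {7 + y}-bit")
# 7+1 -> 8-bit, 3+5 -> 12-bit, 1+7 -> 14-bit, matching the examples in the text.
```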
The measured CFPN of the 8-bit ADC is 0.38 LSB and that of the 12-bit ADC is 5.5 LSB, respectively, in the dark region on the 10-bit resolution scale. Even in 13-bit or 14-bit, the measured results of CFPN cannot be obtained. Therefore, the measured performance of this work is about 10.5-bit, even though a 14-bit CIS was designed. The reasons why the measured result of 10.5-bit is obtained will be discussed in the conclusions. Figure 20 shows the measured results for the variation of the pixel level with/without the digital CDS shown in Figure 6. The pixel output level without the digital CDS becomes uneven and results in a high CFPN. However, the overall pixel output level becomes even when the digital CDS is used, and the CFPN is decreased. Conclusion This paper described a CMOS image sensor with a Two-Step Single-Slope ADC, a high frame rate of 120 fps, low fixed pattern noise (FPN), and a column self-calibration technique. In order to satisfy the required specifications, new techniques have been proposed in this paper. The TS SS ADC performed the A/D conversion by separating the coarse ADC and the fine ADC, and it was suitable for implementing the high speed CIS. However, the difference of the ramp slope ratio between the coarse ADC and the fine ADC due to the device mismatching error degraded the ADC linearity and generated high levels of noise. To overcome such problems, this paper used a new 4-input comparator to prevent the changes in the ramp slope ratio and to remove the series capacitor. Further, a feedback circuit for the self-calibration was designed to correct the difference of the slope ratio between the coarse ramp and the fine ramp for each column, to improve the ADC linearity, and to reduce CFPN generation. As a result, the SNR was improved by about 10 dB and the CFPN was reduced by more than 0.4 LSB relative to those of the TS SS ADC without self-calibration. The chip has been fabricated with the Samsung 0.13 μm 1P4M CIS technology. The resolution of the CMOS image sensor conformed to the VGA specification of 640 × 480, and the pixel size was 5.6 μm with the 4-TR APS. The conversion time of the designed TS SS ADC was 12.5 μs at the main clock speed of 40 MHz. Thus the frame rate was about 48 times faster than that of a conventional sensor based on the SS ADC. Table 1 shows the summary of the measured CIS performance. Even though a 14-bit TS SS ADC was designed, the measured Effective Number of Bits (ENOB) of this CIS was about 10.5-bit from Figure 19 and Table 1. There were a few reasons why a low ENOB was obtained. First of all, a high performance ramp generator beyond 14-bit was not available. Although a 14-bit ramp generator with a current steering DAC was designed, its measured ENOB was just 12-bit. In order to obtain a 14-bit TS SS ADC, the ENOB of the ramp generator should have been at the level of 16 bits. Secondly, kT/C noise, charge injection noise, and clock feedthrough noise from the switching MOS transistors were not carefully analyzed. They were the dominant noise sources degrading the performance of the ADC. Further, the pixel noise was also a factor degrading the performance of the CIS. Table 2 shows the comparison of the proposed ADC with published works. In terms of FOM, while it is larger than that of the SAR ADC [8], Delta-Sigma ADC [9], cyclic ADC [10], and Single-slope ADC [11], it is smaller than that of the SAR/SS ADC [13], TS SS ADC [14], and TS SS ADC [15]. 
Even though the measured ENOB is 10.5-bit, the FOM performance of this work is remarkable in the area of TS SS ADCs. Based on those conclusions, further work will be discussed. Firstly, a novel calibration technique based on a background method will be studied. In this work a foreground calibration technique has been proposed and verified, but a more efficient background calibration technique will be needed for market products. Secondly, a high performance ramp generator beyond 16-bit resolution will be proposed for the 14-bit TS SS ADC. As mentioned above, the performance of the ramp generator is the most critical point for obtaining the desired results of the 14-bit CIS. Thus the design of a 16-bit ramp generator will be the focus. Thirdly, noise analysis must be carried out in the areas of pixels, analog circuits, and digital circuits, respectively. Further, we have to analyze kT/C noise, charge injection noise, clock feedthrough noise, and so on. Based on the results of the noise analysis, in order to implement a 14-bit CIS, a few kinds of noise reduction techniques will be discussed.
6,199.2
2014-07-01T00:00:00.000
[ "Computer Science", "Engineering" ]
State Estimator Design of Generalized Liu Systems with Application to Secure Communication and Its Circuit Realization The generalized Liu system is first introduced and the state observation problem of such a system is explored. A simple state estimator for the generalized Liu system is developed to guarantee the global exponential stability of the resulting error system. Applications of the proposed state estimator strategy to chaotic secure communication, circuit implementation, and numerical simulations are provided to show the effectiveness and feasibility of the obtained results. Besides, the guaranteed exponential convergence rate of the proposed state estimator and that of the proposed chaotic secure communication can be precisely calculated. Introduction Frequently, it is either inappropriate or impossible to measure all the elements of the state vector. In particular, state estimation is more intricate when the system is chaotic, nonlinear, or stochastic in model or parameters. Estimating system states has come to take its pride of place in system identification, control design, and filter theory, and has attracted engineers' attention since the early 1960s. Recently, a wide variety of methodologies have been proposed for the estimator design of systems, such as the sliding-mode observer (SMO), the Chebyshev neural network (CNN), the separation principle, frequency domain analysis, and passivation of error dynamics. For more detailed knowledge, one can refer to [1][2][3][4][5][6][7][8][9] and the references therein. As usual, most observer designs focus on high tracking accuracy and fast response. For fast response, the use of a sigmoid function in a boundary layer is particularly popular. Nevertheless, the observer error cannot be guaranteed to converge to zero within the boundary layer [1]. Because a chaotic system is highly sensitive to initial conditions and its output behaves like a random signal, several kinds of chaotic systems have been widely applied in various applications such as secure communication, ecological systems, system identification, master-slave chaotic systems, chemical reactions, and biological systems; see, for instance, [10][11][12][13][14][15][16][17][18][19][20][21] and the references therein. Recently, various problems of the chaotic Liu system have been investigated; see, for example, [11][12][13][14][15][16][17][18][19][20][21]. In [12], by geometric analysis, it has been shown that the basin of attraction of the Liu attractor has the riddled property, leading to the conclusion that it has a strange attractor in the sense of Milnor. In [21], by means of the Routh-Hurwitz criteria, a feedback strategy has been proposed for stabilizing the unstable equilibrium points of the Liu chaotic system. Besides, based on Lyapunov stability theory, an adaptive control law has been offered in [20] to achieve the anti-synchronization between the Liu system and the Rössler system with unknown parameters. In this paper, the state estimator of the generalized Liu system is first introduced and studied. A simple state estimator for such systems is provided to guarantee the global exponential stability of the resulting error system. Moreover, the guaranteed exponential convergence rate can be accurately estimated. Applications of the proposed state observer scheme to secure communication, circuit implementation, and numerical simulations are offered to demonstrate the utility and effectiveness of the main results. 
The rest of the paper is organized as follows. The problem formulation and main result are presented in Section 2. Several numerical simulations and the implementation of electronic circuits are given in Section 3 to demonstrate the presented schemes. Finally, some conclusions are drawn in Section 4. Problem Formulation and Main Result In this paper, we consider the following generalized Liu system: where x(t) is the state vector, y(t) ∈ R is the system output, and a and b are the system parameters with a > 0 and b ≠ 0. For the existence and uniqueness of the system ((1a), (1b), (1c), and (1d)), we assume that all of the functions f_i(⋅), for all i ∈ {1, 2, 3, 4}, are sufficiently smooth, with f_4 being an invertible function. It goes without saying that, since the states are not always available for direct measurement, the states must be estimated. The aim of this paper is to search for a state estimator for the system ((1a), (1b), (1c), and (1d)) such that the global exponential stability of the resulting error system can be guaranteed. In what follows, A^T is used to denote the transpose of a matrix A, ‖x‖ := √(x^T x) denotes the Euclidean norm of the column vector x, and |a| denotes the absolute value of a real number a. Before presenting the main result, let us introduce a definition which will be used in the main theorem. This completes the proof. Application with Numerical Simulations For any information vector h(t) in the transmitter system, the objective of the secure communication system is to recover the message h(t) in the receiver system. Let us consider the following chaotic secure communication system; the proposed scheme is illustrated in Figure 1. Conclusion In this paper, the generalized Liu system has been first introduced and the state observation problem of such a system has been investigated. A simple state estimator for the generalized Liu system has been developed to guarantee the global exponential stability of the resulting error system. Applications of the proposed state estimator scheme to chaotic secure communication, implementation of electronic circuits, and numerical simulations have also been provided to illustrate the practicability and effectiveness of the main results. Meanwhile, we have shown that the guaranteed exponential convergence rate of the proposed state estimator and that of the proposed chaotic secure communication can be precisely calculated. However, the state estimator design for more general uncertain Liu systems still remains an open question. This constitutes an interesting future research problem.
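Because the extracted text omits the generalized system ((1a)-(1d)) and the estimator itself, the following sketch only conveys the spirit of the approach: an observer that copies the unmeasured dynamics, is driven by the measured output, and whose estimation error decays exponentially. It is our own stand-in built on the classical three-dimensional Liu system with parameter values commonly quoted in the literature (a = 10, b = 40, k = 1, c = 2.5, h = 4) and with the second state taken as the measured output; it is not the paper's design.

```python
# Illustrative sketch only: classical 3-D Liu system with measured output
# y(t) = x2(t) and a simple reduced-order observer for x1 and x3 that copies
# the dynamics driven by the measurement. Error e1 decays at rate a and e3 at
# rate c (up to a vanishing coupling term), mimicking exponential convergence.

a, b, k, c, h = 10.0, 40.0, 1.0, 2.5, 4.0      # commonly quoted Liu parameters
dt, steps = 1e-4, 50_000

x1, x2, x3 = 2.2, 2.4, 38.0                    # true (unknown) initial state
x1_hat, x3_hat = 0.0, 0.0                      # observer starts from a wrong guess

for i in range(steps):
    y = x2                                      # measured output

    # true Liu dynamics (forward Euler with a small step, for illustration)
    dx1 = a * (x2 - x1)
    dx2 = b * x1 - k * x1 * x3
    dx3 = -c * x3 + h * x1 ** 2
    x1, x2, x3 = x1 + dt * dx1, x2 + dt * dx2, x3 + dt * dx3

    # reduced-order observer: replace x2 by the measurement y
    dx1_hat = a * (y - x1_hat)
    dx3_hat = -c * x3_hat + h * x1_hat ** 2
    x1_hat, x3_hat = x1_hat + dt * dx1_hat, x3_hat + dt * dx3_hat

    if i % 10_000 == 0:
        print(f"t={i*dt:4.1f}s  |e1|={abs(x1-x1_hat):.3e}  |e3|={abs(x3-x3_hat):.3e}")
```

Running the loop shows both error components shrinking by several orders of magnitude within a few seconds, which is the qualitative behavior the paper's estimator is stated to guarantee.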
1,233.4
2014-03-26T00:00:00.000
[ "Engineering", "Computer Science", "Physics" ]
Quercetin Decreases Claudin-2 Expression Mediated by Up-Regulation of microRNA miR-16 in Lung Adenocarcinoma A549 Cells Claudin-2 is highly expressed in human lung adenocarcinoma tissues and cells. Knockdown of claudin-2 decreases cell proliferation and migration. Claudin-2 may be a novel target for lung adenocarcinoma. However, no physiologically active food substance has been reported to decrease claudin-2 expression. We here found that quercetin, a flavonoid present in fruits and vegetables, time- and concentration-dependently decreases claudin-2 expression in lung adenocarcinoma A549 cells. In the present study, we examined what regulatory mechanism is involved in the decrease in claudin-2 expression by quercetin. Claudin-2 expression was decreased by LY-294002, a phosphatidylinositol 3-kinase (PI3-K) inhibitor, and U0126, a MEK inhibitor. These drugs inhibited the phosphorylation of Akt and ERK1/2, which are downstream targets of PI3-K and MEK, respectively. In contrast, quercetin did not inhibit the phosphorylation. Both LY-294002 and U0126 inhibited the promoter activity of claudin-2, but quercetin did not. The stability of claudin-2 mRNA was decreased by quercetin. Quercetin increased the expression of microRNA miR-16. An inhibitor of miR-16 rescued the quercetin-induced decrease in claudin-2 expression. These results suggest that quercetin decreases claudin-2 expression mediated by up-regulation of miR-16 expression and destabilization of claudin-2 mRNA in lung adenocarcinoma cells. Introduction Flavonoids present in fruits, vegetables, and plants have profound pharmacological properties, and a daily intake of them is associated with a decreased risk of cancer [1]. Quercetin is one of the major flavonoids found in the human diet. Quercetin shows apoptotic and anti-proliferative effects in various human cancer cells derived from the gastrointestinal tract [2], breast [3], prostate [4], and lung [5]. Quercetin induces apoptosis mediated via reduction of Bcl-2 family proteins, release of cytochrome c, and activation of caspases [6,7]. In contrast, the underlying molecular mechanism of its anti-proliferative effect has not been fully understood. Tight junctions (TJs) seal adjacent cells of epithelia in a narrow band at the apical pole of the lateral membrane. TJs regulate the flux of ions and solutes through the paracellular pathway, cell proliferation, and differentiation [8][9][10]. Claudins are integral membrane proteins involved in the formation of TJs and comprise a large family of 27 subtypes [11,12]. The functions of TJs may be regulated by the combination of claudin subtypes. Dysregulation of claudin expression has been shown in various tumor tissues [13]. The expression of claudin-1 and -7 is down-regulated and that of claudin-2 is up-regulated in human lung adenocarcinoma. The normalization of the expression level of these claudins decreases cell proliferation or migration in NSCLC cells [14][15][16]. Cell proliferation and migration are regulated by several intracellular signaling pathways including the MEK/ERK and phosphatidylinositol 3-kinase (PI3-K)/Akt pathways. Therefore, it is important to clarify the relationship between these signaling pathways and the expression of claudins. MicroRNAs (miRNAs) are small, non-coding RNA molecules (21-23 nt) that bind to complementary recognition sequences in the 3′-untranslated region (3′-UTR) of target mRNAs and act as negative regulators of gene expression either by blocking mRNA translation or by RNA interference [17,18]. 
miRNAs play essential roles in a variety of biological and pathological processes including cell proliferation, migration, and apoptosis. The target and function of miRNA are different in each tissue. The specific miRNA acts as either oncogenes or tumor suppressors, depending on the target gene. miRNA microarray studies reveal that various miRNAs are aberrantly expressed in lung cancer [19]. miR-16 expression is down-regulated in human non-small cell lung cancer (NSCLC) tissue samples compared with normal tissues. The ectopic expression of miR-16 decreases hepatoma-derived growth factor, a potential oncogene, and inhibits cell growth, migration, and invasion in NSCLC cells [20], indicating that miR-16 functions as a negative regulator of cell cycle progression. Furthermore, miR-16 has manifold cellular functions in other tissues. miR-16 inhibits the transcriptional activity of nuclear factor-kappaB and Slug, resulting in suppression of epithelial-mesenchymal transition in human glioma [21]. miR-16 expression is inversely correlated with Bcl-2 expression, which induces apoptosis in leukemic cells [22]. The correlation of miRNAs and claudins has been reported in several cancer cells. Claudin-1 expression is suppressed by miR-155 in colorectal [23] and ovarian cancer cells [24]. In contrast, the expression is up-regulated by miR-198 in hepatocellular carcinoma cells [25]. Claudin-18 expression is suppressed by miR-1303 in gastric cancer cells [26]. However, there is no report that claudin-2 expression is regulated by miRNA. In the present study, we found that quercetin decreases claudin-2 expression by decreasing the stability of its mRNA in A549 cells. Quercetin increased miR-16 expression and an inhibitor of miR-16 rescued quercetin-induced decrease in claudin-2. Our results indicate that quercetin may decrease claudin-2 expression through increasing miR-16 expression in lung adenocarcinoma cells. SDS-Polyacrylamide Gel Electrophoresis (SDS-PAGE) and Immunoblotting The preparation of cytoplasmic extracts was performed as described previously [16]. Samples were applied to SDS-PAGE and blotted onto a PVDF membrane. The membrane was then incubated with each primary antibody (1:1000 dilution) at 4 °C for 16 h, followed by a peroxidase-conjugated secondary antibody (1:5000 dilution) at room temperature for 1 h. Finally, the blots were incubated in Pierce Western Blotting Substrate (Thermo Fisher Scientific, Waltham, MA, USA) and exposed to film, or incubated in ECL Prime Western Blotting Detection System (GE Healthcare UK Ltd., Buckinghamshire, UK) and scanned with a C-DiGit Blot Scanner (LI-COR Biotechnology, Lincoln, NE, USA). Band density was quantified with ImageJ software (National Institute of Health, Bethesda, MD, USA). RNA Isolation and Quantitative RT-PCR Total RNA was isolated from A549 cells using TRI reagent (Sigma-Aldrich, St Louis, MO, USA). Reverse transcription was carried out with ReverTra Ace qPCR RT Kit (Toyobo Life Science, Osaka, Japan). Quantitative real time PCR was performed using FastStart Universal SYBR Green Master (Roche Applied Science, Mannheim, Germany). The primers used to PCR are listed in Supplementary Table 1. The threshold cycle (Ct) for each PCR product was calculated with the instrument's software, and Ct values obtained for claudin-1 and -2 were normalized by subtracting the Ct values obtained for β-actin. In the experiments of miRNAs, reverse transcription was carried out with Mir-X miRNA First-Strand Synthesis Kit (Takara, Osaka, Japan). 
Quantitative real time PCR was performed using the specific primers for miRNAs (forward) and the mRQ 3′ primer (reverse). The primers are listed in Supplementary Table S1. The Ct values obtained for miRNAs were normalized by subtracting the Ct values obtained for U6 snRNA, as recommended by the manufacturer's instructions. The resulting ∆Ct values were then used to calculate the relative change in mRNA expression as a ratio (R) according to the equation R = 2^−(∆Ct(treatment) − ∆Ct(control)). 3′-Rapid Amplification of cDNA Ends (RACE) PCR 3′-RACE PCR was carried out using the 3′-Full RACE Core Set (Takara). Reverse transcription was carried out using the Oligo dT-3 sites adaptor primer. The PCR reaction was carried out using the 3 sites adaptor primer and a claudin-2-specific primer (5′-ATTTGAGATTGGAGAGGCTCTTTAC-3′) under the following conditions: denaturation at 94 °C for 0.5 min, annealing at 54 °C for 0.5 min, and extension at 72 °C for 0.5 min; these steps were repeated for 30 cycles. The PCR products were visualized with ethidium bromide after electrophoretic separation on a 2% agarose gel. Luciferase Reporter Assay Using the reporter plasmid containing the −1031/+37 fragment of human claudin-2, the luciferase reporter assay was carried out as described previously [27]. Relative promoter activity was represented as the fold-increase compared to the promoterless pGL4.10 vector. Statistics Results are presented as means ± S.E.M. Differences between groups were analyzed with a one-way analysis of variance, and corrections for multiple comparisons were made using Tukey's multiple comparison test. Comparisons between two groups were made using Student's t test. Statistics were performed using KaleidaGraph version 4.5.1 software (Synergy Software, PA, USA). Significant differences were assumed at p < 0.05. Decrease in Claudin-2 Expression by Quercetin Claudin-2 is highly expressed in human lung adenocarcinoma tissues and A549 cells compared with that in the normal lung tissue [28]. The expression level of claudin-2 protein in the cytoplasmic fraction time-dependently increased in A549 cells, which was inhibited by quercetin (Figure 1A). In contrast, claudin-1 expression was unchanged by the treatments with quercetin (Figure 1B). Furthermore, quercetin decreased the expression level of claudin-2 protein in a dose-dependent manner (Figure 1C). These results indicate that quercetin decreases claudin-2 expression without affecting claudin-1 expression in A549 cells. Effect of Quercetin on the Phosphorylation of Akt and ERK1/2 The expression level of claudin-2 protein was decreased by LY-294002, a PI3-K inhibitor, and U0126, a MEK inhibitor (Figure 2A). LY-294002 and U0126 inhibited the phosphorylation of Akt and ERK1/2, respectively (Figure 2B). In contrast, quercetin had no effect on the phosphorylation of Akt and ERK1/2. Claudin-2 expression is decreased by down-regulation of p-c-Fos, a downstream target of ERK1/2 [27]. However, quercetin did not decrease the p-c-Fos level. These results indicate that ERK1/2 and Akt are not involved in the decrease in claudin-2 expression by quercetin. Inhibition of Promoter Activity by Quercetin The expression level of a protein is affected by the level of its mRNA and/or the stability of the protein. The expression level of claudin-2 protein was time-dependently decreased in the presence of cycloheximide, a translational inhibitor (Figure 3A). Quercetin did not change the rate of decrease, indicating that quercetin did not affect the stability of claudin-2 protein. 
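The relative-expression formula quoted above (R = 2^−ΔΔCt) is easy to sanity-check numerically. The snippet below is a generic illustration of that calculation with invented Ct values, not data from the paper.

```python
# Generic 2^-ddCt calculation as described in the text; the Ct values below are
# made up solely to illustrate the arithmetic, not measurements from the paper.

def relative_expression(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    d_ct_treat = ct_target_treat - ct_ref_treat   # normalize to U6 snRNA (treatment)
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # normalize to U6 snRNA (control)
    return 2.0 ** -(d_ct_treat - d_ct_ctrl)       # R = 2^-(dCt_treat - dCt_ctrl)

# Example: the target amplifies ~1.5 cycles earlier after treatment at equal U6
# levels, i.e. roughly a 2.8-fold increase in expression.
print(relative_expression(24.5, 18.0, 26.0, 18.0))   # ~2.83
```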
The expression level of claudin-2 mRNA was significantly decreased by quercetin, LY-294002, and U0126 ( Figure 3B). These results are consistent with those of Western blotting. Furthermore, quercetin decreased the expression level of claudin-2 mRNA in lung adenocarcinoma cell lines such as PC-3 and ABC-1 (Supplementary Figure S1). The transcriptional activity of claudin-2 is up-regulated by STAT3 in Madin Darby canine kidney (MDCK) cells [29], GATA4 in intestinal HIEC-6 cells [30], and cdx1, cdx2 and hepatocyte nuclear factor-1 in intestinal Caco-2 cells [31]. These transcriptional biding sites were schematically shown in Figure 4A. We recently reported that an EGFR/MEK/ERK/c-Fos pathway is involved in the up-regulation of promoter activity of claudin-2 in A549 cells [27]. Both U0126 and LY-294002 inhibited luciferase promoter activity ( Figure 4B). In contrast, quercetin enhanced rather than inhibited it. Decrease in the Stability of Claudin-2 mRNA by Quercetin To clarify whether quercetin decreases stability of claudin-2 mRNA, we examined the expression of claudin-2 in the presence of actinomycin D, a transcriptional inhibitor. The expression level of claudin-2 mRNA was time-dependently decreased in the presence of actinomycin D ( Figure 5). Quercetin significantly enhanced the decrease in claudin-2 expression. These results indicate that quercetin decreases the stability of claudin-2 mRNA. Figure 5. Decrease in the stability of claudin-2 mRNA by quercetin. A549 cells were treated with 10 μM actinomycin D in the absence and presence of 50 μM quercetin for the periods indicated. After isolation of total RNA, semi-quantitative RT-PCR (upper images) and quantitative real time PCR (lower graph) were performed using specific primers for claudin-2. n = 3. ** p < 0.01 is significantly different from control. Effects of Polyadenylation and miRNA on the Stability of Claudin-2 mRNA The stability of mRNAs is regulated by polyadenylation and mRNAs with long poly (A) are more stable than those with short poly (A) [32]. We examined the effect of cordycepin, an inhibitor of polyadenylation, on the expression level of claudin-2 mRNA. Both quercetin and cordycepin decreased the expression level of claudin-2 mRNA ( Figure 6A). In the same samples, 3′-UTR of claudin-2 is amplified by PCR reaction in control and quercetin-treated cells ( Figure 6B). In contrast, the short and faint band was detected in the cordycepin-treated cells. The expression level of mRNA was quantified by real-time PCR using primers for claudin-2 and β-actin, and was represented relative to values at control; (B) After reverse transcription using Olig-dT-3 sites adaptor primer, 3′-RACE PCR was carried out. The PCR products were visualized with ethidium bromide. n = 3-4. ** p < 0.01 is significantly different from control. Discussion The elevation of claudin-2 expression is reported in cancer tissues including lung [27], liver [33], colon [34], and stomach [35]. We recently found that claudin-2 up-regulates the expression of cell cycle regulators such as ZO-1 associated nucleic acid binding protein and cyclin D1 in the nuclei of A549 cells, resulting in the enhancement of proliferation [16]. Similarly, forced claudin-2 expression in colon cancer cells increases tumor growth in mice [34]. Therefore, claudin-2 may be not only a novel biomarker but also a therapeutic target of cancer. Claudin-2 is expressed in several normal tissues including the small intestine and renal proximal tubule. 
The transcriptional activity of claudin-2 is up-regulated by STAT3 [29], GATA4 [30], and cdx1, cdx2 and hepatocyte nuclear factor-1α [31]. We reported that the EGFR/MEK/ERK/c-Fos pathway up-regulates claudin-2 expression in A549 cells [27]. However, other regulatory mechanism of claudin-2 expression has not been fully investigated. The effects of flavonoids on the function of TJs and the expression of claudins have been examined in epithelial cells. Quercetin enhances intestinal barrier in Caco-2 cells through the elevation of claudin-4 [36] and through the assembly of zonula occludens-2, occludin, and claudin-1 and the expression of claudin-4 [37]. Genistein and daidzein restore lipopolysaccharide-induced impairment of barrier function in endometrial cells [38]. Here, we found that quercetin decreases claudin-2 expression mediated via up-regulating the instability of claudin-2 mRNA in A549 cells. The expression of claduin-4 is not detected in A549 cells [28] and was not changed by quercetin (data not shown). U0126 and LY-294002 inhibited the phosphorylation of ERK1/2 and Akt, respectively, and both of them decreased claudin-2 expression. However, quercetin did not inhibit the phosphorylation of ERK1/2 and Akt, whereas it decreased claudin-2 expression. These results suggest that quercetin decreases claudin-2 expression mediated by the different mechanisms of U0126 and LY-294002. So far, claudin-2 has been reported to regulate paracellular permeability and remodeling [39]. In contrast, quercetin did not change barrier functions including transepithelial electrical resistance and the permeability to dextran (Supplementary Figure S2), although it decreased claudin-2 expression under our experimental conditions. We suggest that the inconsistency may be attributed to the difference of cell density in the cultures. The stability of mRNA varies range from about 15 min to more than 24 h [40]. The half-life of mRNAs is regulated by cis-elements of mRNA and trans-acting RNA binding proteins. Human antigen R (HuR), an mRNA-binding protein, increases the stability of several target mRNAs. The HuR-mediated stabilization of mRNA is involved in carcinogenesis and onset of inflammatory disease and Alzheimer's disease. Quercetin inhibits the interaction of HuR to the AU-rich element (ARE) of 3′-UTR of tumor necrosis factor-α mRNA in mouse macrophage RAW264.7 cells [41] and myeloid cell leukemia-1 mRNA in human myelomonocytic U-937 cells [42]. The binding of HuR to ARE of the target genes may prevent degradation of mRNA caused by nucleases. In contrast, 3′-UTR of claudin-2 does not contain ARE, suggesting that the stability of claudin-2 mRNA is not regulated by HuR. Polyadenylation is fundamental for mRNA stability, extranuclear transport and translation during development [43]. The majority of genes generate multiple mRNAs as a result of alternative polyadenylation in the 3′-UTR. In 3′-RACE PCR reaction, cordycepin did not amplify DNA of claudin-2, but quercetin amplifies it similar to that in control. We suggest that the inhibition of polyadenylation is not involved in the quercetin-induced instability of claudin-2 mRNA. The roles of miRNAs have been investigated in normal and tumor lung. Some regulators including Dicer, DGCR8, and Drosha have been reported to be involved in the disruption of miRNAs [44,45]. Knockout mice lacking a key processing enzyme, Dicer, exhibit a failure of epithelial branching [46]. miRNA microarray studies showed that various miRNAs are aberrantly expressed in lung cancer [19]. 
The expression of miR-16 is down-regulated in 82% of adenocarcinomas and in six NSCLC cell lines [47]. Ectopic expression of miR-16 inhibits cell proliferation, colony formation, migration, and invasion in NSCLC cells [20]. miR-16 induces cell cycle arrest through targeting G1 cyclin, an essential regulator of G1 phase progression [48]. Our data indicate that quercetin decreases claudin-2 expression mediated by the elevation of miR-16 expression. The correlation between claudins and miRNAs has also been reported in cells of other tissues. Overexpression of miR-155 prevents tumorigenesis in human ovarian cancer mediated by down-regulation of claudin-1 [24]. Down-regulation of miR-1303 prevents proliferation, migration, and invasion of gastric cancer cells mediated by up-regulation of claudin-18 [26]. Proliferation of A549 cells is inhibited by quercetin [5] and by the knockdown of claudin-2 [16]. We suggest that quercetin prevents tumorigenesis in human lung cancer partially mediated via the elevation of miR-16 and the decrease in claudin-2 expression. A quercetin-rich diet affects the expression of miRNAs and is associated with a lower risk of lung cancer. Further study is needed to clarify whether quercetin decreases claudin-2 expression mediated by the up-regulation of miR-16 using an in vivo model system. Conclusions In the present study, we found that quercetin decreases claudin-2 expression in A549 cells. Quercetin decreased the stability of claudin-2 mRNA without inhibiting its promoter activity. The expression of miR-16 was increased by quercetin, but that of miR-15a, -15b, -195, -424, and -497 was not. Knockdown of miR-16 rescued the quercetin-induced decrease in claudin-2. These results indicate that quercetin is a physiologically active food substance which decreases claudin-2 expression in A549 cells. Claudin-2 and miR-16 may be potential therapeutic targets for the treatment of adenocarcinomas. Acknowledgments This work was supported in part by grants from the Takeda Science Foundation, Nakatomi Foundation, Ichiro Kanehara Foundation, and Sagawa Foundation for Promotion of Cancer Research, and a collaborative research grant from API Co., Ltd. (Gifu, Japan). Author Contributions Hiroyuki Sonoki and Tomonari Sato performed experiments and analyzed the data. Satoshi Endo, Toshiyuki Matsunaga, Masahiko Yamaguchi, Yasuhiro Yamazaki and Junko Sugatani contributed the experiment plan and discussion of the manuscript. Akira Ikari contributed to supervision of the project, interpretation of the data and writing the paper. Conflicts of Interest The authors declare no conflict of interest.
4,109
2015-06-01T00:00:00.000
[ "Biology", "Chemistry" ]
Working Capital Management of Catering Enterprises Based on the Supply Chain — Taking the Jiumaojiu Group as an Example , Introduction Working capital management mainly refers to a series of management strategies adopted by enterprises to improve the efficiency, profitability and paying ability, which is the core content of short-term financial management and can help enterprises use their funds efficiently.As the epidemic spreads around the world, many countries are facing problems of negative economic growth.In the domestic markets, the consumers' consumption expenditure has been greatly reduced (Xu, 2022) and consequently the service industries are under the pressure from the weak purchasing power of end consumers.As the competition in domestic and foreign markets becomes intensifying, whether the company's funds can operate efficiently has been vital to the sustainable development of the enterprises.It is particularly urgent to pay attention to the management of working capital of enterprises.The working capital flow starts from procurement, involving production, marketing and other aspects related to the supply chain.The essence of working capital management is the working capital management throughout the supply chain. As a traditional service industry, the catering industry has made positive contributions to economic development, expanding domestic demand, prospering the market economy, providing jobs and improving people's better life.However, according to Ma (2022), the catering industry was still seriously affected by the epidemic, with decreased income, high cost, tight cash flow and difficulty in financing.Jiumaojiu Holding Co., Ltd., a catering group focusing on Chinese catering chain operation, was founded in Haikou, Hainan Province and has been operating for more than 28 years.The Group has established and operated five Chinese catering brands indifferent segments, including -Jiumaojiu Northwest Cuisine‖ and -Taier Chinese Sauerkraut Fish‖.It was listed on the Stock Exchange of Hong Kong on January 15, 2020.However, recently, Jiumaojiu issued a profit warning announcement, due to the epidemic and the fluctuation of the exchange rate, the profit attributable to shareholders was not less than 47 million yuan, down about 86.2% year-on-year, that is, the net profit of Jiumaojiu fell by nearly 90%.In order to keep the business running during this tough time and develop in the long-term, it is crucial to strengthen the efficient management of its working capital. In times of financial setbacks owing to the international relations and the epidemic, there are a lot of difficulties for the developments of enterprises.From the perspective of supply chain, this paper aims to study the current situation of working capital management of Jiumaojiu Group, analyze the existing problems in its working capital management, and put forward improvement suggestions, which may help to explore the reasonable capital structure of the enterprise, enhance the risk resistance ability, and achieve maximum economic benefits.Also, this study aims at providing useful information for financial data users from the perspective of supply chain, promote managers and financial personnel in enterprises to cultivate view of the channel management, so as to formulate more objective and scientific working capital management policies and improve the efficiency of working capital use and management. Literature Review American scholar Michael E. 
Porter firstly proposed the concept of value chain, which provided a theoretical foundation for the research of supply chain.In the early days, the supply chain was viewed as a logistics process.Beamon (1999) defined the supply chain as the logistical connection of manufacturers, suppliers and sellers, which could improve the efficiency of the enterprise.In the 1990s, people's understanding of the supply chain has changed from a -production chain‖ to a -value-added chain‖.Mentzer et al. (2001) suggested that the upstream and downstream enterprises form a complete supply chain, in which the products, funds and information between enterprises could be exchanged.Later studies paid more attention to the network of the core enterprises.Hofmann and Kotzab (2011) believed that the integration and coordination of supply chain became the main content of the supply chain research.The theme of integration played an important role in the research of supply chain management.Tong and Gu (2013) introduced business flow and workflow on the basis of the original supply chain management research method, and formed a 5F research model, which enriched the supply chain management research method.Gang et al. (2016) pointed out that the application of big data based on information technology to the supply chain management can improve the efficiency of enterprises, and help to measure how much benefit the supply chain can bring to the enterprise.Zhang (2020) carried out the research of the integration of supply chain and finance, which provided a new development concept for corporate financing. In 1930s, Guthmann (1934) noticed that many businesses went bankrupt due to insufficient working capital during the Great Depression in the United States.He first proposed a study of working capital in response to this phenomenon.Mao (1995) was the first scholar to study working capital in China.He expounded the concept of working capital management, and proposed that the importance of integrating the two aspects of current assets and current liabilities.Wang and Ma (2005) believed that enterprises need innovation in working capital management.The focus should be shifted from the allocation of current assets to the channel control.It was believed that while considering the relevant interests of upstream and downstream enterprises, more working capital isn't always better (Fu, Huang & Gan, 2006).Wang and Zhang (2012) pointed out that for the use of working capital, only a comprehensive analysis of all aspects and elements could form a more objective evaluation system.Kumar and Sisadia (2012) used nine major factors for quantitative analysis, and the pointed out that working capital management is to ensure that the company have sufficient capacity to pay the long-term and short-term debts.Liu (2020) used different indicators to conduct a comprehensive study on the working capital of enterprises, and pointed out that the channel capital management of enterprises would have a direct impact on the working capital management. 
Since 2001, studies in the United States had introduced supply chain thinking into working capital management, believing that it was necessary to incorporate supply chain management in business management.In recent years, many scholars at home and abroad have also conducted the research of working capital management from the perspective of supply chain.It was believed that though there were differences in working capital management between industries, improving supply chain management and channel relationship management can improve the working capital management in different companies (Wang, Pang & Sun, 2007).Lind et al. (2012) proposed that with the intensifying competition in the market, the competition between enterprises was no longer limited to the interests, but has shifted to the competition for the market, customers and suppliers in the entire supply chain.Therefore, the management of working capital should abandon the traditional single factor, and should be based on the overall management of the supply chain.It was vital to optimize the cooperative relationship between enterprises, suppliers and customers.Zhao and He (2018) conducted a study using case analysis.They pointed out that the speed of logistics turnover in the supply chain has a significant impact on the efficiency of working capital.Xie (2018) constructed the working capital turnover period index, and evaluated the procurement, production, sales process of the enterprise from the perspective of the business process.Wen and Wang (2021) pointed out that the more stable and concentrated the supply chain, the higher the efficiency of working capital management of enterprises.Zhang (2021) took a manufacturing enterprise as the research object and found some problems in its working capital management.He proposed countermeasures for these problems from three parts of procurement, production and sales. Overview of Catering Industry in China According to the National Bureau of Statistics, in recent years, the overall scale of China's catering industry has shown a growth trend.In 2020, owing to the outbreak of epidemic, the annual catering revenue fell by about 15% compared to 2019.In 2021, with the epidemic gradually under control, China's catering industry has recovered to the pre-epidemic level.However, in 2022, the revenue of China's catering industry was 4,390 billion yuan, down 6.3% from the previous year. Owing that the total profit of catering enterprises in China in 2022 haven't been revealed, here we analyze the total profit from 2017 to 2021.Although the total revenue showed the trend of fluctuation, the total profit of China's catering enterprises has dropped significantly, from 258 billion in 2018 to 25 billion in 2019.What's more, in 2020, the total profit has dropped by 50% in a year, from which we can conclude that owing to the epidemic, the operation of the catering industry has been affected seriously and the total profit in catering industry has declined. Basic Information of Jiumaojiu Group Jiumaojiu (Guangzhou) Holding Co. 
Ltd., a catering group focusing on Chinese catering chain operation, was founded in Haikou, Hainan Province and has been operating for more than 28 years, with more than 16,000 employees. The Group has established and operated five Chinese catering brands in different segments, namely "Jiumaojiu Northwest Cuisine", "Taier Chinese Sauerkraut Fish", "Song Hot-pot Factory", "The Uncle Chef" and "Laimeili Meishan Grilled Fish with Green Prickleyash". It is a leading Chinese restaurant group, committed to operating catering brands distributed across different market segments. According to its 2022 interim report, as of December 31, 2022, the number of its self-operated stores had reached 556. It has worked with more than 100 shopping centers for more than five years. Its restaurant network covers 109 cities in China and 2 cities in Singapore and Canada, and the number of registered members in its membership system has exceeded 8 million. However, the company has been in a poor financial situation in recent years: its operating income in the first half of 2022 decreased by 6.1% year-on-year, and its net profit fell from 205 million to 63 million. Its financial situation in the second half of the year may be worse owing to the epidemic.
Supply Chain Management of Jiumaojiu Group
The supply chain of Jiumaojiu Group can be divided into upstream, midstream and downstream parts. The upstream part of the supply chain is pre-production preparation, including raw material procurement and the research and development of new products. The midstream part refers to the production process: Jiumaojiu Group's own central kitchens in Guangdong, Hainan and Hubei produce most of the semi-finished products, and it also purchases a small number of semi-finished products from third-party supply centers. The downstream part is the marketing of its own products; the products eventually flow to consumers at the end of the supply chain through online and offline channels. In recent years, Jiumaojiu has continued to invest in the construction of its supply chain, especially the upstream and midstream parts. Its 2022 interim report shows that it has completed a supply chain management system with five functions: quotation estimation, warehouse management, procurement management, production and processing, and logistics management.
On December 23, 2021, Jiumaojiu began constructing a National Supply Chain Center Base in Hengli Town, Nansha District, Guangzhou. With a construction area of about 138,000 square meters and a total investment of 500 million yuan, the base is expected to achieve an annual output value of more than 3 billion yuan in 2027. It will become a comprehensive catering base integrating raw material storage, food processing, transshipment, distribution, personnel training and cultural display. Investment in the supply chain should help Jiumaojiu increase its profits; for example, through its layout of the upstream, the purchase price of sea bass in 2021 decreased compared with 2020 even though market prices for sea bass generally increased, greatly reducing costs.
Working Capital of Jiumaojiu Group
This paper analyzes Jiumaojiu's working capital structure in detail before analyzing working capital management from the perspective of the supply chain. The working capital structure is considered to include current assets and current liabilities. The following analysis is based on the financial data from 2018 to 2022.
(1) Current Assets
The proportion of current assets describes the liquidity of the enterprise's assets. The scale of current assets is shown in Table 1. As shown in Table 1, the total current assets of Jiumaojiu Group increased from 227 million yuan to 2.62 billion yuan, which, according to the details in the financial reports, was mainly due to the increase in cash. However, the proportion of current assets declined in the later period, which is related to its focus on supply chain management. Although Jiumaojiu had invested more in the midstream and downstream of the supply chain, in recent years it also began to build its own raw material production bases in the upstream of the supply chain and its own central kitchens in the midstream, so it also needed non-current assets such as production plants and production equipment. Therefore, the proportion of current assets decreased slightly. The current assets mainly include monetary funds, accounts receivable, inventory, etc. Table 2 shows the proportions of the major current assets of Jiumaojiu Group from 2018 to 2022. As shown in Table 2, the proportions of cash and cash equivalents and of trade and other receivables were relatively high. The proportion of inventories showed a downward trend, indicating that Jiumaojiu Group took certain measures to avoid the impairment caused by inventory backlog and to reduce the proportion of working capital occupied by inventory. The surge in cash and cash equivalents was mainly related to the need for cash to fund the expansion in recent years; however, its proportion was too high, indicating that the company does not invest efficiently and that returns would fall. In addition, the proportion of accounts receivable fell significantly, indicating that the company's position in the supply chain has strengthened, reducing the risk of uncollectible accounts receivable.
(2) Current Liabilities
It is inevitable for enterprises to borrow in daily operations. A reasonable liability structure can solve the problem of insufficient cash, help the enterprise use external funds for internal business, and provide financial leverage. The current liabilities of Jiumaojiu Group are shown in Table 3. As shown in Table 3, the proportion of current liabilities of Jiumaojiu Group remained relatively stable, fluctuating around 45%. According to Li (2022), a share of current liabilities between 30% and 70% is moderate, which means the company does not rely solely on short-term borrowing. Although short-term debt can temporarily cope with the risk of insufficient liquidity, its borrowing interest rates are high and it may lead to a break in the capital chain.
Working Capital Management in the Procurement Process
Working capital management in procurement mainly covers the management of payables, advance payments and material inventory. Accounts payable and notes payable are both debts incurred by an enterprise when purchasing raw materials from suppliers, in which case the enterprise occupies the funds of upstream suppliers. Prepayment is the payment made by an enterprise to suppliers before receiving the goods, in which case the upstream suppliers occupy the enterprise's working capital. In order to speed up the turnover of working capital in the procurement process, it is necessary to reduce the amount of raw material inventory and appropriately extend the payment cycle. Through the construction of its procurement platform, which includes a sea bass tracing system, Jiumaojiu Group can communicate with suppliers more efficiently, maintain friendly relationships with them, and reduce procurement costs. The working capital of the procurement process is calculated as: working capital in the procurement process = material inventory + advance payments − accounts payable. The working capital in the procurement process of Jiumaojiu Group in recent years is shown in Table 4. The working capital in the procurement process is relatively high, reflecting inefficient capital flow and capital occupied by inventories and suppliers, which harms the operation of the enterprise. The turnover period in the procurement process in these three years is positive: 1.65 in 2019, 2.76 in 2021 and 2.93 in 2022. A positive turnover period indicates that Jiumaojiu did not make use of suppliers' funds and that accounts payable may be poorly managed.
Working Capital Management in the Production Process
In the production process, capital is mainly occupied by employee salaries and other payables. If the enterprise delays the payment of these two items, it can reduce its capital investment. Working capital in the production process is calculated as: working capital in the production process = productive inventory + other receivables − employee pay payable − other payables.
Jiumaojiu Group has established a logistics system, including three central kitchens located in Guangdong, Hainan and Hubei, each equipped with three warehouses. The service radius of the various logistics facilities is about 200 km. While some ingredients are transported directly to the restaurants, the central kitchens in Guangdong, Hainan and Hubei produce most of the semi-finished products. Guangdong's central kitchen opened in December 2016 to provide ingredients to its restaurants in South China (excluding Hainan). Hainan's central kitchen opened in September 2016, providing ingredients for its restaurants in Hainan. Hubei's central kitchen opened in December 2015 and provides ingredients to its restaurants in Central and East China. In conclusion, Jiumaojiu has invested heavily in the production process. The production process is the core part of the catering industry, and its efficiency provides a guarantee for the enterprise to obtain profits. The working capital of Jiumaojiu Group in the production process in recent years is shown in Table 5.
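Before turning to the table values, the following short Python sketch illustrates how the working capital figures defined above could be computed from balance-sheet items. The material inventory, other receivables and other payables figures are taken from the text; the advance payments, accounts payable and operating income inputs, and the turnover-period convention (working capital divided by operating income, scaled to days), are illustrative assumptions rather than figures from Jiumaojiu's reports.

```python
# Illustrative sketch of the process-level working capital formulas in the text.
# Figures marked "hypothetical" are placeholders, not reported values.

def wc_procurement(material_inventory, advance_payments, accounts_payable):
    """Working capital in the procurement process (formula from the text)."""
    return material_inventory + advance_payments - accounts_payable

def wc_production(productive_inventory, other_receivables,
                  employee_pay_payable, other_payables):
    """Working capital in the production process (formula from the text)."""
    return (productive_inventory + other_receivables
            - employee_pay_payable - other_payables)

def turnover_days(working_capital, operating_income, days=360):
    """Turnover period implied by a working capital level (assumed convention)."""
    return working_capital / operating_income * days

if __name__ == "__main__":
    # Million yuan: 78.0, 241.6 and 201.6 are quoted in the text;
    # advance payments, accounts payable and operating income are hypothetical.
    wc_p = wc_procurement(material_inventory=78.0, advance_payments=30.0,
                          accounts_payable=85.0)
    wc_m = wc_production(productive_inventory=10.0, other_receivables=241.6,
                         employee_pay_payable=0.0, other_payables=201.6)
    print(f"procurement WC: {wc_p:.1f} m CNY, "
          f"turnover: {turnover_days(wc_p, 4000.0):.2f} days")
    print(f"production WC:  {wc_m:.1f} m CNY")
```

A positive procurement-process result, as in this example, corresponds to the positive turnover period discussed above, i.e., the enterprise's own funds rather than suppliers' funds are financing procurement.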
In the production process, the working capital of Jiumaojiu Group was negative in the first three years and increased significantly in the past two years. Since its financial statements do not report productive inventory or employee pay payable, the calculation of working capital for its production process is relatively simple. Specifically, other receivables increased markedly, from 67.5 million yuan to 241.61 million yuan, which shows that the company needs to improve its management of funds in the production process. Other payables grew quickly, from 114.554 million yuan in 2018 to 201.60 million yuan in 2022, a marked increase over five years, which shows that Jiumaojiu has tried to occupy funds in the production process. The turnover period in the production process from 2018 to 2020 was negative, at about minus eight, indicating that the capital was used efficiently in those three years. However, it grew rapidly and turned positive from 2021, showing that capital may have been occupied by plants or labor costs in the production process and was not used efficiently.
Working Capital Management in the Sales Process
Selling is the last step of business activities and the key way to bring finished products to the sales market and to consumers in order to return funds for operation. Enterprises obtain capital returns through this channel and then invest the funds in the next round of procurement, production and marketing, or in financial management activities; this cycle is vital to the sustainable development of enterprises. In the sales process, the working capital of the enterprise involves advance receivables, accounts receivable and taxes payable. Advance receivables are funds prepaid by customers, which means the enterprise occupies customers' funds, while finished products and accounts receivable occupy the working capital of the enterprise in the sales process. The working capital of the sales process is calculated as: working capital in the sales process = accounts receivable + finished products − advance receivables − taxes payable. The working capital in the sales process of Jiumaojiu Group in recent years is shown in Table 6. The main business of Jiumaojiu Group is to operate restaurants, provide takeaway services and sell local specialty products. Jiumaojiu Group tries to expand its sales channels, making full use of takeaway platforms; through online and offline channels, revenue can grow and funds for operation can return. In recent years, taxes payable have generally shown an upward trend. Accounts receivable also increased as a whole, up by about 38% from 2018 to 2022, showing that accounts receivable occupies more and more funds, which limits the capital turnover efficiency of the enterprise. This is not conducive to the development of the enterprise, and the management of accounts receivable of Jiumaojiu Group needs to be improved. Advance receivables are funds prepaid by customers, which can increase the liquidity of Jiumaojiu Group and help it obtain adequate sales information; however, the proportion of advance receivables of Jiumaojiu Group is very small. The calculation of the working capital of the sales process shows that from 2018 to 2020 it was negative and continued to decline, indicating that the consumers' capital occupied by Jiumaojiu Group was increasing. However, working capital increased from 2021. Although the turnover period in the sales process was still negative,
it increased dramatically from 2020 to 2022, showing poor management of working capital in the sales process in these years.
Problems and Suggestions in the Working Capital Management Based on the Supply Chain for Jiumaojiu Group
Problems in the Working Capital Management Based on the Supply Chain
Based on the above analysis, the problems in the working capital management of Jiumaojiu Group can be summarized as follows.
(1) Low Efficiency of Inventory Management
Compared with other industries, the inventory of the catering industry is highly liquid, but its shelf life is generally the shortest and there are strict requirements for the freshness of ingredients; the requirements for the storage environment are therefore high. If the material inventory is not managed well, the loss for the company will be large. Its material inventory increased from 36.4 million in 2018 to 78 million in 2022, and the inventory turnover in days increased from 18.9 in 2018 to 20.2 in 2022. The overstock of inventory restricted the liquidity of funds, resulting in an increase in the turnover period in the procurement process. It is necessary to accelerate the inventory turnover efficiency of the enterprise.
(2) Excessive Accounts Receivable
With its continuous expansion and development, the operating income of Jiumaojiu Group has increased significantly, and accounts receivable have also surged. From 2018 to 2022, accounts receivable increased from 16.79 million yuan to 23.21 million yuan, an increase of about 38%. The company adopted a relatively loose credit sales policy in order to seize market share during its expansion, resulting in a surge in accounts receivable. A large volume of accounts receivable can bring a series of adverse effects to a business. On the one hand, excessive accounts receivable indicates that the enterprise's ability to convert products into funds is weak, which easily causes difficulties in capital turnover; on the other hand, accounts receivable carry the risk of bad debts, which increases the management cost and opportunity cost of the enterprise. At present, there are three main sales methods: cash sales, credit sales and pre-sales. Jiumaojiu Group generally implements cash sales, that is, payment after the meal, which can prevent bad debt losses, quickly recover funds, and speed up the flow of funds. However, comparing the accounts receivable of Jiumaojiu Group over the past five years shows that the scale of accounts receivable has grown sharply, indicating that the financial risk of the enterprise will increase accordingly. As of 2022, the accounts receivable of Jiumaojiu Group reached 23.2 million yuan. These receivables may come from third-party payment platforms such as Alipay and WeChat. This may limit working capital turnover in the sales process, which increases the cost of the enterprise to a certain extent.
(3) Aggressive Expansion Strategy
Table 7 shows the number of stores of each company in recent years. Affected by the epidemic, customer traffic has declined and the seat turnover rate of its restaurants has dropped significantly, which has had a negative impact on revenue. But Jiumaojiu is still expanding, especially Taier, whose number of stores has surged. In the process of implementing its scale expansion strategy, Jiumaojiu Group has not closely considered the current economic and social situation.
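As a quick arithmetic check on the figures cited above, and to show how the sales-process formula given earlier can be applied, the following sketch recomputes the receivables growth rate from the reported 2018 and 2022 values. The sales-process component values used in the second part are hypothetical placeholders, since the full Table 6 breakdown is not reproduced here.

```python
# Check of the cited receivables growth and an illustrative sales-process
# working capital calculation. The receivables figures are from the text;
# the sales-process components below are hypothetical placeholders.

def growth_rate(start, end):
    """Simple percentage growth between two values."""
    return (end - start) / start * 100

def wc_sales(accounts_receivable, finished_products, advance_receivables, taxes_payable):
    """Working capital in the sales process, per the formula in the text."""
    return accounts_receivable + finished_products - advance_receivables - taxes_payable

receivables_2018, receivables_2022 = 16.79, 23.21  # million yuan, from the text
print(f"receivables growth 2018-2022: {growth_rate(receivables_2018, receivables_2022):.1f}%")
# -> about 38%, matching the figure quoted in the analysis

# Hypothetical 2022 components (million yuan) to illustrate the formula only:
print(f"sales-process WC: {wc_sales(23.21, 5.0, 2.0, 40.0):.2f} million yuan")
# A negative result would indicate that customers' and the tax authority's funds
# are being occupied, consistent with the negative values discussed above.
```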
Suggestions in the Working Capital Management Based on the Supply Chain
(1) Jiumaojiu Group needs to improve its inventory management capabilities. Jiumaojiu needs to ensure that the inventory level is maintained at an appropriate state, avoiding the occupation of funds and thereby enhancing the efficiency of capital use in the procurement process. Therefore, Jiumaojiu could use a just-in-time (JIT) inventory system, which aligns raw-material orders from suppliers directly with production schedules. Companies employ this inventory strategy to increase efficiency and decrease waste by receiving goods only as they are needed for the production process, which reduces inventory costs. This method requires Jiumaojiu to forecast demand accurately.
(2) In addition to scientific and technological means of supervising inventory, Jiumaojiu Group should also designate special personnel to check the volume and freshness of raw materials every day, preventing food materials from deteriorating due to improper storage, which would reduce the losses and costs caused by inventory.
(3) Jiumaojiu could establish a marketing investigation system to forecast market demand. The rapid expansion has made it impossible to research some cities and estimate the consumption of the stores there, which makes it difficult for procurement plans to meet market demand. The consumption of food raw materials is affected by many uncontrollable factors, such as fluctuations in customer flow and changes in consumer preferences. If warehouse management ignores the needs of downstream customers, it will greatly increase the difficulty of matching supply with market demand. Therefore, Jiumaojiu Group needs to establish a marketing investigation system and set up a team specializing in marketing investigation.
(4) In order to avoid a shortage of working capital, it is necessary to strengthen the management of accounts receivable. The enterprise can formulate corresponding collection policies according to the age and amount of accounts receivable. Although most of the receivables of Jiumaojiu Group belong to third-party payment platforms and would be paid in the short term, the amount of accounts receivable is still too large. Jiumaojiu Group could collect money faster by encouraging customers to use real-time payment methods such as online banking.
(5) Jiumaojiu should find a development model that fits the company's actual situation and moderate its expansion. After listing, Jiumaojiu Group gradually expanded. However, owing to the rapid change of the market under the pressure of the epidemic and the reduction in national consumption levels, many enterprises in the catering industry are facing the risk of bankruptcy. In this context, the company should keep a cautious attitude and pursue a contractionary strategy.
Conclusion
This paper takes Jiumaojiu Group as an example and discusses its working capital management from the perspective of supply chain management using the financial index analysis method. Although Jiumaojiu Group has invested heavily in the management of its supply chain, the financial indices show that it has not managed its working capital efficiently in the procurement, production and sales processes in recent years. Therefore, this study proposes that it should improve the management of inventory and accounts receivable, and adjust its expansion strategy.
This study conducts a case study of Jiumaojiu Group, but owing to the confidentiality of its internal information, the data used in the article come mainly from the public financial reports of Jiumaojiu Group and relevant information on the Internet, so the information available for in-depth research is limited. In the future, if more data and information become available, research should integrate the management of working capital and the supply chain more closely by explaining the financial indicators through more concrete behaviors of the enterprise. Moreover, the sample size of the study can be expanded: researchers can select more companies and use their data for empirical research.
Table 1. Working Capital Structure of Jiumaojiu Group
Table 2. The Proportion of Major Current Assets of Jiumaojiu Group
Table 3. Current Liabilities of Jiumaojiu Group
Table 4. Working Capital in the Procurement Process
Table 5. Working Capital in the Production Process
Table 6. Working Capital in the Sales Process
Table 7. Number of Stores
6,339.2
2023-11-29T00:00:00.000
[ "Business", "Economics" ]
Development of CD-Rom to Appraise Nutritional Knowledge of the Information Technology Professionals : Computer technology provides the opportunity to explore new and creative methods of delivering nutrition education to professionals in the information technology sector. It has the potential to make learning about nutrition more interesting, informative, enjoyable, exciting and effective. It can also help to reach a large population with accurate and consistent nutrition education with greater efficiency and in less time. The purpose of this study was to develop and evaluate appropriate digital nutrition education material, a CD-Rom, to appraise IT professionals of balanced diet, good eating habits and correct food choices. As part of a community nutrition education program, nutrition education material in the form of a CD-Rom was developed, keeping in view the updated scientific information. The areas identified for development of the CD-Rom included balanced diet, classification of foods based on their functions, nutritional significance of nutrients, therapeutic information, importance of physical exercise and dietary guidelines for Indians. The study demonstrated that the nutrition and health related messages incorporated in the digital nutrition education material were successfully transmitted to the target group of adults.
Introduction
Physical demands were a typical aspect of daily life. In modern times, many of these demands on the human body are no longer part of daily life due to the mechanization of society, i.e., increased use of automobiles and decreased opportunities to walk in many modern living areas. Such changes in lifestyle are also reflected in the rising prevalence of various chronic diseases such as obesity and diabetes, especially among the sedentary population (Gupte et al., 2001). Professionals in the IT sector are at nutritional and health risk. Such professionals, with ample access to energy-rich foods and low physical activity levels, are at increased risk of becoming overweight or obese. Since the sedentary lifestyle is spreading fast in the information technology sector, the number of IT professionals suffering from chronic diseases is increasing. Interventions are needed for the nutritional health promotion of the target individuals, and they should be deliverable inexpensively to large segments of the population. The CD-ROM is a cheap and rapid tool for the distribution of up-to-date knowledge to the population. With due consideration, digital nutrition education material, a CD-Rom, was developed to appraise the professionals of the importance of balanced diet, good eating habits and correct food choices to lead a healthy life and achieve their targets. A number of studies have been conducted on nutrition interests and health needs in different populations, such as adolescent girls (Kaur et al., 2007), young women (Joshi and Vijayalaxmi, 2009) and many others, but there have been few studies among young adults, who are a potential target group for nutrition education. Nutrition education lessons were given to a group of IT professionals with the purpose of increasing their nutrition knowledge and ultimately changing behavior. The topics covered in each lesson were relevant to the IT professionals. This pilot study will form the basis for planning educational interventions not only to improve nutrition knowledge but also to change attitudes related to food, nutrition and health issues.
The purpose of this study was to develop digital nutrition education material, a CD-Rom, to improve the nutritional knowledge levels of the subjects and to appraise the professionals of balanced diet, good eating habits and correct food choices.
Developing the CD-ROM
The CD-Rom was developed on the specific areas identified from the information elicited from the study samples on physical activity pattern, food preferences, food consumption and meal pattern, knowledge levels and anthropometry of the professionals, keeping their socio-economic profile as the background. The content of the CD-Rom was prepared with the help of Microsoft Office PowerPoint 2007. The final draft of the CD-Rom was then made into a flash slide show.
Nutrition Education Intervention
Nutrition education is defined as the process by which beliefs, attitudes, environmental influences and understanding about food lead to practices that are scientifically sound, practical and consistent with individual needs and available food resources. The importance of nutrition education as a means of improving the nutrition of the community in developing countries has been increasingly realized in recent years. Nutrition education was imparted to the subjects after assessing their basic nutritional knowledge through pre-test data collection with the help of a structured schedule. The subjects were informed personally about the nutrition education intervention programme. The professionals were educated about the specific food and nutrition areas that were identified in the study. The developed education material, i.e., the CD-ROM, was then given to the professionals, followed by interactive sessions, which the subjects attended at their respective offices. A pre-test and post-test design was used for this study. After the respondents had finished the intervention, a post-test questionnaire was given to them. The post-test questions were identical to the pre-test questions; however, the order of the questions was rearranged.
Duration of the Nutrition Education Material
The professionals of the four IT companies took part in interactive sessions once every fifteen days over a period of one month. Each interactive session lasted between 45 and 60 minutes.
Evaluation of the CD-Rom
At the end of the whole session, the subjects were asked to evaluate the CD-Rom by examining the degree of understanding, sufficiency of information, self-explanation and usefulness of pictures in understanding the content of the education material.
Food and Nutrition Areas Identified for Development of CD-Rom
Understanding the nutritional knowledge and awareness of the target group helps the researcher to identify the grey areas of nutrition so that the relevant information and issues can be incorporated when developing appropriate nutrition education material. The content of nutrition education material should be based on assessed individual/group needs, and nutritionists should plan the nutrition education intervention based on the needs and socio-cultural background of the target group. Information tailored and designed to the audience's needs is more effective than providing general information (Trevena et al., 2006). In the present study, research was carried out prior to CD-Rom development to elicit information on the physical activity pattern, food preferences, food consumption and meal pattern, knowledge levels and anthropometry of the IT professionals, keeping their socio-economic profile as the background.
The average knowledge levels of the subjects indicated that basic aspects of nutrition such as balanced diet and the functions of food, and specific issues such as the consequences of consuming junk foods, excessive intake of fat with no physical activity, and food pattern, needed to be incorporated into the e-learning material, i.e., the CD-Rom. Stevens et al. (2003) also conducted a study of a 20-min interactive computer-based intervention using a touch-screen format with healthy women and showed a significant decrease in dietary fat and an increase in fruit and vegetable consumption. The identified areas and the content under each area of the CD-Rom are summarized below: balanced diet and classification of foods based on their functions; nutritional significance of macronutrients and micronutrients; therapeutic information on obesity, diabetes mellitus and hypertension; importance of physical exercise and dietary guidelines for Indians. Development of self-learning e-material in the form of a CD-Rom was chosen as it is the best means of imparting nutrition education to literate subjects.
CD-Rom Outlines and Design
The CD-Rom, titled "Know your Food for Healthy living", consisted of 67 slides. The main outlines of the content for developing the CD-Rom are given in Table 1. The CD-Rom was developed in an appealing design format; colorful photographs and transition effects were used to maintain the interest of the professionals during the intervention and for better understanding and usage. A nutrition education program developed by Bae Eun Young et al. (2007) also made a major contribution to nutritional health promotion by correcting the eating-out habits of youth.
Contents of the CD-Rom
The contents of the CD-Rom were developed by reviewing the manuals and textbooks available on food and nutrition information, specifically for Indians.
Developing the CD-Rom
The staff of the department of Foods and Nutrition and research associates were consulted to review the CD-Rom. The draft of the CD-Rom was reviewed to fine-tune the appropriateness of its content, sequence, comprehension and, finally, its appeal to the users.
Evaluation of the CD-Rom
An evaluation of the CD-Rom education material was conducted with 40 IT professionals (29 males and 11 females). Most subjects opined that the information in the CD-Rom was informative (90%), self-explanatory (80%) and easy to understand (75%) (Table 2). These results suggested that the CD-Rom developed was easy to understand and informative, and provided sample menus which would enable the subjects to practice in their daily life.
Summary and Conclusion
The major topics in the CD-Rom were an introduction to the balanced diet, the various food groups and their functions; nutrients and their importance in the body; therapeutic information on obesity, diabetes mellitus and hypertension; and the importance of physical exercise, dietary guidelines and suggested menus for sedentary adults. The characteristics of the education material, the CD-Rom, are as follows: 1) simple, specific messages were used for the IT professionals; 2) diverse pictures and different animations were created and used in many slides to enhance understanding, interest and attractiveness; 3) practical tips were provided for healthy eating and correct food choices, for example tips on reducing fat and salt, increasing calcium and fiber intake, and quitting smoking and drinking; 4) sample menus and food pictures were presented to help the learner understand correct dietary patterns.
The rapid increase in obesity prevalence, coupled with the expanding fast food industry, has become a great public health concern. The interactive material was found to be nutritionally valuable for the IT professionals, as it increased their awareness of healthy dietary habits. This can further reduce the vulnerability of the younger generation to various diseases. The food habits of IT professionals have also become increasingly unhealthy, which may lead to a high risk of chronic diseases; high intakes of fried and junk food may increase the risk of chronic and other diseases, such as obesity and cardiovascular disease. Changes in food habits and preferences towards healthy food choices may help to minimize disease in this group. Consequently, developing appropriate strategies to promote a healthy diet for IT professionals in Hyderabad, Andhra Pradesh should be a concern of nutritionists.
Future Research Directions
Future research may encompass a detailed study of the knowledge, attitudes and practices of IT professionals to identify the parameters leading to health risk; nutrition education to create awareness of correct food choices and healthy eating patterns is also required.
2,427.6
2016-02-20T00:00:00.000
[ "Computer Science", "Education", "Medicine" ]
Mechanism study on pH-responsive cyclodextrin capped mesoporous silica: effect of different stalk densities and the type of cyclodextrin Cyclodextrin (CD)-capped mesoporous silica nanoparticles (MSN) with pH-responsive properties were synthesized, but little research has been carried out to evaluate the impact of critical factors such as the stalk density and the type of CD on the pH-responsive release behavior. Here, the effect of different stalk densities on the pH-responsive release behavior was investigated. Either too low or too high density of the grafted p-anisidine stalk could result in poor cargo release, and the optimum stalk density for MSN was measured by thermal analysis, and found to be approximately 8.7 stalks nm−2. To achieve effective release control, the CD capes, α-CD and β-CD, were also investigated. Isothermal titration calorimetry (ITC) analysis was employed to determine the formation constants (Kf) of the two CD with p-anisidine at different pH values. The results obtained showed that the complex of β-CD with p-anisidine had excellent pH-responsive behavior as it exhibited the largest changed formation constant (ΔKf) in different pH media. Furthermore, the pH-responsive mechanism between CD and p-anisidine molecules was investigated through ITC and a molecular modeling study. The release of antitumor drug DOX presents a significant prospect toward the development of pH-responsive nanoparticles as a drug delivery vehicle. Introduction Mesoporous silica nanoparticles (MSNs) have been widely studied as a vehicle in controlled release systems due to their favorable properties, such as a large surface area, ordered pore structure and adjustable pore diameter [1]. They have large numbers of silanol groups which allow easy modification with a variety of organic moieties to diversify their design [2,3]. Based on the above advantages, stimuli-responsive end-capped MSNs have attracted much attention for application in controlled release systems by capping silica pores with supermolecules or nanoparticles, thereby endowing the silica with controlled-release ability [4,5]. The caps, as the moving parts of the functionalized silica, can completely block and open the pore outlets automatically under various triggers irrespective of extrinsic or intrinsic factors, such as pH [6,7], light [8,9], magnetic field [9,10] and redox potential [11,12]. Among a variety of stimuli that have been used in capped MSN systems, the pH is attracting growing interest. A good pH-responsive system should allow precise release control with only slight change in the pH media. Previous studies have described the pH-responsive capped MSNs with various caps, including organic macromolecules, nanoparticles and biomolecules [7,13,14]. In recent years, several exciting designs have been investigated using this approach. For example, Rui Liu et al attached Au nanoparticles to the silica surface as caps through acid degradable acetal linkers [7]. Chiyoung Park and co-workers have developed a pH-responsive polypseudorotaxane, which consisted of cyclodextrin (CD) and linear PEI. The system was controlled by the change in pH value due to the reverse complexation and decomplexation of PEI chain and CD molecules [15]. In addition, biomolecules such as lysozyme have also been chosen as caps by the Mengjun Xue group. The pore entrance opening/closing mechanism was related to the conformational change which led to a change in size of the lysozyme molecules induced by a change in pH [16]. 
However, as far as applications are concerned, it is generally expected that the pH-responsive mechanized nanoparticles will have the advantages of a simple design and production, offering straightforward operation, and effective action. In the work reported here, we constructed a simple and effective pH-responsive end-capped MSN controlled release system that involved CD as the pore cap to regulate the release of the cargo trapped in the silica channels. This pHresponsive system consisted of p-anisidine stalks and CD caps based on MSNs. The caps surround the p-anisidine stalks tightly, thereby trapping the outlets efficiently when a suitable stalk density was employed at a neutral pH. While with the addition of acid, the nitrogen of p-anisidine (on the stalk) became protonated and this resulted in the spontaneous dethreading of the CD caps and unblocking of the silica outlets. The working principle of the system is illustrated in figure 1. Although a variety of pH-responsive MSNs with supermolecular caps based on CD have been synthesized [13,17], several critical factors, such as the stalk density and the type of CD which may play a vital role in the release behavior, have not yet been studied in detail. Furthermore, detailed analysis of the thermodynamic parameters and energetics involved in the process of inclusion are lacking. A better understanding of these processes would lead to design suitable functionalized MSNs. So, the study, for the first time, provided insights regarding the thermodynamic parameters and energetics involved in the inclusion process. Hence, much of our present work has been aimed tailoring the functionalized MSNs properly in order to improve the efficiency of controlled release. As the host material, we selected MCM-41 as the carrier due to its tubular pore passages with a 2D hexagonal porous structure [18,19], and the single connection between individual porous channels allowed it to be an independent reservoir for cargos with a 'no-leaking' capability, even in the event of incomplete capping. A water-soluble biological stain, Hoechst 33342 was employed as the cargo model, with the main objective of demonstrating the working principle of the pH-responsive operation of CD-equipped MSNs. Moreover, the antitumor drug DOX was chosen to verify the potential of the vehicle for pH-responsive drug release. Characterization The porous structure of the silica nanoparticles was characterized using transmission electron microscopy (TEM) on a Tecnai G2 F30 instrument (FEI, USA). Zeta potential values of the samples were measured using ZetaSizer Nano (Malvern Instruments, UK). Nitrogen adsorption-desorption isotherms were measured on a V-Sorb 2800P surface area and pore size analyzer. The Brunauer-Emmett-Teller (BET) model was applied to evaluate the specific surface areas. Pore size and pore volume were determined from the adsorption data by the BJH method. UV-vis spectra were measured using a Unic-2000 UV spectrophotometer (Unic Company, Shanghai). Elemental analyses of Si, O and N element were carried out using a PHI Quantera (ULVAC-PHI Company, Japan). Thermo-gravimetric analysis (TGA) was performed using a NETZSCH STA 449C thermal gravimetric instrument (Germany) by heating the sample at 10°C min −1 from room temperature to 900°C under an N 2 atmosphere. Isothermal titration calorimetry (ITC) experiments were carried out on a fully computer-operated isothermal titration calorimeter (Microcal ITC 200) instrument. 
The raw data from ITC were analyzed using Origin 8.0 software with a 'one set of binding sites' model. All experiments were performed at 25°C and all solutions were degassed for 10 min before use. Preparation of MCM-41 MSN A mixture of CTAB (1.0 g), 2.0 M of NaOH (aq) (3.5 mL) and H 2 O (480 mL) were added with constant stirring and heated at 80°C. After 30 min of stirring, TEOS (5 mL) was slowly added to the solution and the mixture was kept at 80°C with stirring for another two hours. Following the addition of TEOS, a white precipitation was observed. To reduce the aggregation of nanoparticles, the obtained suspension was homogenized by an ATS AH100D homogenizer at 700 bar for six cycles (ATS Engineer, Shanghai, China). Then obtained solid was collected by filtration and washed with water and methanol. To extract the templating agents, an acid extraction was accomplished in a mixture of methanol (480 mL), concentrated hydrochloric acid (27 ml) and asmade materials at 75°C for 24 h. The as-synthesized MCM-41 silica particles were centrifuged at 6000 rpm and then washed with water and methanol for several times, finally dried under vacuum. The surface modification of MCM-41 MSN Silica nanoparticles were dried under 100°C for 10 h before the functional modification. To construct a p-anisidine modified silica surface, the dried nanoparticles (100 mg) in dry toluene (10 ml) were reacted with IPTMS (0.05 mmol) in an N 2 atmosphere and refluxed for one day to form MCM-41-I nanoparticles. The procedure was carried out under water-free conditions in toluene to avoid unwanted intermolecular polymerization among IPTMS molecules. After this procedure, the obtained MCM-41-I nanoparticles were washed with dry toluene and methanol. Next, p-anisidine (0.5 mmol) and triethylamine (1.5 mmol) were added to the MCM-41-I particles in toluene solution and reacted at 80°C under N 2 gas for 48 h. Finally, the product, named MCM-41-p-anisidine, was collected by filtration and washed thoroughly with toluene and methanol, and then dried under vacuum. Cargo loading and β-CD capping The synthesized MCM-41-p-anisidine sample (50 mg) was added to Hoechst33342 solution (5 mL, 1 mM water solution). An excess of β-CD was added to the solution which contained the Hoechst 33342 loaded nanoparticles, then sonicated for 30 min and stirred at RT for another three days. Finally, to remove excess Hoechst 33342 and β-CD, the dyeloaded solids were centrifuged and washed with water until no dye was detected in the washing supernatant. The capped particles were then dried under vacuum for later stimulated release studies. Controlled release experiment The Hoechst 33342 loaded, CD-equipped silica nanoparticles (10 mg) were suspended in 10 mL pH 7.4 phosphate buffer solution. To measure the concentration of dye release, 3 mL buffer solution with the dispersed particles was removed and centrifuged at 6000 rpm. The supernatant without nanoparticles was subjected to UV-vis spectroscopy at an absorption peak 335 nm to determine the concentration of the dye released. Adjusting the pH of the solution to 5.0 was accomplished by the addition of HCl (0.1 M), which fully activated the release of dye molecules from the nanoparticles. Molecular modeling Structure of p-anisidine stalk was created using Chembiodraw software. The α-CD and β-CD structure were extracted from the crystal structure of α-CD and β-CD complexes obtained from Cambridge Crystallographic Data Centre (Nos. CCDC-136109 and CCDC-798968). 
Complexes between p-anisidine stalk with α-CD and β-CD under the different simulated conditions were predicted by molecular docking, using Discovery Studio 4.0 software. DOX loading and release The p-anisidine modified MCM-41 nanoparticles (20 mg) were suspended in an aqueous solution of DOX (1 mM, 2 mL) at pH 7.0 and stirred overnight. An excess of β-CD was added to the solution, and then adjusted the pH of the solution mixture to be neutral. The resulting mixture was stirred for three days at room temperature. The DOX-loaded, CD-capped MSN were collected by centrifugation and washed with water until no DOX was detected in the washing supernatant. To investigate the pH responsive DOX release, the DOXloaded, CD-capped MSN were prepared in 10 mL aqueous solution at pH 7.4 and pH 5.0. The emission of DOX was measured at different time intervals using UV-vis spectroscopy with the absorption maximum of DOX at 480 nm. Characterization of the p-anisidine-functionalized MSNs Bare MCM-41 templated by CTAB was synthesized through a well-established procedure [20,21], and then solvent extraction was carried out on bare MCM-41 to obtained empty nanopores by removal of templating agents. Construction of the p-anisidine-modified silica involved derivatizing the silanol functional groups on the bare MCM-41 silica with IPTMS to afford iodo-functionalized porous silica. This provided target sites for the following functionalization with p-anisidine through a nucleophilic substitution reaction to yield MCM-41-p-anisidine. The p-anisidine-functionalized silica nanoparticles were characterized by TEM, N 2 adsorption-desorption analysis and zeta potential measurements. The size and morphology of the silica nanoparticles, before and after functionalization, were investigated by TEM. As shown in figure 2, the TEM images of MCM-41 nanoparticles were spherical in shape and about 100 nm in diameter, with highly ordered streak structural features. It should noted that the particles, with or without functionalization, exhibited no obvious differences in shape, and they both had ordered tubular pores with a spherical shape, which indicated that the organic stalk functionalized silica retained the characteristics of MSN even after grafting p-anisidine groups to the outlets of the pores. The structural parameters of the bare silica particles and functionalized silica particles were calculated by the BET method (table 1). Nitrogen physisorption analysis showed similar type IV adsorption isotherms (figure 3(a)) [22]. For the template-removed MCM-41, the BET surface area, pore volume and pore diameter were 1067 m 2 g −1 , 1.223 cm 3 g −1 and 2.9 nm, respectively. After functionalization with p-anisidine, the surface area, pore volume and pore diameter values were reduced slightly, to 959 m 2 g −1 , 1.196 cm 3 g −1 and 2.8 nm, respectively. This indicated that discrete stalks did not markedly influence the pore size due to their low concentration on the silica walls, indicating that the stalks were present predominantly on the outer surface of the particles rather than on the inner surface of the pore channels. Also, the capped MSNs had a BET surface area of 636 m 2 g −1 , pore volume of 0.583 cm 3 g −1 and a pore diameter of 2.5 nm, which indicated that most of the pore entrances of the capped MSNs were indeed blocked by β-CD molecules. 
Zeta potentials of bare MCM-41 nanoparticles and p-anisidine-modified silica at different pH values were measured, as the charge of the outer p-anisidine functional groups is influenced by the change in pH (figure 3(b)). At pH 7.4, the zeta potential of bare MCM-41 was −22 ± 1.3 mV, while the zeta potential of p-anisidine-modified silica increased from −26 ± 0.9 mV at pH 7.4 to +28 ± 1.2 mV at pH 5.0, demonstrating the successful attachment of the p-anisidine groups to the mesoporous silica. At neutral pH, only the silanol groups, whose isoelectric point is 2-3, were deprotonated and existed as negatively charged Si-O− [23]. When the pH value was reduced to 5.0, the functionalized nanoparticles exhibited a positive zeta potential, which might be attributed to the protonation of the nitrogen on the p-anisidine stalk at this pH. These results indicated that the p-anisidine-modified silica exhibited promising pH-responsive properties.
pH-responsive release of cargo molecules at different pH values
To examine the pH sensitivity of the functionalized nanoparticles, the influence of pH on release was investigated. We used β-CD as the cap, and three different pH values (pH 7.4, pH 6.5 and pH 5.0) were selected. The pH of the system was adjusted to 6.5 and 5.0 by the addition of 0.1 M HCl, and the release was measured by UV-vis absorption spectroscopy. From the release curves of Hoechst 33342-loaded, β-CD-capped MSNs, it could be seen that the amount of released Hoechst 33342 increased once the pH fell below a certain value, reflecting pH-dependent behavior (figure 4). The cumulative release was significantly higher at pH 5.0 than at pH 6.5 or pH 7.4, rising from 0.7% at pH 7.4 and 3.6% at pH 6.5 to 84.2% within 4 h at a medium pH of 5.0. These results indicated that there were no significant differences in the release between pH 7.4 and pH 6.5, which meant that the system was not fully activated [24]. At pH 7.4 and pH 6.5, the p-anisidine stalks remained hydrophobic and, therefore, could bind to the CD caps tightly; as a result, the CD caps blocked the nanopores and trapped the cargo molecules inside the pores. When the pH of the external medium fell below the pKa (pKa = 6) of p-anisidine, the interactions between the stalks and β-CD decreased and forced the separation of the CD molecules [24]. After removal of the CD molecules, the cargo was able to be released by diffusion. Therefore, the β-CD-capped MSNs showed a good pH-responsive release capability.
pH-responsive release of cargo molecules with different stalk densities
Once the pH-dependent release characteristics were confirmed, the separate effects of stalk density and the type of CD on the release of the cargo Hoechst 33342 were investigated. The variable stalk densities were obtained by tuning the amount of IPTMS, and the experiment was performed to confirm the effect of the amount of IPTMS on the pH-responsive behavior. To optimize the stalk density, 100 mg of MCM-41 particles were treated with different amounts of IPTMS (0.025 mmol, 0.05 mmol and 0.1 mmol), and subsequently reacted with an excess of p-anisidine to yield p-anisidine-grafted MCM-41 with low, medium and high stalk densities, giving rise to different functionalization levels. Prior to the pH trigger, the premature release percentage of the systems with low and high stalk density was 33.8% and 20.0% within 4 h, in stark contrast to the case with medium stalk density, which showed no premature release.
Upon lowering of the pH, the release percentage increased from 50.3% at a low stalk density to 85.4% at the medium stalk density but fell to 63.8% at the high stalk density. Thus, we were able to conclude that the optimum release was achieved while the silica functionalized with the optimum stalk density corresponded to 0.05 mmol IPTMS. A possible reason for this might be the followings. At a low concentration of IPTMS groups on the silica surface, the functional groups of iodo (I-) reacted almost completely with the panisidine molecules, forming silica particles surrounded by panisidine stalks. However, the stalk density was not enough to completely block the orifices of the particles, although all the stalks were capped with CD molecules, and the existence of the gaps between the stalks was considered not to be effective enough to prevent leaking of the cargo from the channels. By increasing the concentration of IPTMS groups on the silica surface, the functional groups of iodo would increase the interaction with p-anisidine molecules and, hence, increase the stalk density. Also, the p-anisidine stalks would be completely capped with the CD molecules, resulting in blocking of the pore entrances efficiently, thus preventing the cargo from leaking. However, above the critical concentration of the IPTMS groups, the stalk density would be increased, but the capping procedure was not so effective, possibly due to the steric hindrance between the dense stalks and the CD molecules, and this could reduce the accessibility and reactivity of the p-anisidine groups on the stalks. Therefore, there might be an optimum stalk density on the silica surface, at which an effective pH-responsive behavior would be exhibited. Furthermore, N 2 adsorption-desorption analysis, elemental analysis and TGA were used to evaluate the properties of the mesoporous silica with different stalk densities. In contrast to bare silica particles, the surface area, pore volume and pore diameter of different stalk densities were reduced by different degrees, as shown in table 2. For the functionalized silica with a low and medium stalk density, the surface area, pore volume and pore diameter values decreased slightly compared with the bare MCM-41, which showed that the stalks around the pore rim had an appropriate density so the stalks could not significantly influence the pore size. However, for the functionalized silica with a high stalk density, the surface area, pore volume and pore diameter values fell progressively, to 678 m 2 g −1 , 0.763 cm 3 g −1 and 2.6 nm respectively, demonstrating that most of the pores were sealed by stalks. To further confirm the different stalk densities, elemental analysis was carried out to determine the N percentage of panisidine-modified nanoparticles as the p-anisidine groups attached mainly on the external surface of the silica particles. From the results of the elemental analysis, no N element was detected indicating that there was no residual template agent on the silica surface. After modification, the elemental N could be detected in all particles, although the values were very low, possibly because the percentage of N was very low when compared with other elements such as Si and O [25]. The number of p-anisidine stalks per gram of nanoparticles was calculated and given in table 3. The elemental analysis of low, medium and high stalk density modified nanoparticles revealed the presence of 0.257 mmol g −1 , 0.321 mmol g −1 and 0.42 mmol g −1 of stalks anchored to the silica. 
The stalk density could also be obtained by TGA. The thermal analysis curves of bare silica and functionalized silica with different stalk densities are shown in figure 6. The TGA graphs of all the samples showed a sharp weight loss at temperatures below 100 °C, corresponding to the loss of physisorbed water [26, 27]. For the bare silica (figure 6(a)), there was no weight loss below 600 °C, while a weight loss of 6.2% was observed in the temperature range 600-900 °C, which was attributed to the presence of template. However, the TGA curve of the functionalized silica showed three weight-loss steps: the first below 100 °C, the second in the region 155-292 °C and the last in the region 292-600 °C (figures 6(b)-(d)). As mentioned earlier, the weight loss of the first step might be due to the volatilization of water. The second weight loss could be the loss of unreacted IPTMS on the silica surface, which was 1.62%, 2.21% and 3.87% for the silica with a low, medium and high stalk density, respectively. The third degradation range (292-600 °C) observed in the TGA curve reflects the grafted portion, i.e., the total amount of p-anisidine stalk, and the corresponding weight loss was 6.78%, 10.93% and 16.23% for the silica with a low, medium and high stalk density, respectively. On the basis of these calculations, the stalk densities of the p-anisidine groups on the silica particles for the low, medium and high cases were around 5.4, 8.7 and 12.8 stalks nm−2, respectively [28]. The detailed calculation was as follows. Because MCM-41 mesoporous silica has an array of hexagonal tubes [29], the silica was considered to be constructed from a number of hexagonal units. The hexagonal pore model of MCM-41 mesoporous silica is shown in figure 7. In this model, the lattice spacing (d_lattice) was 4 nm and the density of amorphous silica was around 2.5 g cm−3, taken from the literature [28, 30]. The pore diameter (r) of the MSNs was 2.9 nm, obtained from the BET analysis, and the particle size (R) was obtained from the TEM observations. The stalk density was then estimated step by step: the area of the hexagonal unit, S_hex = 6 S_AOC (2); the area of one nanopore opening, S_pore = π(r/2)^2 (3); the fraction of silica wall in a single nanoparticle (4); the mass of one particle (5); the surface area of one nanoparticle, S_particle = 4π(R/2)^2 (6); the number of pores on one nanoparticle, N_pore = S_particle/S_hex (7); the total area of the pore openings on the surface of a single nanoparticle, S_total pore = N_pore × S_pore (8); the silica surface area available for modification, S_surface silica = S_particle − S_total pore (9); the number of stalks on each nanoparticle (10); and the average stalk density of the p-anisidine groups on the silica particle, stalk density = N_stalk/S_surface silica (11). In addition, the CTAB template exhibited a weight loss in TGA while no template was detected by elemental analysis, which might be because the residual template was inside the pore channels rather than on the silica surface.
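As a worked illustration of the hexagonal-pore-model bookkeeping summarized above (equations (2)-(11)), the following minimal Python sketch combines the quoted lattice spacing, silica density, pore diameter and grafted amount. The particle diameter (R = 100 nm, from the TEM description), the identification of d_lattice with the side of the hexagonal unit, and the exact route from mmol g−1 to stalks per particle are assumptions made for illustration, so the result is only expected to be of the same order as the quoted value for the medium stalk density, not to reproduce it exactly.

```python
import math

# Sketch of the stalk-density estimate from the hexagonal pore model.
# d_lattice, rho, r and the grafted amount are from the text; R and the
# geometric interpretation of d_lattice are assumptions for illustration.

N_A = 6.022e23          # Avogadro's number, mol^-1
a = 4.0                 # lattice spacing, nm (assumed = hexagon side length)
r = 2.9                 # pore diameter, nm (from BET)
R = 100.0               # particle diameter, nm (assumed from TEM)
rho = 2.5               # amorphous silica density, g cm^-3
grafted = 0.321e-3      # grafted p-anisidine, mol per g silica (medium density)

s_aoc = math.sqrt(3) / 4 * a**2           # area of one equilateral triangle AOC, nm^2
s_hex = 6 * s_aoc                         # (2) area of the hexagonal unit
s_pore = math.pi * (r / 2) ** 2           # (3) area of one nanopore opening
wall_fraction = (s_hex - s_pore) / s_hex  # (4) silica-wall fraction per unit
s_particle = 4 * math.pi * (R / 2) ** 2   # (6) outer surface area of one particle
n_pore = s_particle / s_hex               # (7) pores per particle
s_total_pore = n_pore * s_pore            # (8) total pore-opening area
s_surface = s_particle - s_total_pore     # (9) silica area available for grafting

# (5) mass of one particle: volume * density * wall fraction (nm^3 -> cm^3)
v_particle = 4 / 3 * math.pi * (R / 2) ** 3 * 1e-21
m_particle = v_particle * rho * wall_fraction

n_stalk = grafted * N_A * m_particle      # (10) stalks per particle
print(f"stalk density ~ {n_stalk / s_surface:.1f} stalks nm^-2")  # (11)
```

With these inputs the sketch gives roughly 8 stalks nm−2, the same order as the 8.7 stalks nm−2 reported for the medium stalk density.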
The CD molecules were capable of blocking the pore outlets of the silica, thus keeping the cargo molecules inside the silica channels until the CD caps were released from the stalks and left the pore entrances under an appropriate stimulus. In this section, we selected α-CD and β-CD as representative capping molecules to investigate how they affected the release behavior of the cargo molecules. For comparison, cargo release experiments based on p-anisidine-modified silica capped with the different types of CD were performed. As shown in figure 8, a clear difference in the release percentage was observed within 4 h. At pH 5.0, the release percentages of the β-CD- and α-CD-capped silica were 86.5% and 60.8%, respectively. A plausible explanation for these differences was the different formation constants (K_f) between the CDs and the p-anisidine stalks in media of different pH. In order to quantify the formation constants of the different CDs with the p-anisidine unit on the stalk, we assumed that the formation constants for complexes of CD with the p-anisidine stalk on the surface of the MSN were comparable to the K_f values for complexes of CD with p-anisidine in aqueous solution, and that the complexation processes were equivalent and independent. ITC experiments, which can provide the complex formation constants and other related thermodynamic parameters, were performed at 25°C in phosphate buffer at pH 7.4 and at pH 5.0. The resulting changes in K_f (ΔK_f) between the two pH values were 320 M⁻¹ for α-CD and 752 M⁻¹ for β-CD with p-anisidine. These results indicated that the acidity of the medium had a marked impact on the complexation process [35]. One plausible explanation was that the phenyl ring of p-anisidine exhibits a stronger hydrophobic interaction with β-CD than with α-CD under a neutral environment, since the β-CD cavity is large enough to allow the aromatic ring to insert deeply [36]. In acidic medium, however, ionized p-anisidine exhibited much weaker binding to β-CD than to α-CD, owing to its enhanced hydrophilic character. In addition, the larger the change in K_f between the different pH values, the greater the improvement in the pH-responsive release behavior. Therefore it could be inferred that the ability of the CDs to produce a good pH-responsive release was in the order β-CD > α-CD. Figure 9 shows a schematic illustration of cargo release from the α-CD-capped and β-CD-capped MSNs. Furthermore, the driving energies for the formation of the inclusion complexes were also investigated. The formation constants obtained at different pH values are related to the strength of the forces driving the formation of the host-guest complex. The thermodynamic parameters ΔH°, ΔG° and ΔS° for the complexes at pH 7.4 and pH 5.0, calculated from the ITC experiments, are shown in table 4. It should be noted that the thermodynamic parameters were mainly influenced by the type of CD and the pH of the medium. These parameters could be used to gauge the interactions between p-anisidine and the CDs. It was confirmed that all the associations of p-anisidine with the CDs were exothermic and spontaneous, with negative enthalpies and negative Gibbs energies [37,38].
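For readers who want to reproduce the table 4 style of analysis, the sketch below shows the standard relations used to turn ITC outputs into the quoted thermodynamic parameters, ΔG° = −RT ln K_f and ΔS° = (ΔH° − ΔG°)/T. The ΔH° value is the one reported below for p-anisidine/α-CD at pH 7.4; the K_f value is a placeholder assumption, since the measured formation constants themselves are not reproduced in this excerpt.

```python
import math

R = 8.314    # gas constant, J mol^-1 K^-1
T = 298.15   # K, the 25 degC ITC temperature

def delta_G(kf):
    """Standard Gibbs energy of complexation (J/mol) from the formation constant."""
    return -R * T * math.log(kf)

def delta_S(delta_H_joule, kf):
    """Standard entropy change (J mol^-1 K^-1) from the ITC enthalpy and Kf."""
    return (delta_H_joule - delta_G(kf)) / T

dH = -16.32e3            # J/mol, reported for p-anisidine/alpha-CD at pH 7.4
kf_placeholder = 400.0   # M^-1, HYPOTHETICAL value for illustration only

print(f"dG = {delta_G(kf_placeholder) / 1e3:.2f} kJ/mol")
print(f"dS = {delta_S(dH, kf_placeholder):.1f} J/(mol K)")
```

With an enthalpy this negative and a formation constant of this order, the computed ΔS° comes out small, which is the signature of the enthalpy-driven binding described next for the α-CD system.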
Generally, complexation by CD is believed to be driven mostly by hydrophobic forces, van der Waals forces, hydrogen bonding, electrostatic interactions, release of conformational strain, exclusion of high-energy water from the CD cavity and charge-transfer interactions; these forces are usually combined in the inclusion process [38]. In general, the value of ΔG° reflects both enthalpy and entropy: a large, negative value of ΔH° for complexation with CD is mainly attributed to van der Waals forces and hydrogen bonding, while a large, positive value of ΔS° is mainly attributed to hydrophobic forces [39]. For the p-anisidine/α-CD system, the inclusion process was associated with a relatively large negative value of ΔH° and a positive, but very small, ΔS° value, so the inclusion process was enthalpy driven. At the same time, the thermodynamic results suggested that the inclusion complex formed at pH 7.4, with a ΔH° of −16.32 kJ mol⁻¹, was more stable than the complex formed at pH 5.0 (ΔH° = −9.52 kJ mol⁻¹). Since large negative enthalpy changes are mainly attributed to van der Waals forces and hydrogen bonding, the main driving force for the complex formed by p-anisidine with α-CD at both pH 7.4 and pH 5.0 was van der Waals forces and hydrogen bonding. For the p-anisidine/β-CD system, in contrast, a large positive ΔS° was observed, which might be related to a rather different complexation process [40]. At pH 7.4, the positive entropic contribution to ΔG° was notably larger than the enthalpic one. A slightly negative ΔH° and a large positive ΔS° indicated that the process was entropy driven, and the main driving force was hydrophobic. The positive ΔS° value was due to the breaking of the highly ordered aqueous microenvironment surrounding the hydrophobic part of the p-anisidine upon binding to β-CD, and the p-anisidine fitted well into the hydrophobic CD cavity [41]. In the case of pH 5.0, however, both ΔS° and the magnitude of ΔG° decreased as K_f decreased, which can be attributed to the weak interaction of the protonated p-anisidine with the hydrophobic cavity of the CD. It was clear, however, that the change in K_f for β-CD was larger than that for α-CD with p-anisidine, which indicated that the interaction in this system was probably influenced less by van der Waals forces or hydrogen bonding and more by hydrophobic ones, and that the p-anisidine/β-CD system exhibited a better pH-responsive property than the p-anisidine/α-CD one, in good agreement with the release experiment. Moreover, a larger formation constant was found at pH 7.4 than at pH 5.0; this observation is consistent with the rule seen in the literature [42] that a charged guest exhibits lower formation constants with CD than its corresponding unionized state. The identification of this phenomenon at the atomic level is discussed further in the molecular modeling studies. Molecular modeling Inclusion complexes between p-anisidine and the two types of CD were prepared and discussed in the experimental studies. Since molecular modeling is an important tool for clarifying the mechanism of binding and for confirming the binding forces between p-anisidine and the two types of CD, the structures of the neutral and ionized p-anisidine stalk with α-CD and β-CD were further investigated by molecular docking. The representative conformation for this binding mode is presented in figure 10. Once again, the results agreed well with the experimental data.
For the p-anisidine stalk/α-CD system, the docking showed that the main driving force for complex formation was a hydrogen-bonding interaction between the NH group of the p-anisidine stalk and an oxygen atom of α-CD, whether or not the stalk was ionized. The interaction energies for the complexes formed by the unionized and ionized p-anisidine stalk with α-CD were −3.33 and −4.05 kJ mol⁻¹, respectively, which suggested that the complex with the unionized stalk was more stable than that with the ionized one. For the p-anisidine stalk/β-CD system at neutral pH, a marked fit was observed: the p-anisidine part of the stalk was totally included in the hydrophobic cavity of β-CD with a strong hydrophobic interaction, in agreement with the experimental observations obtained by ITC. When the binding mode of the ionized form of the stalk was analyzed, only a weak hydrogen bond was found, with a lower interaction energy than for the neutral form (−4.15 and −2.25 kJ mol⁻¹ for the neutral and ionized forms, respectively). As predicted in the experimental section, hydrophobic interaction was the main driving force for the complexation of the p-anisidine stalk with β-CD. From the results above, we can hypothesize that the affinity of the p-anisidine stalk diminished as the ionization of the guest increased, as a consequence of the higher energy required to desolvate the ionized p-anisidine stalk [42]. To sum up, ionization of the stalk lowered its affinity for both α-CD and β-CD, which underlies the excellent pH-responsive behavior of the system. Drug release In order to prove that the CD-capped nanoparticles can be applied in a potential drug delivery system, the antitumor drug DOX was used to investigate the pH-responsive properties of the CD-capped nanovalve system. The DOX loading efficiency, determined by UV-vis spectroscopy, was 4.5%. The cumulative release curves of DOX from the β-CD-capped MSNs are shown in figure 11. Compared with the release behavior of the Hoechst 33342 loaded particles, the release of DOX from the MSNs was slower in the pH 5.0 release medium. This observation raised the possibility that the cationic character of DOX (pKa = 8.2) made it interact electrostatically with the negatively charged Si-O⁻ groups in the silica nanopores, and that this relatively strong interaction led to the slow release of DOX upon opening of the nanovalve [43]. Obvious release of DOX was observed at pH 5.0 in comparison with pH 7.4 within 6 h: the accumulated release from the capped silica at pH 5.0 was somewhat less than 60%, while at pH 7.4 no release was observed within 6 h. These results indicated that the CD-capped MSNs exhibit pH-responsive release behavior and have promising applications in drug delivery systems. Conclusions In this study, we successfully constructed a pH-responsive, CD-capped MSN controlled-release system by a relatively simple method. The organic components of the stalk were confirmed by a variety of experimental characterizations, and the system exhibited good pH-responsive controlled release. Different release percentages were obtained when the stalk density or the type of CD was changed. Detailed examinations of the pH-responsive behavior of the CD-capped MSNs with differing stalk densities were carried out. The results indicated that either too low or too high a stalk density would adversely affect the amount of released cargo, and that only an optimum stalk density could produce an ideal pH-responsive behavior; therefore, an optimum stalk density was selected.
The application of TGA gave an optimum stalk density of ∼8.7 stalks nm⁻². Moreover, different release percentages were observed, resulting from the different degrees of change in the formation constants of the complexes formed by CD with p-anisidine molecules at different pH values, and we found that the β-CD-capped MSNs exhibited good responsiveness to changes in external pH. In addition, the thermodynamic parameters calculated from the ITC experiments showed that the complexation process of p-anisidine/α-CD was predominantly enthalpy driven, with van der Waals forces and hydrogen bonding as the main driving forces in both neutral and acidic media, while for p-anisidine/β-CD the complexation process was entropy driven, with strong hydrophobic interaction at neutral pH and weak hydrogen bonding in acidic media. The influence of pH on the affinity of the p-anisidine stalk for CD was carefully analyzed and discussed at the molecular level by applying a molecular modeling study, which was in good agreement with the ITC results. In short, both the stalk density and the type of CD played a central role in the pH-responsive controlled-release behavior. All of the above results are encouraging and provide important information for the design of functional mesoporous silica for a variety of applications, especially drug delivery.
Collective Motion of Spherical Bacteria A large variety of motile bacterial species exhibit collective motions while inhabiting liquids or colonizing surfaces. These collective motions are often characterized by coherent dynamic clusters, where hundreds of cells move in correlated whirls and jets. Previously, all species that were known to form such motion had a rod-shaped structure, which enhances the order through steric and hydrodynamic interactions. Here we show that the spherical motile bacteria Serratia marcescens exhibit robust collective dynamics and correlated coherent motion while grown in suspensions. As cells migrate to the upper surface of a drop, they form a monolayer, and move collectively in whirls and jets. At all concentrations, the distribution of the bacterial speed was approximately Rayleigh, with an average that depends on concentration in a non-monotonic way. Other dynamical parameters such as vorticity and correlation functions are also analyzed and compared to rod-shaped bacteria from the same strain. Our results demonstrate that self-propelled spherical objects do form complex ordered collective motion. This opens a door for a new perspective on the role of cell aspect ratio and alignment of cells with regards to collective motion in nature. Collective bacterial motion is not limited to surfaces and may occur in liquids, but the physical and biological mechanisms involved in such processes may differ. Dombrowski et al. [2] found that highly dense suspensions (20-100 times more crowded than spontaneous overnight cultures) of B. subtilis, a rod-shaped bacterial species, swimming in sessile drops, show collective motion and flow patterns not found in control experiments with inert microspheres. Cisneros et al. [6] studied bacterial motion (B. subtilis) as a function of cell density and showed a transition to high speeds and directional order at high concentrations. Sokolov et al. [4] presented experimental studies of collective bacterial swimming (B. subtilis) in thin fluid films (1 mm), where the dynamics was essentially two-dimensional and the concentration could be adjusted continuously. At concentrations near the maximum allowed by steric repulsion, swimming bacteria formed a dynamic state exhibiting extended spatiotemporal coherence. The reported collective bacterial motion in liquids has been limited to B. subtilis, a rod-shaped bacterial species. Since the experiments were performed at extremely high concentrations, geometrical restrictions arising from the aspect ratio of the cells played a significant part in the explanations offered for the collective motion. Supporting this approach, two recent theoretical studies argued that spherical cells will not exhibit collective motion if only steric interactions are considered. Wensink et al. [42] showed that for spherical cells, motion can be either free swimming at low densities or a jammed state at medium to high densities. Peruani et al. [43] showed that the transition to dynamic clustering (group behavior) depends on both the density and the cell aspect ratio, with no clustering for spherical cells. In particular, there is no transition to directional order with spherical bacteria. A key question, to be discussed later, is what exactly is a spherical bacterium: just the cell body, or the entire bacterium with flagella combined.
In this work, we show that sphere-like bacteria do exhibit robust collective dynamics and correlated coherent motion while grown in suspensions. The 1 µm diameter spherical cells of an overnight suspension of S. marcescens, swimming in a sessile drop, migrate to the upper surface of the drop, form a monolayer, and swirl in whirls and jets. The motion was found to depend strongly on cell density, with a non-monotonic relationship between the mean cell swimming speed and cell concentration. Our results differ in several essential aspects from those obtained by others, such as Dombrowski et al. [2]: (i) the shape of the cells, which is spherical; (ii) motion is restricted to a two-dimensional monolayer; (iii) the cell density builds up naturally (no sample manipulation such as 20-fold concentration); and (iv) gravity and buoyancy do not play a role. Our results demonstrate that, in contrast to previous hypotheses, the rod-like bacterial shape (aspect ratio > 5) is not required for collective bacterial motion in liquids. Strain and growth media The wild type (WT) Serratia marcescens 274 is a Gram-negative motile bacterial species [13]. During the exponential growth phase (in broth), and at low densities (< 1×10⁷ cells/ml), the cells have a rod-like shape (1×3 µm) and they swim in the volume at speeds of the order of 15 µm/s (run-and-tumble). In overnight cultures, at the stationary phase (18 h), the cells reach densities of 2×10⁹ cells/ml and shrink to the shape of spheres (possibly due to starvation [45]). The bacteria were maintained at −80°C in Luria Broth (LB) (Sigma, St. Louis, MO) with 25% [wt/vol] glycerol. LB broth was inoculated with the frozen stock and grown for 18 h at 30°C while shaking (3 ml LB in a 15 ml plastic tube; at 200 RPM). It was subsequently grown to an OD650 of 2.0, corresponding to approximately 2×10⁹ bacteria/ml, calibrated by counting colonies on agar after appropriate dilution. The counting agar plates were filled with 2% Difco agar from Becton Dickinson, supplemented with LB; culture dilution yielded 200±20 (after 5 independent repetitions) tiny colonies on each plate. OD measurements were done using a basic spectrophotometer (Novaspec III) and standard 1 ml cuvettes; the suspensions were first diluted 1:10 in LB to obtain a more reliable result. Experiments with rod-shaped cells, of aspect ratio ~3, were performed by diluting the 18 h old sphere-cell culture (100 µl into 3 ml of fresh LB) and growing it for 2 more hours with shaking. Other S. marcescens strains used in this study are RH1041, which is a serrawettin mutant of S. marcescens 274 (no surfactant production (SMu 13a)); RH1037, which is an immotile mutant of S. marcescens 274 (flagella minus (Hag::Cm)); and S. marcescens strain A [15], which is a WT that produces much more surfactant than the WT 274 strain. To obtain supernatants, cultures were centrifuged at 5000×g for 3 min (the high speed limits the amount of remaining cells), and the liquid was then harvested. For mixing of centrifuged cells with supernatant, the cells were centrifuged at 500×g for 1 min (the low speed prevents damage to the centrifuged cells), their supernatant was removed, and the desired supernatant was added by careful pipette mixing.
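The OD-to-density calibration mentioned above is just dilution-plating arithmetic: cells/ml = colonies × dilution factor / plated volume. The sketch below illustrates it with an assumed dilution factor and plated volume, since the text gives only the colony count and the final ~2×10⁹ cells/ml figure, so the specific inputs are placeholders.

```python
# Dilution-plating estimate of culture density (illustrative inputs).
colonies = 200            # colonies per plate (200 +/- 20 reported above)
dilution_factor = 1e6     # ASSUMED overall serial dilution before plating
plated_volume_ml = 0.1    # ASSUMED volume spread on each plate, in ml

cells_per_ml = colonies * dilution_factor / plated_volume_ml
print(f"estimated density: {cells_per_ml:.1e} cells/ml")  # ~2e9 with these inputs
```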
Microscopic measurements An optical microscope (Zeiss Axio Imager Z2) equipped with an LD 60X phase contrast objective lens was used to follow the microscopic motion. The microscope was placed in a temperature- and humidity-controlled environment. A digital camera (GX 1050, Allied Vision Technologies) captured the microscopic motion at a rate of 100 frames per second and a spatial resolution of 1024×1024 pixels. Movies were taken for 20 min periods and streamed directly to the hard drive, resulting in 120,000 images per sequence. The fraction of area occupied by bacteria, r, was calculated by measuring the number of black pixels covering the frame, divided by the total number of pixels (1024×1024). See Fig. S1 for additional details. Flow analysis Recorded movies were converted to a sequence of single-frame images. Following standard pre-processing for noise reduction, the optical flow between each two consecutive frames was obtained using the Horn-Schunck method [46]. Vector fields were reduced to a 64×64 grid by simple averaging, generating an approximate velocity field v = (v_x(x,y), v_y(x,y)). In order to characterize the observed flow patterns, several other dynamical variables were calculated: (i) the vorticity field ω(x,y), defined as the z-component of the curl of v; (ii) spatial correlation functions f_ψ(r) = Z⁻¹ [⟨ψ(x)·ψ(y)⟩_{|x−y|=r} − ⟨ψ(x)⟩_{|x−y|=r}·⟨ψ(y)⟩_{|x−y|=r}], where Z is a normalization constant such that f_ψ(0) = 1, ψ(x) is a given field and ⟨·⟩_{|x−y|=r} denotes averaging over all pairs of grid points x and y separated by a distance r and over all frames with concentration in a given range; and (iii) temporal correlation functions g_ψ(t), defined analogously, where Z is a normalization constant such that g_ψ(0) = 1, ψ(t) is a given field and ⟨·⟩_{t₁−t₀=t} denotes averaging over all grid points and times t₁ and t₀ such that t₁ − t₀ = t. In particular, we are interested in correlations in direction, ψ = v/|v|, denoted f_dir, and correlations in vorticity, ψ = ω, denoted f_vort. Similar notations are used for the temporal correlation functions, g_dir and g_vort, respectively. Averages at fixed concentrations are taken over all spatial positions and all frames within the 5 seconds corresponding to the specific concentration.
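As a minimal sketch of these flow diagnostics, the code below computes the vorticity of a coarse-grained velocity field and a simplified spatial correlation function on a 64×64 grid. It is not the authors' analysis pipeline: the velocity field here is synthetic random data standing in for the optical-flow output, and the correlation is averaged only over pairs separated along one grid axis rather than over all pairs at distance r.

```python
import numpy as np

def vorticity(vx, vy, dx=1.0):
    """z-component of the curl of the coarse-grained velocity field (vx, vy)."""
    dvy_dx = np.gradient(vy, dx, axis=1)
    dvx_dy = np.gradient(vx, dx, axis=0)
    return dvy_dx - dvx_dy

def spatial_correlation(field, max_r=20):
    """Normalized two-point correlation f(r) with f(0) = 1.

    Simplification: pairs are taken along the x axis only, whereas the text
    averages over all pairs of grid points separated by a distance r.
    """
    f = np.zeros(max_r)
    for r in range(max_r):
        a = field[:, : field.shape[1] - r]
        b = field[:, r:]
        f[r] = (a * b).mean() - a.mean() * b.mean()
    return f / f[0]

# Synthetic stand-in for one 64x64 optical-flow field.
rng = np.random.default_rng(0)
vx, vy = rng.normal(size=(2, 64, 64))
w = vorticity(vx, vy)
print(spatial_correlation(w)[:5])
```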
Observation of collective dynamics A 5-µl drop of an overnight (18 h) WT S. marcescens 274 culture was placed on a glass slide and observed by upright light microscopy (phase contrast). The bacterial density (per volume) for such a culture was 2×10⁹ cells/ml. All cells were spherical, with an aspect ratio smaller than 1.1 (some were slightly oval because of pre-splitting due to reproduction) (Fig. 1A), and were constantly swimming from the bulk to the upper surface of the drop. The cells remained at the upper surface, thus continuously increasing the surface cell density (Fig. 1B). The increase of the surface density from minimal to maximal lasted approximately 20 min. On the surface, cells formed a monolayer and were swirling in dynamic whirls and jets (Movie S1). The motion was found to be independent of the material from which the surface was made and of the size of the drop; it was also independent of the droplet's surface shape, whether concave, nearly flat or convex. As a control, some drops were placed on the glass, then tilted upside-down and viewed with an inverted microscope to probe the effects of gravity and buoyancy. No change was observed. For quantitative measurements the drop was constrained by a super-hydrophobic ring printed on the glass (polytetrafluoroethylene (PTFE) printed glass slides (63429-04), Electron Microscopy Sciences, Hatfield, PA) in order to prevent wetting and spreading, which may affect the dynamics of the bacteria or cause drifting. To prevent both evaporation and the blowing of air onto the sample, the drop was enclosed in a small chamber, the top and bottom of which consisted of thin glass coverslips, while the surrounding wall was a metallic ring attached to the glass with vacuum grease (Fig. S2). Scaling of the bacterial dynamics To quantify the dynamics of the sphere-like collectively swimming bacteria, we calculated a velocity field for each pair of consecutive frames. A typical raw movie lasts ~20 min; the data were taken from 5 similar experiments, ending up with about 600,000 analyzed frames. Each movie was divided into 5-second segments, in which the bacterial density was approximately constant. Figure 2A depicts the velocity field at a typical bacterial concentration r = 0.67. See also Movie S2. Figure 2B shows the average bacterial speed as a function of the bacterial concentration in each 5-second segment; at low densities, the mean speed is an increasing function, reaching a maximum at r = 0.64. At higher densities, the mean speed decreases sharply towards a jammed, highly concentrated phase. Figure 2C shows the distribution of speeds at 5 concentrations (r = 0.28, 0.46, 0.67, 0.74 and 0.87), also denoted in Fig. 2B as red dots. Interestingly, all speed distributions are well approximated by a Rayleigh distribution (see inset in Fig. 2C) with probability density xσ⁻² exp(−x²/2σ²), where σ > 0 is a parameter indicating the point at which the maximum of the density is obtained (the mode). The mean of the Rayleigh distribution is √(π/2) σ and the standard deviation is √((4−π)/2) σ. Thus all Rayleigh distributions collapse onto a master curve with σ = 1 upon rescaling by σ. The Rayleigh distribution is a consequence of the fact that the projections of the velocities on the principal axes are approximately independent normal distributions with zero mean and variance σ² (Fig. 2D). We demonstrate this scaling by showing that the standard deviation of the speed distribution grows linearly with the average speed, with a slope of approximately √(4/π − 1) (≈ 0.52).
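The Rayleigh scaling described above is easy to check numerically: if the two velocity components are independent zero-mean Gaussians with standard deviation σ, the speed is Rayleigh distributed and the ratio of its standard deviation to its mean is √(4/π − 1) ≈ 0.52, independently of σ. The sketch below verifies this with synthetic data; the three values of σ stand in for segments at different concentrations.

```python
import numpy as np

# Numerical check of the Rayleigh scaling: speeds built from independent
# zero-mean normal velocity components give a constant std/mean ratio (~0.52),
# so all speed distributions collapse onto one curve after rescaling by sigma.
rng = np.random.default_rng(1)
for sigma in (5.0, 10.0, 20.0):            # mock segments with different sigma
    vx = rng.normal(0.0, sigma, 100_000)
    vy = rng.normal(0.0, sigma, 100_000)
    speed = np.hypot(vx, vy)
    print(f"sigma={sigma:5.1f}  mean={speed.mean():6.2f}  "
          f"std/mean={speed.std() / speed.mean():.3f}  "
          f"theory={np.sqrt(4 / np.pi - 1):.3f}")
```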
The vorticity field ω(x,y) shows a similar scaling with r. Figure 3A shows the vorticity field at the same typical concentration r = 0.67. See also Movie S3. Figure 3B shows the average absolute value of the vorticity as a function of the bacterial concentration (note the similarity to Fig. 2B). Figure 3C shows the distribution of the vorticity at 5 concentrations (r = 0.28, 0.46, 0.67, 0.74 and 0.87), denoted in Fig. 3B as red dots; it is approximately normal. At a fixed concentration, speed and vorticity are uncorrelated (0.02 at r = 0.67, see Fig. 4A). Figure 4B shows that the distribution of speeds in three different vorticity ranges (terciles) is the same, implying that cells that move in jets and cells that move in whirls have the same speed distribution. However, the standard deviation of the normal vorticity distribution (and therefore also the average absolute vorticity) depends on concentration (and therefore on σ and the mean speed). Figure 4C shows the average speed as a function of the average absolute value of the vorticity. The two are highly correlated (0.65 at r = 0.67), with a slope of 8.8. Both the spatial and temporal correlation functions were found to decay exponentially, thus defining a characteristic decay length ℓ_ψ and time τ_ψ. Figures 5A-B depict the spatial and temporal correlation functions, respectively, at r = 0.67. Figures 5C-D show the correlation lengths and times as a function of bacterial density. While the average speed changes dramatically, both ℓ_ψ and τ_ψ remain fairly constant; ℓ_dir and τ_dir diverge at very large bacterial concentrations, where the speed is small and the dynamics is jammed. The vorticity correlation function f_vort(r) becomes negative around the characteristic size of the whirls, which is approximately 20 µm. The vorticity correlation time is about 0.15 s, indicating very short-lived vortices. The scaling described above suggests that the bacterial dynamics is the same at all concentrations up to a single scaling factor. Comparing spherical and rod-like bacteria So far we have shown that sphere-like bacteria exhibit strong collective dynamics. But how different is this motion compared to that of rod-shaped cells of the same species? We have thus repeated the entire experiment with similar, but longer, cells (aspect ratio ~3; see Materials and Methods). For the long cells the mean speed and vorticity were significantly larger (+35%) (Fig. 6A), while the correlation lengths and times were the same. A striking result was that the long cells did not show the Rayleigh speed distribution observed for the spherical cells (Fig. 6B). Instead, the long cells showed a narrower distribution with a faster decaying tail (analyzing the dynamics of B. subtilis, a rod-shaped bacterial species with an aspect ratio of approximately 4.8, Wensink et al. [42] found that the projections of the bacterial velocity on the principal axes are roughly Gaussian, pertaining to a Rayleigh speed distribution). These differences support the idea that the long cells do benefit from large hydrodynamic and/or steric interactions that lead to a much faster motion and to a more concentrated speed distribution. They also show that the bacterial shape does not dictate the correlation time and length (ℓ and τ), as suggested for B. subtilis by Sokolov and Aranson [8]. Role of chemical signaling and surfactants Collective swirling of the cells may be triggered or enabled by the secretion of surfactants or other compounds unique to S. marcescens, thus modifying the physical properties of the drop. For example, it was shown [15] that the mobility of the upper surface of surfactant-producing bacterial swarms is significantly different from that of non-surfactant producers. In order to test the origin of the motion in this experiment, we have used different strains of S. marcescens:
(i) a motile strain that does not produce surfactants (RH1041); (ii) a motile strain (strain A) that produces larger amounts of surfactant (compared to WT 274); and (iii) an immotile strain that produces surfactant (RH1037). The results showed that the immotile bacteria did not migrate to the surface and did not show collective motion. Furthermore, they did not show individual swimming, although Brownian motion was evident. The two motile strains, one that does not produce surfactants and the other that produces large amounts of surfactants, both showed the same collective motion as the WT 274. We thus suggest that the collective motion is independent of surfactant production and that, instead, it stems from the intrinsic motility properties of each cell. Next, we moved motile cells into the supernatant of immotile cells, and immotile cells into the supernatant of motile cells. The motile cells swirled in the immotile supernatant, and the immotile cells did not move in the motile supernatant (apart from Brownian motion). When we moved the motile cells into fresh LB broth they did not show the swirling pattern; some cells were swimming individually. We thus suggest that the collective swirling behavior is a result of both self-propulsion and the presence of some molecule that is not a motility-associated material. Discussion Alignment of individuals within a group was found to play a significant role in the dynamics of many species and artificial agents [47]. Naturally, the geometry of objects plays an important role in the effective aligning interaction. For example, steric interaction between inanimate rod-shaped particles (e.g., rice seeds) can lead to non-equilibrium and collective behavior [48][49][50]. However, no ordered phase was found for inanimate vibrating round particles, in either experiments or simulations [50]. The question of whether motile spherical bacteria can form collective motion is undecided. Recently, two theoretical models did not find any ordered phase in systems of self-propelled spherical particles [42][43] and concluded that the transition to collective behavior strongly depends on the aspect ratio of the particles. On the other hand, it may be argued that even spherical motile cells cannot be considered spherical because of the long flagella at their poles. Indeed, several models simplify the complex shape of bacteria as a sphere attached to a thin rod, or as a force dipole representing the flagellar bundle [5,51]. However, several experimental works have shown that the flagella do not behave like a rod attached to a sphere, and that flagella are easily deformed; hence, self-propelled spherical cells are effectively spherical and not rods. By using fluorescently labeled flagella, Turner et al. [52] visualized the flagellar filaments of planktonic bacteria and showed that the bundle does not behave as a rigid rod hooked to the cell body. Both Turner et al. [53] and Copeland et al. [20] have found that the bundles of flagella on swarmer cells periodically splay apart, leading to the reorientation of the cell.
In addition, Copeland et al. [20] observed that flagella make transient contact with flagella on adjacent cells. More importantly, it was found that swarmer cells spend a considerable amount of their time in the community in a passive state, responding to the motion of adjacent cells. We thus suggest that describing a bacterial cell as a sphere attached to a rod may not necessarily be relevant to our experiment, as the flagella cannot be considered an extension of the spherical body that makes it cylindrical-like. Nonetheless, it may be responsible for creating small asymmetries which may assist in the creation of collective motion. We stress that such asymmetries cannot explain the quantitative flow analysis of the collective motion observed in the experiments, since the speed distribution of elongated bacteria was found to be fundamentally different from that of spherical ones. Our experimental results suggest that spherical bacteria may form a collective state after all. Collective motion strongly depends on the concentration (Fig. 2B and Fig. 3B). The speed distribution of the cells followed a Rayleigh distribution (stemming from the fact that the projections of the velocities on the principal axes are approximately independent normal distributions) at all bacterial concentrations (Fig. 2C). This observation is particularly important since models of granular material show that at high densities, the speed distribution of granular gases in which particles interact inelastically decays exponentially. Hence, a Gaussian speed distribution for spherical bacteria can serve as a benchmark test for different swarming or collective swimming models (e.g., [44,51]). In addition, vorticity depends on concentration but is independent of speed (Figs. 4A-B). In particular, collective behavior was evident even at low concentrations, where the distance between cells is large, reducing the contribution of steric and short-range hydrodynamic interactions. On the other hand, rod-shaped bacteria moved much faster and showed a speed distribution other than Rayleigh (Fig. 6). These results suggest that there are fundamental differences between rods and spheres, possibly due to the added rotational symmetry of spheres. In addition, the relatively constant correlation lengths and times, which are also similar for spheres and rods (Figs. 5C-D), suggest that the main factors responsible for the appearance of collective motion may be the physical properties of the medium, such as viscosity, and the diffusion properties of signaling molecules, rather than the geometry of the cells. Indeed, in testing the role of chemical signaling, we found it plausible that a non-motility-associated chemical exists in the culture from early stages. However, we cannot conclude whether such a chemical generates the collective behavior, or simply enables it, acting like a trigger. The results of this work open a door for a new perspective on the role of cell aspect ratio and alignment of cells with regards to collective motion in nature.
Figure 1. Observation of the collective dynamics of WT S. marcescens 274 bacteria, swirling in a drop. (A) A snapshot of an overnight (18 h) culture drop placed on a glass slide; image taken by upright phase contrast microscopy. All cells are spherical and are located on the upper surface of the drop. Bacterial density (fraction of area occupied by bacteria) r = 0.67. Scale bar equals 10 µm. (B) A schematic side view, illustrating how cells swim from a homogeneously dense volume to the upper surface of the drop, forming a dense monolayer and an extremely sparse bulk. doi:10.1371/journal.pone.0083760.g001
Figure 2. Velocity field analysis. (A) The velocity field at r = 0.67. The length of the arrows represents the local speed, also manifested by color: <20 µm/s green, >20 and <40 µm/s pink, and >40 µm/s red. Scale bar equals 10 µm. (B) The average bacterial speed as a function of bacterial concentration r. The speed reaches a maximum at r = 0.64. Large red dots represent the 5 different concentrations discussed in (C). (C) Probability density histogram for 5 different bacterial concentrations: r = 0.28 red, 0.46 blue, 0.67 green, 0.74 brown and 0.87 pink; the y-axis is normalized so that the area below each curve equals 1. Inset: all speed distributions are well approximated by a Rayleigh distribution, the black solid line. (D) The probability densities of the velocity projections on the principal axes are approximately independent normal distributions with zero mean; x-axis blue, y-axis red; dots represent experimental data and the solid line is the normal distribution. Inset: the standard deviation of the speed grows linearly with the average speed, with a slope of approximately 0.52 (axes are in µm/s). doi:10.1371/journal.pone.0083760.g002
Figure 4. Correlation between mean speed and vorticity. (A) At a fixed concentration (here r = 0.67), the speed and the vorticity are uncorrelated, with R² = 0.02. (B) The probability density of speeds in three different vorticity ranges (terciles), represented in yellow, green and blue, is the same (r = 0.67). (C) The average speed as a function of the average absolute value of the vorticity. The two are highly correlated (R² = 0.65). doi:10.1371/journal.pone.0083760.g004
Figure 6. Rod-shaped cells. (A) The probability density of speeds shows a much faster speed compared with that obtained for spherical cells (Fig. 2C). The y-axis is normalized so that the area below the curve equals 1. (B) Same data as in (A), now normalized and compared with a Rayleigh distribution, the black solid line. doi:10.1371/journal.pone.0083760.g006
Figure S1. Calculating the bacterial density r. (A) The raw image. (B-C) An intensity histogram for each frame was plotted.
LIBERALIZING SELF-DECEPTION: REPLACING PARADIGMATIC-STATE ACCOUNTS OF SELF-DECEPTION WITH A DYNAMIC VIEW OF THE SELF-DECEPTIVE PROCESS ABSTRACT: In this paper, I argue that paradigmatic-state accounts of self-deception suffer from a problem of restrictedness that does not do justice to the complexities of the phenomenon. In particular, I argue that the very search for a paradigmatic state of self-deception greatly overlooks the dynamic dimension of the self-deceptive process, which allows the inclusion of more mental states than paradigmatic-state accounts consider. I will discuss the inadequacy of any such accounts, and I will argue that we should replace them with a dynamic view of self-deception that is more liberal regarding the mental states in which self-deceivers may find themselves. INTRODUCTION The problem of the mental state in which self-deceivers find themselves has a long tradition. It can be traced back at least to Mele's early objections (1997, 2001) to Davidson's intentionalism (1985). Famously, Davidson's idea that self-deception must be intentional was criticized by Mele for leading to a couple of paradoxes, one of which is the "static paradox," as it is known. 1 It regards the mental state a self-deceiver is in when the self-deceptive process is successfully accomplished. Mele's argument is well known: if self-deception is intentional, and self-deceivers thus intend their self-deception, at the end of the self-deceptive process they must retain the belief that not-p-that is, the belief about how things really stand-while also getting to believe the self-deceptive, desired falsity that p. If this is correct, then it seems that the final, resulting mental state in which self-deceivers find themselves is somehow paradoxical, amounting to both believing that p and also believing that not-p. 2 Thus, Mele suggested that we should get rid of intentionalism altogether and resort to focusing on the motivational set of the subjects engaging in self-deception: since the subjects desire that p, the desire in question biases their evaluation and selective search for evidence. This opens the door to a motivationally distorted treatment of the relevant data, which leads the subjects directly into believing that p, without also retaining any belief that not-p. However, although the solution offered by Mele may be a satisfactory way out of the static paradox, it has the drawback of describing the state of mind of the self-deceiver as quite peaceful: if a full-blown belief that p is successfully reached, then there is no trace of the psychological tension that seems, instead, to be highly typical of self-deception. For this tension is obviously due to the fact that the motivationally distorted self-deceptive process runs counter to evidence, at hand or easily available, that not-p.
Of course, scholars are sensitive to the interplay of motivation and evidence, and to their opposing thrusts. And once the problem of the psychological tension created by this contrasting interaction has been posed as crucial, scholars have continued to investigate the nature of this resulting, final state of mind of selfdeception, and have tried to keep psychological tension in the picture. To this end, they have advanced interesting and refined analyses of what this final state must be, virtually all requiring that a satisfactory account of self-deception must preserve and account for the psychological tension that is characteristic of the self-deceiver's final state of mind. In this paper, I wish to raise the issue, again and afresh. I will argue that, while most accounts currently on offer are on the right track in their search for states of mind that can account for tension, I will also object that most of them are on a misleading track insofar as they look, or tend to look, for a paradigmatic, final mental state for self-deception. For virtually all accounts working with paradigmatic states for self-deception suffer from a potential flaw that should be carefully considered-namely, a certain restrictedness, at least in spirit. I say "at least in spirit" because, if a state is presented as paradigmatic, that need not be necessary. Thus, there may be room for predicting other mental states in a descriptive account of the mental state a self-deceiver is in-a less paradigmatic one, say, and yet capable of satisfying the constraints set by the motivated selfdeceiver's struggle with opposing evidence. This seems to be especially true for all those accounts that offer only sufficient (and not also necessary) conditions for self-deception, such as Mele's. 3 However, the rhetoric of most accounts addressing the problem of the paradigmatic mental state of self-deception may lead us to assume that, when self-deceiving, more often than not we are in a certain highly typical state, and to focus on that state of mind at the expense of other possibilities. My plan in this paper is to show that these other possibilities are not only live, but also quite common and highly typical as well. They are so common and typical that, as a matter of fact, once we have and keep this empirical truth in view, any claims regarding states that can be taken as paradigmatic become fatally weakened. One might ask why there has been such a great focus on the problem of a paradigmatic state. Most likely, it is created by a bias in favour of a static state, which questionably guides the analysis toward what I will dub a "snapshot theory" of self-deception. Unpacking the metaphor, one discovers that a snapshot theory of self-deception seems to be designed to take a descriptive, static picture of the mental state a self-deceiver is in. But this focus on static states may be highly misleading and lead us to exclude other candidates for typical self-deceptive mental states. Starting from these premises, I will argue that self-deception is a psychological process before it is an end state-a process set in motion by the force field created by motivation and evidence. Accordingly, I will offer an argument in favour of replacing what I call "paradigmatic-state accounts" of self-deception with a dynamic view of the self-deceptive process. 
Once we have seen the reasons in favour of this move, and once the move is made, we will be in a position to see that mental states that are taken to be paradigmatic of self-deception should be liberalized, so as to include all the variations allowed by the evolving combinations of the two factors of motivation and evidence. I will focus fully on motivation and evidence as the two fundamental constraints singling out the phenomenon of self-deception. I will explain how the pull of motivation and the thrust of reality create a force field that is dynamic, often in motion, strained by the variations in these two components that may be triggered by various contingent or noncontingent factors. If we bear it in mind that self-deception amounts to a dynamic psychological space determined by both motivation and evidence, then a dynamic view becomes a promising option. And this significantly changes our approach to the attitudes that are the products of the self-deceptive process. Such a move turns out to be liberalizing regarding the mental states that can be instantiated in self-deception and whose possibilities should be brought fully into the picture. 13 Here is the plan of the paper. In section 2 I will discuss the inadequacy of the paradigmatic-state accounts of self-deception, thus creating the premises for moving forward toward formulating an alternative view. In section 3 I will explore the most promising alternative-namely, a dynamic, more liberal theory of the self-deceptive process, which amounts to my proposed positive view. I will also address one obvious objection to my positive view, and I will conclude by briefly indicating what my approach suggests in terms of refining and applying the conceptual mental categories that prove to be useful for capturing selfdeception. THE INADEQUACY OF PARADIGMATIC-STATE ACCOUNTS OF SELF-DECEPTION As we have seen, the problem of including and explaining psychological tension in a satisfactory account of self-deception has led several scholars to advance proposals suggesting that we should adopt an "attitude adjustment approach" (cf. Deweese-Boyd, 2017) regarding the mental state that is the product of selfdeception. Since full-blown self-deceptive belief as a final product of self-deception seems unable to explain why the self-deceiver experiences psychological tension, we can choose to adopt alternatives. One possible alternative is to posit some quasi-doxastic or other nondoxastic attitudes towards the self-deceptive proposition. Candidates are hopes, suspicions, doubts, anxieties (Edwards, 2013), "besires" (Egan, 2009), pretense (Gendler, 2007), and imagination (Lazar, 1999). On this view, the subject is not required to end up believing a desired proposition. Rather, the subject can either entertain the hope that the desired proposition is true or suspect that the desired proposition is not true, or the subject can have doubts about its truth value or maybe also anxiously fear the possibility of it being false. All these attitudes seem to be able to explain why the self-deceiver is in a highly tensive state of mind: since the evidence points to a certain truth value of the desired proposition-that is, it suggests that it is at least likely that it is false-the subject struggles with such evidence and tries to see if the desired proposition can be true. Another alternative is to alter the content of the proposition believed. 
We can do so without incurring any static paradox of the kind associated with the traditional intentionalist models of self-deception. For instance, Funkhouser (2005) suggests that self-deceivers have a second-order belief about their believing that p, while they do not believe that p at all. Reality shows them that p cannot be true or that p is unlikely; however, they process the evidence in a motivationally biased way, so that they can at least believe that they believe that p. 4 That creates a tension between the false second-order belief that they believe that p, along with the dispositions associated with having it, and the first-order belief that notp, along with the dispositions associated with it. Bilgrami (2006) also suggests that the tension is due to a conflict, this time between a fully authoritative, true, second-order belief about a completely transparent, first-order belief that p, and another first-order belief that not-p that is, however, not transparent, and that does not generate any second-order belief that one also has the first-order belief that not-p. Here the tension is created either by the conflict between the transparent, first-order belief that not-p and the opaque, first-order belief that p, or by the clash between the true, second-order belief that one has a first-order belief that p and the first-order belief that not-p, or both, along with the complications created by the further dispositions associated with all of these beliefs. More recently, Lynch (2012) has claimed that "wholeheartedly believing what one wants to be true may be rare in self-deception (p. 440), and that we should look for a "more fine-grained way of capturing the attitudes of subjects towards propositions than can be accomplished with the coarser apparatus of belief" (p. 438). Thus, he claims that unwarranted degrees of confidence in p are enough to explain tension. He also argues that we have self-deception as long as the subject puts some different degree of confidence in p and in not-p, whereas any phenomena in which the subject avoids altogether the question whether p are best captured as "escapism" (Lynch, 2012, p. 446). He draws on Longeway (1990) and describes escapism as a defence against reality. According to Lynch, "deep conflict cases" best represent escapism: while in self-deception there is a cognitive tension due to the different degrees of confidence placed by the subject in p and not-p, in deep conflict cases we have a more profound tension that is behavioural. The subjects here are taking a greater risk by acting upon their attitudes than they take by just speaking and thinking (p. 435). Often the subjects engage in avoidance behavior. Such actions give us reasons to think that the subjects are not simply struggling cognitively with p and not-p, but that they are investing in p in a way that shows that they must have found a way to avoid any vacillation as to whether p. According to Lynch, this may not be self-deception any longer-or at least not a paradigmatic case of it. Interestingly, however, he admits that it may not be a necessary truth that self-deception contains tension, and allows for the possibility that, at times, the subject may become fully convinced of the desired proposition that p (p. 442). I will come back to this later in the paper. In the context of distinguishing willful ignorance from self-deception, Lynch (2016) is even more explicitly interested in singling out paradigmatic cases of self-deception. 
He is aware that "philosophical analysis of a phenomenon is challenging enough at the best of times, but it becomes all the more difficult when there is disagreement over what the paradigmatic cases are. Such is the situation, unfortunately, with regards to self-deception" (p. 513). However, he thinks that "there are some features that are generally recognized to be present in paradigmatic self-deception," such as (1) the subject's encounter with evidence indicating that some true proposition, not-p, is true; and (2) the strong desire of the subject that p be true. 15 So, according to (1) and (2), in paradigmatic self-deception the subject encounters unwelcome evidence, indicating that not-p. Lynch adds that "beyond that, disagreement persists, particularly with regard to the subject's epistemic/doxastic relation with the truth" (p. 513). Lynch then claims that approaches regarding paradigmatic cases of self-deception can be sorted into three categories (p. 513-514): (a) unwarranted-belief accounts, where the subject ends up self-deceptively believing that p (Mele, 1997, is an example of such an account); (b) implicit-knowledge accounts (e.g., Bach, 1981), where the subject does not believe that p, but recognizes the truth of not-p, while such knowledge is "shunned, ignored, or kept out of mind, and the subject acts in various ways" as if the subject believed that p, though other behaviour may betray the knowledge that not-p (p. 514); and (c) intermediate accounts, where the subject both believes that p and believes that not-p (Davidson, 1985), or where what the subject believes remains indeterminate (e.g., Funkhouser, 2009). All these approaches agree on the discrepancy between the attitude toward p held by the self-deceiver and the attitude the self-deceiver should have, given the available evidence. Lynch adds that it seems to be characteristic of self-deception that the subject encounters the countervailing evidence, while this is not typical of wishful ignorance. For in willful ignorance the subject successfully manages to avoid evidence altogether and does this voluntarily and intentionally. This guarantees that willful ignorance lacks the encounter with evidence that is typical of self-deception. However, Lynch thinks that this is no conclusive evidence that willful ignorance is not a kind of self-deception after all. For the former could, for example, be a nonparadigmatic case of self-deception. Lynch contemplates this possibility because he is persuaded that, if we had an analysis that did not rely on any of the three views listed above, we could get different results. If such an analysis were available, maybe we could be in a position to see that willful ignorance and selfdeception are two of a kind, notwithstanding their obvious differences. I have reasons to think that the theory I will develop could be a promising candidate for being such a view. But before I move on to it, it is important to see why all the paradigmatic-state accounts are inadequate. All the approaches seen thus far lead us to think that there must be a characteristic, paradigmatic state the self-deceiver is in. They do so by looking statically at the final product of the self-deceptive process. Even if Lynch seems to be more liberal in admitting that more states than are predicted by a paradigmaticstate account could be instantiated by the self-deceiver, he ends up adopting a rhetoric suggestive of the importance of singling out the paradigmatic state. 
Inso- far as all these views look statically at the allegedly final product of self-deception, they can be described as snapshot theories: that is, it is as if they take a static, instantaneous picture of the mental state that a self-deceiver is in at a certain time t, presumably taken to be representative of the central phase of selfdeception, and try to unpack its features so as to meet the constraint of tension. By doing so, however, these views lay themselves open to a quite obvious objection: how should we deal with the empirical discovery that, with regard to the self-deceptive process, self-deceivers are in mental states other than the paradigmatic one? What if these other states are capable of meeting the constraint of tension, too? Obviously, the answer will be that all those accounts suffer from a restrictedness that puts us on the wrong track in our attempts to give a description of the phenomenon, which is rich enough to include other possible, often empirically instantiated, mental products. This is exactly what I think happens if we go beyond the biasing search for a paradigmatic state and head toward a more accurate focus on the self-deceptive process as a whole. The analysis I will propose in the next section is given over precisely to showing how we should frame our view of self-deception, by considering the process, its moving forces, its dynamics, and all its possible, evolving, mental products. We will see that there is a multitude of highly tensive and unstable mental states that can be instantiated by a self-deceiver, and which have been unjustifiably excluded by paradigmatic-state accounts. However, as long as we persist in trying to freeze self-deception in a certain state instantiated at a certain time t, we lose the chance to give citizenship to all those other mental states. Insofar as a dynamic view is interested in focusing on the process instead, it leads us to liberalize the varieties of mental states in which a self-deceiver may be. Let me then move on to an outline of such a dynamic, liberal view. A DYNAMIC, LIBERAL VIEW OF SELF-DECEPTION As we have seen, virtually all scholars who study self-deception agree that there are two main forces that set the process in motion-namely, motivation that p be true, and the thrust of evidence that points to not-p. Both factors are active together, and most likely they create a force field that can be, and in fact often is, highly dynamic. For obviously these factors can vary, and co-vary, depending on contingencies that can occur over time. For instance, there may be times when self-deceived subjects feel more strongly the pull of their motivation that p be true. Accordingly, they may engage more intensely in their biased treatment of the evidence or of the hypotheses associated with what the evidence suggests. At other times, instead, the encounter with evidence that not-p may be more pressing, either because further evidence is provided or because the evidence already possessed is now seen in a less prejudiced light or else because the motivation that p be true as such is weakened by other intervening factors that have nothing to do with evidence (e.g., a diminished interest in p being true on the part of the subject). There are also presumably noncontingent factors that can intervene in the dynamic of the process. For instance, these may be certain quite stable features of the subject's psychology. 
For example, if a subject is well trained to treat evidence impartially, even if motivation may have more or less momentarily suspended, shunned, or weakened that epistemic virtue, the latter may at some point just spontaneously strike back and correct, at least for a while, the motivationally distorted treatment of evidence. This may not guarantee the subject's complete exit from self-deception, as it may be the case that the motivation that p is true strikes back in turn once again. But it can certainly change the specific mental state the self-deceiver is in, at least temporarily. Perhaps, before such epistemic virtue made its claims felt, the subject placed more confidence in p than he or she does now. But if the drives of motivation rise forcefully again, the subject may revert once again to a higher degree of confidence in p. Let me now expand on the possible mental states a self-deceiver can experience with regard to the self-deceptive process. It may not simply be the case that degrees of confidence vary, as Lynch (2012) correctly diagnoses. It may also be the case that a subject can at times temporarily reach even a state of full-blown belief that p, while, owing to the variations in the factors described, the same subject can revert to less than that, even to the antipode of believing not-p. Of course, given the dynamics at work, none of these attitudes seems set to last. Or else there may be times when the subject reaches a false second-order belief that he or she has a first-order belief that p, while truly believing that not-p, as Funkhouser (2005) requires. And there may also be moments when the subject is in an intermediate, indeterminate state of mind, possibly even recognizing it as such. As we see, the dynamic force field that self-deception amounts to can easily instantiate a vacillation between p and not-p, one that can (and often does) include a variety of attitudes toward p and not-p. No attitudes can be excluded in principle, and what temporal extension and qualitative intensity such vacillation may have is a totally empirical question. It all depends on the individual subject, that subject's specificities, and the contingent evolution of his or her practical and epistemic interaction with evidence and reality. This vacillation over time is best captured, I think, as an "attitude seesaw." There is no reason to exclude from it a priori any of the states that have been indicated as paradigmatic by different scholars, including those that they judge nonparadigmatic, such as escapism. There seems to be no a priori reason why a subject could not at times engage in escapism as well, perhaps emerging from it after a while. Or a subject might just start with escapism and then move on to other less extreme attempts to deal with reality. Nothing in escapism suggests that it must be irreversible, nor is there anything in self-deception to suggest that escapism cannot be one of its more-or-less temporary outcomes. The same interpretative line may be applied to willful ignorance, I think. Willful ignorance may well be a phase in the self-deceptive process, earlier or later on-one that can later be reversed by new variations within the force field, or that can be entered from another state. Lynch gets close to this when he says that a self-deceiver may at times go as far as self-delusion. He thinks that self-delusion is not a paradigmatic case of self-deception, but he shows an awareness of the variations to which I am drawing attention. 
My proposal, however, is more radical, and certainly more liberal: there is no need to persist in looking for a paradigmatic state when it is clear that, by its very nature, the process of self-deception can be, and often definitely is, highly dynamic, evolving and varying according to the opposing forces at work in it. It is apparent that, in a liberal view such as the one I am putting forward, tension is fully preserved. First, such a liberal view preserves the tension that has been associated with a certain single state, or set of states, by paradigmatic-state-account theorists: if and when the state occurs, it has the tension that its proponents correctly attribute to it. In addition, however, there is a further tension that my account guarantees, and it is not clear that paradigmatic-state accounts take it into account: namely, the tension that a more or less prolonged vacillation produces over time. 5 Note that more self-reflective subjects could also experience a sort of metacognitive tension-that is, they find themselves on an attitude seesaw that they can intuit as such. To be sure, however, even if subjects do not reach this metacognitive level of self-reflection, the psychological effect of going on such a seesaw may make them experience tension, presumably of a more opaque kind. 6 I said that it is a totally empirical question whether a subject enters into any of the possible states that the force field of self-deception allows. Equally, it is a totally empirical question how long such an attitude seesaw can last. Only a case-by-case empirical analysis of specific self-deceptive processes in individual subjects can give an answer to this question. Let me then briefly recapitulate the main tenets of my proposal. When one formulates a theory of self-deception, there seems to be no renouncing a couple of necessary constraints-namely, that (i) the process is triggered by a motivational state that leads the subject to wish that things stand in a certain, desired way (p); and (ii) the subject driven by such motivation struggles with evidence, or easily accessible evidence, that suggests how things really stand. The clash between motivation and evidence makes the process highly tensive from a psychological point of view. Psychological tension is a descriptive feature of self-deception that virtually all scholars refuse to renounce; rather, they all visibly want it to be preserved, predicted, and accounted for. When this desideratum is combined with the bias in favour of the search for a paradigmatic state, scholars may feel the pressure to look for one single, characteristic state for self-deception, which is also highly tensive and unstable in itself. This combination of drives has led to several competing theories of the characteristic state of mind of self-deception, where what is at stake is the kind of state that can best account for instability, while also satisfying our search for a paradigmatic state of self-deception. However, we will see that if we subscribe to (i) and (ii), we must accept the consequence that whatever mental state turns out to be compatible with the combined action of (i) and (ii) must be considered characteristic of self-deception. And clearly, there is no reason why we should not extend this consequence to mental states that may turn out to be even quite distant from the paradigmatic one. 
Since a wide variety of states are compatible with (i) and (ii), my proposal is that (A) we should liberalize the types of mental states that are representative of self-deception; and (B) to avoid remaining on a misleading track, we should replace any paradigmatic-state accounts of self-deception with a dynamic view of the self-deceptive process. Lynch (2016) provided an argument to show that willful ignorance is not a case of self-deception, whether paradigmatic or nonparadigmatic; therefore, it is a different kind of phenomenon. I think, however, that if we liberalize self-deception along the lines suggested, we are in a position to see that it may be the case that willful ignorance is sometimes a phase along the process of self-deception. Willful ignorance may well be the kind of phenomenon that Lynch diagnoses it as, and may fully retain its characteristics. The two phenomena need not be conflated. Yet willful ignorance could surface along the liberalized self-deceptive process as a possible stage of it. It is also important to note that my liberal view of the self-deceptive process does not rely on any of the three views of self-deception that, according to Lynch, could be used to distinguish willful ignorance from self-deception. Since I am not relying on any of the three, I am in a position to include willful ignorance as a possible stage of self-deception, and to do so by Lynch's own standards. For I am not proposing an unwarranted-belief account, where the subject ends up believing that p self-deceptively (Mele, 1997, is an example of such an account). I am not proposing an implicit-knowledge account either (e.g., Bach, 1981), where the subject does not believe that p, but recognizes the truth of not-p, while that knowledge is "shunned, ignored, or kept out of mind," and the subject's behaviour suggests that he or she believes that p, though other behaviour may betray the knowledge that not-p (p. 514). Nor do I propose an intermediate account, where the subject both believes that p and believes that not-p (Davidson, 1985), or where what the subject believes remains indeterminate (e.g., Funkhouser, 2009). Rather, I am proposing a view whose focus is on a process within which more than is dreamed of by our paradigmatic-state-account philosophy of self-deception may happen. My sense is that there might be room to apply a liberal solution to twisted self-deception as well. Twisted self-deception (Mele, 1999) is typically described as a case of self-deception where a subject does not end up believing what he or she desires or wants to be true. Rather, the subject ends up believing what he or she fears and, in any case, does not want to be true. Even if investigating this would lead me too far from my present purposes, it seems credible that a twisted self-deceiver may experience an attitude seesaw fully compatible with (i) and (ii). If this is correct, then twisted self-deception should cease to be seen as a nonparadigmatic case of self-deception, as, Lynch notes, it is still generally considered (2016, p. 513). One may wonder whether any liberal view is too liberal after all. That is to say, does a liberal view risk being too broad, thus erring on the side of overinclusiveness? Were a liberal view to include more states than really fall within the self-deceptive kind, we would lose the unity of the phenomenon, as well as its specificity. I do not think that overinclusiveness is a genuine risk for a liberal view, except when liberality is completely unrestricted. 
But I set (i) and (ii) as constraints for self-deception. Thus, I have reason to think that as long as (i) and (ii) are active, none of the states that can possibly be entered by subjects during their self-deceptive attitude seesaw is outside the phenomenon of self-deception. Of course, should either (i) or (ii) stop being active, then the subjects might well enter another kind of phenomenon altogether, such as permanent self-delusion or, in the event of their exiting self-deception completely, full adherence to reality. I conclude with a final remark on what my analysis seems to suggest in terms of the adequacy of the conceptual apparatus we deploy for capturing self-deception. Even if I agree with Lynch (2012) that we should ameliorate our conceptual categories and look for more fine-grained concepts to capture psychological reality (p. 438), I also think that we should be ready to apply the categories we already have more liberally whenever our psychology shows a complexity that no single traditional mental category can capture in isolation. Sometimes, psychological complexity just requires us not to force and freeze it into snapshots. Rather it clearly invites us to look more closely at the richness embedded both in its static dimension and in its temporally evolving dynamic. ACKNOWLEDGMENTS I am grateful to the audience of the Latin Conference of Analytic Philosophy held in Paris at the Jean Nicod Institute in July, 2013, for helpful questions on an early outline of this argument. I am also indebted to two anonymous reviewers of this journal for their valuable comments on an early version of this paper. NOTES 1 Famously, the other paradox is the "dynamic paradox." Since it is not immediately useful for my argument, although it is closely connected to it, I will not go into it in this paper. 2 I will not address the partitioning solutions to the static paradox here. See Deweese-Boyd (2017). 3 I thank one anonymous reviewer of this journal for pointing this extremely crucial aspect out to me. 4 Funkhouser (2005) relies on Nelkin (2002) here. They both think that the self-deceiver reaches a false, second-order belief that he or she believes that p because the self-deceiver is primarily moved by a mind-directed desire to believe that p, not by a world-directed desire that p be true. For views that go in the same direction, see also Holton (2001) and Fernández (2013). For comments on the mind-directed desire, see also Pedrini (2012). 5 I am grateful to one anonymous reviewer of this journal for pointing out the importance of emphasizing this variety of tension. 6 Cf. Pedrini (2018).
7,672
2019-05-07T00:00:00.000
[ "Philosophy", "Psychology" ]
Using Technology to Engage Preservice Elementary Teachers in Learning about Scientific Inquiry Elementary teachers are often required to teach inquiry in their classrooms despite having had little exposure to inquiry learning themselves. In a capstone undergraduate science course preservice elementary teachers experience scientific inquiry through the completion of group projects, activities, readings and discussion, in order to develop a sense of how inquiry learning takes place. At the same time, they learn science content necessary for teacher licensure. The course exposes students to different pathways of scientific discovery and to the use of the computer both as a tool for conducting inquiry-based investigations and as a means of collecting and sharing student opinions. The students involved have many misconceptions about science and it is often difficult for them to distinguish science from pseudoscience. Computer simulations are used to help students understand that difference. In addition, a classroom response system using "clickers" is used to poll student opinions on controversial issues and to stimulate discussion. Introduction The introduction of inquiry-based activities into the science curriculum resulted from a desire in the mid-twentieth century to engage students in the process of science (DeBoer, 1991). Traditionally, science had been taught as a series of facts, often poorly connected to one another. By engaging students in inquiry activities, the students can appreciate the thought processes of scientists and design their own experiments. Several definitions of inquiry are in use (Flick & Lederman, 2006;Minner, Levy & Century, 2009;Novak, 1964). However, a common definition in the United States is that published by the National Research Council: Inquiry is a multifaceted activity that involves making observations; posing questions; examining books and other sources of information to see what is already known; planning investigations; reviewing what is already known in light of experimental evidence; using tools to gather, analyze, and interpret data; proposing answers, explanations, and predictions; and communicating the results. Inquiry requires identification of assumptions, use of critical and logical thinking, and consideration of alternative explanations. (NRC, 2000, p. 14) Most elementary teachers in the United States are now required to use inquiry activities in science lessons (Crawford, 2000;Flick & Lederman, 2006;National Research Council, 2000). In order for teachers to be successful in the use of science inquiry experiences in their own classrooms they need to experience inquiry in their own learning of science (Gitlin, Barlow, Burbank, Kauchak & Stevens, 1999;Haefner & Zembal-Saul, 2004;Howes, 2002;Jones, Buckler, Cooper & Straushein, 1997;Windschitl, 2002). However, most preservice teachers (those in teacher education programmes) have had little experience learning with inquiry (Gabel, 2003;Millar & Lubben, 1996;Newman, Abell, Hubbard, McDonald, Otaala, & Martini, 2004). Not only is an understanding of the process of science necessary in order to teach inquiry skills, teachers also need to have sufficient understanding of science content (Luera & Otto, 2005). Preservice teachers may have misconceptions about science and may not be able to distinguish science from pseudoscience (Cochran & Jones, 1997;Nelson, 2000). 
Therefore, it appears that preservice teachers will be best prepared to teach science by inquiry if they have learned inquiry in a science context. Although many teachers have to use technology in their classrooms, preservice teachers are often uncomfortable with technology and avoid using it (David & Falba, 2002). Introducing technology into science method courses and science courses for elementary teachers has been found to encourage teachers to use technology in their own classrooms (Kim, Hannafin & Bryan, 2007). The present article reports on a capstone science inquiry course that integrates technology in meaningful ways in order to prepare teachers to lead students in inquiry-based activities and to help them distinguish science from pseudoscience. In addition, the use of technology to help students navigate the interface of science and personal beliefs is described. Theoretical background Inquiry-based learning is thought to provide students with authentic learning experiences that develop deeper understanding due to their constructivist nature (Flick & Lederman, 2006). In a review of 18 years of research on learning, Minner, Levy, and Century (2009) found that in the majority of the studies inquiry-based education had a positive effect on the learning of science content and on the development of inquiry skills, particularly when students were actively engaged. Because time and consistent effort are required to build competence in inquiry and productive inquiry communities (Šimenc, 2008), it may be important for teachers to have had multiple inquiry experiences before beginning their teaching careers. One aspect of inquiry learning thought to be beneficial is the observation that different learning styles can easily be accommodated in inquiry-based learning activities (Oblinger & Oblinger, 2005). Learning styles are the various ways of learning preferred by an individual depending on his or her ability, interest or individual preferences. Fleming (1995) classified learning styles in his VARK model (Visual, Aural, Read/Write, and Kinesthetic/Tactile). In this model visual learners prefer seeing visualizations such as pictures, drawings, diagrams, or simulations when they learn. Auditory learners prefer to learn through listening to lectures, discussions, or tapes. Students who learn the best when they read or write fall into the reading/writing group. Kinesthetic/tactile learners prefer to learn through active experience and movement. Including multiple modes of learning in a lesson has the potential to maximize learning because students with different learning styles may benefit from the different representations (Sims & Sims, 1995). Inquiry-based science teaching has also been found to motivate students with different learning styles to learn science (Tuan, Chin, Tsai & Cheng, 2005). The use of technology in the classroom has been found to enhance learning by allowing students to view visualizations of phenomena not otherwise visible (Ardac & Akaygun 2004;Kelly & Jones, 2007), to provide feedback on understanding (Jones & Smith, 1993), and to assess student learning (Ferk, Vrtacnik, Blejec & Gril, 2003). Technology can also be used to enhance inquiry learning (Edelson, Gordin & Pea, 1999). Classroom response systems using small, hand-held devices commonly known as "clickers" provide a means to enhance interactions in the classroom. 
These devices contain a keypad and, when used by students, emit either infrared or radiofrequency radiation signals that are collected by an instructor. The technology provides the instructor with immediate feedback on student understanding. This technology is considered by the US National Research Council to be a positive trend in education (NRC, 2000a). In science courses clickers have been used for formative assessment, to foster student collaboration, to give quizzes, to allow anonymous responses, and to take attendance (MacArthur & Jones, 2008). Course design To introduce preservice elementary teachers to scientific inquiry, a capstone science course in scientific inquiry was developed at the University of Northern Colorado (Fortino, 2003;Taber & Quadracci, 2006). The course was designed so that students would learn about inquiry in science classes while conducting their own group inquiry activities. Opportunities are provided for them to enhance their own inquiry skills, to develop the ability to evaluate and revise inquiry activities developed by others, and to design and present their own inquiry activities. Classes are held in a room designed for the science education of elementary teachers (Figure 1). At the University of Northern Colorado students who want to be licensed to teach in elementary schools major in Interdisciplinary Studies, Liberal Arts (IDLA). Students take a core set of courses and select an emphasis area, such as social studies, English or chemistry. All IDLA students must complete an elementary methods course with a six-week science portion. They must also complete three introductory science courses. Although students have several choices, most take the three courses (physical science, biology and earth science) that have been developed especially for elementary teachers (McDevitt, Troyer, Ambrosio, Heikkinen & Warren, 1995). Each of these science courses has a laboratory component that incorporates some inquiry experiences. The capstone science inquiry course is taught by science education faculty members from all of the science departments. About 150-240 students per year enroll in the course, which is taught in sections of about 30 students each. The goals of the scientific inquiry course are to engage students in examining science as a "way of knowing, " to help them to develop a sense of "how I learn by inquiry, " to enhance their understanding of the interrelationships between scientific discovery and society, to develop a portfolio of inquiry teaching resources, to learn more about using computers as a tool for conducting inquiries, and to learn science content required for licensure in the State of Colorado. The course is a science course in which students learn scientific content, rather than a methods course. However, an attempt is made to engage students in activities that they can use later in their own classrooms. The criteria for inquiry used in the course were based on criteria recommended by the US National Academy of Sciences (National Research Council, 2000): Derry (1999) and Bryson (2003). These books examine the nature and process of science and include science content that goes beyond what students have learned in their introductory science courses. 
Students learn the science content as they work in groups to complete activities in earth science (plate tectonics, earthquakes, volcanoes, geologic time, climate), physics (energy, atomic structure, the electromagnetic spectrum), chemistry (the periodic table, chemical reactions, the nature of matter), and biology (the diversity of species, natural selection). About 60-70% of class time is spent on learner-centred activities such as hands-on paper or laboratory activities, computer activities, or group discussions. Because students are seated at tables of four or five students each, discussion normally takes place within each table group. Each faculty member teaches the course with his or her own preferences and innovations. This paper focuses on the introduction of two types of technology-based innovations: the use of online materials and simulations to facilitate inquiry and to help students distinguish science from pseudoscience, and the use of a classroom response system to allow students to share personal views anonymously for later discussion. Types of inquiry activities implemented Several kinds of activities were designed for the capstone science course: laboratory activities, hands-on activities using paper or other objects, discussion and written reports, computer-based simulations, learning to use online research tools, and sharing and communication on the Blackboard course management system (http://www.blackboard.com/Markets/Higher-Education.aspx). Each activity has inquiry components and students are asked to rate the inquiry components of each activity they complete. Initially students are asked to devise their own inquiry criteria. However, students in this type of course benefit from having detailed rubrics (Newman et al., 2004). Therefore, after comparing and discussing their ideas, students are given Table 2-6, "Essential Features of Classroom Inquiry and Their Variations," from Inquiry and the Science Education Standards (NRC, 2000, http://www.nap.edu/openbook.php?record_id=9596&page=29#p200165cc9960029001). This table lists five aspects of inquiry (posing questions, using scientific evidence, formulating explanations, connecting explanations to scientific knowledge, and justifying the explanations) and provides four levels of student independence, ranging from teacher-directed activity to learner-generated activity. Students are expected to be able to use the concepts in this table to rate the inquiry level of activities in examinations and in their subsequent teaching assignments. In order to prepare students for this type of authentic task, the day after the students have performed each activity they are asked to discuss how well the activity fit each of the essential features and to rate the activity. Most assessments in the course are authentic activities that teachers often perform, such as evaluating the inquiry characteristics of an activity, modifying an activity to enhance its inquiry characteristics, and designing inquiry activities. However, multiple-choice questions on the science content are also included in the examinations. Hands-on laboratory activities vary by instructor, but in the version of the course described here they include the generation of gases and the synthesis of polymers. Other hands-on activities include the classification of rock samples and prediction of the design on the bottom of a cube from clues on the visible sides (for an example, see http://brainu.org/inquiry-cubes). 
After working on inquiry activities in class, students are given descriptions of science activities for elementary school children and are asked to revise the activities to enhance their inquiry characteristics. In order to understand scientific thinking processes students model those processes. For example, they are asked to generate a periodic table of "aliens" using methods similar to those used by Mendeleev. (See http://www.gk12.ilstu.edu/New/lesson_template.asp?lessonID=402). In this activity, pictures of aliens who have landed on Earth are sorted according to characteristics such as number of fingers and body shape. One alien is missing and students must predict its appearance. Fostering engagement in scientific inquiry with technology Computer simulations offer many advantages for this type of instruction. Because data can be collected and analysed quickly, students can carry out many experimental trials, more closely modelling the processes of expert scientists. Although only some of the activities are simulations in which students can manipulate the variables, they provide an introduction to an aspect of science that might not be possible in a setting where experiments can be conducted only once. For example, students use sophisticated software such as WorldWatcher (http://www.geode.northwestern.edu/softwareWW.htm) to conduct in-depth inquiry investigations (Taber & Quadracci, 2006). Another online activity that engages students in scientific inquiry is designing a planet (see http://astroventure.arc.nasa.gov/DAP/index.html). In this activity groups of students use a simulation to design a habitable planet for humans by selecting appropriate characteristics such as type of star, distance from the star, availability of liquid water and the producers on the planet. After they finish designing the planet students receive immediate feedback from the programme on whether their planet is habitable or not. They then examine with their group members how to improve their planet. This type of computer-based activity engages students in learning by inquiry because the simulation guides them in questioning, predicting, reasoning, thinking critically, and applying and evaluating their understandings. In general, most classroom science activities emphasise visual, auditory, and reading/writing aspects, rather than kinaesthetic/tactile aspects. However, access to materials on the Internet has facilitated the introduction of activities that appeal to kinaesthetic/tactile learners. One example is the activity "Kinaesthetic Astronomy" (see http://education.sdsc.edu/teachertech/downloads/kin_astronomy.pdf). In Kinaesthetic Astronomy students experience basic astronomical concepts such as the meaning of day, year and season through bodily movements and positions such as rotating, spinning and walking. This activity helps students improve their reasoning skills as they physically imitate the motions of the Earth and moon and connect their observations with these motions. The Internet plays an important role in the course. Blackboard is used for online quizzes, discussion, resources, surveys and web links. Students also learn to access the National Science Digital Library (http://www.dlese.org) to search for science lessons with appropriate levels of inquiry characteristics. The Internet is also used by students to complete assignments on the interface of science and personal beliefs. 
One example is an activity that helps students to distinguish science from pseudoscience. Many of the students in this course have some belief in pseudoscience. This activity presents them with what appear to be on-line occult happenings that they must find a way to explain. Extrasensory perception (ESP) was selected as an example of how easy it is to be fooled by pseudoscience. Initially, students visit a website that claims it can read their minds (for example, see http://sprott.physics.wisc.edu/pickover/esp.html), and develop in groups an explanation of how the ESP program tricks the viewer. They then design experiments to test their explanation. This activity helps students to face personal beliefs that may conflict with scientific knowledge. Students learn to use the computer as a tool. Because communicating ideas is an important inquiry skill, each student group must research and prepare a Powerpoint presentation on a scientific discovery of their choice and present it along with a related hands-on activity that they have devised or that they found on the Internet. Peer review is used to give students practice in the assessment of presentations and activities. Fostering engagement with a classroom response system When a classroom response system utilizing clickers to solicit student input and feedback is used in science courses, typically groups of students are given a problem to solve. The instructor then projects a histogram of the responses, after which students can discuss the findings and revise their answers (Fagen, Crouch & Mazur, 2002). In the science inquiry capstone course clickers were used in a different way. The concepts covered in the course often do not have answers that can be scored objectively. Instead, the scoring of student responses is based more on their ability to justify a response than the response itself. Clickers were introduced into the scientific inquiry capstone course to facilitate the following three processes: • To rate the inquiry aspects of an activity. • For anonymous polling of opinions on controversial topics. • To review for the final examination. Rating the inquiry aspects of an activity When students rate an activity that they have completed the previous day they use clickers to vote on their rating of each aspect of inquiry, as described previously. After the histogram for each aspect is displayed, volunteers who have made each of the more popular ratings are called upon to defend their choices, leading to further discussion. Without clickers the discussions took place in individual groups. Although the groups reported their ratings, it was difficult to determine the majority opinions. Anonymous polling of opinions on controversial topics Because the capstone course deals with societal issues, the interface of science and religion and topics such as evolution, which are not accepted by some students, clickers allow students to share their true opinions without fear of peer rejection or being "punished" by the instructor for a dissenting opinion. Students are asked to respond to questions such as those in Figure 2. Following the voting the class discusses the responses. Derry says "Science attempts to bring coherence to our experiences, whereas religion attempts to infuse our experiences with meaning." Do you think this is a fair statement? a) Yes, I think this statement is fair to both science and religion. b) No, I think this statement overstates the importance of science and understates the importance of religion. 
Clickers are particularly useful when students are learning about pseudoscience and the belief in the intelligent design of the universe held by some. This application is felt to be important because the students will be classroom teachers and may have in their class students whose parents believe in intelligent design or aspects of pseudoscience. Reviewing for the final examination In the terms during which these innovations were introduced, 20% of the final examination consisted of 25 multiple choice questions, some of which had more than one correct answer. The mid-term examination students had taken did not contain this type of question. Therefore, in order to prepare students for the final examination one day of the course was set aside as a review day, in which a large portion of the class time was spent using clickers to answer a series of eight multiple choice questions similar in nature to those that might appear on the final exam. These questions covered a wide variety of material discussed in the course: activities performed, significant scientific discoveries, scientific concepts such as experimental design and proportions, and aspects of scientific inquiry. Students voted individually on their selection for each question, followed by a classroom discussion. Course evaluation The course as a whole was evaluated by the 29 students in one class. The students were provided two opportunities to evaluate the course: an anonymous online survey midway through the term and an anonymous open-ended written survey to evaluate the activities, course materials and instructional approaches at the end of the term. In addition, the authors of this paper (two of whom were instructors and one who served as a faculty observer and attended the majority of the class sessions) recorded their experiences with clickers and identified any difficulties that had arisen with their usage. Reactions to the activities, course materials, and instructional approaches Throughout the course students had experienced inquiry via various activities, discussions, readings, and assignments. The open-ended course survey completed at the end of the term showed that when students were asked what aspect of the course they enjoyed the most, they identified the nature of inquiry in their group work (33%), activities (29%), hands-on experiments (15%), computer activities (15%), inquiry (4%), and readings from Derry (4%). In other words, they said they enjoyed inquiry in different ways. Twenty-two percent of the preservice elementary teachers also voluntarily mentioned that they would consider implementing some of these activities in their own teaching. Some of the student responses to the question, "What aspects of this course did you enjoy the most?" are given below. • "I enjoyed the scientific inquiry." • "I really like being able to work in groups and share information as a group. I like how science is taught in a simple way that is understandable." • "I enjoyed all of the activities we did; they will be very helpful in my future classroom" • "I really enjoyed all of the activities and resources of sites that were interactive for children. I feel as though I will refer to them when I become a teacher and that they will be useful when I teach anything science related!" • "I enjoyed the online inquiry activities the most because they were hands-on". • "The aspects that I enjoyed the most were the computer-based activities. They were fun, interactive, and I look forward to integrating them into my future classroom". 
When the students were asked which aspects of the course they found the most useful, they mentioned the in-class activities (35%), using the inquiry table for evaluating the inquiry nature of the activities (19%), modifying activities to be more inquiry oriented (15%), learning inquiry, the computer/online activities (8%), the experiments (4%), the readings (4%), and learning the National Science Education Standards (4%). It should be noted that these comments were generated by the students. Therefore, although the percentages are small, they represent spontaneous comments from students. Overall, students made positive comments on each question. Some of the student responses to the question, "What aspects of this course do you find the most useful?" include the following: • "The inquiry in this course was very important and useful for me because I hadn't known anything about it until I took this course. I feel like it will help me make decisions about my lesson plans when I become an elementary teacher." • "I think knowing and understanding scientific inquiry will be very useful for me as a future teacher." • "I thought the activities that we could use in an elementary school setting (like the activity with the aliens that we had to categorise) were most helpful." • "The inquiry table was helpful/useful." • "The ideas for how to make activities more student oriented and less teacher oriented. In some activities students need to have more interaction so they will be able to learn better." • "The aspect that I found most useful is making science activities more inquiry based." Reactions to the use of clickers Students provided positive feedback regarding the use of technology in general and clickers specifically, in both the online survey and the in-class written evaluation, even though there were no questions specifically about technology on either evaluation. One of the questions in the online survey was: "I'd like to have a more active class discussion in lectures. Do you have any suggestions on how to facilitate this?" A student response to this question was: "I think that discussion with the groups is good. Like when we use clickers and we have to answer the questions as a group." When asked at the end-of-term evaluation to provide an example of something the instructor did well, several students mentioned the use of technology, and one student said "I liked how the instructor used clickers." Instructor reactions to the use of clickers The clickers were found to be especially useful in eliciting opinions on controversial topics from students. The anonymity of clicker input allowed students to express themselves freely. For example, in a lesson on evolution clickers were used to discover that some students did not believe in evolution. Following the lesson these students retained their beliefs, but because they had been allowed to express their ideas, they did not feel it necessary to continue arguing their points. However, clickers were found to be less useful for activities such as examination review, which could be easily conducted without them. The use of clickers was found to have some drawbacks. It takes class time to implement the technology, students may have difficulty adjusting to the unfamiliar interaction, the technology may not work properly, the format of questions is usually limited to multiple-choice, and it may take additional time to adjust lesson plans to the technology. 
Despite these difficulties, the technology was found to be valuable and worth using again. Conclusions Throughout the course students experienced inquiry via various activities, projects, discussions, readings, and assignments. The students found the inquiry-based activities to be both useful and enjoyable. The observation that students responded positively to each question of the final course survey suggests that these preservice elementary teachers felt that they learned, appreciated and evaluated inquiry in the course activities, and wanted to include inquiry in their lesson plans when they start teaching. It was possible to expand the inquiry experience through the use of technology. Computers were used to engage students in simulated inquiry experiences not otherwise possible in a classroom. Students also learned how to use the Internet as a tool for designing their own inquiry learning experiences. The use of clickers allowed instructors to introduce more activities consistent with the theme of student empowerment. Others have concluded that student empowerment is an essential feature of successfully implementing clickers into classrooms (Trees & Jackson, 2007). For example, students used clickers to rate the inquiry aspects of activities. The ratings themselves were not as critical as the justifications for their selection, which led the students into a discussion regarding their choices. Perhaps even more empowering was the use of clickers to poll students on their opinions regarding the interaction between science and religion. Students were allowed to express their feelings completely anonymously, as instructors had no record of which clicker each student was using. The use of clickers also allowed instructors to clarify some points of confusion with the reading, as students often chose "I am not sure what this statement means" when provided with the opportunity to state their opinion on an excerpt from the text. The implementation of clickers that was found to be the least useful was in the final examination review session. This observation is consistent with the work of others, who have found that the overuse of clickers may have poor results if the right niche is not found (Draper & Brown, 2004). The examination reviews conducted may not have been a good application of clickers. Although clickers are useful for formative assessment in very large courses, where they help to improve student interactions, other methods are available both for formative assessment and for catalyzing student cooperation in smaller courses such as this. The application for which the clickers were thought to be most useful was in the discussion of controversial issues, as students could propose minority opinions without being singled out.
6,392.4
2018-01-22T00:00:00.000
[ "Computer Science", "Education" ]
Direct measurement of the scattering cross sections of liquid ortho-deuterium for ultracold neutrons and comparison with model calculations Liquid deuterium is a fluid between the quantum and classical regimes. It attracts interest from fundamental research for the verification of quantum calculations but also from neutron physics as it is a widely used neutron moderator medium. We have measured the scattering cross sections of liquid ortho-deuterium in the ultracold-neutron range for four different temperatures (19-23 K) and compared them with calculations from a parameter-free analytical calculation model as well as with previous measurements at 19 K by another research group. All three show remarkable agreement, which establishes the validity of the calculation model and proves it is a reliable basis for the derivation of scattering kernels. The deconvolution of our measured transmission data changed the cross section results noticeably only for neutrons faster than 10 m/s. We found the total scattering cross section of liquid deuterium to be inversely proportional to velocity, as is predicted by theory. I. INTRODUCTION The hydrogen atom and the molecule diprotium (¹H₂), with its nuclear-spin interactions and specific heat anomaly at low temperatures [1,2], played a fundamental role in the emergence of quantum mechanics [3,4]. Discovered later, the deuterium atom and the dideuterium molecule (²H₂, most often simply called "deuterium") served in many respects as systems to check and confirm calculation models for the protium atom and molecule. Of the two rotational spin species, ortho-deuterium is the liquid of interest because the rotational spin equilibrium at temperatures between 19 and 23 K is around 98% ortho-deuterium and only 2% para-deuterium [5,6]. Liquid ortho-deuterium (liqD₂) has been used extensively as a "cold source" at various research centers to moderate thermal neutrons down to cold-neutron energies and to increase the fraction of ultracold neutrons in the spectrum [7,8]. Recently, it was proposed as a pre-moderator material for novel high-flux converters of ultracold neutrons [9,10], which may advance important research projects in particle and astrophysics [11] and the search for physics beyond the standard model [11,12]. In 1970 [13,14], Seiffert published the first experimental scattering data for thermal and cold neutrons in liquid deuterium down to 0.9 meV. Only in 2005, Atchison et al. published total cross sections σ_tot for very cold neutrons (VCNs) and ultracold neutrons (UCNs) [15], and established a questionable velocity dependence of σ_tot according to σ_tot(v) = σ_0 + c/v, where σ_0 represents the incoherent elastic scattering cross section of one deuterium molecule. However, liquids do not show elastic scattering [16], and for fluids with a Maxwell-Boltzmann velocity distribution the scattering cross section is proportional to 1/v [17], σ_scatt(v) = c/v, (1) where σ_scatt(v) is the scattering cross section of one deuterium molecule in barn at a given velocity, c is a temperature-dependent constant, and v is the neutron's velocity. Our group also published experimental scattering cross section data for UCNs in 2015 [18]. In the same publication, we presented a parameter-free analytical calculation model for the scattering cross sections of liquid deuterium for UCNs and VCNs, which provides a way to calculate the constant c from Eq. 1 for given temperatures and ortho/para ratios. 
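To make the contrast between the two proposed velocity dependences concrete, the short Python sketch below fits both functional forms, σ(v) = c/v and σ(v) = σ_0 + c/v, to a set of (velocity, cross section) pairs. It is purely illustrative: the data array is synthetic, and the function and variable names are ours rather than anything taken from the publications cited above.

import numpy as np
from scipy.optimize import curve_fit

def sigma_pure(v, c):
    # sigma(v) = c / v, the behaviour expected for a liquid (no elastic term)
    return c / v

def sigma_offset(v, sigma0, c):
    # sigma(v) = sigma0 + c / v, the form reported by Atchison et al. [15]
    return sigma0 + c / v

# Synthetic placeholder data (velocities in m/s, cross sections in barn);
# real transmission-derived values would be substituted here.
v = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0])
sigma = 600.0 / v

c_fit, _ = curve_fit(sigma_pure, v, sigma, p0=[500.0])
(offset_fit, c2_fit), _ = curve_fit(sigma_offset, v, sigma, p0=[0.0, 500.0])

print("pure 1/v fit:       c = %.1f b m/s" % c_fit[0])
print("offset fit: sigma0 = %.2f b, c = %.1f b m/s" % (offset_fit, c2_fit))

With measured cross sections in place of the synthetic array, a fitted σ_0 consistent with zero would support the pure 1/v behaviour expected for the liquid.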
The aim of this paper is to compare and reconcile all available experimental scattering cross section data with results from model calculations and simulations for the UCN energy range. These consolidated data can then serve to benchmark nuclear data libraries, such as described in a recent review by Plompen et al. [19] or the EXFOR database maintained by the International Atomic Energy Agency (IAEA) [20], https://www-nds.iaea.org/exfor, which to date include only the ultracold-neutron cross sections measured by Atchison et al. [15]. II. THEORETICAL BACKGROUND Hamermesh and Schwinger calculated the neutron-scattering cross sections of free deuterons and interaction-free, i.e., gaseous, deuterium molecules in the wake of early neutron research in the 1940s [21]. These cross sections were self cross sections, neglecting collective interactions in the sample. They were calculated for thermal and subthermal neutrons in deuterium gas at low temperatures. Later, Young and Koppel [22] calculated the double differential cross sections of deuterium for the ortho and para species for a wider temperature and energy range, taking into account rotation, vibration, and translation of the molecule. The special case of a liquid was also described but is applicable only to neutron energies above the Debye temperature of the sample. For solid ortho-deuterium, the Debye temperature at low sample temperatures of T = 0-18 K is around Θ_D = 110 K [23,24], and for liquid ortho-deuterium at T = 20 K it is Θ_D = 101 K [25,26], equivalent to 9 meV, which is the energy of subthermal neutrons. Previously, we published a parameter-free analytical calculation model for the scattering cross sections of liquid deuterium for slow neutrons [18], especially in the UCN and the VCN energy range. The group of Guarini et al. was able to calculate the fundamental properties of the hydrogen liquids from parameter-free quantum simulations [25,27] and to derive the scattering cross sections of liquid ortho-deuterium for thermal and cold neutrons. For a more detailed treatment of the scattering theory for liquid ortho-deuterium, the reader is kindly referred to these three publications. III. EXPERIMENT SETUP The neutron scattering cross section σ_scatt is measured by means of a transmission experiment. To this end, we employed the standard transmission equation for uniformly absorbing and scattering media (in optics known as the Lambert-Beer law [28,29]), T(v) = I(d_n)/I_0 = exp(−N_v σ_tot(v) d_n), (2) where T(v) represents the measured absolute transmissivity of the sample, I_0 the neutron beam intensity behind the empty sample container, I(d_n) the neutron beam intensity behind the container filled with sample n, N_v the particle number density of the sample, d_n the sample thickness, and σ_tot(v) the total UCN loss cross section of the sample bulk for the neutron velocity v in the medium, i.e., taking into account the neutron-optical potential of the sample. Equation 2 is solved for σ_tot, which is then corrected for the absorption of neutrons in the sample, sample impurities, and other side effects to yield the total scattering cross section σ_scatt = σ_tot − σ_abs,D − σ_abs,H. It should be noted that, in the case of the liquid ortho-deuterium used here, these corrections amounted to less than 1% over the entire UCN range. The equipment for this experiment was set up at the "Turbine" [30] beamline PF2-EDM of the Institut Laue-Langevin (ILL) in Grenoble, France. 
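As a concrete illustration of the data reduction implied by Eq. 2, the following minimal sketch (our own illustration, not code from the experiment) inverts the transmission equation to obtain σ_tot from a measured transmissivity and subtracts the deuterium absorption term to approximate σ_scatt; all numerical values are placeholders chosen only to make the example run.

import numpy as np

def sigma_tot_from_transmission(T, N_v, d_n):
    # Eq. 2: T = I(d_n)/I_0 = exp(-N_v * sigma_tot * d_n)
    # => sigma_tot = -ln(T) / (N_v * d_n), returned in m^2
    return -np.log(T) / (N_v * d_n)

# Placeholder inputs (illustrative only, not measured values)
T_meas = 0.55        # transmissivity in one velocity bin
N_v = 2.5e28         # approximate number density of liquid D2, molecules per m^3
d_n = 0.005          # sample thickness in m
v = 8.0              # neutron velocity in m/s

sigma_tot = sigma_tot_from_transmission(T_meas, N_v, d_n)
sigma_tot_barn = sigma_tot / 1e-28          # 1 barn = 1e-28 m^2

# Deuterium absorption correction, sigma_abs,D = 2.28/v (barn, v in m/s) [37];
# a hydrogen-impurity term would be subtracted analogously if the H2 content is known.
sigma_abs_D = 2.28 / v
sigma_scatt = sigma_tot_barn - sigma_abs_D
print("sigma_tot = %.1f b, sigma_scatt = %.1f b" % (sigma_tot_barn, sigma_scatt))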
Before the measurements were carried out, the neutron flux at that beamline was characterized [31]. This provided a solid foundation for the data treatment. The experiment itself was performed using a neutron time-of-flight (TOF) setup similar to that described earlier [18]. This time, the chopper had titanium shutters and was placed upstream of the liquid ortho-deuterium sample, see Fig. 1. The chopper's opening function was verified in a measurement with light, where its full width at half maximum (FWHM) was determined to be τ = 13.6 ms. The total flight path was 455 mm long - a compromise between having a maximum path length while not losing too many slow UCNs due to them falling in the gravitational field and being absorbed by or scattered on the flight tube's walls. A monitor detector enabled the tracking of possible changes of the UCN flux, which were less than 1% over the course of the experiment. The TOF geometry of the experiment demands that all neutrons travel the same path length. Therefore, they need to be collimated before they impinge on the sample. Neutrons diverging from the flight path need to be effectively removed from the experiment. This point was addressed by using a polyethylene foil as a liner in the flight path between the sample and the detector, which up-scatters UCNs to thermal energies, thus removing them from the measurable UCN spectrum. The collimator used here was a disk made from titanium instead of one made from poly(methyl methacrylate) (PMMA), as used earlier. UCN absorption in titanium leads to a significantly reduced background in the measurement as compared to UCN up-scattering in PMMA. Furthermore, the body of the sample container was made from copper to reduce the temperature gradient across the sample volume to a minimum compared to the previously used aluminum sample container. As we have shown earlier, rough surfaces cause tremendous scattering of UCNs [32]. In transmission measurements, this can translate to significant experimental errors. Furthermore, previous experiments by various groups have shown that aluminum windows tend to bulge under pressure [15], leading to poorly defined sample thicknesses and introducing avoidable errors to the data treatment procedure. To reduce these potential error sources and to improve counting statistics, we developed and used in this experiment a sample container with low-roughness, transparent windows [33]. In the flight tube between the cryostat and the neutron detector, a horizontal viewport was installed. In conjunction with a movable mirror on a rod and the transparent sample cell windows, it allowed observation of the sample preparation process in real time along the neutron flight path. When the sample was ready to be measured with neutrons, the mirror was raised out of the flight path. All improvements over our 2012 experiment [18] led to a tenfold increased signal-to-noise ratio [34]. The deuterium gas in our experiment had a purity grade of N30, meaning at least 99.90% deuterium. The temperature gradient across the sample container was ∆T = ±0.25 K and the liquid samples were held at the vapor pressure of deuterium for their respective temperature during the entire measurement run. We used Raman spectroscopy as described by Silvera [6] to determine the ortho-deuterium fraction in the sample, which was c_ortho = 97.2 ± 0.02%. 
For the registration of UCN counts [35], a boron-lined gas electron multiplier (GEM) detector of the Cascade type [36] was used with a flushing and quenching gas mixture of 90% argon and 10% carbon dioxide. A. Deconvolution and data treatment As required by Eq. 2, two UCN transmission spectra were recorded per measurement - one with an empty sample container and one with liquid deuterium at the respective temperature. Both measured spectra are convolved with the chopper's resolution function. To account for this, the spectra needed to be deconvolved, for which a standard procedure is lacking. The deconvolution method presented here improves upon the data presented in Ref. [34]. The spectra were prepared for deconvolution by subtracting background counts, normalizing them to the measurement time, binning them using a sliding frame of 8 ms width, transforming the time into the velocity domain, and correcting for neutron reflection at interfaces. The velocity-dependent transmission equation of the measured spectra I(d_n) and I_0, i.e., convolved with the resolution function G(v), can be expressed as T_conv(v) = [(I_0 T_expt) ⊗ G](v) / [I_0 ⊗ G](v), (3) where T_expt(v) = exp(−N_v σ_tot d_n) is the true neutron transmission from Eq. 2 with σ_tot ∝ 1/v [17] in the liquid phase, I_0 = M(v) is the known Maxwellian spectrum of the neutron source PF2-EDM [31], and G(v) is the chopper's resolution function, represented by a normalized Gaussian function with σ_G the standard deviation of the Gaussian, which relates to its FWHM by 2√(2 ln 2) σ_G. In our TOF experiment with neutrons, the neutron pulse had a length of τ = 13.6 ms (FWHM) at the position of the chopper. The standard deviation σ_G of the Gaussian is velocity-dependent due to the dispersion of the neutron pulse as it travels along the flight path s_0 = 0.455 m, σ_G(v) = v² τ / (2√(2 ln 2) s_0). (5) To calculate the deconvolution factors D(v) for our experimental datasets I_0 and I(d_n), see Eq. 3, we first convolved both the known unperturbed neutron spectrum I_0 = M(v) of the neutron source and the calculated neutron spectrum behind the sample, I(d_n) = I_0 T_th(v), with G(v), the Gaussian resolution function. The scattering cross section σ_tot = σ_scatt for calculating T_th(v) was taken from the calculation model [18], see Fig. 2. The convolution factors C(v) reproduce the effect of the convolution on both spectra I_0 and I(d_n), and depend on the neutron velocity. They are defined as the ratio of each convolved spectrum to its unconvolved counterpart, C(v) = [X ⊗ G](v)/X(v) with X = I_0 or I(d_n). (6) Since the convolution has a slightly different effect on I_0 and I(d_n) = I_0 T_th(v), the transmission T_th is changed by the convolution as well. Now we can apply the inverse of the convolution factors C(v), namely the deconvolution factor D(v) = C⁻¹(v), to deconvolve the measured spectra and retrieve the corrected transmission (Eq. 7). D(v) is a smooth function with D = 1 around v = 8 m/s, the maximum of the incoming neutron spectrum. The corrected transmission T(v) enters into Eq. 2, from which σ_tot is calculated. In the case of our experiment, the total cross sections of neutrons with a velocity below 9.5 m/s are virtually unaffected by the deconvolution, changing them only by 1% or less. For neutron velocities between 10 and 15 m/s, the deconvolution suppresses σ_tot by 2% to 15%, respectively, due to the large velocity uncertainty as shown by Eq. 5. After deconvolution and calculation of σ_tot, the results were corrected for absorption by deuterium (σ_abs,D = 2.28/v [b·m/s]) [37] and hydrogen impurities (σ_abs,H) in the sample to obtain σ_scatt. Details are described in Ref. [34].
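A schematic numerical sketch of the convolution-factor approach described in this subsection is given below. It assumes a simple Maxwell-like source spectrum, a 1/v model transmission, and a velocity-dependent Gaussian width of the form quoted above; the spectra, the exact form of σ_G(v), and all parameter values are illustrative stand-ins rather than the spectra and resolution actually measured at PF2-EDM.

import numpy as np

tau = 13.6e-3                       # chopper opening FWHM in s
s0 = 0.455                          # flight path in m
v = np.linspace(3.0, 15.0, 600)     # velocity grid in m/s

def sigma_G(vi):
    # assumed velocity spread after the flight path: delta_v ~ v^2 * tau / s0,
    # converted from FWHM to a standard deviation
    return vi**2 * tau / (s0 * 2.0 * np.sqrt(2.0 * np.log(2.0)))

def convolve(spectrum, v):
    # brute-force smoothing with a velocity-dependent Gaussian kernel
    out = np.empty_like(spectrum)
    for i, vi in enumerate(v):
        g = np.exp(-0.5 * ((v - vi) / sigma_G(vi))**2)
        g /= g.sum()
        out[i] = np.dot(g, spectrum)
    return out

M = v**3 * np.exp(-v**2 / 8.0**2)       # placeholder Maxwell-like source spectrum
T_th = np.clip(5.0 / v, None, 1.0)      # placeholder model transmission, roughly 1/v

C_empty = convolve(M, v) / M                    # convolution factor, empty container
C_full = convolve(M * T_th, v) / (M * T_th)     # convolution factor, filled container
D_empty, D_full = 1.0 / C_empty, 1.0 / C_full   # deconvolution factors

# Applied to the measured spectra I_0,meas and I_meas(d_n), the corrected
# transmission would be (D_full * I_meas) / (D_empty * I_0_meas).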
B. Experimental results of scattering cross-section measurements

The UCN scattering cross sections σ_scatt of liquid ortho-deuterium (c_ortho = 0.972) for the temperatures 19.3 K, 20.5 K, 22.0 K, and 23.0 K, corrected for absorption in the medium and for reflection at the sample interfaces, are shown in Fig. 2. They include quasi-elastic contributions arising from the heat conductivity and particle diffusion, as well as one-phonon up-scattering in ortho-deuterium (σ_00). The contributions from para-deuterium impurities, which play a negligible role due to the low para-D2 concentration, are quasi-elastic and inelastic scattering (σ_11) as well as up-scattering due to the rotational relaxation of para-deuterium (σ_10). The scattering cross sections obtained from the measurement and after corrections are composed of the following constituents:

$\sigma_{\mathrm{scatt}} = c_{\mathrm{ortho}}\,\sigma_{00} + c_{\mathrm{para}}\,(\sigma_{10} + \sigma_{11}), \qquad (8)$

where c_ortho and c_para are the ortho- and para-deuterium concentrations in the sample and σ_JJ′ are the deuterium molecular scattering cross sections for the respective rotational states of the deuterium molecule before (J) and after (J′) scattering [18,38].

[Fig. 2 caption: The calculation model of Ref. [18] was evaluated at the same temperatures as in this experiment using c_ortho = 0.972 (solid colored lines) and the self-diffusion coefficient of liquid deuterium from O'Reilly and Peterson [39]. The temperature uncertainty in the experiment was ΔT = ±0.25 K. The total cross-section data of Atchison et al. [15] (PSI 2005, yellow stars) are shown for comparison and match both our measured data and the calculation model well. Our scattering cross sections were corrected for absorption on H2 and D2, the Atchison et al. data were not; this correction falls within the error bars and is thus negligible.]

C. Comparison with model calculations and previous experimental results

The experimental data presented in Fig. 2 agree very well with the values obtained for the same four temperatures using, for the double-differential scattering cross section d²σ/dΩdE, the calculation model published by Döge et al. [18]. It calculates the coherent and incoherent scattering contributions in full according to

$\dfrac{d^2\sigma}{d\Omega\,dE} = \dfrac{k_f}{k_i}\left[\dfrac{\sigma_{\mathrm{coh}}}{4\pi}\,S_{\mathrm{coh}}(q,\omega) + \dfrac{\sigma_{\mathrm{inc}}}{4\pi}\,S_{\mathrm{inc}}(q,\omega)\right], \qquad (9)$

where k_i is the wave vector of the incoming neutron, k_f that of the scattered neutron, σ_coh,inc are the coherent and incoherent molecular scattering cross sections, and S_coh,inc(q, ω) are the coherent and incoherent scattering laws, respectively. Both the experimental and the calculated scattering cross sections are inversely proportional to the neutron's velocity. It must be noted that instead of the self-diffusion coefficient based on unpublished data, which was used in Ref. [18], the experimentally determined values from O'Reilly and Peterson [39] were now used in this model. Quantum centroid molecular dynamics simulations by Guarini et al. [25] confirm these values, e.g., D_s = 3.5 × 10⁻⁵ cm²/s [26] and D_s = (3.7 ± 0.4) × 10⁻⁵ cm²/s [39], both for T = 20 K. The values of the other thermophysical properties of deuterium, upon which the calculation model relies, were taken from Souers [40]. The experimental data on liquid deuterium published here should be considered an improvement on the data published in Refs. [18,41], which showed a large temperature gradient across the sample container and were, in retrospect, probably taken at a higher temperature than the average between the two measurement points in the sample container. Our scattering cross sections shown in Fig. 2 were corrected for absorption on H2 and D2, and the Atchison et al. data (PSI 2005) were not.
This correction falls within the error bars and does not impair the comparability of both datasets. V. DISCUSSION Considering the high degree of agreement between the experimentally measured scattering cross sections and the calculated ones at four different temperatures covering the entire liquid phase of deuterium, we can confidently say that our calculation model proved accurate. The agreement between our data for 19.3 K (taken in a sample container with low-roughness windows) and the data from Atchison et al. for 19 K (taken in a sample container with machined aluminum windows that had a high surface roughness) implies that sample containers with rough surfaces cause only negligible errors in the measurement of liquid samples. This is in stark contrast with their adverse effect on solid samples [34]. In our previous paper on liquid deuterium [18], we concluded that the incoherent approximation [42] was valid for ultracold-neutron scattering in liquid deuterium. After recalculating the scattering model from Ref. [18] for both coherent and incoherent scattering contributions, see Eq. 9, using the diffusion coefficient from O'Reilly and Peterson, we cannot uphold this conclusion. In fact, the incoherent approximation overestimates the scattering cross sections of ortho-deuterium by a factor of 3 and that of para-deuterium by a factor of 4 over the entire temperature range of the liquid phase. These factors are comparable to the overestimation that the incoherent approximation delivers for ultracold-neutron scattering in solid ortho-deuterium [43]. VI. CONCLUSION We have provided new experimental scattering cross section data of liquid ortho-deuterium for ultracold neutrons (UCNs) at four different temperatures. Our results for 19.3 K are very similar to those obtained by Atchison et al. at 19 K [15]. This implies that sample containers with rough surfaces cause only negligible errors when measuring liquid samples, whereas they have detrimental effects on solid samples. Furthermore, our experimental data show a high degree of agreement with -and therefore confirm -a calculation model published by Döge et al. [18], which has no free parameters and uses only the material properties of deuterium. One key variable in this calculation model is the self-diffusion constant of liquid deuterium, for which we took the measured value from O'Reilly and Peterson [39]. This value was confirmed by quantum centroid molecular dynamics simulations by Guarini et al. [25]. Both experimental and calculated scattering cross sections are inversely proportional to the neutron's velocity. We conclude that the calculation model of Döge et al. is suitable for calculating the scattering cross sections of liquid deuterium in the ultracold and very cold-neutron ranges for arbitrary ortho and para concentrations, and for all temperatures of the liquid phase of deuterium. It can, therefore, be used to model scattering kernels S(α,β) closer to the underlying physics of liquid deuterium and to extend existing kernels, e.g. from Bernnat [44], to the low-energy UCN range. In addition, we have shown that, applying scattering kernels, the incoherent approximation cannot be used to calculate the coherent scattering cross sections in the UCN range. The oft applied simplification called the incoherent approximation, which neglects interference effects in the neutron-scattering process, overestimates the ultracoldneutron scattering cross sections of liquid deuterium by a factor of 3-4. 
This is in line with our previous calculations and experiments on solid deuterium [34,43]. The results for this paper were produced as part of the Ph.D. thesis of Stefan Döge [34].
4,726.4
2022-08-08T00:00:00.000
[ "Physics" ]
Prevalence, species distribution, and risk factors of fungal colonization and infection in patients at a burn intensive care unit in Vietnam Background and Purpose: Burn patients are at a higher risk of infections caused by different organisms. This study aimed to address the prevalence, causative species, and factors related to fungal colonization or infection in patients with acute severe injuries admitted to the intensive care unit (ICU) of a burn hospital in northern Vietnam. Materials and Methods: This prospective study was conducted on 400 patients in a burn ICU between 2017 and 2019. Clinical samples were collected weekly and screened for fungi, and relevant clinical information was obtained from medical records. Results: According to the results, 90% of the patients were colonized with fungi. Overall, 12.75% of the patients had invasive fungal infection (IFI). Eleven yeast and six mold species were isolated from the patients, with the most common species being Candida tropicalis (45.56%) and C. albicans (41.94%). Among the eleven species causing fungal wound infection (FWI), the most common agents were Candida (66.7% of FWI patients) and Aspergillus (38.5%) species. Three Candida species isolated from blood were C. tropicalis (66.7%), C. albicans (20.0%), and C. parapsilosis (14.3%). No factors were found to expose the patients to a higher risk of fungal colonization. However, hyperglycemia, prolonged ICU stay, and heavy Candida species colonization were found to be independently predictive of IFI. Conclusion: Burn patients are at risk of fungal infection, with Candida species (especially C. tropicalis) and Aspergillus as the most frequently responsible agents. Continuous surveillance of fungi and appropriate management of pathophysiological consequences are essential to prevent fungal infection in burn patients.

Introduction

Burn injuries are among the most common and detrimental types of all injuries and account for 180,000 deaths annually [1]. The leading cause of mortality among burned patients is infection (75%) [2]. Burn wounds provide an ideal port of entry for organisms and induce substantial immune dysfunction [3]. Accordingly, burn patients are at a higher risk of developing infections, compared with other hospitalized patients [4]. The most common agents of infection among burn patients are bacteria and fungi [2,5]. Owing to the better care of burn patients and the availability of effective antibiotics, there is a decreasing trend in infections due to drug-sensitive bacteria. However, the number of infections caused by multi-drug resistant bacteria or fungi is increasing [5,6]. Fungal infections usually occur after a period of colonization with fungal strains, either from the hospital environment or from the patients themselves [7][8][9][10]. Effective prevention and treatment of fungal infection should be based on updated and local epidemiological data, because the prevalence, species distribution, and factors related to colonization or infection vary from time to time and place to place [11]. Vietnam is a developing country, and its population is therefore at a higher risk of burn injuries than that of developed countries [12]. However, no data on fungal colonization and infection among burn patients in Vietnam have been published.
Regarding this, the aim of the present study was to address the prevalence, causative species, and factors related to fungal colonization or infection in patients with acute severe injuries admitted to the intensive care unit (ICU) of a burn hospital in northern Vietnam. Ethics statement The study was approved by the Ethics Committee of Vietnam National Institute of Malariology, Parasitology and Entomology (Decision number 1172/QD-VSR, 28 th October 2016). Written informed consent was obtained for participation in the study. In cases where patients were not able to give consent themselves, the consent was obtained from their legal representatives. Patients and data collection The current investigation was conducted in the National Hospital of Burn (NHB), a teaching hospital of Vietnam Military Medical University (VMMU) during January 2017 and December 2019. Agent isolation and species identification were performed in the laboratories of the Department of Microbiology, NHB and Department of Parasitology, VMMU. All patients in the ICU of NHB during 2017-2019 who agreed to participate in this prospective study were enrolled in the research. The inclusion criteria were admission within 7 days of injury and presence of a thermal burn. Burn injuries were managed at the NHB according to standard care for severe burn patients, such as aggressive resuscitation, use of silver sulfadiazine cream, wet wound dressing, surgical excision, and empirical antibiotic therapy. The data were obtained from inpatient charts and electronic medical records using a standardized case report form. Data collection was performed by senior clinicians who were blind to the study purpose and design. The Wallace rule was used to estimate the percentage of the total body surface area burned (% TBSA) and full-thickness surface area of patients [13]. The acute physiology and chronic health evaluation (APACHE) was used upon ICU admission to assess the injury severity for each patient [14]. Fungal surveillance Non-sterile samples (i.e., mouth swabs, rectum swabs/feces, wound swabs, urine, and tracheal aspirates) were collected weekly to detect the presence of microorganisms as part of the hospital standard of care. Wound biopsy and blood were taken in cases with clinically suspected fungal wound infection (FWI) (presence of pus, increased exudation, leathery appearance, change in wound color, inflammation, or graft lysis) or fungemia [2,15]. The non-sterile samples were taken by two ready-made sterile cotton swabs. Blood was collected from a clean, unburnt site following routine protocols. Furthermore, biopsy (a 0.5×0.5-cm full-thickness sample) was taken from the most apparently infected site of the wound up to the healthy layer and then transferred to the laboratory in normal saline and 10% formaldehyde solution. Fluids and secretions were microscopically examined after being processed by 10% potassium hydroxide. The biopsy samples fixed in 10% formaldehyde solution were sent to the pathology laboratory for hematoxylin and eosin (H&E) and periodic acid shift (PAS) staining, followed by an examination by pathologists. Non-sterile and tissue samples were cultured onto Sabouraud dextrose agar (SDA, BioMerieux, France) and slants containing gentamicin (0.02 mg/mL) to test the fungal growth. Blood samples were placed in the Bactec (BD Diagnostic Systems, Sparks, MD) system and daily checked for fungal growth. 
The identification of the yeast isolates was accomplished by examining colony features on Brilliance Candida Agar (Oxoid) and by using the Vitek 2 Compact system (bioMérieux, France) following the manufacturer's instructions. Molds were identified based on morphological characteristics with the help of commonly used keys. The strains isolated from the sterile samples were further reconfirmed by sequencing the internal transcribed spacer (ITS) regions, and the obtained nucleotides were compared to database sequences in GenBank [16].

Definitions

Fungal colonization (FC) was defined as the isolation of fungi from a non-sterile body site, while invasive fungal infection (IFI) referred to the presence of fungal elements in sterile sites (wound biopsy or blood) [17]. The colonization index (CI) was defined as the ratio of the number of distinct non-sterile samples colonized by Candida to the total number of body sites cultured; accordingly, patients with a CI of ≥ 0.5 were considered heavily colonized [18]. Adults (aged 16-65 years) with a TBSA of ≥ 50% or an FTBS of ≥ 10% were considered severely burned. Patients aged < 16 or > 65 years with a TBSA of ≥ 30% or an FTBS of ≥ 5% were also defined as severe burn patients. All patients with respiratory burns or burns associated with multiple injuries were likewise considered to have a severe injury. The diagnosis of renal failure was based on variations in creatinine and urine output according to the risk, injury, failure, loss of function, and end-stage renal disease criteria [19]. In addition, patients with a fasting glucose level of > 126 mg/dl were considered to have hyperglycemia. The diagnosis of severe infection was based on the international guidelines for the treatment of sepsis and septic shock [20]. Patients who stayed in the ICU for more than 14 days were regarded as having a prolonged ICU stay [21].

Statistical analysis

The data were analyzed in SPSS software, version 16.0. Continuous variables were reported as means and standard deviations and compared by Student's t-test. Categorical variables were expressed as case numbers (n) and percentages and compared by the Chi-square test or Fisher's exact test, as appropriate. Univariate analysis (linear regression) was used to investigate the possible correlations of independent variables with the dependent ones (FC or IFI). Factors with a p-value of < 0.05 in univariate analysis were included in the multivariate model. Odds ratios (OR) and corresponding 95% confidence intervals (95% CI) were calculated. Significance was set at a p-value of 0.05 (two-tailed).

Results

During the study period, 400 patients with a mean age of 29.7 years (range: 1-94) were included in the research. The two most affected age groups were 1-10 (25%) and 31-40 (23.0%) years. With a male-to-female ratio of 3.5:1, males were more affected than females. A total of 69 patients had inhalation injury, and 78 patients died during their ICU stay (Table 1). All infected patients were also colonized with fungi; accordingly, 360 (90%) patients were diagnosed with FC. Among the 51 (12.75%) patients diagnosed with IFI, there were 3 patients with isolations from both blood culture and tissue biopsy (Table 2). The frequency of fungal species isolated from burned patients is listed in Table 3. Among the 11 identified yeast species, C. tropicalis, C. albicans, and C. parapsilosis were the most prevalent ones, accounting for 94.6% of all isolates.
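A minimal sketch of the Candida colonization-index bookkeeping defined above (distinct colonized non-sterile sites divided by sites cultured, with CI ≥ 0.5 flagged as heavy colonization); the site names and results below are invented for illustration.

```python
# Hypothetical weekly culture results for one patient (site -> Candida grown?).
cultures = {
    "mouth swab": True,
    "rectal swab": True,
    "wound swab": True,
    "urine": False,
    "tracheal aspirate": False,
}

colonized_sites = sum(cultures.values())
colonization_index = colonized_sites / len(cultures)   # CI as defined in the text
heavily_colonized = colonization_index >= 0.5          # threshold used for "heavy" colonization

print(f"CI = {colonization_index:.2f}, heavy colonization: {heavily_colonized}")
# CI = 0.60, heavy colonization: True
```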
Some representative ITS2 sequences of the isolates (including C. tropicalis MN067757 and C. albicans) were submitted to GenBank. There were 28 patients colonized or infected with molds, with Aspergillus species as the predominant agent (27/28), followed by Fusarium. The most common Aspergillus species was A. fumigatus (39.29%; Table 4). The frequency of fungal species responsible for IFI is listed in Table 5. The univariate analysis showed that severe infection, prolonged ICU stay, hyperglycemia, dialysis, and parenteral nutrition were significantly correlated with FC. However, in multivariate analysis, no factor was found to expose the patients to a higher risk of FC (Table 6). In contrast, several factors were related to IFI in the univariate analysis, and the multivariate analysis revealed hyperglycemia, prolonged ICU stay, and heavy Candida species colonization as the independent predictors of IFI (Table 7). Figure 1 shows the positive results of the specimens cultured within weekly intervals postburn. The frequency of IFI steadily increased during the hospital stay, while the prevalence of FC was nearly unchanged over the study period.

Discussion

This study examined the prevalence and related factors of fungal colonization/infection to provide guidance on the prevention and treatment of fungal infection in ICU burn patients. To the best of our knowledge, this is the first study to assess the risk factors for fungal colonization/infection in burn patients in an ICU in Vietnam. Our study revealed that ICU burn patients have several risk factors for IFI, which is notable for this vulnerable group of patients. In the present study, the diagnosis of fungal colonization or infection was based on the detection of fungi in clinical samples by microscopic examination or culture. Microscopic potassium hydroxide preparation was used as it facilitates the visualization of fungal elements during direct microscopic examination. H&E staining is the most common type of histopathological examination. Furthermore, PAS staining is useful for screening fungi in tissue sections as it binds to carbohydrate-rich macromolecules commonly found in fungal cell walls. Culture remains one of the key methods for diagnosing fungal infection, and in cases of suspected fungemia, blood culture is currently considered the "gold standard" [22]. The other benefit of culture is the identification of the causative agents by conventional or molecular methods. In the present study, the definitive diagnosis of IFI was based on histological and culture evidence from biopsies or blood, as required [23]. The results showed that males (77.75%) were more affected by burn injury than females; moreover, the most common age group involved was 1-10 years. These findings are in concordance with previous reports from Vietnam [24], as well as from China, a neighboring developing country [25,26]. The mortality rate in the study was low (19.5%) and comparable to the value reported by Mundhada et al. (18%) [27], indicating the decreasing trend of burn-related mortality in various regions [28]. Our findings showed that FC among the ICU burn patients was very common (90%), with 100% of the colonized patients having Candida isolation. The high rate of Candida colonization among those in the ICU is in accordance with the data obtained by Caggiano et al. (2011), who reported colonization with Candida species in 92.3% of patients 15 days after ICU admission [29].
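For reference, the kind of multivariate analysis used above to obtain the adjusted odds ratios of Table 7 can be sketched as a logistic regression; the data frame below is entirely synthetic and the coefficients are illustrative, not the study's actual dataset or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "hyperglycemia":      rng.integers(0, 2, n),
    "prolonged_icu_stay": rng.integers(0, 2, n),
    "heavy_colonization": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the three predictors, for illustration only.
lin = -3 + 1.2 * df["hyperglycemia"] + 1.0 * df["prolonged_icu_stay"] + 1.5 * df["heavy_colonization"]
df["ifi"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(df[["hyperglycemia", "prolonged_icu_stay", "heavy_colonization"]])
fit = sm.Logit(df["ifi"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)     # adjusted ORs
ci = np.exp(fit.conf_int())          # 95% confidence intervals
print(pd.concat([odds_ratios.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```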
The very high rates of colonization by Candida species expose the patients to a high risk of IFI because the responsible agents for invasive candidiasis were mainly endogenous in origin [7], [30]. The FWI rate obtained in the present study is comparable to those reported by Capoor [31]. In the current research, The prevalence of fungemia was 3.75%, which is in line with previous results [32][33][34]. This finding is indicative of the higher risk of fungemia in ICU burn patients, compared to that in the patients residing in other types of ICUs [35,36]. There is also a study reporting a higher incidence of fungemia in ICU burn patients (11%) [37]. In the present study, all colonized patients were found to be colonized with yeast species. This is comparable to previous reports describing the frequency of yeast (mostly Candida species) in burn patients [38][39][40]. It should be noted that the isolation rates of species belonging to CTG clade (C. glabrata and C. krusei) that were usually resistant to fluconazole were low (1.5% and 0.5%, respectively). This finding is in accordance with a previous study performed on Vietnamese patients [41]. In the present study, the results also revealed some other rare yeasts, such as C. famata, C. duobushaemulonii, and Pichia (Kodamaea) ohmeri. Those species have gained importance nowadays due to the emergence of nosocomial pathogens and their rapidly increased resistance to antifungal agents [42][43][44][45][46]. In line with the previous studies [40,47], the rate of colonization caused by mold was low (7%). Candida tropicalis was the predominant agent responsible for both FWI and fungemia. In Vietnam, there are no prior data regarding the prevalence of IFI in ICU patients for comparative purpose. However, our results are in line with those of previous research with C. tropicalis being the most common agent causing Curr Med Mycol, 2020, 6(3): 42-49 47 candidaemia (50.5%) [41]. In concordance with the data reported in an Iranian study [31], in the present research, more than 40% of FWIs were caused by molds. The high rate of FWI caused by mold is disproportionate to the low prevalence of mold colonization among burn patients (7%). This signifies the importance of measures to protect the wound from mold colonization [48]. The source of mold infection is mostly exogenous when conidia as filamentous fungi fall onto the wound from the surrounding air and invade the wound immediately [8]. The most common mold was Aspergillus species (15/16 patients). This may demonstrate the strong virulence of Aspergillus among many other molds [49]. It is worth noting that there was an FWI case infected with F. solani. Although invasive infection caused by Fusarium is rare, it has caught much attention for its angioinvasive property, resistance to conventional antifungal therapy [50], and high mortality rate [51]. Consistent with other reports [29,52,53], the univariate analysis revealed severe infection, prolonged ICU stay, hyperglycemia, dialysis, and parenteral nutrition as the significant predictors of FC. However, based on the multivariate analysis, there were no factors predicting FC in burn patients. In our opinion, this is reasonable because most of the colonized patients had isolation upon the ICU admission, and the rate of FC was nearly steady over the period of the patients' ICU stay (Figure 1). Hedderwick et al. 
(2000) also found that most of the colonized patients were already colonized with yeast upon admission to the ICU, and only some of them became colonized after admission [54]. The results of our study showed that hyperglycemia, prolonged ICU stay, and heavy Candida species colonization were independently predictive of IFI. Hyperglycemia during acute hospitalization is a common pathophysiological phenomenon among severely burned patients [55]. Hyperglycemia is an adaptive response but could lead to some adverse outcomes, especially with increased risk of infections, including fungal infection [5,[56][57][58][59][60][61]. In the current study, hyperglycemia was observed in 42.75% of the patients. Moreover, those with hyperglycemia were at 3.984 times higher (adjusted) risk of IFI, compared with those without hyperglycemia. Multiple site colonization is a wellknown risk factor for invasive fungal infection and has been mentioned in many established Candida Prediction Rules [57,[62][63][64][65]. In line with the results of previous studies [53,66,67], prolonged ICU stay was also found to be an important factor; in this regard, there was a clear rise in the infection rate from the second week (Figure 1) of hospital stay. The other factors found to be related to IFI in univariate analysis included severe injury, severe infection, renal failure, hemodialysis, total parenteral nutrition, mechanical ventilation, catheter, and immunosuppressive therapy. However, these factors were not significant in multivariate analysis. Although these factors present in some Candida Prediction Rules, their impact may not be significant as they are absent in some other rules [57,[62][63][64][65]. Therefore, their contribution as confounding factors in this study was rational. In the present research, high APACHE II score was not associated with the increased risk of IFI, which is in line with the findings reported by Escrig AIR et al. (2016) [68]. These findings are significant for clinicians to take timely precaution and prophylactic practices against fungal infections for high-risk patients. Conclusion In conclusion, severely burned patients are at a risk of fungal infection mainly because of the pathophysiological consequences of injury and longer hospital stay. The most common agent was found to be Candida species (especially C. tropicalis), followed by Aspergillus. To ensure early and appropriate management measures, it is required to perform close monitoring on the presence of microbial agents in burn patients. It is also needed to well control blood glucose concentration in burn patients, especially in those staying in ICU for a long period. Conflicts of interest The authors declare no conflicts of interest regarding the publication of this paper. Financial disclosure This work was partially supported by the Vietnam Ministry of National Defence (grant No. 2017.75.054, to TAL). The funding agency does not have any role in data collection, analysis, and writing of the manuscript . This research is part of a thesis submited in partial fulfilment of the requirement for a Ph.D. degree in Literature and Philosophy in Health at the National Institute of Malaria, Parasitology, and Entomology of Vietnam.
4,393.2
2020-09-01T00:00:00.000
[ "Medicine", "Biology" ]
Synthesis, α-glucosidase and α-amylase inhibitory activities, acute toxicity and molecular docking studies of thiazolidine-2,4-diones derivatives Abstract In the present study, a series of thiazolidine-2,4-diones derivatives (3a–3e) and (4a–4e) were synthesized and characterized by 1H NMR, 13C NMR and ESI-MS spectrometry. All compounds were screened for their α-glucosidase and α-amylase inhibitory activities. In vitro biological investigations revealed that most of compounds were active against α-glucosidase with IC50 values in the range of 43.85 ± 1.06 to 380.10 ± 1.02 µM, and α-amylase with IC50 in the range of 18.19 ± 0.11 to 208.10 ± 1.80 µM. Some of the tested compounds were found to be more potent inhibitors than the clinical drug Acarbose (IC50glucosidase = 97.12 ± 0.35 µM and IC50amylase = 2.97 ± 0.004 μM). The lead compounds were evaluated for their acute toxicity on Swiss mice and found to be completely non-toxic with LD > 2000 mg/kg BW. Furthermore, the Structure–activity relationship (SAR) and the binding interactions of all compounds with the active site of α-glucosidase and α-amylase were confirmed through molecular docking and stabilizing energy calculations. This study has identified the inhibitory potential a new class of synthesized thiazolidine-2,4-diones in controlling both hyperglycemia and type 2 diabetes mellitus. Furthermore, the theoretical binding mode of the target molecules was evaluated by molecular docking studies against the 3D Crystal Structure of human pancreatic α-amylase (PDB ID: 1B2Y) and α-glucosidase (PDB ID: 3W37) Communicated by Ramaswamy H. Sarma Introduction Over 90% of diabetic patients suffer from type 2 diabetes mellitus (T2DM), which is a multifactorial metabolic disorder, characterized by an uncontrolled chronic hyperglycemia, related to failure of insulin function (insulin resistance) over extended periods in peripheral tissues (Hameed et al., 2015). This chronic high blood glucose level leads to severe complications such as cardiovascular diseases, neuropathy, kidney failure and obesity (Abbas et al., 2019;Hanefeld et al., 1996). This makes it necessary to maintain a normal blood glucose level, especially postprandial hyperglycemia, as a solution for the control of this pathology and prevent their complications. One of the approaches to manage diabetes is to enhance the functionality of pancreatic a-amylase and intestinal a-glucosidase (Joshi et al., 2015;Rafique et al., 2020), which are two enzymes positioned in the digestive system. These enzymes are responsible for the degradation process of starch and oligosaccharides respectively into monosaccharides such as glucose and fructose (Joshi et al., 2015). In this regard, inhibition of a-amylase and a-glucosidase play a critical role in modulating the absorption of carbohydrates and preventing postprandial hyperglycemia, making inhibitors a useful solution in the management of type 2 diabetes (Leroux-Stewart et al., 2014). Thiazolidine-2,4-diones (Glitazones) are an important class of oral antidiabetic agents that have attracted the community attention for their excellent control of glucose increasing effect without causing hypoglycemia (Bloomgarden et al., 2018). 
Due to their flexible and diverse nature, thiazolidinediones (TZDs) show also a wide range of pharmacological activities which include anti-hyperglycemic (Naim et al., 2017), anti-proliferative (Patil et al., 2010), anti-obesity (Bhattarai et al., 2010), anti-microbial (Avupati et al., 2012;Sun et al., 2019) and anti-inflammatory effects (Prabhakar et al., 1998). Furthermore, TZDs derivatives also reported as a-glucosidase inhibition (Chinthala et al., 2013). Despite their diverse therapeutic profile and excellent antidiabetic properties, thiazolidinediones possess some severe side effects, such as weight gain (Fonseca, 2003), plasma-volume expansion, bone related disorders and edema (Berlie et al., 2007;Forman et al., 2000;Okazaki et al., 1999;Willson et al., 2000). One of the marketed glitazone drugs (Figure 1), Troglitazone, has been removed due to its hepatotoxicity (Smith, 2003). In this context, extensive research on TZD has been going around the globe to develop novel or improved derivatives better and safer without compromising the diabetic potential. In a recent publication, our research team has reported work on the design and synthesis of new thiazolidine-2,4-dione derivatives, in search of potential lead compounds, which can demonstrate promising results. The aim of this study consists of the evaluation of the in vitro antidiabetic profile of a library of newly synthesized thiazolidine-2,4-dione derivatives (3a-3e) and (4a-4e) by showing their potential to reduce postprandial blood hyperglycemia by the inhibition of digestive enzymes (a-glucosidase and a-amylase), the molecular docking studies were performed on the two enzymes to predict the mechanism of inhibition and the orientation within a targeted binding site. Chemistry First, we examined the survey of the Knoevenagel condensation reaction by using TZD 1 and benzaldehyde 2a as models adsorbed on mineral support under solvent-free MW conditions. All minerals solid supports used are commercially available. These conditions, which are collected in Table 1, included nature of solid support, reaction time and temperature. Therefore, we found that the reaction carried out in basic alumina irradiated in a microwave digestion system in open vessel afforded the desired product 3a as a single (Z)diastereoisomer in high yield 95% yield at 108 C for short time (5 min) as compared with reaction carried out under classical synthetic conditions (70%, 5 h) ( Table 1, entries 4 and 9). However, the use of the use of closed vessel microwave assisted Knoevenagel reaction has not favored the reaction for 10 min at 110 C. Reversible dehydration condensation under pressure is in principle a difficult transformation because the amount of water removed from the product in a closed system pushes the equilibrium in favor of the hydrated compounds (Brahmachari, 2016;Murase et al., 2012). Moreover, the condensation reaction realized without support or by using of calcined alumina or K10 bentonite clay, did not work under microwave irradiation even at same temperature and longer reaction time (entries 2, 4 and 8). The condensation reaction has been improved also on acidic alumina and silica (entries 3 and 6), and the reaction requires a few minutes with improved yields using MW. 
It is interesting to note that the reactivity increases remarkably in the presence of hydroxylated supports, we observed a significant effect leading to an increase of yield of 3a in a short reaction time (5-10 min) (Table 2, entries 5-7) The involvement of surface hydroxyl groups on no calcined alumina, and silica supports plays a determinant role in the chemical activation of methylene group condensation with aromatic aldehydes (Khalafinezhad et al., 2001;Kwon et al., 1997;Syassi et al., 1997). After optimization of the reaction conditions, this methodology was extended for the synthesis of 5-arylidenethiazolidine-2,4-dione via Knoevenagel condensation reaction between TZD 1 and various aromatic aldehydes. It was found that basic Al 2 O 3 catalyzes the synthesis of 5-arylidenethiazolidine-2,4-diones (3a-3e) in an efficient manner and excellent yields (90-95%) of the desired products were obtained (Table 2, entries 1-5).The synthesized products (3a-3e) were characterized by 1 H NMR, 13 C NMR and ESI-MS, and melting point, which were in accordance with the literature data Zhang & Zhou, 2012). Then, the compounds 3a-3e were submitted to N-allylation with NaOH as base in EtOH/H 2 O (v/v, 2:1), the desired products 4a-4e were obtained in 49-68% yields at 75 C for 5 to 6 h ( Table 3). a-Glucosidase inhibition In an ongoing project which is aimed to develop new enzyme inhibitors, all newly synthesized thiazolidine-2,4dione derivatives (3a-3e) and (4a-4e) were screened for their in vitro a-glucosidase inhibitory potential. The inhibitory effects were plotted against the log of concentrations using non-linear regression curve approach from which IC 50 values were calculated. The all tested compounds exhibited a good to moderate inhibitory activity with IC 50 values between 43.85 ± 1.06 and 380.10 ± 1.02 mM, when compared to standard inhibitor acarbose (IC 50 ¼ 97.12 ± 0.35 mM) ( Table 4). Five compounds 3a, 3e, 4a, 4d and 4e were found to be the most potent inhibitors than the rest of tested compounds, with IC 50 values of 98.88 ± 1.11, 84.95 ± 1.01, 98.45 ± 0.54, 101.9 ± 0.43 and 40.67 ± 1.81, respectively, in comparison with the standard drug acarbose. Furthermore, Compounds 3b, 3c, 3d, 4b and 4c showed moderate inhibitory activity with IC 50 values of 380.10 ± 1.11, 339.70 ± 1.02, 156.3 ± 7.93, 120.8 ± 1.10, 207.9 ± 4.76 and 211.6 ± 3.25. Based on these results, and to develop the structure-activity relationship (SAR) study, we have divided the synthesized molecules in two groups; Compounds (3a-3e), with different substitutions at variable positions of benzene and (4a-4e) obtained by the N-allylation of compounds (3a-3e). The general structural features of the synthetic molecule comprise of thiazolidine-2,4-dione (TZD) ring, aryl ring and allyl group. All these parts are playing important role in the activity, however, a slight variation in the structure of the synthesized molecules such as the position and the nature of the substituents on the aryl or the N-allylation of TZD could make a great variation in the a-glucosidase inhibitory activity. In fact, within the inhibitory activity of 5-arylidenethiazolidine-2,4-dione derivatives (3a-3e), compound 3a (IC 50 ¼ 98.88 ± 1.11 mM) bearing benzene ring without any substitution displayed good activity among the serie studied. Compound 3d (IC 50 ¼ 156.3 ± 7.93 mM) bearing bromo substituent in para position of phenyl ring showed moderate activity. 
Additionally, the insertion of a methyl or fluoro group at the para position afforded compounds 3b (IC50 = 380.10 ± 1.11 µM) and 3c (IC50 = 339.70 ± 1.02 µM), and resulted in a sharp decrease in inhibitory activity against the α-glucosidase enzyme. However, compound 3e (IC50 = 84.95 ± 1.01 µM), substituted with chloro groups in the ortho and para positions, showed the best inhibitory activity of this series, surpassing compound 3a and acarbose. In general, the presence of electron-releasing (Me) or electron-withdrawing (Br or F) substituents on the phenyl ring of the 5-arylidenethiazolidine-2,4-dione moiety reduced the inhibitory activity as compared with the unsubstituted phenyl (compound 3a), and there was no clear difference between them. In fact, several compounds characterized by substituents with opposite electronic effects showed the same potency. For example, compound 3b, which contains the electron-releasing methyl group, showed activity against the α-glucosidase enzyme similar to that of product 3c, which contains the electron-withdrawing fluoro group. In the series of N-allyl-5-arylidene-thiazolidine-2,4-dione derivatives (4a-4e), compound 4e (IC50 = 43.85 ± 1.81 µM), bearing two electron-withdrawing chlorine atoms at the ortho and para positions of the phenyl ring, exhibited the highest inhibitory potential, in comparison with the non-allylated analogs and acarbose. In fact, compound 4e, obtained by N-allylation of 3e, proved to be 2-fold more active than the corresponding analog 3e. Compounds 4b (IC50 = 120.8 ± 1.10 µM), 4c (IC50 = 211.6 ± 3.25 µM) and 4d (IC50 = 101.9 ± 0.43 µM), obtained by N-allylation of 3b, 3c and 3d, showed a considerable increase in inhibitory activity compared to their non-allylated analogs. For example, compound 4b, obtained by N-allylation of 3b, proved to be 3-fold more active than the N-unallylated analog 3b. Therefore, it can be concluded that the allyl group on N-3 of the thiazolidine-2,4-dione system plays a critical role in the binding of compounds (4a-4e) to the active site of the α-glucosidase enzyme, reducing the IC50 values by up to three-fold in comparison with the N-unallylated compounds (3a-3e).

α-Amylase inhibition

In the pursuit of new anti-diabetic compounds acting through enzymatic inhibition, we screened our thiazolidine-2,4-dione compounds (3a-3e) and (4a-4e) for human pancreatic α-amylase inhibitory activity. The inhibitory concentrations of the synthesized inhibitors were determined using acarbose as a standard inhibitor, with an IC50 value of 2.975 ± 0.004 µM. The bioactivity results are presented in Table 4. Among the evaluated molecules, compounds 4e and 3e proved to be the most active analogs of the series, with strong inhibitory effects and IC50 values of 18.19 ± 0.103 µM and 47.09 ± 0.037 µM, respectively. The selected substituents inserted at different positions of the phenyl ring were found to produce opposite effects on the α-amylase inhibitory activity of the two series of compounds (3a-3e) and (4a-4e).
In fact, within the inhibitory activity of 5-arylidenethiazolidine-2,4-dione derivatives (3a-3e), the effects triggered by the introduction of a methyl (3b, IC 50 group in the para position of the phenyl ring proved to be indifferent or negligible for the affinity with a-amylase in comparison with the corresponding unsubstituted derivative 3a (IC 50 ¼ 126.67 ± 2.10 mM). These results confirm clearly that the presence of electron releasing substituents (Me) or electron attracting substituents (Br or F) on phenyl ring reduced the inhibitory activity compared to unsubstituted phenyl, in good agreement with the results found in the a-glucosidase test. On the other hand, the presence of two chloro groups in the ortho and para position (compound 3e) has been found to induce a clear advantageous effect reducing the IC 50 value to 47.09 ± 0.04 mM. Our results demonstrate that the substitution of the phenyl ring by two chloro groups in the ortho and para position and N-allylation of thaizolidine-2,4-dione ring are very closely related to the ability of these compounds to inhibit a-glucosidase and a-amylase enzymes. Molecular docking studies To understand the observed activities concerning the different molecular structures of inhibitor tested, the molecular docking studies were carried out to shed light on the binding modes between docked ligands and enzyme targets (Hafidi et al., 2021). According to the molecular structures of the ligands (Figure 2), they can be divided into two series with a common carbon skeleton (mentioned in blue color), the change of the organic group (R ¼ CH 3 , F, Br and Cl) it will be the challenges of the discussion below. The likeliest docked poses of ligands with the best binding affinity for ligands complexes in the active site of the targeted enzyme a-amylase and a-glucosidase are shown in Figures 3 and 4. The docking results of the synthesized compounds with the enzyme's targets have given good information about the nature of the binding mode. The analysis of the binding site revealed that the synthesized ligands are stabilized by several favorable interactions including polar, hydrogen bond, hydrophobic and electrostatic interactions (Tables S1-S4). It was observed that all the compounds exhibited high free energy binding between À6 and À8 kcal/mol. For the reference drug Acarbose, we can see from Table 5 that is obvious from the docking results in that the high activity of the reference drug acarbose compared to the others studied compounds is due mainly to the stability of the complex acarbose with target enzyme and the high number of hydrogen bonds formed between the acarbose and the active residues. 2.3.1. Against a-amylase enzyme target (PDB ¼ 1B2Y) We can see from Table S1 that the 3a, 3 b, 3c, 3d and 3e compounds of the series 1 present the same categories of interaction (hydrogen and hydrophobic interactions) with the same residues: ARG195, HIS299, ASP300, LEU165 and, TYR62. The number of interactions brought into contact between the inhibitors and the residues shows a dependence on the molecular structure. The two additional hydrophobic interactions Pi-Alkyl type have been observed against TRP95 in the 3b interaction mode which essentially due to the existence in its molecular structure of a methyl group (ÀCH 3 ). 
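As noted earlier, the IC50 values discussed in these two sections were obtained by non-linear regression of the inhibitory effect against the log of inhibitor concentration. A minimal sketch of such a fit is given below; the concentrations and responses are invented for illustration, and the four-parameter logistic form is one common choice, not necessarily the exact model used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented dose-response data: percent inhibition vs inhibitor concentration (micromolar).
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
inhibition = np.array([5, 12, 28, 48, 68, 84, 92], dtype=float)

def four_pl(log_c, bottom, top, log_ic50, hill):
    """Four-parameter logistic curve on a log10 concentration axis."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_ic50 - log_c) * hill))

log_c = np.log10(conc)
p0 = [0.0, 100.0, np.log10(30.0), 1.0]              # rough starting guesses
popt, pcov = curve_fit(four_pl, log_c, inhibition, p0=p0)

ic50 = 10 ** popt[2]
print(f"Fitted IC50 ≈ {ic50:.1f} µM (Hill slope {popt[3]:.2f})")
```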
The halogen grouping (Fluorine) does not present any modification concerning the number and the type of interaction, in the structure of 3c, the fluorine group affects just the interaction distance against the same residue presented in the 3a interaction mode. The contribution of the sulfur atom was observed just in the interaction modes of 3d and 3e against residues TRP58, TYR62, HIS299 and ASP197. The only electrostatic interaction has been observed for 3d against ASP300 residue, further, the halogen Br and Cl groups appear in the chemical structure of 3e and 3d participates just in hydrophobic interaction against the following residues TRP59, TYR62, LEU165 and LEU162. The 3d does not present any hydrogen interaction, on the other hand, two Hydrogen interactions have been observed in the mode of interaction of 3d against the residue ASP300 and HIS299. From Table S2, by comparison with 4a molecular structure, the addition of a methyl group (-CH 3 ) in the structure of 4 b affecting a decrease of the participation of the two groups Ethylene (C ¼ C) and Phenyl (Ring(C6)) in the hydrophobic interaction. Moreover, the molecular structure of 4 b promotes the contribution of the sulfur atom in three Pi-sulfur interactions with the residues TRP58, TYR62 and HIS299. Two hydrogen bonds have been observed in the mode of action of 4c, in which the carbonyl (C ¼ O) and H (CH 2 ) are in contribution against ARG195 and GLU233. With comparison to 4 b, the same Pi-sulfur interaction with the same residue was observed in the interaction of 4c. Moreover, just hydrophobic interactions have been observed in the 4d and 4e interaction modes against residues TRP59, ALA198, LEU162 and LEU165. Against a-glucosidase target enzyme (PDB¼ 3W37) From Table S3, all the ligands of series 1 exhibit, in their interaction mode different interactions, hydrogen, hydrophobic, electrostatic and halogen bonding with different residues. The presence of -CH 3 in 3b structures hinder the contribution of the two common active site S (sulfur) and the hydrogen of NH, on the other hand, these two sites represent their activity for the ligands 3a and 3c. The absence of the contribution of the NH site has noted in the 3d and 3e mode of interaction. 3d and 3e exhibit only one hydrogen interaction in which the carbonyl group is in contribution against the residue ARG552. Similar to the 1B2Y enzyme, against a-glucosidase, the contribution of Br and Cl for 3d and 3e is limited only in the hydrophobic interaction. Among series 1, the 3 b is the most one that makes more of the hydrogen interaction with different residue ASP232, ASN496 and SER497. Compared to the 3a, the presence of the methyl group in 3 b molecular structure increases the number of hydrophobic interactions with different residues PHE476, ALA231 and ILE233. The specificity of the existence of Halogen interactions has only been observed for 3c interaction mode against residues ASP357 and ASP469. Different types of interaction have been observed in the interaction modes of the ligands of series 2. According to Table S4, among the ligands of this series, the 4c with a Fluorine group is the only one that contributed to three hydrogen bonding against the residues ARG552 and ASP469. The contribution of the C ¼ C group in hydrophobic interactions was observed in the interaction modes of all ligands. 
For 4d and 4e, their interaction mode exhibits no hydrogen interaction, the presence of Br and Cl in the two structures of 4d and 4e precisely increases their hydrophobic interaction character. The only halogen interaction was observed of 4c ligand against the residues ASP357 and ASP469. According to the behavior of the ligands tested against the two enzymes, we can build a general idea concerning the contribution by which each fragment participated in the interaction modes. From the results presented in Tables S1-S4 of docking results, we note that at the level of the common carbon skeleton between all the ligands (mentioned in blue color Scheme 1), the two reactive rings sites with 6 and 5 carbons atoms participate in the majority of cases in the hydrophobic interactions types, sometimes we can see their contribution in the interaction of Pi-anion types. Between the two series 1 and 2, the substitution of the hydrogen atom of nitrogen by a propene group leads to a decrease of participation of the common carbon skeleton in the hydrogen interactions type, but this replacing group has a very important role to increase the number of hydrophobic interactions against the numerous residues in the two enzyme target. Among the halogen groups (Br, Cl and F) only the fluorine which exhibits the halogen interaction types precisely against the 3W37 enzyme target for the ligands 3c and 4c, the other halogens such as bromide and chloride their main contribution are hydrophobic interactions in the two proteinic pockets 3W37 and 1B2Y. It is important to mention here that the residues Asp197, Glu233 and Asp300 have been observed in the interaction modes for certain ligands against the target of the enzyme 1B2Y, each residue among those motioned having a very important role, according to the work of Rydberg and Zhang (Rydberg et al., 2002;Zhang et al., 2009), For the residue Asp197, it was previously reported as a nucleophilic catalytic in the hydrolysis reactions of polymeric substrates as food starch, besides (Rydberg et al., 2002;Zhang et al., 2009), the Asp300 residue has been identified as the player for optimizing the orientation of the substrate molecule using the hydrogen bonding interaction and as a steric conflict regulator for better binding conformation of the substrate (Williams et al., 2012). The last residue Glu233 is known to act as an acid-base catalyst during substrate hydrolysis reactions (Li et al., 2005;Williams et al., 2012). Acute oral toxicity The maximal dose (2000 mg/kg B.W) of each thiazolidinde-2,4-diones derivatives do not shows any related signs of mortality or toxicity tested animals of each group, during the 14 days of test. No weight loss or abnormal changes in the behavioral pattern or any undesired pathologic changes of the animals have been observed during the treatment. Therefore, the oral lethal doses of these products are greater than 2000 mg/kg (Table 6). Conclusion A variety of synthesized thiazolidine-2,4-dione derivatives were screened for their in vitro a-glucosidase and a-amylase inhibitory activities and acute oral toxicity on Swiss mice in order to get more potent and non-toxic molecules for treatment of type 2 diabetes mellitus. In the present investigation, we identified lead compounds that have shown to be dual inhibitors ofa-glucosidase and a-amylase. 
The enzymatic screening revealed compounds 3e, 4a and 4e were significantly highly potent against two enzymes a-glucosidase (IC 50 : 84.95 ± 1.01; 98.45 ± 0.54; 43.85 ± 1.06 mM, respectively) and a-amylase (IC 50 : 47.09 ± 0.04; 108.14 ± 2.05; 18.19 ± 0.11 mM, respectively) as well as standard acarbose (IC 50glucosidase ¼ 97.12 ± 0.35 mM; IC 50amylase ¼ 2.975 ± 0.01 mM). The most potent compounds were also found to be non-toxic at concentration of 2000 mg/kg. Additionally, the molecular docking studies were also carried out on all the molecules, in order to obtain an insight into the binding interactions with the active sites. Some molecules showed high binding affinity with enzymes due to their high interaction with some sensible amino acid residues, thus indicating their high stability along with activity potentials similar to control Acarbose. In vitro a-glucosidase and a-amylase inhibition assay The a-glucosidase and a-amylase inhibitory activities of synthesized compounds were determined according to the method described in our previous work Pillai et al., 2019). Molecular docking studies In order to reveal the binding modes of synthesized 5-arylidene-2,4-thiazolidindione (3a-3e) and (4a-4e), molecular docking simulation was carried out as a significant tool for computer-aided drug design and molecular structural biology using AutodockVina (Trott & Olson, 2010). It comprised a series of steps that includes a selection of protein and its preparation, receptor grid generation, preparation of ligands, and there docking to the receptors. The 3 D Crystal Structure of human pancreatic a-amylase (PDB ID: 1B2Y) (Nahoum et al., 2000) and a-glucosidase (PDB ID: 3W37) (Tagami et al., 2013) complexed with the carbohydrate inhibitor (acarbose) was used as target enzyme. The pretreatment of the target crystal structure (1B2Y and 3W37) was done using PYMOL (http://www.pymol.org) (Chen et al., 2017), for the removal of water molecules, heteroatom, ions, and ligands of origin in the crystalline structure (Ladbury, 1996;Lu et al., 2007). All the ligands are drawn using Chemdraw12.0 software (Li et al., 2004). To select the most stable conformation, the geometry of these ligands was subsequently optimized using Molecular Force Field (MMFF94) as implemented in the same Software. The Discovery Studio Client (version 17.2.0) was used for graphical visualization for the best interactions of complex protein-ligand conformations. To evaluate and validate the binding prediction of docking protocol, the co-crystallized ligand Acarbose was redocked into the active site of 1B2Y and 3W37 targets. The precision evaluation for protocol docking was based on the Root-Mean-Square Deviation (RMSD) parameter, the prediction is acceptable if its value does not exceed 2 angstroms (Kadukova & Grudinin, 2018). The superimposition ( Figure 5) of the redocked and the cocrystallized acarbose show the low RMSD value of 0.50 Ð and 0.40Ð for 1B2Y and 3W37, respectively. Acute oral toxicity The thiazolidine-2,4-diones derivatives that showed the best inhibitory effect of the two enzymes have been assessed for their acute oral toxicity, according to the guideline of the Organization for Economic Testing of Chemicals no 423, the assay was tested on swiss females mice . Six mice for each group were received a maximal single oral dose of 2000 mg/kg B.W, after fasting overnight. 
A follow-up of the mortality rate, macroscopic parameters (neurological, autonomic behaviors) and toxic sings for 5 h after product administration and daily for 2 weeks was carried out. Body weight was recorded daily for 14 days. The control group (n ¼ 6) received distilled water as a vehicle. Statistical analysis All the experiments were carried out in triplicate (n ¼ 3). The data were described as the mean ± standard deviation (SD) of mean and expressed by one-way analysis of variance (ANOVA), followed by Duncan's new analysis to identify significant differences between means using the Multiple Range Test (p < 0.05). All statistical analysis was determined using GraphPad Prism 8.0.2. Funding This work is supported by CNRST-Morocco, UM5R, and UM6P.
5,993.4
2021-04-13T00:00:00.000
[ "Chemistry", "Medicine" ]
Graphene-based mid-infrared room-temperature pyroelectric bolometers with ultrahigh temperature coefficient of resistance There is a growing number of applications demanding highly sensitive photodetectors in the mid-infrared. Thermal photodetectors, such as bolometers, have emerged as the technology of choice, because they do not need cooling. The performance of a bolometer is linked to its temperature coefficient of resistance (TCR, ∼2–4% K−1 for state-of-the-art materials). Graphene is ideally suited for optoelectronic applications, with a variety of reported photodetectors ranging from visible to THz frequencies. For the mid-infrared, graphene-based detectors with TCRs ∼4–11% K−1 have been demonstrated. Here we present an uncooled, mid-infrared photodetector, where the pyroelectric response of a LiNbO3 crystal is transduced with high gain (up to 200) into resistivity modulation for graphene. This is achieved by fabricating a floating metallic structure that concentrates the pyroelectric charge on the top-gate capacitor of the graphene channel, leading to TCRs up to 900% K−1, and the ability to resolve temperature variations down to 15 μK.

Detecting thermal infrared (IR) radiation of room-temperature (RT) objects (with spectral peak emittance at ∼10 μm (refs 1,2)) is increasingly important for applications in astronomy 3, healthcare 4,5, smart energy systems 6, security 7, pollution monitoring 8, fire sensing 9, automotive 10 and motion tracking 11. In this spectral region, thermal photodetectors (PDs) that can operate at RT with no need for cooling are highly desirable 2,12. Pyroelectric detectors are low-cost, uncooled thermal PDs for the mid-infrared (MIR) 1,2,12. They are capacitor-like structures where a pyroelectric crystal is sandwiched between two metal electrodes 2. Pyroelectric crystals are materials with a T-dependent spontaneous polarization, P (C m−2), i.e., surface density of bound charge 12. Around RT, a linear relation links the T variation, ΔT, with the change of P 1,2,12:

$\Delta P = p\,\Delta T, \qquad (1)$

where p (μC m−2 K−1) is the pyroelectric coefficient (for the crystallographic direction perpendicular to the electrodes). The two metal electrodes are connected through an external load resistor R_L. At thermal equilibrium (dT/dt = 0), no current flows in the external circuit, because P is constant and the charges on the electrodes compensate the bound charges at the pyroelectric surface 12. However, when the detector is illuminated, the absorbed radiation heats the crystal and P changes according to equation (1) 12. The variation of the bound-charge surface density induces a current I_p in the external circuit 12:

$I_p = p\,A\,\dfrac{dT}{dt}, \qquad (2)$

where A (m²) is the electrode area 1,13. I_p flows only as long as T changes (i.e., when the impinging optical power changes). Bolometers are another class of uncooled thermal PDs, where T variations due to incoming photons produce a change in the resistance (R) of a sensing element. This can be a thin metal layer 14, a semiconductor 15 or a superconductor 1. Common metallic bolometers for RT operation are made of Ti 16, Ni 17 or Pt 14. Polysilicon 1,2, amorphous silicon 18 or vanadium oxide 15 are usually exploited for semiconducting bolometers. For a fixed bias, V_d, the resistance change of the sensing element translates into a measurable change in current (I).
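A small numerical illustration of the pyroelectric current of Eq. (2) above; the pyroelectric coefficient, electrode area and heating rate used here are illustrative order-of-magnitude assumptions, not parameters quoted in this work.

```python
# Illustrative estimate of the pyroelectric current I_p = p * A * dT/dt (Eq. 2).
p = 80e-6          # C m^-2 K^-1, assumed pyroelectric coefficient (order of magnitude only)
A = (1e-3) ** 2    # m^2, assumed 1 mm x 1 mm electrode
dT_dt = 0.1        # K/s, assumed heating rate under modulated illumination

I_p = p * A * dT_dt
print(f"I_p ≈ {I_p:.1e} A")   # ≈ 8e-12 A (a few pA) for these assumed values
```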
The temperature coefficient of resistance (TCR, in units of % K−1) is a key performance indicator for a bolometer and is defined as 2 TCR = (1/R0)(dR/dT) × 100 (3). The TCR represents the percentage change in resistance per Kelvin around the operating point R0 and corresponds in modulus to the normalized current change per Kelvin around the operating current I0 (equation (3)). The TCR in metallic bolometers is ∼0.4% K−1 (ref. 2), whereas for semiconducting bolometers it is ∼2–4% K−1 (refs 1,2). It follows that the output of a bolometer (measured current) is proportional to T, in contrast to the output of a pyroelectric detector (measured current), which depends on the derivative of T, see equation (2) 13. However, although the TCR of a bolometer does not depend on the device area, in pyroelectric detectors the output current is a function of the electrode size, as per equation (2) 13. Larger electrodes allow the collection of more charge, increasing the pyroelectric current and therefore leading to a larger signal. These differences have an impact on the suitability of both technologies for different applications. As pyroelectric detectors are alternating-current devices that rely on a variable impinging radiation, they require a chopper at 25–60 Hz 12 to detect stationary objects and are thus preferentially used to detect moving targets (e.g., for automatic lighting systems 6, electrical outlet turn-off 11, unusual behaviour detection 19, home invasion prevention 12 and so on), where they are not only able to detect the presence of warm bodies 2 but also to extrapolate parameters such as distance, direction or speed of movement 11. Such information can be obtained by processing the analogue signals of only a couple of large (∼1 cm²) detectors 11. On the other hand, bolometers can be scaled to smaller sizes without any loss in TCR to make arrays of pixels for stationary imaging. Resistive micro-bolometers used in high-resolution thermal cameras range from 17 × 17 to 28 × 28 μm² in size 2. Graphene is ideally suited for photonic and optoelectronic applications [20][21][22], with a variety of PDs in the visible [23][24][25][26][27], near-infrared (NIR) 21,22 and THz reported to date 28, as well as MIR thermal detectors [29][30][31]. Refs 32–34 previously reported graphene-based bolometers at low T (<10 K). However, these are not viable for practical applications in the mass market, where RT operation is needed. At RT, single-layer graphene (SLG) is not competitive as the sensing element of a bolometer, as it shows a maximum TCR ∼0.147% K−1 (ref. 35), lower than both the metallic and semiconducting bolometers discussed above. Ref. 36 used reduced graphene oxide films with TCR ∼2.4–4% K−1 at RT, whereas ref. 37 exploited vertically aligned graphene nanosheets to produce infrared bolometers with TCR ∼11% K−1 at RT, the largest, to date, for carbon nanomaterials 37. In these films, however, conduction is modulated by thermally assisted hopping between different sheets or localized defect sites 36 and not by any intrinsic property of SLG. SLG, however, can play a key role when integrated on polarizable materials (pyro-, piezo- or ferro-electric) [38][39][40][41]. For example, SLG can be used as a transducer for the pyroelectric polarization due to its field-effect response 38. If a SLG field-effect transistor (GFET) is fabricated on a pyroelectric substrate, the channel resistance is modulated by the substrate polarization and can thus represent a direct T readout.
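The sketch below shows how the TCR defined earlier in this paragraph (equation (3)) could be estimated from a measured resistance-temperature curve; the R(T) values are invented, metallic-like placeholders, not data from the paper.

```python
# Estimate the TCR from an assumed R(T) curve.
import numpy as np

T = np.array([293.0, 294.0, 295.0, 296.0])    # K
R = np.array([1000.0, 996.0, 992.0, 988.0])   # Ohm (assumed thin metal film)

R0 = R[0]                                      # operating point
dR_dT = np.gradient(R, T)                      # numerical derivative dR/dT
TCR = 100.0 * dR_dT / R0                       # % K^-1
print(TCR)                                     # ~ -0.4 % K^-1, typical of a metal film
```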
This is the electrical equivalent of a bolometric response with area-independent TCR (the charge density of the pyroelectric 'gate' is constant and does not depend on the size and shape of the SLG channel). Ref. 38 reported a TCR ∼6% K−1 for GFETs on lead zirconate titanate, a material with one of the largest pyroelectric coefficients known to date (up to 780 μC m−2 K−1) 42. This indicates that the pyroelectric charge density generated by lead zirconate titanate underneath the SLG channel is not yet sufficient to outperform state-of-the-art bolometers 38. Here we demonstrate a RT PD for MIR by integrating a dual-gate SLG amplifier with a pyroelectric material. As this is a two-terminal device whose resistance changes proportionally to T, mimicking the intrinsic material property of a bolometric resistor 2, we can measure an 'effective' TCR as a metric for its net electrical output. Internally, the PD comprises a floating metallic structure that concentrates the charge generated by the pyroelectric substrate over an integrated GFET. As charge cannot escape from the floating structure (i.e., there is no load resistor), the PD can be operated in direct current (dc) and there is no need for chopping. We call this structure 'graphene-based pyroelectric bolometer', as it combines a pyroelectric sensor with a SLG transducer to deliver a dc bolometric response. We emphasize that the proximity of the SLG amplifier is essential to minimize the parasitic capacitances that would remove the pyroelectric charge, should the FET amplifier be implemented as an external separate component. The total pyroelectric charge generated on a variation in T increases with area, delivering effective TCRs up to 900% K−1 for a footprint of 300 × 300 μm², i.e., two orders of magnitude larger than state-of-the-art IR PDs having any similar or larger area 1,2,36,37. The TCR scaling is sub-linear for smaller footprints. We discuss the origin of this behaviour and conclude that our device performance is competitive even in the limit of small pixels (∼10 × 10 μm). Results Device architecture. Figures 1a,b show the layout of a single device and the corresponding electrical model. A SLG channel with source and drain contacts is fabricated on the pyroelectric substrate (500 μm-thick z-cut lithium niobate (LN)) as described in Methods. A 10 nm-thick Al2O3 dielectric layer isolates the SLG from an H-shaped floating Au structure designed to overlap the oxide-coated SLG in the centre, whereas lateral pads are placed in direct contact with the substrate. Both uniform and patterned Au pads have been studied, the latter in the form of finger-like structures (to enhance light absorption at selected wavelengths, see Supplementary Note 2). The design is such that the SLG channel conductivity can be modulated by a dual-gate capacitive structure. From the bottom, there is the pyroelectric polarization (and associated electric field) generated directly by the substrate (C1 in Fig. 1b), which we refer to as the 'direct effect' on SLG conductivity (previously exploited in ref. 38). From the top, there is a gate C2 connected in series with capacitor C3 as a floating circuit branch, with C3 > C2. The perimeter of the pads defining C3 sets the overall pixel size, from which only the source and drain contacts stem out to interface with the measurement electronics. In first approximation, the generated pyroelectric charge ΔQ is uniformly distributed on the substrate on a T variation 1,2.
Therefore, the direct effect from C1 does not depend on the channel area A_C1, as the bottom-gate field depends on the pyroelectric polarization, which is constant over any area. For the floating gate in Fig. 1b, the charge ΔQ accumulating on C3 depends on area as (from equation (1)) ΔQ = pΔT·A_C3. Since the structure is electrically floating and free from external parasitic capacitances, ΔQ is entirely provided by C2, because of the conservation of charge. A charged C2 generates for the SLG channel an effective top-gate voltage (in modulus) ΔV_TG = ΔQ/C2 (4), where C2 = ε0·εr·A_C2/t, ε0 and εr are the vacuum and relative permittivity, and t is the oxide thickness. Hence, for fixed t and ΔT, the geometrical ratio A_C3/A_C2 controls the gain of the integrated SLG amplifier and therefore the TCR (ΔI = gm·ΔV_TG, where I is the current and gm is the transconductance of the GFET). Figure 1c shows an optical micrograph of a device with patterned pads. We illuminate this device with MIR radiation at 1,100 cm−1 (∼9 μm) using a laser spot matching the pixel size (300 × 300 μm²). The resulting modulation of the channel drain current is shown in Fig. 1d over nine ON/OFF laser cycles. A responsivity ∼0.27 mA W−1 is obtained, for a drain current in the dark (I_OFF) ∼1.3 μA (V_d = 10 mV). The responsivity can be increased by applying a larger V_d, which, in turn, increases I_OFF. The normalized current responsivity R_pn,N (% W−1) is defined as 2 R_pn,N = (I_ON − I_OFF)/(I_OFF·P_in) × 100, where I_ON is the current under illumination and P_in is the optical power of the incoming radiation. R_pn,N is a better parameter to compare photoconductive detectors. R_pn,N in Fig. 1d is ∼2 × 10^4 % W−1, over two orders of magnitude higher than ref. 38, where only the direct effect was exploited (∼1.2 × 10^2 % W−1) 38. Photomapping and wavelength dependence. By reducing the laser spot to ∼10 μm and using a lock-in, we produce photocurrent maps of a single pixel to assess where the maximum signal is generated (Fig. 2). At the slowest chopper frequency (f = 36 Hz; Fig. 2a) the photocurrent map shows two broad peaks (∼700 μm), which largely overlap and extend beyond the pixel area. When f is increased, the two peaks become progressively resolved until, above 500 Hz (Fig. 2d), they match the location of the lateral pads defining the pixel. These are not necessarily the areas where the strongest absorption occurs, but those where T changes are detected providing the highest photocurrent I_ph (where I_ph = I_ON − I_OFF). Figures 2a–d also show that the signal decreases at higher f. To better quantify this trend, we plot I_ph on full illumination (300 × 300 μm² spot size; Fig. 2e) as a function of f. I_ph scales linearly with f^−1 and is measurable up to 1 kHz. This PD can be described by a thermal model (see Supplementary Note 1). When an increase in illumination time per cycle (i.e., a reduction in f) results in a proportional T increase, the system is far from a dynamic equilibrium with the thermal sink (the chip carrier) within a single cycle. This is the behaviour we observe for f > 60 Hz. As for slow chopping speeds (Fig. 2a,b) there is more time for heat to laterally spread away from the illuminated spot, the T within the pixel becomes more homogeneous, resulting in blurred photomaps. Figure 2f plots the wavelength dependence of the photocurrent for devices with lateral pads patterned with a finger-like design (as in Fig. 1c) with different pitches.
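A back-of-the-envelope evaluation of the floating-gate gain mechanism just described is sketched below; the oxide thickness, gate area and largest pad area are taken from the text, while the Al2O3 relative permittivity and the LN pyroelectric coefficient are assumed values.

```python
# Effective top-gate voltage produced by the pyroelectric charge collected over A_C3
# and forced onto the much smaller top-gate capacitor C2 (equation (4)).
eps0 = 8.854e-12           # vacuum permittivity, F/m
eps_r = 9.0                # relative permittivity of ALD Al2O3 (assumed)
t = 10e-9                  # oxide thickness, 10 nm (from the text)
p = 77e-6                  # LN pyroelectric coefficient, C m^-2 K^-1 (assumed)

A_C2 = 22e-6 * 20e-6       # top-gate overlap area (from the text)
A_C3 = (300e-6) ** 2       # floating-pad area for the largest pixel
dT = 0.2                   # temperature step, K

C2 = eps0 * eps_r * A_C2 / t
dQ = p * dT * A_C3                      # charge collected by the floating pads
dV_tg = dQ / C2                         # effective top-gate voltage swing
print(f"C2 = {C2:.2e} F, dV_TG = {dV_tg:.2f} V, A_C3/A_C2 = {A_C3 / A_C2:.0f}")
```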
The fill ratio (i.e., Au finger area per total available pad area) is kept constant at 0.5, meaning that, e.g., fingers with an 8 μm pitch are 4 μm wide and separated by a 4 μm gap. Although for a uniform Au pad the photoresponse to parallel and perpendicular light is the same for all wavelengths (black data in Fig. 2f), for patterned pads a peak arises in the parallel/perpendicular photoresponse ratio at the wavelength that matches the finger pitch. We then simulate the total parallel/perpendicular absorption for finger-like Au structures on thick LN (Fig. 2g; see Supplementary Note 2 for details). This shows that a peak is expected at the wavelength corresponding to the finger pitch. This is consistent with Fig. 2f, as more absorption results in a larger T increase and therefore a larger signal. In the calculations we assume that all light entering the bulk LN substrate is eventually absorbed, contributing to the T rise. The parallel-polarized light gets more absorbed overall (i.e., in the Au fingers plus LN substrate) than the perpendicular-polarized light, despite the fact that at the resonant frequency the perpendicular-polarized light is absorbed more inside the Au fingers. The reason for this is that resonant absorption in the fingers is also accompanied by resonant reflection, which lowers the overall delivery of light into the Au-LN system as 1 − resonant reflection. Things would change if these devices were fabricated on pyroelectric layers having a thickness ∼1 μm (rather than a 500 μm-thick LN crystal). The absorption of the resonant structures would become dominant over the intrinsic absorption of the substrate and reflectance would play a minor role. Figure 2f proves that our device layout is well suited for the implementation of photonic structures to engineer photon absorption and that a spectrally selective MIR response is feasible. The data in Fig. 2 and the thermal model presented in Supplementary Note 1 suggest that better results could be obtained for devices with an optimized thermal management compared with that offered by a 500 μm-thick pyroelectric substrate. Indeed, the PDs in Figs 1 and 2 spend most of the photons to heat the bulk rather than the surface, limiting the overall T increase for a fixed incident power. The fabrication of isolated pixels with lower heat capacity, using thin suspended bridges or membranes (with typical thickness ∼1 μm, as routinely done for microbolometers) 2, would improve the responsivity, speed and wavelength selectivity. Thermo-electrical characterization. We now consider the performance as a local thermometer, independent from the conversion of photons to T (linked to the emissivity and to the thermal properties of the device, such as thermal capacity and thermal conductance) 2,13. Each pixel is a two-terminal device whose resistance represents a readout of the local T. As for bolometers 2, we consider the TCR as the figure of merit. Being a normalized parameter, the TCR does not depend on V_d. For our devices, it depends on pixel area, hence we will link each TCR to the size of the corresponding pixel. Furthermore, as our variations in resistance are the result of a gain mechanism, we consider how such variations compare with the device noise. We introduce the noise equivalent substrate temperature (NEST), i.e., the pixel T change needed to produce a signal equal to the amplitude of the noise.
This is not to be confused with the noise equivalent temperature difference 1, often used to indicate the smallest detectable T change in an IR-emitting body imaged by a PD, which also depends on the photon-to-T conversion 1. In Figs 3a,b we investigate the thermo-electrical characteristics of a representative device (pixel size: 100 × 100 μm², A_C3/A_C2 = 22) by placing the sample on a chuck with T control. In these measurements the sample is in thermal equilibrium in the dark and photon absorption plays no role, thus uniform Au pads are chosen to accommodate (when needed) an electrical probe on the gate pads and apply an external gate voltage (V_g) to the device. Figure 3a shows a typical Dirac curve for a GFET at T = 20 °C. When an active electrical probe is connected to the gate pad, it acts as a sink neutralizing all the charge generated by the pyroelectric material (C3) on any T change. Because of this, we measure the same transfer characteristics at all T, except for the small shift induced by the direct bottom gating (C1) of the substrate (∼5% K−1; see Supplementary Note 3). The vast majority of our devices are slightly p-doped (hole density ∼2.5–3 × 10^12 cm−2, in agreement with our Raman data, see Methods), which is the ideal condition to achieve a maximum signal on heating. This is linked to our choice of placing SLG on the positive face of z-cut LN, where heating reduces the net dipole moment, equivalent to a negative V_g 12. We stress that the slightly p-type behaviour is the reproducible result of our transfer method, with a ∼80% yield in terms of consistent doping. Because of the passivation offered by the Al2O3 gate dielectric, the initial SLG doping does not show significant variations over several months (see Supplementary Fig. 8). After removing the gate probe to leave the gate structure floating, we monitor the GFET drain current, while T is raised by 0.2 °C, kept constant for 10 min and then decreased to its original value. The resulting plot in Fig. 3b shows that the drain current increases by ∼50% for a 0.2 °C T change (TCR ∼250% K−1), is stable over time and then returns to its original value with negligible hysteresis. The red star markers on the electrically driven Dirac curve in Fig. 3a show how the SLG conductivity evolves when the gate is thermally driven as in Fig. 3b. This stable dc response over several minutes indicates that no appreciable leakage occurs through the pyroelectric crystal and/or the GFET gate within a practical measurement timeframe. The initial bump to 3.6 mA in Fig. 3b is due to the small overshoot of the chuck T at the end of the ramp. To appreciate what these numbers mean in practice, we show in Fig. 3d the photoresponse of a large device (300 × 300 μm², TCR ∼600% K−1) illuminated by the IR radiation emitted by a human hand at a distance of ∼15 cm. In one test the sample is placed directly on a large (200 mm diameter) metal chuck (with heat sink, blue data) and in another it is placed in a concave plastic box that keeps it suspended, thus more thermally isolated (without heat sink, black data). Without heat sink, the PD heats up more (hence larger device responsivity), but its response and recovery are much slower. Even with heat sink, the proximity of the hand is easily detected. The saturation signal is ∼3%, corresponding to a T increase ∼5 mK. To the best of our knowledge, the only RT SLG detector working at 10 μm and able to allow human hand detection is described in ref. 29.
This was fabricated on a suspended and very thin (<1 μm) SiN membrane, measured in vacuum and with a lock-in with 10 s integration time 29. Here we achieve the same result on a bulk (500 μm-thick) substrate with a resistive measurement in air with 200 ms integration time, indicating that our SLG-based pyroelectric bolometer can provide far better performance (in terms of responsivity and speed) in equivalent conditions. Finally, we discuss how our TCR scales with A_C3/A_C2. Figure 4a plots the measured TCR for 18 devices fabricated by keeping A_C2 constant (22 × 20 μm²) and varying A_C3 from 25 × 25 μm² to 300 × 300 μm². Under our design assumptions (and disregarding C1), the TCR should be proportional to the pyroelectric charge generated by C3. Hence, from the pyroelectric law of equation (1) one would expect a linear relation TCR ∝ A_C3. Our data, however, fit a square-root dependence, indicated by the blue line. This behaviour cannot be explained by invoking the direct effect, whose contribution appears on a much smaller scale (TCR ∼5% K−1, see Supplementary Note 3). Rather, we have to consider that the pyroelectric substrate does not end at the pads' edge. Such pads are thus not driven just by the crystal below them (as assumed by a linear dependence on area), but can also be affected by the exposed polarization of their surrounding areas (an effect scaling linearly with perimeter, hence the square-root dependence on area). To better quantify this behaviour, we prepare Au pads of different sizes (0.01–0.3 mm²) on our pyroelectric substrates and measure the total charge generated on heating (Fig. 4b). This is accomplished by placing an electric probe on each pad, connecting the probe to ground and integrating the pyroelectric current flowing through the probe over the whole temperature ramp. In one case, the pads are kept isolated on the 1 × 1 cm² pyroelectric surface and independent from each other. In another case, the whole surface around the pads is coated with Au and grounded during all measurements (with only a gap of 5 μm uncoated around the pads). This is meant to suppress any contribution from areas beyond the pad footprint. Figure 4b shows that a square-root dependence on pad area is observed in both cases. The total pyroelectric charge decreases by a factor ∼2–3 on screening, but is still above what would be expected from the model 2, Q·ΔT−1 = p·A, even for relatively large pads (using p = 77 μC m−2 K−1, as measured for an LN sample fully covered with Au and consistently with the literature 2,12,45). This result has major technological implications, as it proves that a substantial contribution to the observed TCR enhancement for small A_C3 arises within the first few micrometres from the pad edge. It is then possible to harvest an enhanced pyroelectric charge in a dense array of small pixels, with only a tiny gap of a few micrometres separating two adjacent devices. Discussion In principle, A_C3/A_C2 > 10 is desirable, because it can deliver TCRs up to 900% K−1 (Fig. 4a and Supplementary Fig. 7), but this upscaling is bound by the maximum gate voltage variation allowed for the GFET (dynamic range). When probed electrically (Fig. 3a), our GFETs show no gate leakage up to ±5 V (∼5 MV cm−1). Beyond this value, dielectric breakdown can occur. This determines the maximum thermal shock a device can sustain without failing, inversely proportional to the TCR.
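The linear-versus-square-root comparison discussed above can be reproduced with a simple least-squares fit; the (area, TCR) pairs below are placeholders chosen only to illustrate the procedure, not the measured data of Fig. 4a.

```python
# Compare a linear and a square-root scaling law for TCR versus pad area A_C3.
import numpy as np

area = np.array([625., 2500., 10000., 40000., 90000.])   # um^2 (assumed)
tcr  = np.array([75., 150., 300., 600., 900.])            # % K^-1 (assumed)

# Least-squares slopes through the origin for TCR = a*A and TCR = b*sqrt(A).
a = np.sum(area * tcr) / np.sum(area ** 2)
b = np.sum(np.sqrt(area) * tcr) / np.sum(area)

res_lin  = np.sum((tcr - a * area) ** 2)
res_sqrt = np.sum((tcr - b * np.sqrt(area)) ** 2)
print(f"linear residual {res_lin:.1f}  vs  sqrt residual {res_sqrt:.1f}")
```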
However, this is not a concern if the environment T is drifting on a timescale much longer than the measurement timeframe, e.g., during a day/night indoor T cycle. Although internal pyroelectric leakage can be neglected over a few minutes, it can still discharge a device completely over longer timeframes. This will always leave the GFET at the best operating point to respond to sudden signals (see Supplementary Note 4). If one wants to scale down the pixel size while maintaining the same area ratio, the channel area must be decreased accordingly. As the 1/f noise scales with channel area 43, we can expect the GFET noise to increase and cancel the benefit of a large TCR when the NEST is evaluated. However, Fig. 4 shows that the TCR scales sub-linearly with area. For pixels approaching the scale required for high-resolution IR cameras (20 × 20 μm²) 2, it is better to make a large (several micrometres) channel and accept a lower area ratio (e.g., A_C3/A_C2 < 10), because the small price paid in terms of TCR will be more than compensated by lower noise, a less critical lithographic process and a detector more resilient to sudden thermal shocks. In conclusion, we presented a graphene-based pyroelectric bolometer operating at room temperature with TCR up to ∼900% K−1 for a device area ∼300 × 300 μm², able to resolve temperature variations down to 15 μK at 1 Hz. For smaller devices, the TCR scales sub-linearly with area, due to an enhancement of the collected pyroelectric charge in close proximity to the metallic edges. When used as MIR PDs, our devices deliver very promising performance (in terms of responsivity, speed and NEP) even on bulk substrates and are capable of detecting warm bodies in their proximity. Spectral selectivity can be achieved by patterning resonant structures as part of the pixel layout. This technology is competitive on a number of levels, ranging from high-resolution thermal imaging (small pixel limit) to highly sensitive spectroscopy in the MIR and far-IR (large pixel limit). Methods Device fabrication. SLG is grown by chemical vapour deposition on 35 μm-thick Cu following the process described in ref. 46. The quality of the material is monitored by Raman spectroscopy using a Renishaw InVia equipped with a 100× objective (numerical aperture = 0.85). We use an excitation wavelength of 514.5 nm and a laser power below 300 μW to avoid any possible damage. Figure 5b (green curve) shows the Raman spectrum of SLG on Cu. The 2D peak is single-Lorentzian, a signature of SLG, with full width at half maximum FWHM(2D) = 26 cm−1 (ref. 47). The D to G intensity ratio, I(D)/I(G), is ∼0.1, indicating a defect density ∼2.5 × 10^10 cm−2 (refs 47-50). SLG is then transferred onto the positively charged surface of z-cut LN (Roditi International Ltd) by spin coating a 500 nm layer of polymethyl methacrylate (PMMA) and then etching the Cu foil with an aqueous solution of ammonium persulfate 46. The resulting SLG/PMMA film is rinsed in water and picked up with the target substrate. After drying, the sample is placed in acetone to dissolve the PMMA, leaving a film of SLG on LN. Figure 5a plots the Raman spectrum after transfer on LN (black curve). The spectrum for the bare LN substrate is also reported (red curve). The D peak region at ∼1,350 cm−1 is convoluted with a band at 1,200–1,450 cm−1 arising from optical phonons in LN 51. An additional LN peak is also present at 1,744 cm−1, which does not overlap with any of the characteristic features of SLG 51.
The fabrication of top-gated GFETs on LN presents additional challenges compared with Si/SiO2. Owing to the pyroelectric nature of LN, a significant static charge can build up on both surfaces. To preserve our devices from discharge-induced damage, we initially prepare all metallic features on the LN surface as an electrically connected pattern, i.e., source, drain and floating gate contacts of the GFET are shorted together by means of metallic lines. In this configuration, the device can undergo all the required high-T (up to 120 °C) processing steps without failing. The shorts are then removed in the last step, when no further heating is required aside from normal sensor operation. The complete device fabrication process is outlined in Fig. 6. First, SLG channels are patterned (Fig. 6b) using optical lithography and dry etching in O2 (20 W for 20 s). A second lithographic step defines the metal contacts (source and drain), as well as the floating gate pads directly in contact with the substrate (Fig. 6c). All features are shorted together as explained above. Before the deposition of a 40 nm-thick Au layer via thermal evaporation, a mild Ar plasma (Moorfield NanoETCH, 0.5 W, 20 s) is used on the exposed SLG areas. This is crucial to achieve a good contact resistance (<100 Ω), as defects induced by the plasma ensure a good bonding with the metal 55. Further, a 10 nm Al2O3 layer is deposited by atomic layer deposition at 120 °C, to serve as gate dielectric (Fig. 6d). Two nm of Al are used as a seed layer for atomic layer deposition 56. Optical lithography is again used to define apertures in the Al2O3, to expose the contact pads (source and drain), part of the shorting lines and a small section of the lateral pads where the top electrode needs to be anchored. The Al2O3 is then wet-etched in an alkaline solution (D90:H2O 1:3) for ∼6 min, leaving the structure in Fig. 6e. Another lithographic step is then used to finalize the top-gate via thermal evaporation and lift-off of 2/60 nm of Cr/Au. Bonding pads are also prepared in this step, overlapping those deposited with the contacts (Fig. 6f). Finally, the electrical shorts are removed with a lithographic step followed by wet-etching of the Au lines in an aqueous solution of KI:I2 (Fig. 6g). An optical picture of the final device is shown in Fig. 6h, the arrows indicating where the Au shorts have been etched. Thermo-electrical characterization. T-dependent electrical characterization is performed with a Cascade probe station with a T-controlled chuck, coupled to an HP4142B source meter. The spectral density of the current fluctuations (S_I) is the Fourier transform of the drain current recorded during 100 s with a sampling of 1 ms. The normalized S_I·I−2 exhibits the same f−1 dependence for all drain voltages applied. Device characterization. Devices are illuminated by a linearly polarized quantum cascade laser with a frequency range from 1,000 to 1,610 cm−1 (∼6.2–10 μm), scanned using a motorized xyz stage. The laser is modulated using a chopper and the current measured using a current pre-amplifier and lock-in amplifier. The light polarization is controlled with a ZnSe wire grid polarizer. The light is focused using ZnSe lenses with numerical aperture ∼0.5. The power for each frequency is measured using a bolometric power meter and the photocurrent spectra are normalized by this power to calculate the responsivity. Simulations.
Optical calculations are performed with a finite-difference time-domain method 57,58 assuming an infinite array of infinitely long Au fingers (40 nm-thick) on top of a semi-infinite LN substrate. Thermal transient calculations are performed with a finite element method (see www.comsol.com) assuming a 500 μm-thick LN substrate on top of a 3 mm-thick Au block (heat sink) whose back surface is kept fixed at RT. More details can be found in Supplementary Notes 1 and 2. Data availability. All data generated or analysed during this study are included in this published article (and its Supplementary Information files).
7,674.8
2016-08-01T00:00:00.000
[ "Materials Science", "Physics" ]
Drawn onto a Skybox: An invitation to collaborative immersive drawing using the Spheri platform We describe an installation that invites the audience to collaborate on the performance of immersive drawings that will be visualized on-the-fly using the custom hardware-software platform Spheri. These drawings are made on a flat surface using traditional physical materials and following a modular tracing process that helps the user conform to the rules of cubical spherical perspective. Concurrently with the drawing performance, the Spheri platform captures the drawing and converts it into an immersive VR visualization on-the-fly, while tracking the drawing motion and choosing the camera viewpoint accordingly. The installation explores the new media of immersive handmade perspectives as well as new forms of intuitive and performative visualization. Countering the expectations of digital media as instruments for efficiency and ease of production, it aims instead at leading the user into slow and pondered geometrical thinking through handmade immersive drawing. The installation combines spherical perspectives and a modular tracing method that facilitates the drawing process with a custom-made platform that enables on-the-fly visualization of VR panoramas. In all of this, it is a fundamental matter that all drawing is handmade with pencil on paper (or marker on glass) and that the platform is strictly used to translate the handmade drawing into a virtual environment and to help visualize it seamlessly. The machine is not to take center stage but to be as invisible as possible. The present work must be understood as an invitation to immersive drawing, in the context of a multi-front effort by the authors to disseminate the use of handmade immersive drawing methods. The authors have been involved for some years in the creation of these methods and in their teaching at multiple levels and to multiple audiences [3], [4], [6]. At the fundamental level, the authors have created both geometrical methods and drawing schemes, as well as software and hardware platforms to draw, visualize, interact with, and teach spherical perspectives. This was mostly directed at audiences that have a reason to invest some time in learning spherical perspective drawing. The present installation is aimed at a different audience: general audiences that may not have an incentive to go through long explanations or analytic dissections of perspective, or that may not have the background to understand them, but that may still gain some insights from a more casual, playful approach to the subject. The approach may also have applications to more formal teaching, as will be discussed ahead.
SPHERICAL PERSPECTIVES AND IMMERSIVE VISUALIZATIONS Spherical perspectives are flat handmade drawings that create an immersive visualization through VR software and hardware. They have applications in the visual arts [5], [13], architecture, and cultural heritage documentation [18], as well as in visual education [9]. The technical aspects of these drawings have been extensively addressed and developed elsewhere [4]. For our purposes here the following suffices: a spherical perspective is a handmade drawing that is constructed on a flat surface following a set of perspective rules; if these rules are followed then the drawing can be scanned and digitally folded around a sphere. Then the viewers, standing virtually at the center of the sphere, have the visual illusion (through anamorphosis [3]) of looking at an immersive environment that surrounds them on all sides. There are many different spherical perspectives, each with its own set of rules for drawing. Each such set of rules defines a different drawing language with its own aesthetics and graphical expression possibilities [5], but all of these flat drawings, though looking very different, carry the same information and generate the same immersive environment when seen in VR. The reason there are many spherical perspectives is that a spherical perspective is a two-step process: in step one we project the 3D environment radially onto a "visual sphere" around the eye; in step two, we flatten the sphere onto a plane using a cartographic map. Of course, there are many cartographic maps to flatten the sphere, and for each such map we get a different perspective. Two of these are especially well known: the azimuthal equidistant (or 360-degree fisheye) and the equirectangular perspectives (Figure 1), the former having characteristics familiar from wide-angle photography and the latter being a common projection for storage and processing of digital 360-degree photographs. Both are what we expect from "curvilinear perspectives" in the sense that they transform spatial lines into smooth curves on the perspective plane. Ahead we will deal with another variant which is less common and somewhat peculiar but useful for our purposes.
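As a concrete illustration of the two-step construction just described, the sketch below maps a 3D direction onto the visual sphere and then flattens it with either the equirectangular or the azimuthal equidistant map; the axis conventions and image centering are our own assumptions, not necessarily the authors'.

```python
# Flatten a viewing direction with two common cartographic maps of the sphere.
import numpy as np

def to_sphere(v):
    return np.asarray(v, dtype=float) / np.linalg.norm(v)

def equirectangular(v):
    x, y, z = to_sphere(v)
    lon = np.arctan2(x, z)          # longitude in [-pi, pi]; z axis = forward
    lat = np.arcsin(y)              # latitude in [-pi/2, pi/2]; y axis = up
    return lon, lat                 # fills a 2:1 rectangle

def azimuthal_equidistant(v):
    x, y, z = to_sphere(v)
    theta = np.arccos(z)            # angle from the forward axis
    phi = np.arctan2(y, x)
    return theta * np.cos(phi), theta * np.sin(phi)   # radius proportional to angle

print(equirectangular([1.0, 0.0, 1.0]))        # 45 degrees to the right of forward
print(azimuthal_equidistant([0.0, 0.0, -1.0])) # the backward point lands on the rim (r = pi)
```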
WHY DO WE CARE ABOUT SPHERICAL PERSPECTIVES Drawing can be seen as a utilitarian skill whose purpose is to create works of art for consumption by an audience. But drawing is also a mode of thinking - both a tool of introspection and of analysis of the external world. As a way of producing objects for consumption, handmade drawing has been challenged by digital art. For instance, 2D hand-drawn animation has declined in market relevance and been replaced by what is essentially 3D sculpting and puppetry. Yet, handmade drawing remains essential in the process of the artists who create the digital sculpts and their actions. Similarly, architects create their designs using CAD and BIM, but ideation still often starts as a handmade sketch [19]. In fact, the premature employment of digital tools is sometimes criticized as leading to cookie-cutter designs or as forcing the architect to think inside the limiting parameters of the software [7]. Right now, AI creates images mostly from text prompts, but an effort is being made to create them from sketches, which can better transmit the artist's intentions. This too may push handmade drawing to a functional adaptation. The death of handmade drawing is constantly announced, but what is seen instead is a concentration of drawing on its role as conceptual activity and mode of thought. Digital art has not led to the destruction of handmade drawing, but it has displaced it to new grounds and forced it to repeated transformations. This is an ongoing process of challenge and adaptation, and it is in this context that we see the importance of spherical perspective. It is a new way to extend handmade drawing in one of its most conceptual aspects, that tradition of rational drawing that includes linear perspective and that intersects so well with computer graphics. It was argued in [2] that spherical perspective is a concept with educational value for artists, especially those who work in digital media. It is there argued, in much the same vein as in [14], that digital artists tend to adapt their thought processes to the interfaces they are presented with rather than the other way around. These interfaces abstract and encapsulate away the fundamental concepts (geometrical, mathematical, algorithmic) that make the machine work, leaving only a black box with an interface. Each choice of interface creates a language to deal with the fundamental object, which enables some interactions and blocks others, but more than blocking them, hides them from consideration. This abstraction and encapsulation is necessary and is itself a creative act; nevertheless, it is beneficial to cyclically peer again inside the box and rethink its interfaces. It was argued in a previous work [2] that spherical perspectives can be used as an educational device to expose a surprising number of the inner workings of these digital black boxes through the discipline of technical drawing. This has been tested both with art students at the university level and with young students being introduced to perspective [9]. With the advent of AI, however, the role of black boxes and their interfaces in shaping the very language of art creation has increased beyond what was predicted even a few years ago. AI image generators are profoundly opaque black boxes that cannot be "opened" by the methods proposed in [2], but this has only made the act of drawing more urgent as a way of grounding human artists as more than button-pushing clients.
We argue for the artist's need to focus on activities that directly create rather than request creation. If in [2] it was claimed that digital art pushed a process that was bureaucratic in nature, forcing the artist to jump through unnatural hoops to talk to the machine, learning interfaces to make high-level requests rather than dealing directly with fundamental algorithmic thinking, AI has now created quite the opposite - the artist now literally talks to the machine but there is no clear map of what to do; instead of the stifling but predictable steps of a techno-bureaucracy there are just the verbal incantations of a hopeful mendicant in front of an oracle. Immersive drawing through spherical perspectives is an activity that is grounded in physicality and at the same time pushes the brain to understand vision and space. It is a deeply contemplative act that requires hard thinking and physical activity, that is slow and deliberate and in which each mark carries meaning. It is also extremely visual rather than verbal. Consider what a peculiar predicament it is that the visual artist who wishes to embrace AI-assisted processes must deal with the fact that AI art is generated by a verbal rather than visual process - and through stilted prose, no less. This is not just learning a new tool; it is switching to a completely distinct - even antagonistic - thought process. The artist risks taking the role of patron or of art director, making requests and giving feedback rather than directly creating. In our view, immersive drawing is just what the human brain, eye and hand are aching for at the present juncture: to exercise natural intelligence rather than begging for favors from the artificial one; to make images with the visual brain rather than stilted prose. There is a need for this grounding, even and especially if one wishes to explore AI art while avoiding its pitfalls, just as the 3D animation artist regularly needs to re-ground himself in pure drawing. THE PROBLEM WITH PERSPECTIVE There is however a problem blocking this goal of spherical perspective drawing. The problem is that these drawings can be difficult to visualize in the head of the untrained, and even harder to explain. Spherical perspective drawings look very deformed in their flat state. This is required to cover the sphere but presents a problem. The experienced draughtsman can "read" (and "write") these drawings even in the flat. The casual observer cannot. The draughtsman is required to draw using the shapes of "geodesics" [4], which can be quite complex and require technical drawing abilities to be rendered and geometrical thinking to be understood. This effort is precisely what makes them such an interesting exercise, but it presents a barrier to entry; the casual observer will enjoy the contemplation but mostly will give up on understanding them and therefore will not draw in them. The authors have conducted workshops on these perspectives, and the minimum time requirement for a basic understanding is around 90 minutes, which is much more than a gallery visitor will spend on a piece. One way to solve this is through interfaces that allow the user, as much as possible, to avoid the need for knowledge of the perspective rules [12], [21]. There is a place for that approach, but it carries its own difficulties, and in our case, it is the knowledge of perspective - and its possibilities as a thinking mode - that we wish to explore for its own sake, so it would defeat our purpose to go that way.
In this installation we instead aim to prop the user over the initial barrier in three ways: first, by the choice of the cubical perspective, as an especially intuitive type of spherical perspective; second, through the use of a custom on-the-fly visualization platform called Spheri; third, through a set of modular pre-drawn examples that guide the audience without explicit instruction. Let us address these points separately. CUBICAL PERSPECTIVE As we mentioned, there are many spherical perspectives, as each choice of flattening creates one. Cubical perspective is a rather peculiar sub-type of spherical perspective. Its flattening is obtained in two steps: first we project radially from the sphere to a concentric cube, and then we cut the cube open and flatten it onto six squares (Figure 2). We get a cross-shaped figure composed of a horizontal row of four squares intersecting a vertical row of three squares. Polyhedral spherical perspectives like this one project spatial lines not onto smooth curves, as in the equirectangular or fisheye cases, but onto sets of line segments that can change direction or even break at the edges of the faces. From the viewpoint of computer graphics, cubical perspectives are the same as skyboxes [11]. See also tetraconic perspective [1] for another example of a polyhedral-based perspective. Advantages of the Cubical Perspective Cubical perspectives are just spherical perspectives with peculiar characteristics. These can be summarized thus: locally (i.e., on each square of the cross-shaped projection) they are simply linear perspectives. Globally, on the other hand, they are like the other spherical perspectives: each spatial line has two vanishing points (always located in distinct cube faces). Take the example of Figure 3. For reference, call the square where the vertical column of faces intersects the horizontal row the "front face". Notice that the green-white wall is seen straight on in this face but is seen to vanish to two different vanishing points at the faces to its left and right. Similarly, the red and white column is seen straight on in the front face but is seen to vanish to the zenith in the square above it (top face). Locally, each line projects as a line segment; globally, a line's perspective image will be made up of several such segments, scattered around the picture in peculiar ways. In Figure 3 we see three red columns on the top face that appear seemingly out of nowhere; they are coming from the back view, that is, from the face on the furthest right of the cross-shape. See also how the ramp on the front view changes angle subtly as it crosses the edge to the right view; it takes some theory to know by how much. These transitions between faces of the cube are better understood in terms of the geodesics of the sphere, which project onto the cube. The full classification of these geodesic projections on the cube has been done in [6]. This is a complicated casuistry that can be more confusing than the equirectangular case, for instance. This is the trade-off of cubical perspective. But in our present work we do not aim for such exact (and exacting) methods. The audience will be invited to draw in an exploratory way by being presented with examples that can be traced, combined, and extended.
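A minimal sketch of the cubical flattening described above is given below: a viewing direction is projected radially onto the concentric cube and assigned to one of the six faces together with its in-face coordinates. The face names and orientations are illustrative choices, not the authors' exact convention for the cross-shaped layout.

```python
# Assign a 3D viewing direction to a cube face (skybox-style projection).
import numpy as np

def cube_face(v):
    x, y, z = np.asarray(v, dtype=float)
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:                  # front/back faces dominate
        face = "front" if z > 0 else "back"
        u, w = x / az, y / az
    elif ax >= ay:                             # left/right faces
        face = "right" if x > 0 else "left"
        u = -z / ax if x > 0 else z / ax
        w = y / ax
    else:                                      # top/bottom faces
        face = "top" if y > 0 else "bottom"
        u = x / ay
        w = -z / ay if y > 0 else z / ay
    return face, (u, w)                        # (u, w) in [-1, 1] on the unit face

print(cube_face([0.2, 0.1, 1.0]))   # a point near the centre of the front face
print(cube_face([1.0, 0.3, -0.2]))  # the same wall continues onto the right face
```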
Consider the example of Figure 3, composed of simple structures like walls, columns, and ramps. This example was designed by the authors to deliver a short workshop on cubical perspective. In that context the participants would build up this scene from nothing, using fundamental rules. By contrast, in the present installation, such simple examples are instead used to set a global stage with a scaffolding that can be grasped intuitively: you can see which side is up, you can guess how big things are relative to one another and how their size changes with distance. Even if you do not understand exactly where each line will go because you have not been taught the fundamental rules, you can still follow simple transitions from contextual cues: a wall that in one cube face is seen straight on and then abruptly changes direction to go to a vanishing point is a typical example. Intuition is enough to grasp that the green-white wall is a straight wall and only the point of view has changed when crossing from the front to the right view; this allows one to intuit how further features - say, a sketch of a human figure - should be fitted into the faces of the cube to be consistent with the local perspective of the scaffolding. And this gives the audience - duly coaxed and helped along - the confidence to try and think about the global perspective and how it integrates the six views. In this way a fun collaborative drawing leads into the first steps of understanding a non-trivial perspective. Ahead we will explore such simple constructions in a modular way as a stimulus for the drawing instinct to lead into an unstructured exploration of spherical perspective drawing. THE SPHERI PLATFORM The second ingredient in our mix is the Spheri platform (https://www.spheri.art). Spheri is a prototype hardware and software platform (originally programmed on top of TouchDesigner [20], and more recently on a mix of JavaScript, MediaPipe [22], and Three.js [23]). The platform consists of: • a drawing surface, • a computer, • an image capture system that can involve multiple cameras, and which captures both the gestures of the performers and the drawing being made, • a program that interprets hand gestures through body tracking, • a program that converts the drawn images into immersive visualizations on-the-fly. Spheri was created to enable on-the-fly visualization of the VR environment created by a spherical perspective [16]. The spherical perspective can be captured live while being created with physical or digital media, or it can also be pre-existing media stored in the computer [17]. One of the latest features of Spheri is the control of the VR camera through hand gestures, such as pinching in the air for zoom or moving the hand for panning the camera. These gestures in midair are captured through a camera and control the immersive display in a very seamless way. Not only is this very practical, avoiding the breaking of flow for the person doing the drawing or distractions from using mouse- or keyboard-based interfaces, but it also lends a rather dramatic flair to a drawing performance.
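The gesture-driven camera control just described could be prototyped along the following lines. This is only a hedged sketch using the MediaPipe Hands API from Python, whereas Spheri itself runs on TouchDesigner/JavaScript; the steer_camera() callback and the camera index are placeholders, not part of the Spheri code base.

```python
# Track the index fingertip with MediaPipe and feed its position to a camera callback.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def steer_camera(x, y):
    # Placeholder: in Spheri this would re-orient the VR viewpoint.
    print(f"fingertip at ({x:.2f}, {y:.2f})")

cap = cv2.VideoCapture(0)   # e.g., the camera placed under the glass table
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            tip = results.multi_hand_landmarks[0].landmark[
                mp_hands.HandLandmark.INDEX_FINGER_TIP]
            steer_camera(tip.x, tip.y)   # normalized image coordinates
cap.release()
```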
The platform can be run in two ways, depending on how many cameras are being used, which we can call collaborative and performative. In collaborative mode, the setup requires two cameras: the first one for capturing the drawing, and the second one for capturing hand gestures. In operation, a user (e.g., an artist or an instructor) draws a spherical perspective on a flat surface (e.g., pencil over paper or marker over glass) while the platform displays the VR visualization of the perspective in real time. The same user can also control the virtual camera's viewpoint with hand gestures, for example drawing with the right hand and operating the platform with the left hand through the second camera. Although the user can do both the drawing and the controlling of the camera, it is not difficult to see that this is not the most practical workflow: it is usually better to assign the camera control to a second user, hence the term collaborative. In Figure 4 we see an example of this mode. The presenter on the left is drawing while the one on the right is controlling the camera with hand gestures. Behind them we see a live feed of the flat drawing, while the VR view was being projected on another wall (not shown in the photo). The camera control may also be assigned to a member of the audience. In performative mode the setup includes only one camera, and the hand gestures are simplified. In fact, the platform uses the hand of the person who is drawing to orient the camera within the VR scene. To be more specific, the VR camera follows the position of the tip of the index finger (Figure 5, bottom). In this case, the drawing must be made over a transparent surface (e.g., a glass table), which in turn must be between the hand and the camera. In Figure 5 (top) we see a user drawing on glass under which is the camera; Spheri is detecting the hand of the draughtsman and simultaneously assembling and displaying the VR scene to the audience. In this modality, there is no need for a "cameraman", and the focus is fully on the drawing being performed. This removes a cognitive burden from the user, who can then concentrate on the drawing. In the present installation this is essential, as we want the invitation to collaborate on the drawing to be as simple to take up as possible, and the focus to be on the drawing process itself rather than on learning the camera gestures. We have found in the past that the pinch and pan functions are so amusing to the audience that they might lose focus from the drawing [15]. In this new mode the user needs to touch the drawing with the index finger to even pan through the picture, as hovering it over the surface is the only way to get a new view. This serves as a ruse - once the user is so close to the picture, the tendency will be to actually engage with the surface and draw something. MODULAR TRACING As we stated above, the authors have long experience of teaching spherical perspectives. But this teaching is slow and analytical. With the proposed installation we are instead aiming at a playful way to draw the casual visitor into immersive drawing, not through careful instruction but through a playful, puzzle-like activity.
Among the spherical perspectives, the cubical one has a puzzle-like nature all of its own. On each face of the cube the image is perfectly understandable, and the viewer struggles only with how to fit together the various views that contain a feature in order for the whole to make sense. This leads us to play on the notion of a jigsaw puzzle as a way to construct drawings in cubical perspective without explicit instruction. We offer visitors a set of premade sketches on transparent paper, as well as clear sheets of the same media, on top of which users may assemble their own drawings. The simple local character of cubical perspective makes it easier to make drawings through layering of pre-built elements. We prepared drawings of floor grids, walls, arches, stairs, etc., made so as to fit together in several configurations. Examples of these modules include: a floor grid that extends towards infinity in every direction; a wall seen straight on in the front view of the cube that then crosses to the left and right faces at appropriate angles; and a stairway that starts on the floor grid and fits against the wall. The visitors may for instance layer the sheet of the wall on top of the floor grid, and, finding a stairway that happens (by design) to fit the wall, they may then layer this on top. On each layering step users will use each layer as a guide and trace the desired element onto their own drawing sheet. At each step the tracing will be only partial and will involve interpretation and selection. For instance, if a wall is layered on top of the floor grid then only the tiles that are not obscured by the wall will be copied (Figure 6, Figure 7). This act of tracing forces users to make sense of the scene in spherical perspective, to interpret visually what is happening in the global scene being constructed. All the while the user can count on the Spheri platform to visually check on the immersive scene being constructed. The layering and drawing are done on top of a glass table with a camera underneath. The camera captures the layered scene in real time and the Spheri platform transforms it into a VR scene that is projected onto a monitor on-the-fly. Users may control the view by hovering their pen (in fact their index finger) over the drawing and check the result of their experiments as they are being done. This live visualization creates a feedback loop of trial and error that naturally "teaches" an intuitive notion of the spatial relations in a cubical perspective drawing. At some point a basic environment will have been constructed by this layering and tracing. Then the users may have a notion of the overall space and of the transition rules, with which they can start creating their own original elements and adding them to the picture alongside the tracings. For instance, we encourage users to add human characters and try to see how their apparent size and orientation should change according to their position in the scene, seeing characters from below when drawing them on the top face of the cube.
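A hedged sketch of the layering idea described above: several scanned transparencies are combined into a single flat cubical-perspective image by keeping, at each pixel, the darkest mark among the layers, a simple stand-in for what the camera sees through stacked tracing paper. The file names are placeholders.

```python
# Composite scanned tracing layers into one flat image (darkest mark wins).
import numpy as np
from PIL import Image

layer_files = ["floor_grid.png", "wall.png", "stairway.png"]   # hypothetical scans
layers = [np.asarray(Image.open(f).convert("L"), dtype=np.uint8) for f in layer_files]

composite = layers[0]
for layer in layers[1:]:
    composite = np.minimum(composite, layer)   # darker pixels (marks) survive

Image.fromarray(composite).save("cross_composite.png")   # ready to fold onto the cube
```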
All this layering requires the drawing media to be erasable. The installation can work with several different materials, as long as they permit enough transparency both for the users (to see what they are doing) and for the camera (to capture a meaningful rendering). The installation may be done in several ways: with layers of tracing paper, acetate or mylar (polyester drafting film), using graphite pencil or appropriate erasable markers. The camera may be placed under a glass table or inside a transparent box (e.g., made of acrylic) placed on top of any regular table. Several setups are feasible as long as the positions of camera, drawing and hand are preserved in sequence, to avoid occlusion of the drawing by the body of the user and to allow the camera tracking to follow the drawing hand. At the end of their participation, visitors are encouraged to pin their drawings to a growing gallery on the walls of the installation. This gallery is initially seeded by examples drawn by the authors but will grow with the visitors' work and gradually create a collaborative exhibit, cyclically curated by the authors. INSTALLATION SCHEMA AND RIDER The following schema is appropriate for the installation (Figure 8): • A transparent or translucent table of size 1.2 x 0.8 m, where layering and drawing will take place. • Another table to hold the set of modular transparencies. • A projection screen. • A computer to run Spheri. • An external camera placed under the table, with a minimum HD resolution (1920x1080 px). This will track the drawing hand through the transparent paper. • A support to hold the camera under the table. The total space necessary for setting up the installation is around 3x3 meters, although, if available, a deeper space would be useful for obtaining a larger projection. APPLICATIONS TO FORMAL TEACHING Recent works have reported on an experiment to bring immersive perspectives to the classroom at the Portuguese 9th grade level [8], [9]. These experiments had very positive results, but one of the difficulties was exactly the fact that the precise rules of spherical perspective are inadequate for what is intended in these 9th grade classes. Instead of formal methods, a scheme of trial-and-error drawing followed by verification of the result was adopted. With Spheri, this verification can be done in real time, greatly facilitating a trial-and-error approach with an instant feedback loop. Also, the "jigsaw puzzle" approach using modular tracings may be another useful contribution to develop a quick intuitive understanding of perspective drawing basics with playful activities adequate for the target audience. These are avenues to pursue in future work. CONCLUSION We consider that drawing, as a mode of thought, is in some ways being neglected even in the creative disciplines. We aim at using the same technology that is often taken as a threat to drawing to instead stimulate it and bring it innovation and new modalities of expression and enjoyment. We have described an installation that draws the visitor gently and playfully into a new form of immersive drawing. Our aims are primarily didactical - we aim at an aesthetic experience that is at the same time a dissemination of the idea of spherical perspective as an emerging sub-discipline of drawing. Furthermore, this experimental installation - using modular transparencies together with Spheri - can be presented in several different ways, some of which seem adaptable to more formal education activities aimed at younger students.
We count on refining its presentation over several upcoming exhibitions and classroom tests to make it a tool for both the enjoyment and the learning of spherical perspective drawing. Finally, Spheri is being converted from a local prototype to a web resource available to all users. Its final version will be located at https://www.spheri.art. This open access to the platform as a web-based application will ensure greater compatibility between devices and expand the options to learn about spherical perspectives anytime, anywhere. Figure 1: Two examples of spherical perspectives: on top, an azimuthal equidistant spherical perspective, colloquially known as a 360-degree fisheye; on the bottom, the same view in equirectangular spherical perspective. Original artworks by A. B. Araújo. Figure 2: The same view as in Figure 1 but now in cubical perspective. Drawing by A. B. Araújo. Figure 3: Example of a simple cubical perspective. Drawing by L. F. Olivero of a collaborative design made by the authors for a Bridges Conference workshop at Aalto University, Finland. Figure 4: Live cubical drawing in a workshop at Aalto University, Finland. A. B. Araújo (left) is drawing on paper while L. F. Olivero (right) controls the VR camera through hand gestures. Figure 5: A user draws a cubical perspective while Spheri detects the hand and displays the VR view (note how in the VR view the top face - where the pen is touching - is seen joined with the four faces adjacent to it on the cube). Figure 6: Architectural fantasy mounted by superposition of several modular layers: floor tiling, stairs, ramps and arches. Drawing by A. B. Araújo. Figure 7: Two tracing modules (floor grid and stairway) used for the construction of the picture in Figure 6. Figure 8: Scheme of the installation.
Multi-wavelength detectability of isolated black holes in the Milky Way

Isolated black holes in our Galaxy have eluded detection so far. We present here a comprehensive study on the detectability of isolated stellar-mass astrophysical black holes that accrete interstellar gas from molecular clouds in both the local region and the Central Molecular Zone. We adopt a state-of-the-art model for the accretion physics backed up by numerical simulations, and study the number of observable sources in both the radio and X-ray band, as a function of a variety of parameters. We discuss in particular the impact of the astrophysical uncertainties on our prediction for the number of bright X-ray sources in the central region of the Galaxy. We finally consider future developments in the radio domain, and assess the potential of SKA to detect a population of astrophysical black holes accreting gas in our Galaxy.

INTRODUCTION

A large population of stellar-mass black holes (BHs) is believed to exist in our Galaxy: several works (Shapiro & Teukolsky 1983; Samland 1998; Caputo et al. 2017) agree on an order-of-magnitude estimate of ∼ 10^8 BHs produced by the collapse of massive stars in the Galaxy. This large number of BHs has eluded detection so far, with the exception of 60 objects (listed in the BlackCAT catalogue, http://www.astro.puc.cl/BlackCAT/) that are part of binary systems and accrete a significant amount of mass from a companion star: such systems are known as black-hole X-ray binaries (BH-XRBs) and appear as bright X-ray sources in the sky. Here, we focus on isolated BHs, whose detection represents a major challenge in modern astronomy. The problem has been discussed in several recent studies focused on the future detectability of the multi-wavelength radiation possibly emitted by these objects when they accrete matter from interstellar clouds, either in the vicinity of the Sun (Maccarone 2005; Fender et al. 2013) or in the regions near the Galactic centre, as recently discussed in Tsuna & Kawanaka (2019) and Tsuna et al. (2018). The latter region seems particularly promising. In fact, an extensively studied complex of giant molecular clouds, usually called the Central Molecular Zone (CMZ), characterizes a vast region that extends up to 200 pc away from the Galactic Centre and provides a huge reservoir of molecular hydrogen. The high-density clumps within this region appear as ideal targets for astronomical searches of compact objects based on gas accretion. In order to perform these analyses, it is crucial to develop an accurate modeling of (i) the accretion process of baryonic matter onto a compact object, in order to compute the expected accretion rate as a function of the environmental conditions and the BH mass and speed; and (ii) the non-thermal spectrum of the radiation emitted by the accreted gas at different wavelengths, from radio all the way up to the hard X-ray band. Regarding the former aspect, a simplified model for the accretion physics is adopted in the aforementioned papers, based on a rescaling of the Bondi-Hoyle-Lyttleton (BHL) formula. However, the accretion problem exhibits a richer phenomenology. Radiation feedback plays a crucial role, and is expected to shape the surrounding environment in a remarkable way, forming a cometary-shaped ionized region around the compact object. 
A state-of-the-art treatment of accretion in the presence of radiative feedback, backed up by numerical simulations, was presented in a series of works (Park & Ricotti 2011, 2012Sugimura &Ricotti 2013, PR13 hereafter) and showed a more complicated behaviour of the accretion rate with respect to the BH speed compared to the BHL scaling. This model features an increase at low velocities, and an asymptotic decrease to the BHL rate at large velocities. The PR13 results were recently applied to compact object searches in the context of the quest for BHs of primordial origin (Manshanden et al. 2019). However, a comprehensive study of this model related to the population of astrophysical BHs is still lacking. As for the prescription for the multi-wavelength (radio/X-ray) non-thermal emission, the absence of any observational detection of isolated accreting stellar-mass BHs means any physical model one can develop carries large uncertainties. However, one can utilize the vast knowledge gathered in the studies of both accreting BH-XRBs and accreting supermassive BHs to inform such a prescription. In particular, the Fundamental Plane relation between radio and Xray emission (Plotkin et al. 2012a) derived from the study of the aforementioned systems can be a precious tool to characterize the relation between radio and X-ray emission. The aim of this paper is to characterize the detection prospect of a population of astrophysical black holes in view of the future development of radio and X-ray astronomy (with particular reference to the SKA experiment for the former), taking into account the state-of-the-art accretion formalism from PR13. The paper is organized as follows: In Secs. 2 and 3 we review the main aspects of accretion physics with particular focus on the PR13 model, and discuss the emission mechanisms that are most relevant in the radio and X-ray domain. Then, we present our methodology in Sec. 4 and 5 and turn our attention to the astrophysical black holes in the vicinity of the Sun in Sec. 6 estimating the number of detectable X-ray sources. We then focus on the Central Molecular Zone in Sec. 7, and present a comprehensive parametric study of the expected X-ray emission from astrophysical BHs in this region. Finally, in Sec. 8, we present a forecast about the number of radio sources potentially detectable by SKA in association with the population of isolated astrophysical BHs accreting gas in the CMZ. SETTING THE STAGE: STATE-OF-THE-ART ACCRETION PHYSICS In this section we describe in detail the accretion model introduced in PR13 and compare it to the textbook Bondi-Hoyle-Lyttleton model. The Bondi-Hoyle-Lyttleton model According to the BHL accretion model (Hoyle & Lyttleton 1939;Bondi & Hoyle 1944), the rate of accretion onto an isolated compact object of mass moving at a constant speed v BH is given by: where and s are, respectively, the density and the sound speed that characterize the ambient medium and is the gravitational constant. This formula is usually re-scaled by introducing a suppression factor . Fender et al. (2013) for instance, found that values larger than = 0.01 are excluded, under realistic assumptions, by observations of the local region, where a significant population of isolated BHs should be present. Tsuna & Kawanaka (2019) and Tsuna et al. (2018) consider values between = 0.1 and = 10 −3 . The parameter may effectively capture the outflow of material that is expelled from the Bondi sphere, as suggested by several authors (Blandford & Begelman 1999). 
Its introduction is also supported by the non observation of a large population isolated neutron stars in the local region (Perna et al. 2003) and the studies of nearby AGNs (Pellegrini 2005) as well as the supermassive BH at the Milky Way centre, Sagittarius A* (Baganoff et al. 2003). In conclusion, it seems that the BHL accretion formula may overestimate the accretion rate by orders of magnitude, although the physical mechanism behind this deviation is still disputed. One such mechanism is introduced in PR13, as we discuss below. The Park-Ricotti model The hydrodynamical simulations performed in PR13 show that, when radiative feedback is taken into account, a cometary-shaped ionized region is formed around the BH as it moves through the interstellar medium. The model proposed in the same work combines the BHL formula with the modelling of the ionization front to obtain an analytical formula in agreement with the results of the simulations. We review it here. First, BHL accretion is assumed to hold within the ionized region, so that the PR13 accretion rate can be written as: where s,in and in are the sound speed and density of the ionized medium, and v in is its velocity relative to the BH. The sound speed s,in is a free parameter of the model (of the order of few tens of km/s). Now Euler's equations can be applied to the ionization front to express in and v in in terms of the corresponding quantities referred to the external neutral medium: its sound speed , its density and the relative velocity of the BH, v BH . We briefly discuss here the derivation of these relations. Assuming a one-dimensional steady flux, the jump conditions obtained by applying Euler's equations for mass and momentum conservation at the ionization front are: where to obtain the second equation we have further assumed an ideal gas. These equations have the following solutions: In the high-velocity range, v BH ≥ v R , the correct solution is given by − in , while for v BH ≤ v D , the front is best described by by + in (Park & Ricotti 2013). In the intermediate-velocity range, v < v BH < v , no real common solution can be obtained for Eqs. 3 and 4. Correspondingly, as velocity lowers below v , the pressure wave building behind the ionization front detaches from it as a shock into the neutral material. Part of the flux of matter is deviated and the velocity of the gas beyond the shock is reduced to below v . This implies on the one hand that Eq. 3, stating the mass conservation through the front along the direction of displacement of the BH, is no longer valid. On the other hand, since the velocity past the shock is now lower than v , the jump conditions at the ionization front can be solved. One should now consider, however, two sets of jump conditions associated to the two fronts: the shock and the ionization front. Park and Ricotti instead observed through simulations that, in this regime, the velocity inside the ionized region is v in ≈ s,in . This relation, promoted to an equality, can be used together with Eq. 4 to compute the density in . This way, for v ≤ v BH ≤ v , we obtain: Thus we have in summary: and Plugging these equations back into Eq. 2 finally gives the desired accretion rate expressed in terms of the BH speed and the properties of the neutral medium it is moving through. Notice in particular that in the velocity range v D ≤ v BH ≤ v R we get: which increases quadratically with the BH velocity, in sharp contrast to the behaviour of the BHL rate, which decreases with velocity. 
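For concreteness, the following is a minimal numerical sketch of the two rate prescriptions discussed above. It implements the standard BHL formula (with an optional suppression factor λ) and the intermediate-velocity branch of the PR13 rate, using the momentum jump condition with v_in ≈ c_s,in; above v_R ≈ 2 c_s,in it simply falls back to the ambient BHL rate, which is only the asymptotic behaviour rather than the full solution of the jump conditions (it therefore introduces a small jump at v_R that the full model smooths out), and the very-low-velocity branch below v_D is omitted. All function and variable names are ours, and the numbers in the example call are purely illustrative.

```python
import numpy as np

G = 6.674e-8          # gravitational constant [cgs]
M_SUN = 1.989e33      # solar mass [g]
M_H = 1.673e-24       # hydrogen mass [g]

def mdot_bhl(m_bh, v_bh, n_gas, c_s, lam=1.0):
    """Bondi-Hoyle-Lyttleton rate, optionally suppressed by a factor lam (lambda).

    m_bh in solar masses, v_bh and c_s in km/s, n_gas in cm^-3 (molecular hydrogen).
    Returns an accretion rate in g/s.
    """
    rho = 2.0 * M_H * n_gas                       # mass density of H2 gas
    m, v, cs = m_bh * M_SUN, v_bh * 1e5, c_s * 1e5
    return lam * 4.0 * np.pi * G**2 * m**2 * rho / (v**2 + cs**2) ** 1.5

def mdot_pr13(m_bh, v_bh, n_gas, c_s, c_s_in=30.0):
    """Park-Ricotti-style rate: BHL accretion from within the ionized region.

    For intermediate speeds (below v_R = 2*c_s_in) the gas entering the ionized
    region moves at ~c_s_in and its density follows from momentum conservation
    across the front; above v_R we fall back to the ambient BHL rate, which is
    the asymptotic high-velocity behaviour.
    """
    v_r = 2.0 * c_s_in
    if v_bh >= v_r:
        return mdot_bhl(m_bh, v_bh, n_gas, c_s)
    # density inside the ionized region from rho*(v^2 + c_s^2) = rho_in*(2*c_s_in^2)
    n_in = n_gas * (v_bh**2 + c_s**2) / (2.0 * c_s_in**2)
    return mdot_bhl(m_bh, c_s_in, n_in, c_s_in)

if __name__ == "__main__":
    for v in (1.0, 10.0, 60.0, 200.0):            # BH speed in km/s
        print(v, mdot_bhl(7.8, v, 1e4, 1.0, lam=0.01), mdot_pr13(7.8, v, 1e4, 1.0))
```

Even this simplified version reproduces the qualitative contrast described above: the BHL rate decreases monotonically with speed, while the PR13-like rate grows roughly quadratically up to v_R and is strongly suppressed for slow movers.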
This behaviour is the main feature introduced by the PR13 model, and is due to the formation around these objects of the aforementioned bow shock that deflects part of the gas away from the BH. The velocity dependence of the BHL rate is recovered in the high velocity regime, v BH > v R , where both rates present a ∝ v −3 BH dependence. The complete velocity dependence of the PR13 rate is shown in figure 1, for varied gas densities, BH masses, and sound speeds of the ionized region. For comparison, the BHL rate with = 1 is also shown. We can observe in this figure how the BHL rate decreases monotonically with velocity, whereas the PR13 rate peaks at v BH = v R = 2 s,in and is suppressed at lower velocity by the presence of the bow shock. For < v D , the rate increases again. However, notice that this transition typically happens at velocities of ≈ 0.1 km/s (see Eq. 6), which are of little relevance for this work and not shown in figure. The different velocity dependence of the PR13 rate compared to the BHL rate has important consequences when studying the emission properties of a BH population characterized by a given velocity distribution. According to the BHL prescription, the lowvelocity tail of the population is the easiest to detect. Following PR13, the highest emissions are instead associated to BHs with intermediate velocities. Furthermore, BHL can predict very high accretion rates if Figure 1. Accretion rate as a function of the BH speed. We show the accretion rate obtained according to the PR13 and BHL models, as a function of the BH speed and other relevant parameters. For the BHL rate, we set the suppression factor = 1, to allow for a more direct comparison. The to models agree at high velocity, but predictions differ by many orders of magnitude in the low velocity range. the speeds are low enough, which can easily lead to overshooting experimental bounds, while the highest rates predicted by PR13 are orders of magnitude smaller. As a final remark, using Eq. 10 to express the peak of the PR13 rate in terms of the Eddington accretion rate Edd : one can see that the accretion rate will always be highly sub-Eddington in the range of BH masses BH , gas densities and ionized sound speeds ,in we consider in this work. This is taken into account when defining the prescription for the luminosity in the next section. EMISSION MECHANISMS In order to translate the predicted accretion rates of a population of isolated BHs into a prediction of detectable sources we need estimates of the associated bolometric luminosity of a given source, and the Spectral Energy Distribution (SED). Our focus is on the Xray and radio luminosity, X and R , of the accreting BH. We can estimate both by considering the SED of the BH at varying accretion rates. We are particularly interested in emission from highly sub-Eddington accreting BHs (as predicted by Eq. 11). Therefore in this section we describe our simple framework of the properties of sub-Eddington flows onto isolated BHs based on our prior understanding of known weak accretors in nature: Galactic BH-XRBs, and lowluminosity AGN. Radiatively Inefficient Accretion Flow (RIAF) models have spearheaded many studies regarding the emission mechanisms associated with such sub-Eddington accretion flows, with a focus both on Galactic BH-XRBs at low accretion rates, and the supermassive BH in the Galactic center, Sgr A* (Narayan & Yi 1994;Yuan et al. 2003). 
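As a quick check of how deeply sub-Eddington these flows are expected to be, one can evaluate the approximate peak of the PR13 rate (reached near v_BH ≈ 2 c_s,in) against the Eddington rate for a 10% radiative efficiency. The short sketch below is only an order-of-magnitude estimate with parameter values chosen by us; the exact relation is the one referred to as Eq. 11 in the text.

```python
import numpy as np

G, M_SUN, M_H, C = 6.674e-8, 1.989e33, 1.673e-24, 2.998e10   # cgs units

def mdot_pr13_peak(m_bh, n_gas, c_s_in=30.0):
    """Approximate peak of the PR13 rate, reached near v_BH ~ 2*c_s_in.

    Evaluates 4*pi*G^2*M^2*rho_in/(2*c_s_in^2)^(3/2), with rho_in obtained from
    momentum conservation at v_BH = 2*c_s_in (ambient sound speed neglected).
    m_bh in solar masses, n_gas in cm^-3, c_s_in in km/s; returns g/s."""
    m = m_bh * M_SUN
    rho = 2.0 * M_H * n_gas
    cs_in = c_s_in * 1e5
    rho_in = rho * (2.0 * cs_in) ** 2 / (2.0 * cs_in**2)
    return 4.0 * np.pi * G**2 * m**2 * rho_in / (2.0 * cs_in**2) ** 1.5

def mdot_eddington(m_bh, eta=0.1):
    """Eddington accretion rate for radiative efficiency eta (m_bh in M_sun)."""
    l_edd = 1.26e38 * m_bh        # electron-scattering Eddington luminosity [erg/s]
    return l_edd / (eta * C**2)

if __name__ == "__main__":
    # even for a dense CMZ clump the peak rate stays far below Eddington
    ratio = mdot_pr13_peak(7.8, n_gas=1e4) / mdot_eddington(7.8)
    print(f"peak PR13 rate ~ {ratio:.1e} of the Eddington rate")
```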
The regularly cited RIAF models which first considered the emission processes in inefficiently radiating accreting BHs are classed as advection-dominated accretion flows (ADAFs; Narayan & Yi 1994;Esin et al. 1997). In the ADAF model, Bremsstrahlung radiation dominates the observed spectrum at the lowest accretion rates, and thus the spectrum has some curvature and its peak energy depends on the thermal gas temperature. As the accretion rate and gas density increase, Inverse Compton (IC) scattering begins to dominate (likely via SSC, i.e., Synchrotron photons become scattered), though if densities remain low enough, the spectrum will still show come curvature. At higher accretion rates, the IC scattering process is more efficient, resulting in a power-law-like spectrum in the X-ray band. The key thermodynamic property of RIAFs is the inefficient cooling of ions due to the Coulomb decoupling of electrons and ions in the accreting, low-density plasma. Due to this decoupling, such flows are likely well described by a two-temperature electronion plasma, with only electrons radiating via Bremsstarhlung, Synchrotron, and inverse Compton scattering Esin et al. (1997). The models, as well as a plethora of observational evidence, show that such flows display radiative inefficiency, with = 0.1 / crit , where is defined by: and crit is the accretion rate below which we have a RIAF. We assume crit is the accretion rate corresponding to 1% of the Eddington luminosity with Edd = 0.1 (Fender et al. 2013).The bolometric luminosity in such inefficient states is thus given by thus exhibiting a quadratic scaling ∝ 2 (Esin et al. 1997), as opposed to the linear behaviour observed at higher efficiency. A significant fraction of this bolometric luminosity is typically assumed to fall in the X-ray band. The / fraction is usually assumed to be 30% (Fender et al. 2013). Radiatively inefficient Galactic BH-XRBs also almost ubiqitously launch outflows. Such outflows are typically in the form of steady, self-absorbed relativistic jets (Fender 2001). Ballistic, transient jets are also observed during spectral state transitions at higher accretion rates. Such jets are typically identified via their radio emission, and are shown to be present during quiescence, at luminosities below 10 −8 Edd (Plotkin et al. 2015). These steady jets exhibit a flat-to-inverted spectrum from radio through to IR frequenciesa consequence of self-absorbed synchrotron emission through the optically-thick regions of the jet (see, e.g., Markoff et al. 2005). In addition, multiple studies have now established that the luminosity of such radio-emitting jets scales in a consistent and predictable way with the X-ray emitting plasma in the inner regions of the flow. This scaling, R ∝ 0.7 x (Corbel et al. 2000(Corbel et al. , 2003Gallo et al. 2003;Corbel et al. 2008;Miller-Jones et al. 2011;Gallo et al. 2014), is known as the radio-X-ray correlation, and applies to Galactic BH-XRBs (though other compact objects have been tracked in this phase space, showing similar but distinct trends of their own). Therefore, by using this analogy between isolated BHs and their low-accretion rate counterparts in binaries, we can infer that isolated BHs will similarly launch these steady jets, and thus their radio and X-ray luminosities will scale according to this radiatively inefficient track we observe in Galactic BH-XRBs. 
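The radiatively inefficient luminosity prescription described above can be condensed into a few lines. The sketch below is our own encoding of it (efficiency scaling linearly with the accretion rate below the critical rate, the latter set by 1% of the Eddington luminosity at a nominal 10% efficiency, and 30% of the bolometric output assumed to emerge in the X-ray band); it encodes only the stated scaling, not any full RIAF spectral model.

```python
def bolometric_luminosity(mdot, m_bh, f_x=0.3):
    """Radiatively inefficient luminosity prescription sketched in the text.

    mdot in g/s, m_bh in solar masses.  Below mdot_crit the radiative efficiency
    scales linearly with the accretion rate, giving L_bol ~ mdot^2; a fraction
    f_x of the bolometric output is assumed to emerge in the (hard) X-ray band.
    Returns (L_bol, L_X) in erg/s.
    """
    c = 2.998e10                               # speed of light [cm/s]
    l_edd = 1.26e38 * m_bh                     # Eddington luminosity [erg/s]
    mdot_crit = 0.01 * l_edd / (0.1 * c**2)    # rate giving 1% of L_Edd at eta = 0.1
    eta = 0.1 * min(mdot / mdot_crit, 1.0)     # capped at the standard 10% above mdot_crit
    l_bol = eta * mdot * c**2
    return l_bol, f_x * l_bol

if __name__ == "__main__":
    l_bol, l_x = bolometric_luminosity(1e15, 7.8)   # illustrative accretion rate
    print(f"L_bol = {l_bol:.2e} erg/s, L_X = {l_x:.2e} erg/s")
```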
However, in order to invoke a mass scaling and capture lowluminosity accretion onto a mass-variable population of isolated BHs, one must refer to the established connection between Galactic BH-XRBs and AGN-the Fundamental Plane of Black Hole Activity (FP; Merloni et al. 2003;Falcke et al. 2004;Plotkin et al. 2012b). The FP is, in a sense, an extension of the radio-X-ray correlation discovered in Galactic BH-XRBs to their supermassive counterparts. The FP is an empirical, parameterized relation between the X-ray luminosities, radio luminosities, and masses of hard state Galactic BH-XRBs with steady jets, and AGN of types which display similar X-ray emission characteristics. These include low-luminosity AGN (LLAGN), low-ionization nuclear emisssionline regions (LINERS), Faranoff-Riley type I (FRI) and BL Lacs. It is understood that the fundamental connection between these types of AGN and Galactic BH-XRBs is simply their sub-Eddington accretion rates, and the presence of radio-emitting jets. All sources in the FP display something resembling a power law spectrum in the X-ray band, and a flat-to-inverted radio spectrum. Thus, by invoking this scaling relation, and assuming that an isolated population of low-luminosity accreting BHs will display similar spectral properties, one can adopt the FP relation to scale their X-ray and radio luminosities with BH mass. We therefore adopt the following scaling relation, determined in one of the most recent such FP studies (Plotkin et al. 2012b): Where is the X-ray luminosity in the 2-10 keV band and is the radio luminosity at 5 GHz. Thus we have a simple prescription for both the X-ray luminosity, X , and radio luminosity, R , of an isolated accreting BH. ESTIMATING THE NUMBER OF VISIBLE SOURCES Having described our prescription for the accretion rate and the associated non-thermal emission, we now turn at assessing the number of isolated astrophysical black holes that are potentially detectable by the current (and forthcoming) generation of X-ray (with particular focus on the hard X-ray band) and radio experiments. In this section we summarize the main points of our methodology. Our main observable is the number of sources associated to a radiation flux above detection threshold, i.e. that satisfy > * , where is the flux at Earth defined as: = /(4 2 ) with r the distance of the source from Earth, and * is the threshold value. Previous studies (Fender et al. 2013;Tsuna et al. 2018; Tsuna & Kawanaka 2019) obtained their estimates through Montecarlo simulations of a BH population. Here we adopt instead a semi-analytical approach which allows us to perform a comprehensive parametric study associated to the physical model of accretion physics described above. Here we describe the general prescription for computing the number of visible sources, which we will apply with some variation to the two region of interest for this work: the local region (section 6) and the central molecular zone (CMZ, section 7). We obtain the probability for a BH to emit above threshold by integrating the joint p.d.f. of the relevant randomly distributed variables over the volume defined by the condition > * . Regarding the BH population, the random variables entering the flux expression are: speed, mass and distance from Earth. We neglect any possible correlation between these variables. In addition, we also treat as a random variable the density of the insterstellar medium at each BH location. 
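In practice, the Fundamental Plane relation discussed in Section 3 (Eq. 14) is what converts an X-ray luminosity and a BH mass into a radio luminosity when evaluating the flux for each source. The sketch below shows one schematic way to encode this step; the functional form assumed here (log X-ray luminosity written as a linear combination of log radio luminosity and log mass) is the usual convention, and the coefficients are deliberately left as inputs rather than hard-coded, since the published fit of Plotkin et al. (2012b) should be consulted for the actual numbers.

```python
import math

def radio_from_fundamental_plane(l_x, m_bh, xi_x, xi_m, b):
    """Invert a Fundamental-Plane-type relation of the assumed form
    log10(L_X) = xi_x * log10(L_R) + xi_m * log10(M_BH) + b.

    l_x is the 2-10 keV luminosity [erg/s], m_bh the BH mass [M_sun]; the
    coefficients (xi_x, xi_m, b) must be taken from the fit adopted in the
    text (Plotkin et al. 2012b, Eq. 14) and are not hard-coded here.
    Returns the 5 GHz radio luminosity [erg/s].
    """
    log_l_r = (math.log10(l_x) - xi_m * math.log10(m_bh) - b) / xi_x
    return 10.0 ** log_l_r
```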
In both analysis we present below, we make the assumption that the gas density and BH position are not correlated. Therefore, we obtain the expected number of luminous sources sources by computing the following integral: Here (v BH ), ( ), ( ) and ( ) are the normalized p.d.fs describing respectively the BH speed, mass, distance from Earth and the interstellar medium density at its location. With { } we indicate the free parameters entering the expression for the flux, and which we consider to be fixed for all BHs. These are: the sound speed of the neutral medium , the sound speed in the ionized region , the fraction of bolometric luminosity / and, in the case of BHL accretion, the suppression factor (regarding the first of these, we remark that variations of the sound speed of the neutral medium have a negligible impact on the flux magnitude). Finally, the parameters that define the p.d.f.s for the the random variables, which we indicate with { }, should also be regarded as free parameters of the model. We discuss them in the next section. CHARACTERIZING THE BLACK HOLE POPULATION Mass. We assume BH masses to follow a normal distribution of mean mass = 7.8 and mass = 1.2 . This distribution was obtained from the study of X-ray binaries by Özel et al. (2010) and confirmed in an independent analysis by Farr et al. (2011). It has also been employed by previous studies on this topic such as the works of Fender et al. (2013) and Tsuna et al. (2018). Speed. The BH velocity is given by the combination of two components: the velocity of the progenitor star, v star , and the kick the BH receives at birth due to the supernova explosion, v kick . As for the former, we are concerned with the velocities relative to the molecular gas, so we ignore the rotational component along the Galactic disk and consider exclusively the velocity dispersion . This is well measured for both nearby stars and stars in the Galactic centre. On the other hand, little is known about the magnitude of the natal BH kicks. A study of pulsars proper motions by Hobbs et al. (2005) concluded that neutron star kicks obey a Maxwell Boltzmann distribution with = 265 km/s. Re-scaled with the average masses (Fender et al. 2013), this gives kick ≈ 50 km/s, corresponding to a mean of kick ≈ 75 km/s. We consider here as reference values kick = 50 km/s ("low kick") and kick = 100 km/s ("high kick"). The BH speed, resulting from the combination of these two independent components, is then distributed following a Maxwell Boltzmann of mean: BH = √︃ 2 star + 2 kick ,. Hence, the uncertainties in both the kick and the progenitor star's velocity dispersion are enclosed in only one parameter, BH . Distance from Earth. We will assume the BHs to be distributed uniformly in both regions under analysis and derive the distance distribution accordingly. Density of the interstellar medium. The interstellar medium is usually described as composed by three different components: the ionized component, the neutral gas, and the molecular clouds. In the PR13 picture the probability of obtaining large enough fluxes from a BH found in the first two of these components turns out to be negligible. We will therefore consider only densities associated to molecular clouds. SEARCHING FOR BLACK HOLES IN THE LOCAL REGION We start by applying our model to the study of the innermost 250 pc around Earth, following the work of Fender et al. (2013). 
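The expected-number integral of Eq. 15 lends itself to a straightforward Monte Carlo evaluation, which we sketch schematically below before specializing to the local region: speeds are drawn from a Maxwell-Boltzmann distribution with the requested mean, masses from the quoted normal distribution, distances uniformly within a spherical shell, and the flux model is passed in as a callable so that either accretion prescription can be plugged in. All parameter names are ours and the structure is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_visible(n_bh, flux_of, f_thresh, v_mean, n_gas,
                     d_min_pc, d_max_pc, n_samples=20_000):
    """Monte Carlo estimate of the number of sources with flux above threshold.

    n_bh     : expected number of BHs embedded in clouds of density n_gas
    flux_of  : callable (m_bh, v_bh, n_gas, d_pc) -> flux [erg/cm^2/s]
    f_thresh : detection threshold [erg/cm^2/s]
    v_mean   : mean of the Maxwell-Boltzmann speed distribution [km/s]
    """
    # Maxwell-Boltzmann speeds with the requested mean: sigma = mean * sqrt(pi/8)
    sigma = v_mean * np.sqrt(np.pi / 8.0)
    v = np.linalg.norm(rng.normal(0.0, sigma, size=(n_samples, 3)), axis=1)
    m = rng.normal(7.8, 1.2, size=n_samples)                # BH mass [M_sun]
    # uniform spatial distribution in a spherical shell -> p(d) ~ d^2
    d = rng.uniform(d_min_pc**3, d_max_pc**3, size=n_samples) ** (1.0 / 3.0)
    flux = np.array([flux_of(mi, vi, n_gas, di) for mi, vi, di in zip(m, v, d)])
    return n_bh * np.mean(flux > f_thresh)
```

The product of n_bh with the above-threshold fraction plays the role of the integral in Eq. 15; the accretion and luminosity sketches given earlier can be composed into the flux_of callable together with F = L_X / (4π d²).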
In this study, the authors applied the Bondi formula to model accretion and estimated the number of visible sources through Montecarlo simulations. They considered a few combinations of the values of the mean BH speed BH and of the suppression factor , obtaining a bound on the combination of these parameters. Here we reproduce the same setup to obtain a prediction for the number of detectable Xray sources, verifying the result for the BHL model and comparing it to the predictions of the PR13 model. Our setup Regarding the BH population, we assume local = 3.5×10 4 objects are present in the region between 70 pc < d < 250 pc (Fender et al. 2013). We assume a uniform spatial distribution, a normal mass distribution and a MB velocity distribution of average BH , as described in the previous section. As for the interstellar medium, we assume two types of molecular clouds to be present, each type with uniform density. These are: a) warm clouds, of number density 10 2 cm −3 , sound speed 0.9 km/s, filling factor 5 × 10 −2 , and b) cold clouds, of number density 10 3 cm −3 , sound speed 0.6 km/s and filling factor 5 × 10 −3 (Fender et al. 2013). This combination results in an average density avg = 10 −3 . The integral in Eq. 15 becomes: Here clouds indicates the number of BHs we expect to find in each type of cloud: where clouds is the associated filling factor. We obtain clouds ≈ 175 in the cold clouds and clouds ≈ 1750 in the warm ones. Finally, the bolometric fraction of luminosity in the X-ray band is set to 0.3. Results Let us first consider the X-ray flux associated to a generic isolated BH located in the local region and accreting gas from a molecular cloud as a function of the BH speed. In figure 2 we show this flux for both the PR13 accretion scenario and the conventional modeling based on the BHL formalism, the latter suppressed by a factor = 0.01. The two colored bands correspond to the two densities considered for molecular clouds. The width of the band is obtained considering distances between 70 and 250 pc and masses between 5 and 11 solar masses. We show for reference the detection threshold associated to the 4th INTEGRAL IBIS/ISGRI soft gamma-ray survey catalogue (17-100 keV; Bird et al. 2009) , which is approximately 10 −11 erg/cm 2 /s. We want to emphasize once again the distinct behaviour associated to the two accretion scenarios. In the Bondi picture, the slower sources are very luminous. This is particularly relevant in this context, since velocities in the local region are expected to be on the low end of the range shown in this figure: the unsuppressed BHL scenario predicts a huge number of sources (see figure 3) and can easily overshoot the bound. As a consequence of the introduction of a suppression factor, only very slow sources are expected to be visible. This is no longer true when the PR13 scenario is considered, since it naturally predicts a flux suppression for the slower sources. Instead, the population of BHs emerging above threshold and showing up in the X-ray sky will present relatively high speeds (around 50 km/s). Furthermore, we can notice how, based on the Bondi picture, we expect to detect BHs in both types of clouds. According to the PR13 model, however, only BHs located in the denser clouds can be detected. Following the procedure described in the previous section, we can now integrate the distributions that characterize the BH population to obtain the number of sources visible in the X-ray sky with the IBIS/IGRI survey. 
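The bookkeeping for this local-region setup is summarized in the short sketch below, using only the values quoted in the text (3.5×10^4 BHs between 70 and 250 pc, warm clouds of 10^2 cm^-3 with filling factor 5×10^-2, cold clouds of 10^3 cm^-3 with 5×10^-3, and an IBIS/ISGRI threshold of ≈10^-11 erg cm^-2 s^-1); the flux conversion and the helper names are ours.

```python
import math

N_LOCAL = 3.5e4                      # BHs between 70 and 250 pc (text value)
CLOUDS = {                           # (number density [cm^-3], filling factor)
    "warm": (1e2, 5e-2),
    "cold": (1e3, 5e-3),
}
F_THRESH = 1e-11                     # IBIS/ISGRI threshold [erg/cm^2/s]
PC_CM = 3.086e18                     # parsec in cm

def flux(l_x, d_pc):
    """X-ray flux at Earth for a source of luminosity l_x [erg/s] at d_pc parsecs."""
    return l_x / (4.0 * math.pi * (d_pc * PC_CM) ** 2)

if __name__ == "__main__":
    for name, (n_gas, fill) in CLOUDS.items():
        print(f"{name} clouds: ~{N_LOCAL * fill:.0f} BHs expected inside")
    # luminosity needed to clear the threshold from the far edge of the region
    d = 250.0
    l_min = F_THRESH * 4.0 * math.pi * (d * PC_CM) ** 2
    print(f"L_X above {l_min:.1e} erg/s is detectable from {d} pc")
```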
In figure 3 we show the dependence of number of sources on the average speed BH . This prediction is obtained for three accretion scenarios: a) the PR13 model; b) the BHL model with suppression factor = 0.01 c) the BHL model with perfect efficiency = 1. We indicate with orange and green arrows the reference values given by the "low kick" the "high kick" scenarios, considering a velocity dispersion of the progenitor stars of 15 km/s (corresponding to an average speed of ≈ 25 km/s). For comparison, the average speeds corresponding to the "no kick" and "high kick" ( kick = 75 km/s) scenarios considered in Fender et al. (2013) are also shown and indicated with A and B, respectively. Our results can be summarized as follows: • As far as the BHL scenario is concerned, our findings are in agreement with the ones obtained by the results reported in Fender et al. (2013): i.e., a few tens of visible sources in scenario A (no kick, = 0.01 ). Larger values of are excluded unless very high kicks are present. This is, once again, due to the rapid increase of the BHL rate at low speeds. • In the PR13 scenario, the introduction of a suppression factor is not necessary. While being compatible with the experimental bound, our model nevertheless predicts a significant number of bright sources. This prediction corresponds to a population of ∼ 40-100 accreting isolated black holes in the existing catalogues, taking as reference the "no kick" and "high kick" scenarios . We will discuss in more detail the consequences of this result in the next Sections, and compare this prediction with the one associated to the Central Molecular Zone. SEARCHING FOR BLACK HOLES IN THE CENTRAL MOLECULAR ZONE In this section we turn our attention to the inner part of the Galactic bulge. This region is a particularly promising target for BH searches because of the presence of a large reservoir of molecular gas called central molecular zone (CMZ), a cloud complex with asymmetric shape that extends up to 200 pc away from the Galactic centre. The CMZ has been the object of extensive multi-wavelength observational campaigns over the years, aimed at characterizing its structure and physical properties, including star formation rate (see for instance Kruijssen et al. (2019) and references therein for a recent analysis). The abundance of gas in the form of giant molecular clouds, and the location close to the gravity centre of the Galaxy clearly makes it the ideal target for our study. Our setup To model the CMZ, we employ a simplified version of the model by Ferrière et al. (2007). We describe it as a cylindrical region of half height 15 pc and radius 160 pc. The number of BHs contained in this region is of course very uncertain, but we can make a naive order-of-magnitude estimate, as follows. We assume the BHs to be generated following the distribution of stars in the Galaxy. Here we are interested in the bulge component, which accounts for around 15% of the total and can be modelled with a spherical exponential with scale radius bulge = 120 pc (Sofue 2013). Integrating the bulge spherical exponential distribution over the CMZ volume gives us the fraction of ABHs born in this region: around 2% of the bulge component. Assuming a total of 10 8 BHs present in the whole Galaxy, this corresponds to 3 × 10 5 BHs. We must however take into account that, due to the large initial natal kicks, the initial spatial distribution is modified (Tsuna et al. 2018). We performed a simulation of the evolution in the Galactic potential (Irrgang et al. 
(2013), model II) of 1000 BHs. These are initially distributed uniformly in the CMZ region with an average speed of 130 km/s and given an average natal kick of 75 km/s. The simulation shows that only ≈ 25% of the BHs remain in the region: we observe in particular a spreading in the direction perpendicular to the Galactic plane. On the other hand, we do expect some BHs born outside of the CMZ to enter the region under the effect of the gravitational potential. However, these objects cross the CMZ close to the periastron of their orbit, with very large velocities and hence suppressed accretion rates. We can therefore neglect the latter effect. However the former is significant and we take it into account. We thus update our naive estimate to CMZ ≈ 7.5 × 10 4 . The molecular clouds in this region are known to be denser than the Galactic average. We assume an average number density of molecular hydrogen of avg = 150 cm −3 (Ferrière et al. 2007) and consider two types of clouds: warm, less dense clouds, and cold, denser ones. The warm clouds are of secondary importance given the large distance from Earth (see figure 4). We set their density to = 10 2.5 cm −3 and consider a filling factor of 14% (Ferrière et al. 2007). Regarding the cold clouds, we choose to adopt a more refined modelling based on a power law density distribution between min and max : Based on Ferrière et al. (2007), we set min = 10 3.5 cm −3 and max = 10 5 cm −3 , while we treat as a free parameter. The filling factor for the cold clouds is then obtained by requiring avg = 150 cm −3 . For a reference value of = 2.4 (He et al. 2019(He et al. , 2020 we obtain a filling factor of ≈ 1% and an average cloud density of ≈ 8 × 10 3 . The temperature of the clouds enters the accretion rate through the sound speed . As we mentioned before, this variable has little impact on the predictions and we set it to 1 km/s. Regarding the speed distribution, the average speed BH is We show the X-ray flux associated to a BH orbiting in the Central Molecular Zone molecular cloud complex, for different values of the relative velocity with respect to the gas cloud, and the gas density in the cloud. We consider the Park-Ricotti and the Bondi-Hoyle-Littleton formalism. As a reference, we show the detection threshold associated to the NuSTAR Galactic Center survey (Hong et al. 2016). treated as a free parameter. Nevertheless, we can make some estimates of reasonable values. The most recent observations suggest a velocity dispersion of the central bulge stars of average bulge ≈ 130 km/s (Sanders et al. 2019, Brown et al. 2018. This gives, for reference, = 140 km/s in the "low kick" scenario and = 160 km/s in the "high kick" scenario. Finally, we approximate the distance distribution with a delta peaked at 8.3 kpc and we assume the normal mass distribution discussed in Section 5. The integral to be computed becomes: The expected number of black holes in the clouds is obtained as: where clouds is the associated filling factor. In summary, the set of relevant parameters that play a key role in our calculation are: 1) the average BH speed, BH 2) the slope of the power law density distribution, 3) the sound speed within the ionized region around the BH, in ; 4) the fraction of bolometric luminosity irradiated in the hard X-ray band x / ; 5) the number of black holes in the CMZ, CMZ . 
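As a cross-check of the cold-cloud bookkeeping described above, the sketch below computes the mean density of the power-law clump distribution and the filling factor required to reproduce the average CMZ density of 150 cm^-3 once the warm component (10^2.5 cm^-3 with 14% filling) is accounted for. We assume the power law is meant as a volume-weighted density distribution, which appears consistent with the quoted ≈8×10^3 cm^-3 mean clump density and ≈1% filling factor; the function names are ours.

```python
def powerlaw_mean(n_min, n_max, beta):
    """Mean of a density distribution p(n) ~ n**(-beta) on [n_min, n_max]."""
    a, b = 1.0 - beta, 2.0 - beta
    return (a / b) * (n_max**b - n_min**b) / (n_max**a - n_min**a)

def cold_filling_factor(n_avg=150.0, n_warm=10**2.5, f_warm=0.14,
                        n_min=10**3.5, n_max=1e5, beta=2.4):
    """Filling factor of the cold clumps that reproduces the average CMZ density."""
    n_cold_mean = powerlaw_mean(n_min, n_max, beta)
    return (n_avg - f_warm * n_warm) / n_cold_mean, n_cold_mean

if __name__ == "__main__":
    f_cold, n_cold_mean = cold_filling_factor()
    # for beta = 2.4 this gives roughly 8e3 cm^-3 and ~1%, as quoted in the text
    print(f"mean cold-clump density ~ {n_cold_mean:.1e} cm^-3, filling factor ~ {f_cold:.1%}")
```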
Results We start by considering the X-ray flux associated to a generic isolated BH located in the Central Molecular Zone and accreting gas from a molecular cloud as a function of the speed and of the molecular cloud density. We show this observable in figure 4 for the two accretion scenarios discussed above. As a reference, we show the detection threshold associated to the NuSTAR Galactic Center survey in the 3-40 keV band Hong et al. (2016). The same trends highlighted in Sec. 5.2 can be noticed in this plot: In particular, we point out that, if the cloud density is high enough, there is a wide range of BH speed associated to emission above threshold within the PR13 formalism. On the other hand, no sources are expected to be detected in the lower density clouds. With the X-ray flux for a CMZ source at hand, we apply the procedure described in the previous Sections, and compute the number of X-ray sources associated to accreting BHs in this region. The results are shown in figure 5. In this panel we show the impact of the different free parameters on this key observable. In particular, we show in panel (a) the number of X-ray sources in the CMZ region as a function of the average speed of the BH population adopting both the BHL and the PR13 accretion models. We show the result for different choices of the sound speed in the ionized region, and for different choices of the parameter in the BHL case. We can notice how, in this setting, the PR13 scenario predicts as many sources as the = 1 BHL, due to the fact that high speeds are prevalent among the BH population of the CMZ. However, we have seen that the the un-suppressed BHL scenario is excluded by previous studies on nearby compact objects and complementary studies focused on AGN populations as mentioned in Sec. 2.1. The comparison should therefore be made between the predictions of the PR13 model and the suppressed BHL model, also shown in figure for = 0.01. Then, the PR13 model predicts a significantly greater number than those obtained assuming BHL accretion. (See also the work of Tsuna et al. (2018), in which O(1) sources were predicted in the Galactic centre using BHL accretion). In panel (b) we focus on the dependence of the same observable with respect to the fraction of the bolometric luminosity that is radiated in the X-ray band of interest; in panel (c) we consider the differential contribution of the clumps of different density, again for different reference values of the ionized sound speed in the BH vicinity . We can conclude from the results visualized in figure 5 that, despite the relevant uncertainties associated to the modelling of such a complex setup (and despite the existence of a threshold effect due to the peak in the accretion rate), the prediction of few tens of sources is solid with respect to these uncertainties. In particular, we notice from panel (a) that a number of bright X-ray sources comprised between 10 and 20 is expected for our reference value of and for = 150 km/s. As far as the speed distribution is concerned, we remark that the two reference values we have chosen to bracket the uncertainty both lead to similar predictions, while large deviations from this range would require to assume unrealistically high speeds. The same line of thought applies to the slope associated to the clump density distribution. 
From panel (b) we also notice a lower number of sources may only arise if we were to assume either a very high temperature in the ionized region, or else a very low fraction of bolometric luminosity radiated in the X-ray band. We remark,however, that the prediction scales linearly with the expected number of BHs in the region, CMZ , which carries a large uncertainty. In summary, the PR13 model (together with our rather conservative assumptions regarding the X-ray emission) leads to the prediction of a significant number of X-ray sources in the CMZ region associated to isolated BHs accreting from molecular clouds. The total number of sources is of the same order of magnitude as the one predicted in the local region and discussed in the previous Section. However, while the previous Section was dealing with a full-sky analysis, in this case we are focusing on a specific region of interest (ROI) with small angular extent, which is ideal for multiwavelength observational campaigns. We recall that NuSTAR has detected 70 hard X-ray sources sources in this region interest. The nature of most of the sources in the catalogue in not precisely known, although a significant population of cataclysmic variables and Xray binaries is expected. Our finding seems to suggest the need of a careful data analysis, in order to identify the possible presence of a relevant population of isolated accreting black holes in the already existing data. Following this line of thought, and in the prospect of a discovery, a multi-wavelength analysis is compelling, so we will turn our attention to the radio domain in the next Section. MULTI-WAVELENGTH PROSPECTS FOR THE SQUARE KILOMETER ARRAY In the previous section we have shown that, in a wide portion of the parameter space associated to our problem, a large number of bright X-ray sources are expected in the Galactic Center region. However, in order to pinpoint a source as an accreting black hole, a careful multi-wavelength study has to be performed. The GHz radio band is particularly interesting in this context. As in previous works, (Gaggero et scaling between X-ray and radio flux (Plotkin et al. 2012b) discussed in detail in Section 3 (eq. 14) is employed to obtain the radio flux. A remarkable increase in the sensitivity is expected in the radio domain over the coming decade, thanks to the development of the Square Kilometer Array (SKA) project. This experiment has a huge potential towards shedding light on key problems of fundamental physics, cosmology and astrophysics (Weltman et al. 2020). Here we focus on the discovery potential of the population of astrophysical black holes in the CMZ at 1.4 GHz band, and provide estimates of the number of potentially detectable sources by the SKA1-MID facility. Assuming gain = 15 K/Jy, receiver temperature rx = 25 K, sky temperature towards the Galactic Center sky = 70 K, and bandwidth = 770 MHz (as in Calore et al. 2016), we obtain an instrumental detection sensibility of 2.7 Jy for one-hour exposure. In the following, we assume an optimistic 1000 h exposure time and consistently adopt a 85 nJy as potential detection threshold. In figure 7 we show the radio flux associated to a generic BH in the CMZ cloud, and compare it with the prospective SKA sensitivity and with the threshold of the VLA catalogue (Lazio & Cordes 2008). 
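The SKA sensitivity figures quoted above follow from the standard radiometer equation with the stated instrument parameters. The sketch below reproduces them under the additional assumption of two polarizations (not stated explicitly in the text), yielding roughly 2.7 μJy for a one-hour exposure and about 85 nJy for 1000 hours; the function and variable names are ours.

```python
import math

def ska_rms_noise(t_hours, gain=15.0, t_rx=25.0, t_sky=70.0,
                  bandwidth=770e6, n_pol=2):
    """Thermal noise of a single pointing via the radiometer equation.

    gain in K/Jy, temperatures in K, bandwidth in Hz.  Returns the rms noise in Jy:
    sigma = SEFD / sqrt(n_pol * bandwidth * t), with SEFD = (T_rx + T_sky) / gain.
    """
    sefd = (t_rx + t_sky) / gain                  # system equivalent flux density [Jy]
    return sefd / math.sqrt(n_pol * bandwidth * t_hours * 3600.0)

if __name__ == "__main__":
    print(f"1 h    : {ska_rms_noise(1) * 1e6:.2f} microJy")
    print(f"1000 h : {ska_rms_noise(1000) * 1e9:.0f} nJy")
```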
Focusing on the PR13 scenario, we can conclude from this plot that, while we do would not expect to detect any isolated BH give the VLA sensitivity , SKA would be able to unveil a huge population of isolated BH in the Galactic centre. In fact, most of the BHs accreting from the cold clouds would emit above the SKA threshold. We show our prediction for the number of radio sources detectable with SKA in figure 8 and figure 9, obtained with the same setup described in section 7.1. In particular, in figure 8 we show the number of radio sources detectable by SKA as a function of the BH speed, for different choices of the parameters discussed in the previous Section. We can notice how our model leads to predict thousands of visible sources the two reference values we have chosen to bracket the uncertainty on the speed distribution. Let us now widen our perspective and promote CMZ , the total number of BHs in the CMZ to a free parameter. In figure 9 we show the number of bright radio sources in the ( , CMZ ) space. In this parameter space, we may use the comparison with NuSTAR observations discussed in the previous Section to obtain a bound: in particular, we can identify an upper limit on the number of black holes present in the CMZ, by requiring not to overshoot the number of sources observed by that experiment. We find that, in the allowed portion of parameter space, a very large number of radio point sources will be detectable by SKA. In summary, the radio searches for isolated BHs in the CMZ constitute a promising science case for this experiment. DISCUSSION Comments on the parameter space. We have concluded that our predictions for the CMZ are stable in a very wide region of the parameter space of the model. However, we should note that we made a number of assumptions regarding the distribution of black holes in the CMZ. On the one hand, as we already discussed, our estimate of the total number of BH in the CMZ is obtained with a simple model for the initial distribution and subsequent orbit evolution. Furthermore, we have assumed that both the BHs and the clouds are uniformly distributed in the CMZ. The presence of a positive or negative correlation between the two distributions could in principle impact the results. We further assumed a specific model for the cloud density distribution. These simplifications eventually affect the number of BHs in the clouds, and therefore the final predictions on the number of sources, which scale linearly with this quantity. The accretion scenario. We have shown in Section 6 that, based on local observations, the PR13 accretion model does not require the introduction of a suppression factor . However, since the PR13 model relies on the BHL model to describe accretion within the ionized region (see Sec. 2), one may wonder whether the BHL suppression factor should be introduced at that stage. With respect to this, we should take into account that this suppression factor is usually associated to a very strong outflow of matter from within the Bondi radius, associated to a fraction of 1 − of the total matter initially accreted. Such a large outflow is not observed in the simulations of Park & Ricotti (2013), where most matter crossing the Bondi radius within the ionized region ultimately reaches the BH. We conclude from this that introducing a suppression factor in the expression would not be physically justified. We remark that the simulations considered here represent the state of the art in the field. 
However, some physical ingredients are still not captured: we mention in particular the role of magnetic fields. Future works will assess the potential impact of a full magneto-hydrodynamical treatment in this context. The emission mechanism and spectrum. In Sec. 7.1 we also assume that ∼ 30% of the total bolometric luminosity of all our isolated BHs falls within the observable X-ray band-X = 0.3 bol . This assumption was based on previous estimates, for example those presented by Fender et al. (2013). However, it is worth assessing the accuracy and precision of this assumption. We assumed a flat radio spectrum ( ∝ 0 ), and then an additional inverted power law in the X-ray band. The assumption that 30% of the observed bolometric luminosity falls in the X-ray band depends on the spectral index of this additional power law, , the high-energy cut off of the power law, cut , the absorbing hydrogen column density along the line of sight, H , and the observing energy band (which we conservatively adopted as 3-40 keV for the NuSTAR X-ray telescope). Inefficiently radiating Galactic BH-XRBs are dominated by power law emission with a typical range for the spectral index given by 0.6 ≤ ≤ 1.0 (or a photon index of 1.6 ≤ Γ ≤ 2 in common X-ray astronomy parlance). However, the power law spectra of Galactic BH-XRBs can become softer in intermediate spectral states, or very quiescent states, so we can extend this out to an upper bound of ≤ 1.4 or Γ ≤ 2.4. Adopting a rough estimate of the jet break frequency of ∼ 10 14 Hz (infrared frequencies; Russell et al. 2013), a typical highenergy cut off in the X-ray of 100 keV, and a high hydrogen column density of 10 23 cm −2 , we calculate that X ∼ (0.24-0.35) bol . The corresponding unabsorbed range of fractional luminosities is then 0.27-0.46 bol . These predictions, however, depend strongly on the cutoff energy. For example, just reducing the cutoff energy from 100 keV to 80 keV can increase the fractional luminosity to ∼ 60%-70%. Therefore, our assumption that X ∼ 0.3 bol is rather conservative, even if we assume strong absorption along the line of sight. It is also based upon a relatively hard power law spectrum in the X-ray band. Nevertheless, we have shown in Section 7, that our predictions are solid with respect to variations of this parameter, even for values as large as 70%. For the radio flux calculations through the Fundamental Plane of Black Hole Activity (FP; Merloni et al. (2003); Falcke et al. (2004); Plotkin et al. (2012b)) we had to assume the presence of a jet emitting in the GHz radio band, following Fender et al. (2013); Gaggero et al. (2017); Manshanden et al. (2019). This crucial assumption is motivated by results of Fender (2001), which established that all highly sub-Eddington accreting BHs in X-ray binary systems are accompanied by such radio emission. Further emission could come from other types of outflow that are less collimated or relativistic. Such alternative emission could constitute a lower limit on the expected radio emission, which would be of particular interest in the absence of a jet. Tsuna & Kawanaka (2019) assumed a spherically symmetric outflow based on losing a fraction (1-) of the accreting matter before reaching the BH, where is the suppression factor. Assuming a power-law radial dependence of the accretion rate, they used the escape velocity to conservatively estimate the power associated to such an outflow. 
It should be noted that this radial power-law assumption (with slope larger than 1) implies that most matter would be lost close to the BH, which requires a powerful outflow. The shock from the collision of this outflow with the ISM could accelerate non-thermal electrons through diffusive shock acceleration and subsequently produce radio-synchrotron emission. The precise emission profile will be highly dependent on the fraction of energy going into the electrons and the magnetic field. However, as highlighted above, such a relevant outflow is not observed in the simulations presented in Park & Ricotti (2013) and hence this scenario is hard to unify with the PR13 accretion model. Instead, it would be interesting to further investigate what type of emission could be associated to the shock on the boundary of the cometary region itself, which would likely require additional detailed simulations. Connection with PBH searches. We now want to widen the perspective and mention that the search for isolated astrophysical black holes (ABHs) can be considered a relevant problem per se, and also crucial as a background in the context of the quest for primordial black holes (PBHs) in the Galactic environment. The potential of this observational channel in constraining the abundance of PBHs in the inner part of the Galaxy, with particular focus on the CMZ region, was first pointed out in Gaggero et al. (2017), within the framework of the Bondi-Hoyle-Littleton formalism. In Manshanden et al. (2019) the impact of the Park-Ricotti formalism was assessed in detail. The main result of that work is a strong upper bound on the abundance of PBHs (in the context of a non-clustered distribution of PBHs with monocromatic, log-normal and powerlaw mass functions). The upper bound was set at the level of 10 −3 of the Dark Matter in the form of PBHs, which would correspond to roughly ∼ 10 8 objects in the Milky Way for a 10 M reference mass. Hence, the astrophysical population (which is expected to include the same order of magnitude of compact objects) is expected to be an irriducible background that could play the role of an astrophysical floor associated to the quest for a subdominant population of PBHs of tens of solar masses. Outlook. We have shown that a scenario based on the PR13 accretion model and our best estimate of the number of BHs in the CMZ naturally implies a large number of detectable X-ray and radio sources, for any reasonable choice of the other free parameters involved in the problem. This relevant result is highly suggestive that a clear identification of a population of isolated, accreting BHs in the Galactic Center region -by means of the current or forthcoming generation of X-ray experiments -may be around the corner. SUMMARY In this paper we presented a comprehensive study of the constraints and prospects of detection of a population of isolated astrophysical black holes in the solar vicinity and near the Galactic Centre. We have adopted the Park-Ricotti (PR13) accretion model that takes into account radiation feedback and is backed up by hydrodynamic numerical simulations, and compared the results obtained within this scenario with the ones obtained by exploiting the well-known Bondi-Hoyle-Littleton formalism. We found that, within the PR13 model, a large number (∼ 40-100) of bright X-ray sources associated to isolated accreting black holes is expected in the solar vicinity ( < 250 pc), and a significant fraction of the sources listed in current catalogues could be associated to such objects. 
We performed an extended parametric study about the number of detectable sources in the vicinity of the Galactic Centre, with particular focus to the promising Central Molecular Zone region. In this region of interest we found that, despite the large uncertainty associated to the free parameters involved in the calculation, the prediction of O (10) Xray sources above the detection threshold associated to the NuSTAR catalogue is solid for any reasonable choice of the parameters under scrutiny. In particular, a number of bright X-ray sources comprised between 10 and 20 is expected for our reference value of the ionized sound speed = 30 km/s and an average black hole speed BH = 150 km/s. A multi-wavelength analysis will be needed to clearly identify such a population. In this context, we pointed out promising prospects of detection for future generation experiments in the radio band, with particular reference to the Square Kilometre Array.
SIMPLE SPINES OF HOMOTOPY 2 -SPHERES ARE UNIQUE . A locally flatly embedded 2-sphere in a compact 4-manifold X is called a spine if the inclusion map is a homotopy equivalence. A spine is called simple if the complement of the 2-sphere has abelian fundamental group. We prove that if two simple spines represent the same generator of H 2 ( X ) then they are ambiently isotopic. In particular, the theorem applies to simple shake-slicing 2-spheres in knot traces. Introduction In this article, unless otherwise specified, we work in the category of topological manifolds, and embeddings are assumed to be locally flat.Let X be a compact 4-manifold homotopy equivalent to S 2 .A spine of X is an embedded, oriented sphere S ⊆ X such that [S] ∈ H 2 (X) ∼ = Z is a generator.A spine is called simple if π 1 (X \ S) is abelian. It is straightforward to build many interesting compact 4-manifolds homotopy equivalent to S 2 .Indeed, given a knot K ⊆ S 3 and an integer n, the knot trace X n (K), formed by attaching a 2-handle D 2 ×D 2 to the boundary of the 4-ball, with attaching circle K and framing n, is an example of a 4-manifold homotopy equivalent to S 2 .Together with Feller, Miller, Nagel, and Ray, we gave algebraic criteria which hold if and only if X n (K) admits a simple spine [FMN + 21].Our main theorem gives the corresponding uniqueness result.Moreover, our uniqueness result holds not only for knot traces, but for arbitrary homotopy 2-spheres. Theorem 1.1.Let X ≃ S 2 be a compact 4-manifold.Let S 0 and S 1 be simple spines with [S 0 ] = [S 1 ] ∈ H 2 (X) a generator.Then S 0 and S 1 are ambiently isotopic via an isotopy that restricts to the identity on ∂X. To prove Theorem 1.1 we first show, using the methods of modified surgery theory [Kre99], that the spheres S 0 and S 1 have homeomorphic exteriors rel.boundary.This implies there is an orientation preserving homeomorphism of pairs (X, S 0 ) ∼ = (X, S 1 ).We then deduce ambient isotopy by applying work from [OP22] to show that this homeomorphism is isotopic rel.boundary to the identity.In [OP22] we computed the topological mapping class group of compact, simply connected 4-manifolds with nonempty boundary.The relevant part of that computation for the present paper is the following. Theorem 1.2 ([OP22, Corollary C]).Let X be a compact, simply connected 4-manifold such that ∂X is connected and has the rational homology of either S 3 or S 1 × S 2 .Let F : X → X be a homeomorphism that restricts to the identity on ∂X and is such that F * = Id : H 2 (X) → H 2 (X).Then F is topologically isotopic rel.boundary to the identity map of X. Theorem 1.1 can be compared to other topological uniqueness results for surfaces embedded in 4-manifolds.The earliest example is the theorem of Freedman and Quinn [FQ90, Theorem 11.7A], which states that any pair of embedded 2-spheres S 0 , S 1 ⊆ S 4 with π 1 (S 4 \ S i ) ∼ = Z must be isotopic.Lee-Wilczyński [LW90, Theorem 1.2 and Corollary 1.3] and Hambleton-Kreck [HK93, Theorems 4.5 and 4.8] extended this to give conditions implying homotopic, simple embeddings of 2-spheres in arbitrary closed, simply connected, topological 4-manifolds are topologically isotopic.Results of a similar nature for higher genus surfaces were proven by Sunukjian in [Sun14,§7]. 
More recently, the second named author and Conway proved uniqueness results for slice discs in D 4 whose complements have fundamental group Z or Z ⋉ Z[1/2] [CP21a].In [CP21b], this was extended to higher genus surfaces in D 4 whose complements have fundamental group Z.These previous isotopy uniqueness results for embedded surfaces in 4-manifolds with nonempty boundary were restricted to studying the ambient 4-manifold D 4 .The reason is that in D 4 the required isotopy to the identity can be produced by applying the Alexander trick.Theorem 1.2 is a new mechanism for producing isotopies in manifolds with boundary, making it possible for us to follow the proof strategy above when the boundary is nonempty and the 4-manifold is not D 4 .An initial example appeared in [OP22, Theorem F], where we observed that one can apply Theorem 1.2 to extend the results of [CP21b] to give ambient isotopies, under an additional condition. As one might expect, Theorem 1.1 contrasts with the smooth case.An example of this follows from the work of Hayden [Hay20].Let D 0 and D 1 be the exotic Z-slice discs from [Hay20], with common boundary K ⊆ S 3 .Let S 0 and S 1 be the simple spines in the knot trace X 0 (K) obtained from capping off D 0 and D 1 respectively with the core of the added 2-handle.By Theorem 1.1, the 2-spheres S 0 and S 1 are topologically isotopic.However, if they were smoothly isotopic, there would be a diffeomorphism of pairs (X 0 (K), S 0 ) ∼ = (X 0 (K), S 1 ).The results of surgery on S 0 and S 1 would therefore be diffeomorphic.But these surgeries result in D 4 \ νD 0 and D 4 \ νD 1 respectively, and the proof in [Hay20] showed that these 4-manifolds are in fact not diffeomorphic. Organisation.In Section 2 we recall the tools from Kreck's modified surgery that we will need.In Section 3 we begin to study simple 2-sphere spines of homotopy 2-spheres, by analysing the homotopy and spin types of their exteriors.As a consequence we determine the normal 2-type of the exterior.In Section 4 we begin the modified surgery procedure, fixing two simple spines S 0 and S 1 and building 3-connected maps from their exteriors to the Postnikov 2-type, that are moreover compatible with each other on the boundary.In Section 5 we apply modified surgery to show that S 0 and S 1 have homeomorphic exteriors.We complete the proof of Theorem 1.1 in Section 6. when Wh(π) is torsion-free and has trivial involution, these groups are the same (see Proposition 2.5).This is sufficient to show that L s,τ 5 (Z[π]) = L s 5 (Z[π]), for the cyclic groups π with which we work. Elements of modified surgery theory Many of the arguments we make in this article will use Kreck's modified surgery theory [Kre99].We now collect some definitions and results from this theory, for use later on.In this section, all manifolds are compact and oriented. 
A cobordism rel.boundary (W, G 0 , G 1 ) between n-dimensional manifolds with (possibly empty) boundary X 0 and X 1 , with ∂X 0 ∼ = Y ∼ = ∂X 1 , is an (n + 1)-dimensional manifold W , with a decomposition of the boundary into codimension 0 submanifolds Let B be a space with the homotopy type of a CW complex.A map from a manifold For an n-manifold X we will write the stable topological normal bundle via its classifying map ν X : X → BSTOP (see e.g.[FNOP19,§7] for the definition).Let (B, p) consist of a space B with the homotopy type of a CW complex and p : B → BSTOP a fibration.A map ξ : Let (Y, ζ) be a closed (n − 1)-manifold Y with normal B-structure.Given an n-manifold X, a normal B-structure rel.boundary (g, ξ) (with respect to (Y, ζ)) is a B-structure rel.boundary such that ξ is moreover a normal B-structure, i.e. ξ is a lift of the stable normal bundle up to homotopy.Given two n-manifolds X i , with respective B-structures rel.boundary (g i , ξ i ), a normal B-bordism rel.boundary (W, G 0 , G 1 , Ξ) is a B-bordism rel.boundary, such that Ξ is moreover a normal B-structure.In the case that Y = ∅, we call (W, G 0 , G 1 , Ξ) a normal B-bordism and denote the corresponding bordism group of closed n-manifolds with normal B-structure by Ω n (B, p). We record the following lemma for use later on; the proof is straightforward from the definitions and we omit it. Lemma 2.1.Suppose B is a space with the homotopy type of a CW complex and let p : B → BSTOP be a fibration.Let (Y, ζ) be a closed (n − 1)-manifold Y with normal B-structure.For i = 0, 1, suppose that X i is an n-manifold with normal B-structure rel.boundary (g i , ξ i ).Define a closed n-manifold with normal B-structure Then (X, ξ) ∼ 0 ∈ Ω n (B, p) is null-bordant if and only if (X 0 , g 0 , ξ 0 ) and (X 1 , g 1 , ξ 1 ) are normally B-bordant rel.boundary. A map of spaces is an isomorphism for k > m and is injective for k = m. Definition 2.2.A normal 2-type for a manifold X is a pair (B, p), where B is a space with the homotopy type of a CW complex, p : B → BSTOP is a 3-coconnected fibration with connected fibre, and (B, p) is such that there exists a 3-connected normal B-structure ν X : ) is moreover a normal B-structure rel.boundary, with respect to some (Y, ζ), we say (X, g, ν X ) is a normal 2-smoothing rel.boundary. Given a manifold X, the existence of a normal 2-type follows from the theory of Moore-Postnikov decompositions (see [Bau77]).This theory furthermore guarantees that any two normal 2-types for a given X are fibre homotopy equivalent to one another. Definition 2.3.An h-cobordism rel.boundary between X 0 and X 1 is a cobordism rel.boundary (W, G 0 , G 1 ) such that each map G i : X i → W is a homotopy equivalence.An h-cobordism rel.boundary is moreover an s-cobordism rel.boundary if each homotopy equivalence G i has vanishing Whitehead torsion. In classical surgery theory [Wal99], obstructions to doing surgery to improve certain classes of maps to simple homotopy equivalences are situated in the surgery obstruction groups.These are abelian groups L s n (Z[π]), depending on the dimension n, and fundamental group π, of the manifold.A variant of these obstruction groups, denoted L s,τ n (Z[π]), appears in Kreck's modified surgery theory.Kreck [Kre,Lemma 4.5] shows there is an exact sequence ) → Wh(π).Proposition 2.5.For any group π such that Wh(π) is torsion-free and has trivial involution, the inclusion map ) is an isomorphism. 
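The exact sequence from [Kre, Lemma 4.5] referred to here (and labelled (4) later in the paper) is presumably of the form
$$L^{s}_{n}(\mathbb{Z}[\pi]) \longrightarrow L^{s,\tau}_{n}(\mathbb{Z}[\pi]) \longrightarrow \mathrm{Wh}(\pi),$$
exact at the middle term; Proposition 2.5 then asserts that, under its hypotheses, the first map, the inclusion $L^{s}_{n}(\mathbb{Z}[\pi]) \to L^{s,\tau}_{n}(\mathbb{Z}[\pi])$, is an isomorphism.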
Proof.In [Kre, Lemma 4.5], Kreck identifies the image of the map L s,τ n (Z[π]) → Wh(π) from (4) as the subgroup U ⊆ Wh(π), generated by the stable classes of unitary matrices A over Zπ.When the involution : Wh(π) → Wh(π) is trivial, the unitary condition [A] + [A] = 0 implies 2[A] = 0, so this group is generated by 2-torsion elements.But if moreover Wh(π) is torsion-free, we deduce [A] = 0. Hence U = 0 and The following is the main result of modified surgery theory, for the purposes of this paper. It will be important to be able to compute the bordism groups Ω 4 (B, p), to which we now turn. The normal 2-types we will need later on will be of the following general type.Let P be a CW complex with finite 2-skeleton and let Write the canonical principal fibration γ : BTOPSpin → BSTOP and let p := γ • pr 2 : B → BSTOP be projection followed by γ. In this case a normal B-structure on a 4-manifold M consists of a map M → P and a spin structure on the stable normal bundle of M .A spin structure on the stable normal bundle determines and is determined by a spin structure on the stable tangent bundle.As a consequence, we can identify the group Ω 4 (B, p) with the usual spin bordism group Ω TOPSpin 4 (P ).The latter group can be computed using an Atiyah-Hirzebruch spectral sequence: where we recall that the coefficients in the range of interest are Ω TOPSpin s ∼ = Z, Z/2, Z/2, 0, Z for s = 0, 1, 2, 3, 4 respectively. The next proposition, due to Teichner, describes certain differentials in this spectral sequence in terms of Steenrod squares. Properties of simple spine exteriors and the normal 2-types Let X be a compact, oriented 4-manifold such that X ≃ S 2 .Fix once and for all an orientation of S 2 .Suppose S ⊆ X is the image of a locally flat embedding of S 2 that represents a generator of π 2 (X).Write W := X \ νS.Write n ∈ Z for the self-intersection of S in X.The closed tubular neighbourhood νS ⊆ X homeomorphic to D 2 × n S 2 , the D 2 -bundle over S 2 with Euler number n. Hence ∂W = ∂X ⊔ −L where L := ∂νS is homeomorphic to the lens space L(n, 1) (our convention is that L(0, 1) = S 1 × S 2 ).Assume in addition that π 1 (W ) is abelian, i.e. that S is simple. The purpose of this section is to compute a normal 2-type of W . Proof.Since X is simply connected it is orientable, and so H 4 (X) = 0 implies the boundary must be nonempty.Next, 0 = H 3 (X) ∼ = H 1 (X, ∂X) ∼ = Z r−1 , where r is the number of boundary components.So r = 1.□ Lemma 3.2.The manifold W is a Z-homology bordism from ∂X to L and ∂X → W induces a surjection on π 1 . Proof.Consider the Mayer-Vietoris push-out square of singular chain complexes with Z coefficients In the homotopy category of chain complexes every push-out square is also a pull-back square. Since the square is a push-out and a pull-back, the fibre (resp.cofibre) of the top horizontal map is chain equivalent to the fibre (resp.cofibre) of the bottom horizontal map.Since S is a spine, the inclusion νS → X is a homotopy equivalence, and so the lower horizontal arrow in the diagram is a chain equivalence.Therefore the fibre and cofibre of the bottom horizontal map are chain contractible.It follows that the same holds for the top map, so it is also a chain equivalence.Thus L → W is an integral homology equivalence.In particular, we have H * (W, L; Z) = 0.By Poincaré-Lefschetz duality the cohomology is H * (W, ∂X; Z) = 0.By the universal coefficient theorem the homology is H * (W, ∂X; Z) = 0. 
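In index form, the chain of identifications just used reads (a routine unwinding, using that ∂W = ∂X ⊔ −L):
$$H_k(W, L; \mathbb{Z}) = 0 \ \text{for all } k \;\Longrightarrow\; H^{4-k}(W, \partial X; \mathbb{Z}) = 0 \ \text{for all } k \;\Longrightarrow\; H_k(W, \partial X; \mathbb{Z}) = 0 \ \text{for all } k,$$
the first implication by Poincaré–Lefschetz duality and the second by the universal coefficient theorem, since the vanishing of all cohomology groups forces the vanishing of all Hom and Ext terms.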
Consequently ∂X → W is an integral homology equivalence, and so W is indeed a Z-homology bordism. To see that ∂X → W induces a surjection on π 1 , consider that as π 1 (W ) is abelian, the map Proof.The proof is via Whitehead's Theorem, so we must show the inclusion induced maps are isomorphisms on all homotopy groups.Since π 1 (W ) is abelian, the Hurewicz theorem and Lemma 3.2 immediately imply the inclusion induces π 1 (L) ∼ = π 1 (W ).Now write p : W → W and p : L → L for the universal covers.Given the isomorphism on π 1 , and the relative Hurewicz Theorem, it is now sufficient to show that inclusion induces isomorphisms Similarly, as W is a 4-manifold with nonempty boundary, we have H i ( W ) = 0 for i ≥ 4. Thus we focus our attention now on proving homology isomorphism for i = 2, 3. Since, for each component of ∂W , the inclusion into W induces a surjection on π 1 , this implies that ∂ W = p −1 (∂W ) has exactly two connected components.There is thus an exact sequence The surjection The case n ̸ = 0: In this case H 2 ( L) = 0 and H 3 ( L) ∼ = Z so we will be done if we show H 2 ( W ) = 0 and the inclusion induced map H 3 ( L) → H 3 ( W ) is an isomorphism.As n ̸ = 0, we have that W is compact so, by Poincaré-Lefschetz duality and the universal coefficient theorem, we have Using Lemma 3.2 we compute that χ(W ) = 0, and as Euler characteristic is multiplicative under finite covers, we have as well that χ( W Now the long exact sequence of the pair gives an exact sequence in which the second map is the diagonal 1 → (1, 1).We may conclude from this that the map H 3 ( L; Z) → H 3 ( W ; Z) is an isomorphism as required (this is also true for H 3 ( ∂X; Z) → H 3 ( W ; Z), but we do not need this). The case n = 0: In this case, the Universal Coefficient Spectral Sequence for cohomology has ).The r = 0 column vanishes on the E 2 page as H 0 ( W , ∂ W ; Z) = 0.For the r = 1 column, recall that H 1 ( W , ∂ W ; Z) ∼ = Z.The standard cellular chain complex for the universal cover of S 1 ≃ BZ gives a free Z[Z]-module resolution of Z, and hence we compute that Ext s Z for s = 1, and vanishes for s ̸ = 1.Thus the only nonvanishing terms on the r+s = 2 line are at E 1,1 2 and E 2,0 2 , and as the bidegree of the d 2 differential is (−1, 2), the spectral sequence collapses here to yield a short exact sequence 0 ) is a free module by [BF14, Lemma 2.1].Therefore the short exact sequence splits and we have , as localisation is flat.We have already seen that E 1,0 2 = E 0,1 2 = 0, so we compute that H 1 (W, ∂W ; Z[Z]) = 0. Thus H 3 (W ; Z[Z]) = 0 by Poincaré-Lefschetz duality.We also have But as Euler characteristic can be computed with any field coefficients, this implies . By Lemma 3.2 and the fact that L is a closed, orientable 3-manifold, this number is 0, which proves the claim. Combining the fact that H Hur Hur where the downwards maps are the Hurewicz maps.By the Hurewicz Theorem, these downwards maps are both surjective, and are thus both surjections from the infinite cyclic group to itself.Hence they are both isomorphisms.By the naturality of the Hurewicz map, the square commutes and so π 2 (i) is an isomorphism as claimed. In other words, H 2 ( L; Z) → H 2 ( W ; Z) is an isomorphism as required. Along the way we have shown H 3 ( W ; Z) = H 3 (W ; Z[Z]) = 0, and as L ∼ = S 1 × S 2 we have H 3 ( L; Z) = 0. Hence the proof is complete.□ We now have enough information to calculate a normal 2-type of W . 
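Reconstructed from the way it is used in Propositions 3.5–3.7 and in Section 5, the definition that follows presumably reads
$$B(n) := \begin{cases} B(\mathbb{Z}/n) \times BTOPSpin, & n \neq 0,\\ S^1 \times \mathbb{CP}^{\infty} \times BTOPSpin, & n = 0,\end{cases} \qquad p_n := \gamma \circ \mathrm{pr}_{2} : B(n) \longrightarrow BSTOP,$$
where $\mathrm{pr}_{2}$ denotes the projection onto the $BTOPSpin$ factor and $\gamma : BTOPSpin \to BSTOP$ is the canonical map.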
Definition 3.4.For n ∈ Z, define and define a fibration where pr 2 is projection to the second factor and γ : BTOPSpin → BSTOP is the canonical map. Proposition 3.5.Let n ̸ = 0 be an integer.If M is a compact, oriented, spin 4-manifold with and is therefore 3-coconnected as required. Let c : M → B(Z/n) denote the classifying map for the universal cover of M and let s : M → BTOPSpin denote a choice of spin structure.We claim c × s is a normal 2-smoothing.Certainly BTOPSpin) are clearly isomorphisms for k = 1, 2. We note as well that π 3 (B(Z/n)×BTOPSpin) = 0, and it follows that c × s is 3-connected as required.□ Proposition 3.6.If X is a compact, oriented, spin 4-manifold with π 1 (M ) ∼ = Z and π 2 (M ) ∼ = Z, then X has normal 2-type (B(0), p 0 ). Since H 3 (S 1 ; Z) = 0, the k-invariant of M is trivial and therefore the Postnikov 2-type is a product the 3-connected map associated to the Postnikov 2-type of M and let s : M → BTOPSpin denote the choice of spin structure.We claim c × s is a normal 2-smoothing.Certainly We note as well that π 3 ((S 1 × CP ∞ ) × BTOPSpin) = 0, and it follows that c × s is 3-connected.□ We have obtained the normal 2-type of W . Proposition 3.7.Let X be a compact, oriented 4-manifold, and suppose that X ≃ S 2 .Suppose S ⊆ X is the image of a locally flat embedding of S 2 that represents a generator of π 2 (X). Write W := X \ νS, and write n ∈ Z for the normal Euler number of S in X. Assume that S is simple, i.e. that π 1 (W ) is abelian.Then (B(n), p n ) from Definition 3.4 is a normal 2-type of W . Proof.By Lemma 3.3, W is homotopy equivalent to L(n, 1).As lens spaces are spin, and Steifel-Whitney classes of stable tangent bundles are homotopy invariants, we obtain that W is spin.When n ̸ = 0, Lemma 3.3 shows that W satisfies the hypotheses of Proposition 3.5, and so the result follows.When n = 0, Lemma 3.3 shows that W satisfies the hypotheses of Proposition 3.6, and so the result follows.□ Boundary-compatible maps to the Postnikov 2-type As before, let X be a compact, oriented 4-manifold with X ≃ S 2 .Fix an orientation on S 2 .Suppose for i = 0, 1 that S i ⊆ X are simple spines, which are the images of maps representing the same generator of π 2 (X).Write W i := X \ νS i and L i := ∂νS i .Write n for the algebraic self-intersection of S 0 (and therefore also S 1 ) in X. In this section, we begin the modified surgery programme for showing that S 0 and S 1 are ambiently isotopic.Write P (n) for the Postnikov 2-type of W i ; that is, the space such that B(n) = P (n) × BTOPSpin. We will prove that, given some map ∂X ⊔ L(n, 1) → P (n), it is possible to extend it to a 3-connected map W i → P (n).This is the first step to producing a full normal 2-smoothing rel.boundary for W i , which we will need later.It follows from Proposition 3.7 that the Postnikov 2-type of X is given by P (n) = B(Z/n) when n ̸ = 0 and P (0) = BZ × CP ∞ , so when n ̸ = 0 a 3-connected map is the same as a π 1 -isomorphism, and for n = 0 it is the same as an isomorphism on π 1 and on π 2 .To precisely phrase our main result of the section, we need a definition. − → L(n, 1) be orientation preserving homeomorphisms.For i = 0, 1 we define Using this definition, the following lemma will achieve the aim described above. Recall L i = ∂νS i .It will be important in later sections of the paper that the parametrisations g i : ∂νS i ∼ = L(n, 1) extend to orientation preserving homeomorphisms G i : Hence we also build this into the lemma. 
Lemma 4.2.Let n ∈ Z.For i = 0, 1, there are choices of orientation preserving homeomorphisms G i : νS i ∼ = D 2 × n S 2 , that preserve the 0-section, such that for the boundary parameterisations g i : L i ∼ = L(n, 1) obtained by restricting G i , there exists a map ℓ : W (g 0 , g 1 ) → B(Z/n) with the property that ℓ restricted to W i induces an isomorphism on π 1 for i = 0, 1. In the case that n = 0, the disc bundle structures, and hence the g 0 , g 1 , may be furthermore chosen such that there exists a map η : W (g 0 , g 1 ) → CP ∞ with the property that η restricted to W i induces an isomorphism on π 2 for i = 0, 1. Remark 4.3.Note that for n = ±1 we have L(±1, 1) ∼ = S 3 , and this lemma holds trivially.Nevertheless the proof goes through in this case without modification, so we shall not separate this case.In the case n = 0 recall that L(0, 1) ∼ = S 1 × S 2 . To prove Lemma 4.2, we will use the following technical result. Lemma 4.4.For i = 0, 1 let g i : L i ∼ = L(n, 1) be any two choices of orientation preserving boundary parametrisation.Consider the diagram (5) where ψ(g 0 , g 1 ) is defined to make the diagram commute.Then the map ψ(g 0 , g 1 ) is multiplication by ±1. We defer the proof of this lemma until the very end of this section.To take care of the sign ambiguity in ψ(g 0 , g 1 ) we will use the following self-homeomorphism of L(n, 1).Definition 4.6.Writing D 2 × D 2 ⊆ C 2 , we may write D 2 × n S 2 , the 2-disc bundle over S 2 with Euler number n, as U 1 ∪ U 2 where U i ∼ = D 2 × D 2 and we glue together along D 2 × S 1 ⊆ U i using φ(u, v) = (uv n , v).Note this restricts on the boundary of the total space of the disc bundle to a description of the lens space L(n, 1).Writing S 1 × D 2 ⊆ C 2 , the lens space L(n, 1) is the identification space V 1 ∪ V 2 where V i ∼ = S 1 × D 2 and we glue with the same formula along Note that τ that restricts to a homeomorphism τ : L(n, 1) → L(n, 1), using the same formula for (u, v) ∈ V i . Proof.The map τ is clearly orientation preserving as it comes from the composition of two complex conjugations. Under τ , each hemisphere of this 2-sphere is reflected by the complex conjugation (0, v) → (0, v).Thus τ is orientation reversing on this 2-sphere and the automorphism on H 2 (D 2 × n S 2 ) is multiplication by −1. A generator of H 1 (L(n, 1)) is given by the oriented submanifold S 1 × {0} ⊆ V 1 .Under τ , this circle is sent to itself by complex conjugation in the first factor, and so the orientation is switched.Thus the automorphism on H 1 (L(n, 1)) is multiplication by −1.□ Proof of Lemma 4.2 assuming Lemma 4.4.We choose orientation preserving homeomorphisms G i : νS i ∼ = D 2 × n S 2 that preserve the 0-section, with corresponding boundary parameterisations g i : L i ∼ = L(n, 1).By Lemma 4.4, the map in Diagram (5) is ψ(g 0 , g 1 ) = ± Id.By Lemma 4.7 the self-homeomorphism τ of D 2 × n S 2 is such that the restriction to the boundary L(n, 1) induces multiplication by −1 on H 1 (L(n, 1)).By postcomposing g 0 with τ , if necessary, we can and will assume that in fact ψ(g 0 , g 1 ) = Id. 
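For concreteness, the formula for τ in the coordinates of the charts U_i (respectively V_i) above is presumably simultaneous complex conjugation,
$$\tau(u, v) = (\bar{u}, \bar{v}),$$
which is the composition of two complex conjugations invoked in the proof of Lemma 4.7 and is compatible with the gluing, since $\varphi(\bar{u}, \bar{v}) = (\bar{u}\,\bar{v}^{\,n}, \bar{v})$ is the coordinate-wise conjugate of $\varphi(u, v) = (u v^{n}, v)$.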
Choose a map α : L(n, 1) → B(Z/n) inducing an isomorphism π 1 (L(n, 1)) ∼ = Z/n.For i = 0, 1, choose a map β i : ∂X → B(Z/n) inducing the map For each of i = 0, 1, we may now extend * on π 1 .By commutativity of Diagram (5), and the fact that ψ(g 0 , g 1 ) is the identity, the restrictions ℓ 0 | ∂W0 and ℓ 1 | ∂W1 agree under the glueing map Id ∂X ⊔(g −1 1 • g 0 ), but only up to homotopy.Modify ℓ 0 by a homotopy in a boundary collar of ∂W 0 , to arrange that ℓ 0 | ∂W0 and ℓ 1 | ∂W1 agree under the glueing map.Now, together, ℓ 0 and ℓ 1 define a map ℓ : W (g 0 , g 1 ) → B(Z/n) with the desired properties.When n ̸ = 0, this completes the proof of the lemma.Now assume n = 0 and consider the part of the lemma that remains to be shown.We must produce a map η : W (g 0 , g 1 ) → CP ∞ = K(Z, 2) so that its restriction to W i is an isomorphism on π 2 , for i = 0, 1.But the method is entirely analogous to that in the previous two paragraphs, noting only that we have already shown the map ψ ′ (g 0 , g 1 ) of diagram (8) is the identity map.We omit further details.□ Remark 4.9.Later, in Lemma 5.3, we will modify the map η just constructed.For now, it is merely important to know that one possible map η exists. The remainder of this section is devoted to the proof of Lemma 4.4.We begin by considering how different choices of boundary parameterisations g 0 and g 1 affect the map ψ in Diagram (5).Suppose g i : L i ∼ = L(n, 1) and g ′ i : L i ∼ = L(n, 1) are choices of homeomorphism, for i = 0, 1.Then g ′ i = (g ′ i g −1 i ) • g i .In other words, g i and g ′ i differ by a self-homeomorphism of L(n, 1).So to prove Lemma 4.4, it will be important for us to understand the possible automorphisms of H 1 (L(n, 1)) induced by self-homeomorphisms of L(n, 1).Bonahon [Bon83] computed the mapping class groups of lens spaces (see also [HR85]).We use this to determine the possible automorphisms of H 1 (L(n, 1)) induced by self-homeomorphisms of L(n, 1). Proof.As g * is an automorphism of Z/n, it is given by multiplication by some unit µ ∈ (Z/n) × .When n = 0 the only units are µ = ±1, when n = 1 the only unit is µ = 1(= 0), and when n = 2 the only unit is µ = 1, so in these cases the statement is clear.For n > 2, the mapping class group of L(n, 1) is generated by τ [Bon83, Théorème 3(c)] so g is isotopic to either τ or the identity map, and thus by Lemma 4.7 the only possibilities are µ = ±1.□ Corollary 4.11.Up to a sign, the homomorphism ψ(g 0 , g 1 ) : Z/n → Z/n from Diagram (5) is independent of the choices of g 0 and g 1 . To prove Lemma 4.4, we will show that there exists some choice of parameterisations g 0 , g 1 such that ψ(g 0 , g 1 ) = ±1.Then Corollary 4.11 shows any choice will return ψ = ±1. Remark 4.12.We note that the map ψ(g 0 , g 1 ) must be multiplication by a unit in Z/n, thus for any n where (Z/n) × = {1, −1}, the objective just outlined is trivially achieved.These are the cases |n| = 0, 1, 2, 3, 4, 6.However, the proof below is the same in all cases, so we proceed in generality.We build up some technology.Recall that we fixed an orientation of S 2 .Let f : S 2 ↬ X be a generic immersion that is also a homotopy equivalence.In a standard abuse of notation, henceforth we will conflate f with the immersed submanifold given by its image f (S 2 ).A closed regular neighbourhood ν(f ) is homeomorphic to the effect of performing some number of selfplumbings, k say, of the Euler number n disc bundle over S 2 .A standard Kirby diagram for this plumbed disc bundle is given in Figure 1 (see e.g.[GS99,p. 
202] or [Fre82, Diagram 2.2]), where the clasps C i are one of the two possibilities depicted (it will not be relevant for us which ones).We identify the tubular neighbourhood with this standard description. A loop γ in X is called a meridian to f if it is isotopic in X \ f to a meridian to the 2-handle curve in the Kirby diagram.Note that as both X and S 2 are oriented, there is a preferred orientation on a meridian to f .Namely, pick an embedded disc bounded by the meridian and intersecting the 2-handle geometrically once.Orient the disc so that this is a positive intersection and then restrict this orientation on the disc its boundary.Lemma 4.13.Assume that f has k double points.Then The summand Z k of both H 1 (ν(f )) and H 1 (∂ν(f )) is generated by a collection of meridians to the dotted circles in Figure 1.The summand Z/n ⊆ H 1 (∂ν(f )) is generated by a meridian of f . Proof.From Figure 1 we can read off the claimed homology groups for ν(f ); it is homotopy equivalent to S 2 ∨ k S 1 .One can deduce the homology groups of ∂ν(f ) from the long exact sequence of the pair (ν(f ), ∂ν(f )), together with the fact that H 2 (ν(f )) → H 2 (ν(f ), ∂ν(f )) can be identified with Z n − → Z by viewing it as the adjoint to the intersection pairing.Alternatively, by switching the dots to 0's in Figure 1, we obtain a link surgery diagram for ∂ν(f ), from which we can read off the claimed homology groups for this boundary manifold.The claims about generators are clear from the picture.□ Remark 4.14.To each double point of f , we can assign a double point loop on the image f (S 2 ).This is a loop on the surface that leaves the double point on one sheet of the immersion and returns on the other, missing all other double points.The meridians to the dotted circles in Figure 1 are isotopic in ν(f ) to double point loops on the surface. Proposition 4.15.We have that that H 1 (X \ f ) ∼ = Z/n, generated by a meridian of f , and moreover the inclusion induced map j * : Proof.For the first claim, we observe that since X is simply connected, there is an exact sequence Using Poincaré-Lefschetz duality and the universal coefficient theorem, we identify H 2 (X, ∂X) ∼ = H 2 (X) * .Under this identification ι * becomes the adjoint of λ X , and hence the sequence is isomorphic to n From this, we deduce that H 1 (∂X) ∼ = Z/n (and H 2 (∂X) = 0, although we will not need this). 
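Spelled out, the sequence used at the start of this proof is presumably the long exact sequence of the pair (X, ∂X),
$$H_2(\partial X) \longrightarrow H_2(X) \xrightarrow{\ \iota_*\ } H_2(X, \partial X) \longrightarrow H_1(\partial X) \longrightarrow H_1(X) = 0,$$
which, under the identifications $H_2(X) \cong \mathbb{Z} \cong H_2(X, \partial X)$ with $\iota_*$ the adjoint of $\lambda_X = (n)$, becomes $\mathbb{Z} \xrightarrow{\ n\ } \mathbb{Z} \to H_1(\partial X) \to 0$ and yields $H_1(\partial X) \cong \mathbb{Z}/n$.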
We now move on to computing that Using the groups computed in Lemma 4.13, the Mayer-Vietoris sequence for X = W ∪ν(f ) now shows that the inclusion -induced maps and that the inclusion-induced map is an isomorphism From the classification of finitely generated abelian groups, we deduce that H 1 (W ) ∼ = Z/n.The Z/n summand of H 1 (Y ) is generated by a meridian of f , by Lemma 4.13.The order of this meridian element is preserved under the map (16), as this is an isomorphism, in particular implying it maps to a generator of the torsion summand H 1 (W ).Moreover, taking these three facts above, combined with the long exact sequence of the pair (W, Y ), we may compute that For the final claim, consider that part of the long exact sequence for the pair (W, ∂X) is Using Poincaré-Lefschetz duality and the universal coefficient theorem, we identify But this group is 0 by the computation above.This shows j * is a surjective map Z/n → Z/n, implying it is an isomorphism as required.□ We have shown that j * : H 1 (∂X) ) is an isomorphism, but we wish to be more careful about keeping track of which isomorphism it is.Fix a generator γ ∈ H 1 (∂X) ∼ = Z/n.Let k : ∂ν(f ) → X \ ν(f ) be the inclusion map.Definition 4.17.A meridional marking for f is a homology class δ ∈ H 1 (∂ν(f )) that contains a representative given by a meridian to f with the preferred orientation.The meridionally marked immersion (f, δ) is said to be consistent (with respect to γ) We consider the behaviour of consistency under finger moves on the immersion.Proposition 4.18.Suppose that f ′ : S 2 ↬ X is obtained from f by a self finger move.Let δ ′ ∈ H 1 (∂ν(f ′ )) be a meridional marking for f ′ obtained by taking a representative meridian for δ disjoint from a neighbourhood of the finger move arc.Then (f ′ , δ ′ ) is consistent if and only if (f, δ) is consistent. Proof.Suppose (f, δ) is consistent.Then there exists a surface Σ ⊆ X \ ν(f ) witnessing that j * (γ) and k * (δ) are homologous.We may assume this surface is disjoint from a neighbourhood of the finger move arc.Write the inclusion k ′ : ∂ν(f ′ ) → X \ ν(f ′ ).Then Σ witnesses that j * (γ) and k ′ * (δ ′ ) are homologous in X \ ν(f ′ ), so (f ′ , δ ′ ) is consistent.Conversely, let Σ be a surface in X \ ν(f ′ ) witnessing that j * (γ) and k ′ * (δ ′ ) are homologous.Choose a Whitney disc V reversing the finger move.We can assume that meridians in the markings δ and δ ′ are disjoint from the Whitney arcs.It may be that V intersects Σ.If this is the case, perform finger moves on Σ, guided by arcs in V , to push intersections between V and Σ off V ; see Figure 2, parts (a) and (b).This is the procedure called pushing down [FQ90, §2.5].Each application of pushing down creates a pair of intersection points between Σ and f .For The surface Σ after being pushed down.Also pictured, a choice of arc A on f between the new intersection points, and missing the Whitney arcs on f .each such pair, choose an arc on f joining the intersections, which is disjoint from the Whitney arcs on f ; see Figure 2(b).Using the normal directions to f , thicken the arc to a 3-dimensional 1-handle, and then use the boundary of this 1-handle to tube Σ to itself; see Figure 2(c).This gives a new Σ, disjoint from the Whitney disc, disjoint from ν(f ), and witnessing a homology between j * (γ) and k * (δ) in X \ ν(f ).Therefore (f, δ) is consistent.□ We can finally prove that the map ψ : Z/n → Z/n from Diagram (5) is always ±1. 
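The proof below uses the consistency condition of Definition 4.17; made explicit, the condition is presumably
$$(f, \delta) \ \text{is consistent with respect to } \gamma \iff j_*(\gamma) = k_*(\delta) \in H_1\big(X \setminus \nu(f)\big),$$
that is, the chosen meridional marking of f and the fixed generator of $H_1(\partial X)$ agree in the homology of the exterior, which is exactly the condition witnessed by the surfaces Σ appearing in the proof of Proposition 4.18.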
Proof of Lemma 4.4.Write f 0 : S 2 → X and f 1 : S 2 → X for locally flat embeddings with images S 0 and S 1 respectively.As both f 0 and f 1 are embedded, homotopic, and have the same normal Euler number, they are regularly homotopic; see e.g.[KPRT22, Theorem 2.27]. A regular homotopy consists of a finite sequence of finger and Whitney moves.Choose a meridional marking δ 0 for f 0 , represented by an oriented meridian µ 0 ⊆ ∂ν(f 0 ) that is disjoint from neighbourhoods of all finger move arcs and Whitney discs in the finite sequence.The meridian µ 0 thus survives the regular homotopy and becomes a meridian µ 1 of f 1 .We denote by δ 1 the corresponding meridional marking for f 1 .Choose a generator γ ∈ H 1 (∂X) to arrange that (f 0 , δ 0 ) is consistent.By Proposition 4.18, (f 1 , δ 1 ) is also consistent. Since there exists a choice of g 0 , g 1 such that ψ(g 0 , g 1 ) = ±1, Corollary 4.11 now shows that for any choice, we must have ψ = ±1.□ Homeomorphisms between 2-sphere exteriors Recall W i := X \ νS i and L i := ∂νS i .We now prove that there is a homeomorphism between the 2-sphere exteriors W 0 and W 1 . Proposition 5.1.Let X be a compact, oriented 4-manifold and suppose that X ≃ S 2 .Suppose for i = 0, 1 that S i ⊆ X are images of locally flat embeddings of S 2 that both represent a given generator of π 2 (X).Write W i := X \ νS i and assume π 1 (W i ) is abelian.Then there is a homeomorphism F : W 0 ∼ = W 1 that restricts to the identity map on ∂X and on ∂νS i ∼ = L(n, 1), the latter for some choices of boundary parameterisations. We will build up to the proof of Proposition 5.1 with some lemmas.Recall, given choices of boundary parametrisations g i : − → L(n, 1), we defined W (g 0 , g 1 ) = W 0 ∪ −W 1 , glued using the g i on the L i boundary components and by the identity on ∂X; see Definition 4.1. Lemma 5.2.Under the hypotheses of Proposition 5.1, and for any two choices of 2-disc bundle structure on the tubular neighbourhoods G i : νS i ∼ = D 2 × n S 2 , that restrict to boundary parameterisations g i : L i ∼ = L(n, 1), there is a spin structure on W (g 0 , g 1 ). Proof.When n is odd, we compute H 1 (W i ; Z/2) = 0, so that W i has a unique spin structure s i .Similarly, ∂X and L i have unique spin structures because W i is a Z-homology cobordism.Thus the restrictions of s 0 and s 1 to ∂X agree, and so do the restrictions of s 0 and s 1 to L i .This implies the chosen spin structures are compatible with the boundary glueing maps defining W (g 0 , g 1 ). When n is even, recall that X ≃ S 2 is spin and has a unique spin structure.For i = 0, 1, endow W i with the spin structure s i restricted from the spin structure on X.On L i , the restricted spin structure ∂s i is one of two possible spin structures on L i .But exactly one of the spin structures on L(n, 1) extends to D 2 × n S 2 .As the boundary parametrisation L i ∼ = L(n, 1) extends to νS i ∼ = D 2 × n S 2 , we see that for both i = 0, 1, the spin structure ∂s i • g −1 i is this bounding spin structure on L(n, 1).Thus the spin structures ∂s 0 and ∂s 1 agree on L 0 and L 1 under the glueing map defining W (g 0 , g 1 ).□ Lemma 5.3.Suppose the hypotheses of Proposition 5.1, and assume furthermore that n = 0. We compute the signature of W .As n = 0, the original manifold X has vanishing intersection form and thus signature 0. Similarly νS i has signature 0. Since X = W i ∪ νS i , Novikov additivity now shows W i has signature 0. A further application of Novikov additivity implies W has signature 0. 
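Explicitly, the additivity computation just invoked reads
$$\sigma(W_i) = \sigma(X) - \sigma(\nu S_i) = 0 - 0 = 0, \qquad \sigma(W) = \sigma(W_0) + \sigma(-W_1) = \sigma(W_0) - \sigma(W_1) = 0,$$
using that for n = 0 both X and νS_i have vanishing intersection form and hence vanishing signature.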
We wish to obtain specific generators for H 2 (W ).For this, consider the long exact sequence where α and δ are both ( 1 1 1 1 ).Write L = coker(α) and J = ker(δ), which we note are both free of rank 1.There is then an exact sequence From this we deduce that H 2 (W ) ∼ = Z ⊕ Z. Now L is generated by the class of a generator of H 2 (W 0 ), which is the image of a generator of H 2 (S 1 × S 2 ), say. Choose a loop λ ⊆ ∂X generating H 1 (∂X; Z).For each of i = 0, 1 there exists a properly embedded annulus A i ⊆ W i with λ = A i ∩ ∂X and g −1 i (S 1 × {pt}) = A i ∩ L i .Fix choices of A i .Glueing these annuli together, we obtain an embedded torus T ⊆ W such that γ([T ]) is a generator for J in sequence (4).Write S ⊆ W for the 2-sphere {pt} × S 2 ⊆ S 1 × S 2 ⊆ W . Then the intersection form for W is where a ∈ 2Z is some unknown integer (which is necessarily even because W is spin). Consider for The Poincaré dual to [S] ∈ H 2 (W ; Z) is some class y ∈ H 2 (W ; Z), and as an element of Hom Z (H 2 (W ; Z); Z) is given by the left column of the intersection form λ X,Z .In particular, this map evaluates to 0 on Because y maps vertically to 0, the restrictions of G and η to W i are homotopic, for each i = 0, 1.In particular, G restricted to The differentials d 2 : E 5,0 2 → E 3,1 2 and d 2 : E 4,1 2 → E 2,2 2 are given by Proposition 2.7.Moreover, the former is known to be the surjective map Z/n → Z/2, and the latter is known to be an isomorphism Z/2 → Z/2 [KPT21, §6.2.1].So on the E 3 page, the only nonzero term when r + s = 4 is E 0,4 3 , which is isomorphic to Z since E 2,3 2 = 0.Moreover, all the differentials d m , for m ≥ 3, with codomain E 0,4 m , have finite groups in their domains, and therefore inductively we see that they map trivially to E 0,4 r ∼ = Z.Thus this term survives to the E ∞ page, and we To compute the signature of W consider the section of Mayer-Vietoris sequence implies that H 2 (W ) is Z-torsion.Hence σ(W ) = 0 and so (W, ν W0 ∪ ν W1 ) vanishes in the group Ω 4 (B(n), p n ).By Lemma 2.1, this implies that (W 0 , ν W0 ) is cobordant to (W 1 , ν W1 ) rel. boundary.By Theorem 2.6, such a bordism determines a surgery obstruction in L s,τ 5 (Z[Z/n]), which vanishes only if (W 0 , ν W0 ) is s-cobordant to (W 1 , ν W1 ) rel. boundary. We claim that L s,τ 5 (Z[Z/n]) = 0.For this, by [Bak76, Theorem 7] the simple Wall group is trivial ), which will follow if the hypotheses of Proposition 2.5 are satisfied.These hypotheses hold: firstly, Oliver [Oli88, Theorem 5.6] showed that Wh(Z/n) is torsion-free; secondly, the involution on Wh(π) is trivial [Bas74, Proposition 4.2] (see also [Bak77]) for any finite abelian group, so in particular for π ∼ = Z/n.This completes the proof of the claim. 
As the value group for the surgery obstruction vanishes L s,τ 5 (Z[Z/n]) = 0, the desired scobordism exists.As Z/n is a good group, the Freedman-Quinn s-cobordism theorem [FQ90, Theorem 7.1A] implies there exists a homeomorphism F : W 0 ∼ = W 1 restricting to the identity on ∂X and to g −1 1 • g 0 : L 0 → L 1 on this boundary component.□ Proof of Proposition 5.1, assuming n = 0.By Proposition 3.7, the normal 2-type of W i is B(0) = BTOPSpin ×S 1 × CP ∞ → BSTOP.A normal 2-smoothing for W i consists of a choice of spin structure s i : W i → BTOPSpin, together with a choice of map ℓ i : W i → S 1 that induces an isomorphism on π 1 , and a choice of map h i : W i → CP ∞ that induces an isomorphism on π 2 .By Lemmas 4.2 and 5.2, there exist choices of boundary parametrisation g i : L i ∼ = S 1 × S 2 and normal 2-smoothings ν Wi = s i × ℓ i × h i such that the normal 2-smoothing The obstruction in E 4,0 3 ∼ = Z ∼ = Ω TOPSpin 4 determined by W is given by the signature of W (divided by 8), which vanishes by Lemma 5.3.The obstruction in E 4,0 ∞ ⊆ Ω TOPSpin 4 (S 1 × CP ∞ ) determined by W is given by η * ([W ]) ∈ H 4 (CP ∞ ; Z).Using Lemma 5.3, we assumed that the map η is such that this obstruction vanishes.In addition, during the proof of Lemma 5.3 we showed that one can arrange for η * ([W ]) to take any even value.It follows that E 4,0 ∞ ⊆ E 4,0 2 ∼ = Z is the subgroup 2Z, and so in fact we must have d 4,0 3 = 0 and ker d 4,0 3 = E 4,0 3 ∼ = 2Z.We conclude that Ω TOPSpin 4 (S 1 × CP ∞ ) ∼ = Z ⊕ 2Z, and that (W, s × λ × η) is the trivial element of this group. Hence (W, s × λ × η) vanishes in the group Ω 4 (B(n), p n ).By Lemma 2.1, this implies that (W 0 , ν W0 ) is cobordant to (W 1 , ν W1 ) rel. boundary.Choose such a bordism (Z, Ξ).By Theorem 2.6 this determines a surgery obstruction θ(Z, Ξ) ∈ L s,τ 5 (Z[Z]), and if this obstruction vanishes then (W 0 , ν W0 ) is s-cobordant to (W 1 , ν W1 ) rel. boundary.As Wh(Z) = 0, the exact sequence (4) shows that the obstruction in fact lies in the subgroup L s 5 (Z[Z]) ⊆ L s,τ 5 (Z[Z]).This obstruction group is well-known to be L s 5 (Z[Z]) ∼ = L h 4 (Z) ∼ = 8Z, given by the signature of a closed codimension 1 submanifold that meets an embedded loop γ ⊆ Z algebraically once, where γ ⊆ Z is a loop such that Ξ(γ) generates π 1 (B(0)) ∼ = Z.This signature may be nonzero.To kill the signature, we take the internal S 1 -sum between Z and copies of S 1 ×E 8 , or S 1 ×−E 8 as appropriate, identifying copies of γ ⊆ Z in the former with representatives of the S 1 factors in the latter.After modifying Z in this way, the obstruction in L s 5 (Z[Z]) vanishes and so the desired s-cobordism exists by Theorem 2.6.As Z is a good group, the Freedman-Quinn scobordism theorem [FQ90, Theorem 7.1A] implies there exists a homeomorphism F : W 0 ∼ = W 1 restricting to the identity on ∂X and to g −1 1 • g 0 : L 0 → L 1 on this boundary component.□ Proof of Theorem 1.1 We now record two short lemmas before moving on to the proof of Theorem 1.1. Lemma 6.1.Let X ≃ S 2 be a compact 4-manifold with intersection form (n), where n ∈ Z. Suppose F : X → X is an orientation preserving homeomorphism, restricting to the identity on ∂X, and such that the map F * : H 2 (X) → H 2 (X) is multiplication by −1.Then n ∈ {±1, ±2}. Proof.By the universal coefficient theorem we have F * : H 2 (X) → H 2 (X) is also multiplication by −1.Consider the commuting square with downwards maps given by F * and horizontal maps induced by inclusion H 2 (X) H 2 (∂X). 
The vertical maps are given by F *: multiplication by −1 on H 2 (X), and the identity on H 2 (∂X), since F restricts to the identity on ∂X. It is straightforward to identify the horizontal maps with the quotient map Z → Z/n. The commutativity of the square implies 1 ≡ −1 mod n, which holds if and only if n ∈ {±1, ±2}. □ Lemma 6.2. Let X ≃ S 2 be a compact 4-manifold. Let S ⊆ X be a spine with self-intersection n, where n ∈ {±1, ±2}. Then there is an orientation preserving homeomorphism F : X → X, fixing ∂X pointwise, and such that F | S : S → S is an orientation reversing homeomorphism. Proof. Write (D 2 × n S 2 , L(n, 1)) for the Euler number n disc bundle over S 2 . Choose an orientation preserving homeomorphism G : ν(S) ∼ = D 2 × n S 2 that preserves the 0-section. Apply the orientation preserving map τ from Definition 4.6 to the disc bundle D 2 × n S 2 . When n = ±1,
Figure 1. A Kirby diagram for the D 2 -bundle over S 2 with Euler number n, and with k self-plumbings.
Figure 2. Pushing intersections between V and Σ down, then tubing Σ to itself, guided by the arc A ⊆ f.
[S]. As [S] ∈ H 2 (W i ; Z) generates, and the diagram commutes, we see that y maps to 0 under the vertical morphism. Write u[S] + v[T] for the Poincaré dual class to η * (x). Let b := a/2 and let G : W → CP ∞ be such that G * (x) = η * (x) − (u + bv)y.
Transient protein-protein interface prediction: datasets, features, algorithms, and the RAD-T predictor Background Transient protein-protein interactions (PPIs), which underlie most biological processes, are a prime target for therapeutic development. Immense progress has been made towards computational prediction of PPIs using methods such as protein docking and sequence analysis. However, docking generally requires high-resolution structures of both binding partners, and sequence analysis requires that a significant number of recurrent patterns exist for the identification of a potential binding site. Researchers have turned to machine learning to overcome some of the other methods' restrictions by generalising interface sites with sets of descriptive features. Best practices for dataset generation, features, and learning algorithms have not yet been identified or agreed upon, and an analysis of the overall efficacy of machine learning based PPI predictors is due, in order to highlight potential areas for improvement. Results The presence of unknown interaction sites in the testing set, a result of limited knowledge about protein interactions, dramatically reduces prediction accuracy. Labelling the data more accurately by enforcing higher interface site rates per domain resulted in an average 44% improvement across multiple machine learning algorithms. A set of 10 biologically unrelated proteins that were consistently predicted with high accuracy emerged through our analysis. We identify seven features with the most predictive power over multiple datasets and machine learning algorithms. Through our analysis, we created a new predictor, RAD-T, that outperforms existing non-structurally specializing machine learning protein interface predictors, with an average 59% increase in MCC score on a dataset with a high number of interactions. Conclusion Current methods of evaluating machine-learning based PPI predictors tend to undervalue their performance, which may be artificially decreased by the presence of unidentified interaction sites. Changes to predictors' training sets will be integral to the future progress of interface prediction by machine learning methods. We reveal the need for a larger test set of well-studied proteins, or for domain-specific scoring algorithms, to compensate for poor interaction site identification on proteins in general. Background Protein-Protein Interactions (PPIs) occur when two or more proteins bind together, triggering the proteins' biological activities. With alterations in PPIs being a contributing factor in much of human disease, both academia and industry pursue further understanding of these sites. PPIs can be divided broadly into two types: obligate interactions, where the constituents are not stable structures under physiological conditions unless they are in a complex; and transient interactions, where binding partners may dissociate from each other and exist as stable entities in the unbound state [1]. Prediction of obligate interfaces is pharmacologically less interesting, since the formation of such interfaces does not typically contribute to signalling cascades. In contrast, the binding of transiently interacting partners almost always leads to important cellular signalling events [1].
PPIs can be viewed at multiple levels: the structural level, in which the atomic details of a molecular interaction are accurately known, and at the cellular network level, which provides a broader overview of the functional relationships and downstream effects of interactions between the proteins in the cell. Protein interaction networks are useful sources of biological information for numerous purposes, including the provision of putative interactions between domains as inferred from known PPIs, such as those derived from two-hybrid screening [2]. While protein networks can provide a more holistic understanding of the processes within the cell, pharmaceutical development relies on a comprehensive understanding of the intricacies of a given interaction, so as to chemically manipulate its function in a precise manner. As such, PPI prediction places emphasis on the more fine-grained, structural approach to elucidating the mechanism of pharmacologically relevant interactions. Statistical models for understanding and predicting interaction sites largely began with Jones and Thornton in 1997 [3] with their systematic characterisation of PPIs; following, a number of groups have added and refined their initial characterizations, observing PPI sites to be highly conserved, hydrophobic, planar, and protruding [4][5][6][7][8][9]. As no single feature is sufficient for accurate identification of a PPI site, methods such as linear regression [10] or scoring algorithms modelled after empirical energy functions [11] were employed in early predictors. Although their conceptual simplicity is suited for initial exploration by virtue of a clearer set of results to analyse, these methods require prior knowledge about the features used to characterize the interaction sites, usually in the form of hard-coded parameters. Consequently, these parameters are more prone to bias and incapable of adaptation to new data without direct human intervention. Researchers are increasingly turning to machine learning (ML), frequently using algorithms such as support vector machines (SVM) [12,13], neural networks [14][15][16], Bayesian networks [17][18][19], and random forests [20]. Accurate ML prediction relies on sets of features, used to distinguish instances of a given class, being consistent across query and training instances. For training data, the typical PPI predictor uses either a very small, handselected set of well-studied proteins [12,21,22] or a large number of proteins obtained through automated filtering of an online protein structure database [13,23] such as the Protein Data Bank (PDB) [24]. This is generally accompanied by a small set of characterising features and a supervised learning algorithm [12,18,21,[25][26][27][28][29]. We present an analysis of machine learning protein interface prediction (MLPIP), inspecting the process from beginning to end. We investigated a large set of interface residue features drawn from previous efforts, a large PPI dataset filtered from the PDB, and numerous machine learning algorithms, including logistic regression, decision trees, and Bayesian networks. From this analysis we achieved a predictor that scored higher than competing predictors using a well labelled training dataset of proteins with a decision tree learning algorithm and four characterising features. 
Training and testing dataset In an attempt to exclude proteins participating in obligate interactions, we generated our training set by selecting only proteins which can form stable structures in vivo without needing to be bound to other proteins [30]; crystal structures of obligate interactions should not satisfy this criteria. We only took proteins with structural deposits in the PDB as it is a necessary requirement for the analysis of structural feature that contribute to PPIs. Many intrinsically disordered proteins (IDPs), which lack stable tertiary structures, also participate in transient PPIs. The lack of PDB entries of IDPs limits their use in PPI prediction. The training set consisted of a collection of proteins having only one entry in the COMPND record of the PDB, representing proteins in an unbound state so they could be queried and trained in their native conformations. Protein files with a single COMPND entry most often contain multiple, nearly identical, chain entries representing either subunits of a homomultimer or similar conformations of the same protein included for completeness; these files represent the monomers in the training set. To control the quality and variety of the monomers in the training set, proteins were filtered by properties in their PDB files. These included structure resolution (≤ 3.5 Å), number of chains (≤ 5), chain length (≥ 100 and ≤ 800), and chain length standard deviation (the difference between the multiple chains of the same compound listed in the PDB file was ≤ 5), ensuring proteins http://www.biomedcentral.com/1471-2105/ 15/82 in the set were of high quality and regular size, as well as ensuring regular handling of chain heads and tails. The number of connections (≤ 1000) and number of disulphide bonds (≤ 100) were filtered to remove unusual proteins such as 2IWV, a monomeric porin laced with ligands, and 3V83, a transport protein with a high number of disulphide bridges. In addition, antigen-antibody complexes were removed and the presence of nucleotides in the protein structure was disallowed to restrict training samples to strictly protein-protein interactions. We also mandated that crystallographic B-factors be given for all atoms so that the values could be used directly as a feature by machine learning. With the above criteria applied to a 2013 January snapshot of the PDB, the monomer dataset is reduced to 48,424 structures. The majority of protein exclusion was due to chain length restrictions, as many smaller entities in the PDB are not proteins at all but are instead fragments or pharmacological compounds. Lastly, proteins with more than 70% sequence identity, as determined by BLAST [31], were considered duplicates, and only one was kept. This was done to avoid the over-representation of proteins whose structure has been recorded many times, though some homologues may remain in the dataset. Of each remaining structure, the chain used to represent the monomer was the longest chain in the PDB file or the chain with the lowest sum of B-factors (should there be several chains of the same length). If no chains had a lower sum of B-factors or longest length, then the first chain in the file was used. A protein complex was defined as a structure having more than one COMPND entry. To identify which monomers in the dataset have interaction partners, each monomer was aligned to all chains of all protein complexes in the PDB. 
A match was found if there was at least 80% sequence identity between one of the chains in the complex and the monomer. A geometric definition of a PPI was adopted for the identification of interacting residues using a distance cut-off between chains. For a residue to be considered part of an interacting site, it was required that there be at least one pair of atoms at a distance of at most 4.5 Å between two interacting chains. Interface mapping successfully mapped 663 monomers to complexes. After PDB validation and feature calculations, 392 monomers remained, with many discarded proteins representing complexes without acceptable interfaces as judged by our program. An unacceptable interface site was most commonly found on a complex where there were no atoms within 4.5 Å between two "interacting" chains or a site that had less than 90% residue identity between the complex and monomer representations of the chain (usually due to a poorly resolved residue), which did not allow us to exactly map the interacting residues directly back to the monomer. The monomer-complex interface mapping process described is effective for complexes stored directly in the pdb file, but ignores the biologically relevant multimeric form as predicted by the crystallographers (stored in the BIOMT field of the pdb files). For all proteins in the training set, the BIOMT data was analysed and interface mapping was performed, treating monomer files, as well as the complexes to which they were mapped, with the appropriate BIOMT information, thereby creating new interfaces for use in machine learning. Biologically, these are important to preserve, but they made no statistically significant difference to the performance of the predictor on the dataset as measured by cross validation. Most importantly, we aimed to maintain consistency with datasets used by other learners derived from data mining, and as such, these interface sites are not included in the below data, but data and results for most tests below is available in the supplemental material (Additional file 1: Table S5). Improving data labelling It must be addressed that a protein with a low percentage of observed interacting residues is likely not a result of lack of interactions, but rather a lack of complexes by which to identify the interacting residues of the protein. Thus, the use of proteins with unknown or unidentified interactions, which can be a problem even in manually curated datasets, leads to a systematic increase in type II error. As it is impossible to guarantee knowing every interface site for a given protein, an approach of assuming better identification of these sites in proteins with higher rates of identified interacting residues was adopted. We considered a protein as divisions of 100 residue domains, as suggested by the typical size of a domain [32], and assumed the accuracy of residue labelling was quite good if there was at least one interface site per domain (based on observations of protein domain sizes and interaction site distribution [33,34]), and comparatively better if there were at least two interface sites per domain. This reduced the possibility of missing interfaces, though potentially not fully eliminating it. To thoroughly explore the effect of data labelling accuracy, two testing subsets were generated from the full training set: a set with proteins where there is at least one interface per 100 residues (NI1) and a set of proteins where there are at least two interfaces per 100 residues (NI2). 
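The geometric interface definition and the NI1/NI2 selection described above can be illustrated with a short sketch. This is not the pipeline actually used: the residue containers and the `proteins` mapping are simplifying assumptions, and the per-100-residue rate shown here is a simplification of the per-domain accounting described above; the 4.5 Å cutoff and the one or two interfaces per 100 residues thresholds are the values quoted in the text.

```python
import numpy as np

CUTOFF = 4.5  # Angstroms: one atom pair within this distance marks a residue as interacting


def interface_residues(chain, partner_chain, cutoff=CUTOFF):
    """chain, partner_chain: lists of (residue_id, (k, 3) array of atom coordinates).
    Returns the residue_ids of `chain` with at least one atom within `cutoff`
    of any atom of `partner_chain`."""
    partner_atoms = np.vstack([coords for _, coords in partner_chain])
    hits = []
    for res_id, coords in chain:
        # pairwise distances between this residue's atoms and all partner atoms
        dists = np.linalg.norm(coords[:, None, :] - partner_atoms[None, :, :], axis=-1)
        if dists.min() <= cutoff:
            hits.append(res_id)
    return hits


def split_by_labelling_quality(proteins):
    """proteins: dict protein_id -> (n_interfaces, n_residues), where n_interfaces is
    the number of identified interface sites on that protein (counting convention as
    described in the text). NI1 requires at least one interface per 100 residues,
    NI2 at least two."""
    ni1 = {p for p, (ni, n) in proteins.items() if 100.0 * ni / n >= 1.0}
    ni2 = {p for p, (ni, n) in proteins.items() if 100.0 * ni / n >= 2.0}
    return ni1, ni2
```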
Compensating for class imbalance In total, only a small fraction of all residues in the full training set were identified as interacting, creating a class imbalance problem in MLPIP where the number of non-interacting residues far outweighs the number of interacting ones, resulting in a high bias toward the majority class [35][36][37]. The issue was explored with http://www.biomedcentral.com/1471-2105/15/82 random under-sampling, where instances of the majority class are removed until a target distribution of each class is attained. To find the optimal distribution, a set of proteins was repeatedly under-sampled to produce training sets with the proportion of interacting residues ranging from 30% to 70% of the training instances. These were then compiled, trained upon, and cross-validated for their true positive rates and true negative rates to find the point at which these rates met for a balanced prediction. Machine learning features Features used to predict interface sites were categorised as either residue features or residue-local features. To calculate residue-local features, a residue was considered with each of its neighbours, a set typically ranging from two to six residues on which calculations were performed over their collective surface area with the resulting value applied to the nodal residue. The use of residuelocal features allows the calculation of the basic geometric attributes: protrusion, roughness, surface density, and curvature. The scope of these features is limited so that the integrity of the residue value is little affected by the values of residues further away on the protein. A baseline set of residue features was chosen from the pool of commonly-used features in MLPIP: these were relative solvent-excluded surface area (relSESA), B-factors, hydrophobicity, propensity, electrostatic potential, energy of solvation, conservation, evolutionary rate-shift, and disorder. For details on the calculations of each of these features, please refer to Additional file 2. To ensure proper citation of tools used to calculate some of these features, we note that MSMS [38] was used to calculate Solvent Excluded Surface Area as defined by Connolly [39], the mean maximum residue surface area was calculated using UCSF Chimera [40] (Additional file 3: Table S1); the CX algorithm [41] was used to calculate protrusion; roughness was calculated as a function of protein surfaces, as recommended by Pettit and Bowie [42]; the scale by Fauchére and Pliska [43] was used to calculate hydrophobicity; electrostatic potential was derived with APBS [44]; the curvature calculation was modelled on an approach by Coleman et al. [45]; rate shift was computed with the Rate4Site algorithm [46]; and conservation was given by the ScoreCons algorithm [47]. These tools are commonly used in MLPIP studies and were here used to give values as similar as possible to previous efforts for individual attributes. Machine learning algorithms The Weka software package [48] was used for all the machine learning algorithms, where default parameters were employed for the following algorithms: Logistic Regression, Multilayer Perceptron, Bayesian Networks, and a variety of decision tree models. In particular, an Alternating Decision-Tree model, which specialises in two-class problems and performed very well in testing, was used for the final predictor. The classifiers were validated using leave-one-out cross-validation (LOOCV). 
Instead of leaving out a single residue, which is regularly the instance, each cross-validation leaves out a full protein, including all of its constituent residue instances. Classifiers were trained on a training set preprocessed to remove all instances of one particular test protein and tested on the residues belonging to the protein removed. This is iterated over the entire set of training proteins. Statistical analysis of results Prediction of interacting site was evaluated and measured using the following statistics: Where TP is the number of true positives, TN is true negatives, FP is false positives, and FN is false negatives. The statistics calculated here are: true positive rate (TPr), also commonly referred to as sensitivity or recall, true negative rate (TNr), also known as specificity, precision (PRC), the Matthews correlation coefficient (MCC), and F measure (F1). Following LOOCV, mean MCC is given as an average score over a set of proteins. Pearson's correlation between features and interaction site involvement was computed using MATLAB and evaluated using a two-tailed test of significance. Statistics of machine learning results were tested with the Wilcoxon signed-rank test using R, where the difference in score was ranked by absolute value and each rank was modified by the sign of the difference, i.e. whether the change was positive or negative after treatment. The modified ranks were then summed to generate the test statistic W, for which the sampling distribution approaches normality and a p-value can be calculated based on the z-score. Feature set selection Feature set selection was performed across machine learning algorithms and training sets, using cross-validation to select optimal models based on MCC scores from LOOCV. A genetic algorithm was used to produce and http://www.biomedcentral.com/1471-2105/15/82 intelligently evolve attribute sets, which were then evaluated by a variant of the RACE algorithm [49], in which all individuals of a given generation were raced against an elite (i.e. a best feature set yet found) of the previous generations. Computational efficiency was further supplemented by a variant of the Hoeffding Race algorithm described by Maron and Moore [50] with a simple addition, replacing the highest confidence bounded average x max =x+ with a weighted average of the current empirical mean and x max , representing the unevaluated data, to take advantage of the known data set size. Datasets The full feature set was calculated for every residue regardless of whether it was exposed on the surface or buried within the core of the protein, since buried residues may become exposed during interactions and thus play a role or affect nearby surface residues and modify their ability to participate in interactions. Within the total testing set of 392 proteins, an average of 13% of residues were labelled as interacting. However, there was wide deviation (see Additional file 4: Table S2), with, for example, 1WYY having nearly all of its residues labelled interacting. An unusually high proportion of interacting residues, here considered greater than 50%, occurred in only 9 proteins. The NI1 test set of 71 proteins with at least 1 interface per 100 residues, which likely has fewer unidentified interaction sites, has 26% of its residues labelled interacting, and NI2, a test set of 15 proteins used for comparative data with existing servers with at least 2 interfaces per 100 residues, has 36% of its residues labelled interacting. 
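To make the evaluation protocol concrete, the sketch below computes per-protein confusion counts, the statistics named above, and a protein-level leave-one-out loop. It is a schematic stand-in for the Weka-based pipeline actually used; the `train_fn`/`predict_fn` callables and the `proteins` container are hypothetical.

```python
import math


def confusion_counts(y_true, y_pred):
    """y_true, y_pred: sequences of 0/1 labels for the residues of one protein."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn


def scores(tp, tn, fp, fn):
    """TPr (sensitivity/recall), TNr (specificity), precision, MCC and F1."""
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    tnr = tn / (tn + fp) if (tn + fp) else 0.0
    prc = tp / (tp + fp) if (tp + fp) else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    f1 = 2 * prc * tpr / (prc + tpr) if (prc + tpr) else 0.0
    return {"TPr": tpr, "TNr": tnr, "PRC": prc, "MCC": mcc, "F1": f1}


def leave_one_protein_out(proteins, train_fn, predict_fn):
    """proteins: dict protein_id -> (instances, labels). Each fold holds out every
    residue instance of one protein, trains on the rest, and scores the held-out protein."""
    mccs = []
    for held_out, (x_test, y_test) in proteins.items():
        x_train = [x for pid, (xs, _) in proteins.items() if pid != held_out for x in xs]
        y_train = [y for pid, (_, ys) in proteins.items() if pid != held_out for y in ys]
        model = train_fn(x_train, y_train)
        y_pred = predict_fn(model, x_test)
        mccs.append(scores(*confusion_counts(y_test, y_pred))["MCC"])
    return sum(mccs) / len(mccs)  # mean MCC over proteins, as reported in the text
```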
Since there is a general consensus that manually identified interaction sites from published sources generate a more accurate dataset, a thorough analysis of more than 80 publications on more than 70 protein complexes and their protomers was performed. We found that papers reporting on the complexes often do not identify the interaction site completely, instead identifying only a few residues which the authors considered to be of interest. Papers that provided a more complete description of interaction sites proved almost identical to interaction sites mined by our program when 72 manually selected proteins and their interface sites were examined (data not shown). Our automated method provides similarly accurate information regarding interaction sites, but with vastly improved efficiency over manual sifting of publications. Other avenues of exploration are available to PPI prediction, such as the investigation of integrating protein network information into transient PPI prediction models. This potentially includes novel feature extraction and the creation of PPI datasets via putative interface databases. Conversely, PPI prediction may provide structural detail on the characteristics of the interactions present in protein networks. A comparison of datasets derived from the putative interaction networks versus those calculated from structural information could provide insight into the relevant characteristics of PPIs, as well as a mutual and independent validation of the features discovered via both methods. Optimal class distribution By varying the proportion of interacting and non-interacting residues in the training set, we observed that the ability of the classifier to predict a class is strongly influenced by the proportion of instances of that class present in the training set (Figure 1). The point at which the classifier was able to predict positively and negatively equally well was consistent across most classifiers, at 45%-50% of residues labelled positively interacting. The exception was the Naive Bayes classifier, which predicted optimally with approximately 40% interacting residues and 60% non-interacting. Feature set evaluation An analysis of each feature's correlation with positive labelling was made using the Pearson product-moment correlation coefficient (Additional file 5: Table S3). The geometric features (surface area, protrusion, and roughness) exhibited the greatest significant correlation with interaction site presence; the features hydrophobicity, residue propensity, solvation energy, B-factors, and electrostatic potential displayed reduced yet significant correlation. The remaining features (rate shift, curvature, conservation score, and disorder) demonstrated poor correlation with weaker statistical significance (P > 0.001). Though features with the highest correlation to protein interface presence are most desirable, machine learning classifiers rely on the collective effect of the features involved, and those best correlating with interaction sites are not necessarily the best features with which to train, due to orthogonality and other concerns. To test the reliability of each feature in predicting interaction sites and to take advantage of the effect of combinations of features, we used a modified genetic search algorithm to find optimal feature sets for a variety of evaluation classifiers (Table 1). The best correlated features by absolute value of raw correlation with positive labelling were relSESA (0.
Multilayer Perceptrons were excluded from our feature selection analysis due to the excessive run-time required to generate the optimal feature set, which yielded insignificant increases in score. While it is expected that different classifiers will place varying value on distinct attributes, we found surface area, solvation energy and density to be consistently selected. Further, electrostatic potential, conservation, rate shift (which is another measure of conservation), and disorder were also frequently present amongst the resulting optimal feature sets, chosen by at least 3 out of 5 classifiers. (Table 1 legend: a white box indicates that the feature was selected when the algorithm was tested on all proteins using leave-one-out cross-validation, while a black circle indicates that the feature was selected when tested on the NI1 subset; Count is the number of times a feature was selected across both datasets tested.) Assessing the effect of the optimal feature set (Table 2) on both the full protein set and the NI1 subset led to significant improvement in performance for all predictors tested, as determined by the Wilcoxon signed-rank test (P < 0.05 in all cases). In most cases, applying feature selection led to significant improvement in sensitivity (TPr) without improving specificity (TNr). For two classifiers, Bayesian Networks and AD-Trees, the change in MCC was not significant after applying feature selection when used on the full protein set, but high significance levels (P < 0.001) were reached when applied to the more accurately labelled NI1 subset. Training set evaluation The ability to accurately identify residues with respect to their involvement in PPIs, which has not been noted in most MLPIP papers, is an issue that is very challenging to solve entirely, as it is difficult to ensure that a protein has every interaction site identified. A comparison of prediction accuracy between the full dataset and the NI1 subset showed that, regardless of whether feature selection was applied, prediction accuracy as determined by MCC and F measure was significantly improved for all machine learning algorithms (Table 3). The NI1 subset showed an average increase of 0.06 in MCC using the optimal feature set, representing a 44% increase in performance. To reduce noise in the dataset, we performed an iterative removal of the lowest-scoring proteins from the training set in cross-validation using AD-Trees (Figures 2 and 3), assuming that they were the ones adding the most noise. Expectedly, this resulted in a considerable increase in score. To assess whether this is merely a result of the trend in the moving average of MCCs produced by the iterative removal of the lowest-scoring proteins, we tracked the changes in MCC of the set of 10 proteins (1C9Q, 1I9E, 1J1J, 1MZ4, 1YGT, 1ZKZ, 2JAY, 2R2Y, 3R3Q, 3RKI) that had the highest MCCs when trained with the full protein set. When a better labelled set was used to perform the same operation, considerably less variation in score occurred (Figure 4). When dissected, the effect on score as the worst-scoring proteins are iteratively removed is hypothesized to be the combined result of three factors: the presence of unknown interaction sites, training set optimisation, and the "odd protein" effect. (Table note: algorithm abbreviations as in Table 1; statistical significance was calculated using a one-sided Wilcoxon signed-rank test: * P < 0.05; ** P < 0.01; *** P < 0.001.)
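The iterative-removal procedure described above can be summarised, under stated assumptions, as a simple loop: score every remaining protein by cross-validation, drop the worst one, and repeat. The Python sketch below is a hypothetical outline only; `score_protein` stands in for the leave-one-protein-out training and MCC scoring actually performed with AD-Trees in Weka, and `min_size` is an arbitrary stopping point.

```python
# Illustrative sketch of iterative removal of the lowest-scoring proteins.
def iterative_removal(proteins, score_protein, min_size=20):
    """proteins: list of protein ids; score_protein(pid, training_set) -> MCC."""
    history = []
    remaining = list(proteins)
    while len(remaining) > min_size:
        scores = {p: score_protein(p, remaining) for p in remaining}
        mean_mcc = sum(scores.values()) / len(scores)
        worst = min(scores, key=scores.get)          # lowest-scoring protein
        history.append((len(remaining), mean_mcc, worst))
        remaining.remove(worst)                      # drop it and re-evaluate
    return history, remaining
```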
The odd protein effect refers to proteins that deviate significantly from a typical protein structure, which is here considered to be a mix of secondary structures in a moderately globular arrangement. An example of an "odd protein" is 3MTT, which was the second lowest scoring protein and consists of only two alpha helices. Another example is 1WYY, a membrane fusion protein with 84% of its residues interacting, made up of tightly, circularly packed alpha helices, which could not be filtered out through basic restrictions. This was the only protein that was removed by hand, after discovering its unusual number of interacting residues and highly unusual shape. Testing on the worst 10 proteins that were removed, Promate [18], cons-PPISP [51], PINUP [52], and PIER [28] achieved average MCC scores of 0.007, 0.050, 0.042, and 0.11, respectively (data not shown), substantially lower than their scores on the full test set (Section 'A new MLPIP: RAD-T (Residues on Alternating Decision-Trees)'), suggesting this is not a weakness of the predictor used, but a property of the proteins themselves. After the removal of many proteins with low labelling accuracy and unusual characteristics, training set optimisation begins to occur, where the training set converges on a set of proteins that predict well on each other but fail to generalise to the majority of the other proteins in the full set. In the case of the full training set, this began when 160 proteins remained. As the training set becomes increasingly small, the performance of the classifier becomes highly variable, which is expected when there are not enough samples for the machine to generalise, resulting in overfitting to the training data and poor performance on the test data. Increasing the bias in the learning decreased the ability of the machine to generalise on novel data. When testing on the full set using non-feature-selected AD-Trees, training on the top 10 scoring proteins gave an average MCC of 0.085, training on the top 20 gave an average of 0.105, and training on the NI1 gave an average of 0.131, whereas LOOCV on the full set gives an average MCC of 0.138. This inability to generalise is not a disadvantage, assuming training sets can be intelligently biased, as many therapeutic applications require accurate prediction on only a few target proteins. For example, identifying proteins with similar structure and using these to specialise the machine or another learning mechanism has been used successfully by PredUS [53] and by PrISE [54]. Unexpectedly, the top scoring proteins demonstrated by our curve had little structural identity with one another, according to DeepAlign [55], PDBeFold [56] and MISTRAL [57]; they also lacked significant functional similarities as judged by their Gene Ontology (GO) tags (see Additional file 2, section two). These proteins' feature identity with one another as related to their interacting sites has not yet been traced to structural or functional origins. No significantly increased Pearson correlation between individual features and positive labelling was observed (see Additional file 6: Table S4 for full results). As such, there is certainly some other underlying connection, given that these proteins score well against each other within such a small set, with unusual similarity between the per-residue properties defining interacting and non-interacting residues.
The identification of some biological similarity may enable its application to those proteins that score lower on the curve, and could be pursued. We continue to explore this problem in an attempt to better understand what makes a residue's interaction predictable. A new MLPIP: RAD-T (Residues on Alternating Decision-Trees) To assess the capabilities of existing machine learning protein interface site predictors and to compare their performance to an optimised predictor with a better filtered training set, we created a new predictor, Residues on Alternating Decision Trees (RAD-T). The AD-Tree algorithm was used as the machine learner, having consistently provided the highest scores in Section 'Feature set evaluation' and offering potential for optimisation. RAD-T was run with 10 boosting iterations, trained on the NI1 set of proteins, and tested on a modified Docking Benchmark 3.0 set of 188 proteins (DS188) [53,58] to generate a comparable test of performance against other machine learning predictors. Reports of scores from the competing programs Cons-PPISP, Meta-PPISP [59], PINUP and Promate were generated previously [54] and used directly to compare to RAD-T. The features used were relSESA, solvation energy, electrostatic potential, and density, chosen by a round of attribute selection with AD-Trees using the NI1 set. Compared to our competitors, RAD-T had the highest true positive rate, at 65% (Table 4), which measures the recall/sensitivity of interacting residues in the prediction. This boost in recall came at the cost of the precision and accuracy of the overall prediction. From an application point of view in the development of therapeutic targets, it is preferable to overestimate the number of interacting residues than to miss a potentially important cluster of interacting residues. Since the DS188 set consisted of 14% interacting sites, which is similar to our full set of proteins, a rate we consider suboptimal, we also tested RAD-T on the NI2 set. The greatest concern for performance testing in this case was overfitting, which can be avoided if all testing proteins are excluded from the training set. However, since the 15 test proteins constitute approximately one-fifth of the training set of 71 proteins, removing all of them strongly skews the learner and would not create an accurate representation of predictor performance. A solution was reached by removing only the query protein from the training set for each test, as in cross-validation. While the testing set's labelling is presumed to be quite accurate and it is unlikely that there are many unknown interfacing sites that could artificially decrease scores, this testing set consists of only a small set of proteins and thus needs to be expanded for wider use. RAD-T outperforms its counterparts with a median increase in MCC of 0.11 (an average increase of 59.11%) and an average increase in F measure of 28% (Table 5). The difference in level of performance compared to the DS188 is in line with our expectation that machine learning performance improves with more accurately labelled datasets. It is notable that 3 out of the 5 competing servers showed a decrease in MCC when tested on the NI2 set compared to the DS188. This is mainly due to low recall, reported as TPr, on the part of those servers, while TPr for RAD-T improved from 65% to over 80% with the better test data. The size of the testing set is also too small to conduct significance tests with meaningful interpretations.
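A minimal sketch of the leave-query-protein-out protocol just described is shown below. It is an assumption-laden illustration: scikit-learn's AdaBoostClassifier is used only as a stand-in, because the ADTree learner used by RAD-T lives in Weka, and the data-handling names (`ni1_data`, `test_proteins`) are invented for the example.

```python
# Sketch of the leave-query-protein-out protocol used for the NI2 test:
# for each test protein, train on all NI1 proteins except that protein,
# then predict its residues. AdaBoost is a stand-in for Weka's ADTree.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def evaluate_leave_query_out(ni1_data, test_proteins):
    """ni1_data: dict protein_id -> (X, y); test_proteins: subset of the ids."""
    predictions = {}
    for pid in test_proteins:
        X_train = np.vstack([X for k, (X, _) in ni1_data.items() if k != pid])
        y_train = np.concatenate([y for k, (_, y) in ni1_data.items() if k != pid])
        clf = AdaBoostClassifier(n_estimators=10).fit(X_train, y_train)
        predictions[pid] = clf.predict(ni1_data[pid][0])
    return predictions
```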
This difference in score may actually be higher than measured here, as the competitors' predictors have the distinct advantage of having included our test proteins in their training sets. Conclusion Our results show that the most useful features for machine learning protein interaction site prediction are, in order, relSESA, solvation energy, density, electrostatic potential, conservation, rate shift, and disorder, as found by feature selection across a variety of machine learning algorithms on our datasets. (Table note: the competing servers were chosen for consistency with the literature and because they were the prediction servers that demonstrated reliability or were not altogether offline; the averages and medians indicated do not include RAD-T.) Relative solvent-excluded surface area and solvation energy were also critical features when buried residues were excluded from the dataset (data not shown), indicating that the role of relSESA lies beyond its ability to distinguish buried and exposed residues. We found that optimisation of machine learning algorithms and feature selection produced significantly better results than their unoptimised counterparts. However, these differences are overshadowed by the change resulting from varying restrictions on labelling accuracy. Improvements to dataset labelling led to better predictions by most algorithms tested, regardless of whether an optimal feature set was used. Therefore, labelling accuracy in machine learning testing sets needs to be better addressed. As such, previous machine learning predictors may perform much better than their original results suggest. Through iterative removal we identified those proteins that scored worst, middling, and best. We believe that this method can be an important tool to better understand what makes a protein's interaction sites highly predictable. In cases of neither obviously poor labelling accuracy nor odd structure, it remains unclear what makes a protein interface highly predictable or unpredictable. Surprisingly, the top 10 scoring proteins in our training set scored almost as well when trained with the noise of the poorly labelled proteins as they did when trained with a better labelled training set. There are existing ways to specialise training sets, most popularly by using training proteins with high structural similarity to the given testing protein. In our study, the best performing training set that does not use the query protein as a training instance was this set of 10 proteins with low structural identity. We believe that, for MLPIP to make further progress, future focus will need to be directed towards training set generation and specialisation, and towards the search for larger testing sets with fully identified interface sites for more reliable scoring and significance tests. We also believe that MLPIP, frequently compared to the more specialised and more easily scored docking approaches, may be undervalued because hidden interface sites are misinterpreted, resulting in artificially poor scores. Using our conclusions and experiments to optimise the training set, feature set, and machine learning algorithm, we created a prediction program that scores substantially higher than previous non-structurally-specialising machine learning predictors, with an average increase in score of 59% compared to five other leading predictors.
We expect continued improvement upon the acquisition of more proteins in the PDB, both for more training instances and for a better labelled testing set, as well as upon the application of specialised training sets for each query protein. Many areas of extraordinary complexity remain difficult to address by MLPIP, such as the timing, knowledge of physiological conditions, and biochemical modifications of proteins, with regard to their effect on interactions. We wish to explore methods of modelling these phenomena, as well as exploring protein interaction networks and putative interactions, for use in extending the versatility of future PPI predictors. Additional files Additional file 1: Table S5. BIOMT data: result of the inclusion of complexes created from the BIOMT field of the PDB file. Additional file 2: Supplementary information for transient protein-protein interface prediction: datasets, features, algorithms, and the RAD-T predictor. This PDF file details the derivation of machine learning features and the mentioned Gene Ontology experiments.
8,265.6
2014-03-24T00:00:00.000
[ "Computer Science", "Biology" ]
A review of seventeen years of bank filtration in Brazil: results, benefits and challenges - Part 1: state of Santa Catarina This work is the first part of a national review about Bank Filtration (BF) that began in 2003, in Brazil. These studies were conducted in the laboratory and in the field with water and natural sediment from the study regions, showing how BF has been used efficiently worldwide as an alternative treatment of water for public supply. It aims to present a synthesis of the results to date and to point out the main benefits and challenges; that is, the state of the art at the national level. The review is concentrated in Santa Catarina (Part 1), and Pernambuco and Minas Gerais (Part 2). BF demonstrates efficiency in reducing parameters such as turbidity, coliforms (total and fecal), pesticides, and toxins. However, BF showed low capacity in reducing parameters such as salinity and true color. BF is highly dependent on local geological conditions, so parameters such as iron, manganese, fluorine, alkalinity, hardness, and chlorides can be added to the treated water. INTRODUCTION The increasing industrialization of Brazil in recent years has brought incalculable damage to the environment, especially to surface waters, where deterioration has created technical challenges for the water treatment technologies already employed. Thus, water pretreatment technologies have been widely studied to ensure that treatment plants meet national potability standards. Bank Filtration (BF) is a water treatment technology that has been used for over 140 years throughout Europe (RAY et al., 2003; SOARES, 2015). In Brazil, this technology has been studied for 17 years, limited to small-scale field studies, with no reports of full-scale implementation. BF consists of the use of natural materials from the bank itself and from the bottom of the source as a filtering medium. It occurs through a positive hydraulic gradient, natural or induced (through pumping), in wells with hydraulic connection built near the banks of the surface water source (Fig. 1). (Figure 1 - Operation diagram of Bank Filtration. Source: Esquivel et al. (2012), adapted from Hiscock and Grischek (2002) and Sens et al. (2006).) This gradient induces the water flow through the soil, which removes or attenuates the contaminants present in the surface water along the source-to-production-well route. Captured water is a mixture of aquifer water and surface water percolated through the bank. The proportion of bank-filtered water captured by the wells depends on a number of factors, including bottom depth and permeability, water viscosity, the water level of the source, and pumping rates. According to Donald and Grygaski (2002), as cited in Sens et al. (2006), siting a BF system requires important information about local soil characteristics. In addition to physical retention, several other phenomena occur during the water flow towards the well, since the bank soil contains microorganisms that can act on certain substances (pesticides, toxins, organic matter, among others), promoting water quality improvement. Thus, BF works as a low-cost pretreatment in the production of high-quality water for supply and can serve as the only treatment before disinfection (MONDARDO, 2009). BF has been applied in Europe to produce water for supply, most often along the Rhine, Elbe and Danube rivers (RAY et al., 2003).
In the United States, interest has grown because it is a low-cost, complete or alternative treatment for filtration systems to remove waterborne pathogens, such as Giardia, Cryptosporidium and viruses (SENS et al., 2006). The first known use of BF for water supply purposes was by a company in the United Kingdom (Glasgow Waterworks), which built a drainage pipe parallel to the Clyde River in 1810 to extract filtered water from the riverbank. In the mid-nineteenth century, BF was officially adopted in Europe to produce drinking water. In Western Europe, one of the first BF facilities was built in Germany on both sides of the Rhine River due to limited groundwater resources in the region. After an outbreak of cholera in Hamburg, Germany, in 1892, caused by the direct use of the waters of the Elbe River for public supply, the use of artificial or natural passages of river water through the subsurface as a new form of water abstraction for human consumption became essential. Statistical data from 1998 showed that, among sources used for water supply by members of the Rhine River Sanitation Association (German and Dutch sides), 49% corresponded to BF and groundwater recharge (RAY et al., 2003). Some BF facilities on the Danube have been operating for over a century, near the cities of Vienna, Austria, and Bratislava in the Slovak Republic. Other important BF projects can be found in Budapest, and in the city of Belgrade in Yugoslavia (SENS et al., 2006). In Brazil, BF has been used for some time, without being named as such, in the Upper Itajaí Valley, Santa Catarina State, through wells of 1.2 to 1.5 m in diameter built along the Itajaí do Sul, Itajaí do Oeste and Itajaí do Norte rivers (tributaries of the Itajaí Açu River) (SENS et al., 2006). This work is a national review in the field of bank filtration technology that shows the results obtained over 17 years of studying BF in Brazil to date and points out the main benefits and challenges of the technique; that is, the state of the art at the national level. It was divided into two parts for a better understanding and organization of the content on bank filtration: Part 1 describes the research performed in the state of Santa Catarina, and Part 2 describes the research performed in the states of Pernambuco and Minas Gerais. This first part was compiled into a results matrix, from which tables were created containing physical, chemical, biological and specific contaminant parameters. For each grouping of results, the corresponding table is discussed. (Source: Adapted by the authors from Sens et al. (2006), Soares (2009), Michelan (2010), and Guedes (2018).) Peri Lake - Florianópolis (SC): Lake bank characterization Peri Lake is located in the southern region of Florianópolis island. The lake covers approximately 5.0 km² (SILVA, 1999), has an average depth of 4.2 m, and its deepest part reaches 11 m (SIMONASSI, 2001). The aquifer has an average depth of 20 m, a hydraulic conductivity of 1×10⁻⁴ m/s and an effective porosity of 25% (ESQUIVEL et al., 2012). The lake water is approximately 3 m above sea level, with no saline wedge intrusion (CAMPOS, 2012). The studies were conducted in an area with marine deposits (SANTOS et al., 1989; OLIVEIRA, 2002). The design of the production well (P1) began in 2003, at 20 m from the shore of the lagoon (SENS et al., 2006). In 2011, this well was replaced by a new one with a smaller diameter (P2), keeping the same distance from the lake (ESQUIVEL et al., 2017).
The sediment analysis carried out on material from the bottom of the lake, near the wells, showed the presence of 85% fine sand in the bottom of Peri Lake (in the first 50 cm) and 13% medium/coarse sand, with little depth variation, and the detection of 4% clay (MONDARDO, 2009; ESQUIVEL et al., 2012; SOARES, 2015; SOARES et al., 2019). The presence of organic matter (OM) was also found, in the order of 27% in the first 30 cm of depth (SENS and DALSASSO, 2007). The effective diameter values (d10) in Peri Lake were similar in the first 50 cm, indicating hydraulic conductivity from 1.7×10⁻⁴ to 2.9×10⁻⁴ m/s (RABELO, 2006; ESQUIVEL et al., 2012; SOARES, 2015; SOARES et al., 2019). The sediment at the site also has a low curvature coefficient (mean CC of 1.0) and a low uniformity coefficient (mean CU of 1.4), indicating uniformity of grains and sediments with grain sizes tending to homogeneity (ESQUIVEL et al., 2012; SOARES, 2015; SOARES et al., 2019). The low specific porosity, ranging from 25% to 26%, is attributed to the deposition of organic compounds and/or the presence of retained gases in the medium (physical and biological clogging) (ESQUIVEL et al., 2012). Through soil evaluation over a larger extent, marine sediments were identified in its composition, with dark sand (containing OM) up to about 1 m deep, fine sand (also containing OM) from 1 to 4 m deep, fine white sand between 4 and 18 m, and clay between 18 and 23 m. The fraction of fine sand ranges from 80 to 99% in the first 5.5 m of depth (SENS et al., 2006; SENS and DALSASSO, 2007; ESQUIVEL et al., 2016). Sens et al. (2007) built P1 with a diameter of 100 mm, 12 m deep and with the filter in the last 4 m, approximately 20 m from the lake. Monitoring studies indicated that the flow naturally proceeded towards the main well (SENS and DALSASSO, 2007; MONDARDO, 2009). Two protection wells were also built on each side of P1 to ensure that the infiltrated water came preferentially from Peri Lake (SENS et al., 2006). When P2 was built, 50 mm in diameter and 12 m deep, 20 m from Peri Lake and with a 2 m filter, P1 was decommissioned (ESQUIVEL et al., 2017). Seven piezometers were drilled, four of them 12 m deep, located at 8, 40, 55 and 57 m from the bank, and the remaining ones 4, 5, and 6 m deep, 1 m from the bank (ESQUIVEL et al., 2012, 2016, 2017). The quality of the bank-filtered water in P1 and P2 is discussed in Table 1. The saturated aquifer layer (D) was defined as roughly 18 m, with effective porosity (ne) corresponding to 20% (SENS et al., 2006; SENS and DALSASSO, 2007; ESQUIVEL et al., 2012). The pumping flow (Q) in the first study corresponded to 24 m³/d (approximately 0.26 L/s), yielding hydraulic conductivities (K) of 1.49×10⁻⁵ m/s and 1.5×10⁻⁴ m/s; the vertical conductivity (Kv) obtained was 1.42×10⁻⁵ m/s and the horizontal conductivity (Kh) was 1.42×10⁻⁴ m/s (SENS et al., 2006; SENS and DALSASSO, 2007; ESQUIVEL et al., 2012). After 48 h of pumping, the water level in the well lowered by about 0.6 m (SENS et al., 2006; SENS and DALSASSO, 2007). The water's travel time from the lake to the well was about 10 days for the 5.5 m deep well, and 14 days for P1 with a depth of 12 m (SENS and DALSASSO, 2007).
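As a rough, purely illustrative cross-check of these travel times (not a calculation reported by the authors), a plug-flow Darcy estimate, t = L·ne/(K·i), can be evaluated with values of the same order as those quoted above; the hydraulic gradient i used below is an assumed figure, since it is not given explicitly here.

```python
# Generic Darcy-type travel-time illustration; i is assumed, not reported.
K = 1.5e-4        # hydraulic conductivity, m/s (order of the values quoted above)
ne = 0.20         # effective porosity
L = 20.0          # lake-to-well distance, m
i = 0.03          # assumed hydraulic gradient

v = K * i / ne                    # average linear (seepage) velocity, m/s
t_days = L / v / 86400.0          # travel time in days
print(f"seepage velocity = {v:.2e} m/s, travel time = {t_days:.0f} days")
```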
2.1.1 Water quality characterization: physical, chemical and biological parameters Table 1 shows the physical parameters of raw water (RW - water from the source, Peri Lake) and filtered water (FW - water in the production wells, P1 or P2) between the years 2004 and 2017 (SENS et al., 2006; SENS and DALSASSO, 2007; MONDARDO, 2009; CAMPOS, 2012; ESQUIVEL et al., 2012, 2016, 2017). As observed, BF was very effective in turbidity removal in both P1 and P2 (94% and 97% average removal, respectively) and, consequently, in apparent color removal (89% and 93%, respectively). Less markedly, advantageous BF results in true color removal were noted as well. In production well P1 (SENS et al., 2006; MONDARDO, 2009), the total dissolved solids (TDS) increased by an average factor of 3.2. The authors pointed out that the increase in TDS was due to the presence of calcium-bearing sediment at the site, which was also related to the increases in conductivity in P1 (3 times), total hardness (7 times) and total alkalinity (11 times), as per Table 2. Moreover, it was considered that the TDS variation was caused by the leaching of compounds existing in the soil. (Table 1 notes: RW - raw water (source, Peri Lake); FW - filtered water (production well); FW/RW average - average increment of the parameter in FW; NA - not analyzed; NTU - nephelometric turbidity unit; TDS - total dissolved solids.) There was also a 3- to 4-fold increase in electrical conductivity in both wells. Esquivel et al. (2017) mentioned that the increase of these parameters after BF is explained by the interaction with the aquifer, which has leachable minerals in its composition. However, even with the increase of these parameters, as well as of total hardness, Ca hardness and Mg hardness in FW, they remained within the potability standards effective in 2012 (Ministry of Health Ordinance 2914, current Annex XX of Consolidation Ordinance No. 5). In P2, the calcium and magnesium hardness results showed increases of 11 and 2 times, respectively. In this well, there was also a small increase of manganese and little variation of iron in relation to the source. Relative to the source, the chloride variation in both wells was not significant either. Similar concentrations of chloride in RW and in wells P1 and P2 indicated that most of the filtered water comes from the lake (ESQUIVEL et al., 2017). The pH of the source varied slightly over the 13 years of the study, not exceeding 0.6 pH units. Table 2 shows a small increase in pH in both wells in relation to the source, which agrees with the observed increase in alkalinity and total hardness. (Table 2 notes: RW - raw water (source, Peri Lake); FW - filtered water (production well); FW/RW average - average increment of the parameter in FW; NA - not analyzed. Source: Adapted by the authors from Sens et al. (2006), Sens and Dalsasso (2007), Mondardo (2009), Campos (2012), and Esquivel et al. (2012, 2016, 2017).) In isolation, Esquivel et al. (2017) also analyzed bromides in RW and in FW from well P2, observing no variations. Furthermore, 3.1 and 34 mg/L of calcium and 1 and 2.4 mg/L of magnesium were obtained in RW and FW, respectively. The increase in the values of these parameters aligns with the increase in total hardness. Changes were observed in alkalinity, total hardness and conductivity in FW, as evidenced by the studies performed in P1 (Table 2), as well as in complementary chemical aspects of FW (Table 3). These changes were justified by Sens et al.
(2006), who attributed them to the presence of calcium sediments in the geological profile, fragments of which were observed during well drilling. Nevertheless, the authors considered that the sediments had a great influence on the increase of electrical conductivity relative to the lake. The same chemical behavior described above was observed in P2. Regarding the evaluation of nitrate ions, Mondardo (2009) observed the appearance of 1.24 mg/L in P1. The other authors did not detect variations between the source and the production well (SENS and DALSASSO, 2007; ESQUIVEL et al., 2012, 2016, 2017); Sens et al. (2006), however, noted a small 1.25-fold increase in nitrate ions in FW (1.93 mg/L), the appearance of ammoniacal nitrogen (between 1.2 and 1.9 mg NH3-N/L) (SENS et al., 2006; SENS and DALSASSO, 2007), and the removal of 100% of chlorophyll in well P1 (SENS and DALSASSO, 2007). There was no evaluation of these parameters in P2. Some authors (CAMPOS, 2012; ESQUIVEL et al., 2016) also observed a 2-fold increase in the concentration of hydrogen sulfide in P2 (Table 4). Table 3 shows the absorbance at 254 nm results, which indicate a considerable removal of dissolved organic matter (DOM) in P1 (66%) and 29% in P2. In addition, there was a significant decrease in total organic carbon (TOC) and dissolved oxygen (DO) in P1, and a 98% decrease in DO in P2. (Table 3 source: adapted by the authors from Campos (2012) and Esquivel et al. (2012, 2016, and 2017).) Concerning the chemical aspects related to the degradation dynamics of OM, Esquivel (2012) mentioned that the process begins with the rapid consumption of DO, leading to an increase in nitrate ion (due to ammonium oxidation and the onset of OM degradation), and to elevated manganese, iron, and sulfide values as the travel time increases. Even with the low OM reduction observed in the DOC analysis (Table 3), the removal of the DOM responsible for the formation of trihalomethanes (THM), shown by the absorbance results at 254 nm, stands out (CAMPOS, 2012). Campos (2012) and Esquivel et al. (2012 and 2016) highlight that the DO decrease indicates the occurrence of anoxic conditions, which is in agreement with the low oxidation-reduction potential obtained in P2 (ORP = 52 in RW and −307 in FW) (ESQUIVEL et al., 2012), according to Table 4. With little oxygen in the medium, microorganisms continue to use other electron-accepting species such as OM. Esquivel et al. (2012 and 2017), in a more in-depth study, observed that the lake water presented around 8.6 mg O2/L. Through P2 monitoring, redox conditions were identified in the first meters of infiltration, where practically all oxygen, nitrate and sulfate were consumed. Iron and manganese dissolved, and the odor confirmed the presence of hydrogen sulfide. There was a gradual decrease of iron(II) and sulfide ions with increasing distance and depth towards P2, where pyrite (FeS2) formation occurs, as well as possible precipitation of iron carbonate (FeCO3). The concentration of manganese(II) ions increased as the water approached P2. Regarding the Trihalomethane Formation Potential (THMFP), Esquivel et al. (2012) observed a seasonal behavior, with higher THMFP concentrations in higher temperature periods, possibly due to the desorption/dissolution of natural organic matter (NOM).
The decrease in UV-254 nm absorbance values in the systems and the reduction of specific ultraviolet absorption (SUVA), as shown in Table 4, demonstrated that the water infiltration in the soil, along the source-to-well route, preferentially promotes THMFP removal relative to the NOM as a whole. Esquivel concluded, through a first-order kinetic model, that the removal of NOM and THMFP occurs in the first days of infiltration, and the reduction of easily degradable NOM occurs in less than 2 days. As such, the moderately degradable fraction would need 60 to 90 days of travel to reach 95% removal. An increase in travel time to remove slowly degradable NOM, which in practice is non-degradable, would not significantly change the removal achieved in their studies, as about 95% of THMFPs are present in the easily degradable OM fraction (ESQUIVEL et al., 2012 and 2017). (Table 4 notes: RW - raw water (source, Peri Lake); FW - filtered water (production well); ORP - oxidation-reduction potential; THMFP - Trihalomethane Formation Potential; SUVA - specific ultraviolet absorption.) Microbiological analyses were performed only for P1. The first analyses of BF efficiency for the removal of phytoplankton, including cyanobacteria and, more specifically, Cylindrospermopsis raciborskii, were performed by Sens et al. (2006), whose results demonstrated 100% removal (Table 5). In later studies by Sens and Dalsasso (2007), Mondardo (2009) and Romero et al. (2014), similar results were obtained. Studies also showed improvement in FW quality in P1 with respect to the presence of equivalent saxitoxins, and no traces were found after treatment. In all studies, none of the parameters in Table 5 (Table 5 - Results of biological parameters in Peri Lake. Source: Adapted by the authors from Sens et al. (2006) and Sens and Dalsasso (2007).) (Table 6 - Synthesis of the models elaborated from field research.) Esquivel et al. (2012): objective - estimate the travel time of the water to the production wells and the natural behavior of the groundwater. 1) No pumping: from the bottom of the lake to where the well screens begin, at 20 m from the lake and 9.5 m deep, the estimated travel time was 190 days. 2) With pumping: with a flow of 30 m³/d, the minimum estimated time was at least 80 days; the lake water naturally infiltrates towards the piezometer system. Soares (2015): objective - evaluate the behavior of the groundwater in different BF pumping scenarios, at flows of 100 L/s and 200 L/s, considering different levels of bank clogging. 1) Natural water availability conditions: the lake feeds the aquifer and the Sangradouro channel, which is also fed by groundwater. 2) Decreased lake water level: the more the surface water level decreases, the more the channel is fed by groundwater, gaining flow and losing its character as an influent water body; with an applied flow rate of 100 L/s, under favorable conditions, the infiltration rates into the lake remain constant. 3) Dry conditions: the groundwater flow goes to the well gallery, reducing the natural recharge of the lake caused by infiltration, and the channel loses water due to pumping. For the three scenarios above, with an applied flow of 200 L/s the infiltration rate into the lake remains constant and the aquifer level decreases, even more so in periods of drought, when the channel does not receive water from the aquifer and loses flow (worst scenario). Varela et al. (2018): objective - define the best water catchment scenario on the banks of Peri Lake.
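The first-order kinetic reasoning quoted above can be illustrated with a small calculation: under C(t) = C0·exp(−k·t), the travel time needed for 95% removal is t95 = ln(20)/k. The rate constant in the sketch below is an assumed example value, chosen only to show that t95 falls within the 60-90 day range mentioned; it is not a figure from the study.

```python
# Illustrative first-order decay calculation (assumed k, not from the study).
import math

def t95_days(k_per_day):
    """Travel time (days) for 95% removal under first-order decay C(t) = C0*exp(-k*t)."""
    return math.log(20.0) / k_per_day

print(t95_days(0.04))   # k = 0.04 1/day -> roughly 75 days, inside the 60-90 day range
```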
Barra da Lagoa - Florianópolis (SC): Study site characterization Barra da Lagoa is a neighborhood located in the eastern region of Florianópolis island. A phytoplankton screen at the entrance of the system, installed as protection, prevented the capture points from being blocked by fine sediments such as sand and small solids; in addition, the project included a backwash operation for periods of lower filtered-water quality or lower flow (BURGARDT, 2017). Soil samples at 1 m, 3 m, 4 m and 6 m in depth, as well as sand from the surface, were analyzed in order to compare the samples exposed to climatic conditions with samples from the subsoil. The local sand presented, on average, from 1 to 6 m in depth, effective diameters (D10 and D60) of 0.17 mm and 0.24 mm, respectively. The surface sand presented effective diameters (D10 and D60) of 0.23 mm and 0.28 mm, respectively. The uniformity coefficient was, on average, 1.39. This shows that, over the 6 m depth, the characteristics of the filter medium undergo little change and remain quite uniform. These characteristics give the site the ability to remove impurities (BURGARDT, 2017). Water quality characterization: physical and chemical parameters Physical and chemical parameters were analyzed weekly between August and September 2016 (BURGARDT, 2017; BURGARDT et al., 2017; BURGARDT and SENS, 2018), and the results are shown in Table 7. The authors observed that the levels of salinity and electrical conductivity indicated that the filtered water originated from the ocean, without mixing with fresh groundwater. There was also a considerable decrease in turbidity and apparent color. True color, TDS and DOC did not show a significant decrease in FW, nor did the absorbance at 254 nm. The system also showed a decrease in dissolved organic compounds, but on a smaller scale, considering the absorbance at 254 nm and DOC results. The parameters pH, salinity and electrical conductivity did not change significantly, showing only a slight reduction, while the temperature remained constant (BURGARDT, 2017). Aquaculture Lake - Ituporanga (SC): Study site characterization Ituporanga is a municipality in the state of Santa Catarina, Brazil, with UTM coordinates 638149.00 m E and 6966884.00 m S, zone 22. It is located 163 km from the city of Florianópolis, the state capital. The BF facilities used by Soares (2009) were located next to an aquaculture lake on EPAGRI's premises. The purpose of the system was the treatment of lake water for animal drinking. The location chosen for drilling showed a higher proportion of coarse sand (32.4%) in the first 1.2 m of depth, followed by fine sand (91%) in the second soil layer (1.20 to 2.10 m), with low clay and silt content. A subsequent predominance of clay (50%) was observed in the third layer, extending to 4.6 m. The porosity of the second layer corresponded to about 30 to 35%, with a hydraulic conductivity (K) of 5.4×10⁻⁴ m/s. It was observed that the lake shore presented an average hydraulic conductivity of around 2.22×10⁻⁷ m/s, indicating sediment clogging and little potential for infiltration (SOARES, 2009). A production well was drilled with a depth of 2.8 m and a diameter of 1 m. The well flow at the maximum (dynamic) level corresponded to 0.03 L/s, depending on the rainfall conditions of the period, with intermittent pumping. It was observed that, on average over 100 days of operation, 25% of the water filtered by the well originated from the lake and the rest came from the aquifer.
The travel time of the water to the well corresponded to about 70 days (SOARES, 2009). Table 8 shows the physical parameters analyzed in the aquaculture lake between 2009 and 2010. Water quality characterization: physical, chemical and biological parameters There was an increase in FW of 4 and 5 times in apparent color and turbidity, respectively (SOARES, 2009). A 6-fold increase in turbidity was also observed by Soares (2009) and Romero et al. (2010), who considered the influence of clay and iron on the increases observed in the apparent color and turbidity parameters, with a small removal of true color. It was also taken into account that these aspects were influenced by rainfall at the site and by the leaching capacity of the compounds in the soil. The occurrence of iron and manganese in FW could also be correlated with the observed physical results. There was a 100% removal of TDS and 12% of SS (suspended solids), whose low removal was attributed to the accommodation of the soil around the well (SOARES, 2009). Table 9 presents the results of the chemical parameters obtained in the aquaculture lake (SOARES, 2009). An increase of total iron, manganese(II), nitrite and nitrate ions in the FW was observed by Soares (2009). Romero et al. (2010) also observed a 6-fold increase in total iron concentration after BF. (Table notes: RW - raw water (aquaculture lake); FW - filtered water (production well); MPN - most probable number; A - absent. Source: adapted by the authors from Soares (2009) and Romero et al. (2010).) Itajaí do Sul River - Ituporanga (SC): Study site characterization Ituporanga is a city located in the central-west region of the State of Santa Catarina, 163 km from the state capital, Florianópolis. The Itajaí do Sul River is the main watercourse in the region, belonging to the Itajaí do Sul sub-basin, which covers 10 municipalities in Santa Catarina, including Ituporanga. Along the stretch of the Itajaí do Sul River that formed the study area, the position of the surface water in relation to agricultural areas was approximately 3 and 10 m (vertical and horizontal distances, respectively) (MICHELAN, 2010; ROMERO et al., 2010; MICHELAN et al., 2011; GUEDES et al., 2018). The subsoil at the drilled site showed a constitution of clay, silt and fine sand in the first 1.2 m of depth, followed by a layer of silt (58.16%), fine sand (29.61%), clay (6.74%), medium sand (4%) and coarse sand (1.49%) down to 3.9 m deep, where a predominance of fine, medium and coarse gravel (50.22%) began. It was considered that this last layer contributed to a hydraulic permeability (K) of 3.0×10⁻³ m/s. Michelan (2010) obtained a weighted effective porosity of 19%, with a travel time of 15 days, while Romero et al. (2010) found 36.2% porosity, with a travel time of 28 days. The production well, located 18.5 m from the source, was dug to 4.7 m in depth and 1 m in diameter. The well went into operation with a maximum production flow of 12.76 m³/d (0.15 L/s) (MICHELAN, 2010; ROMERO et al., 2010; MICHELAN et al., 2011; GUEDES et al., 2018). 2.4.1 Water quality characterization: physical, chemical and biological parameters Table 11 shows that there was partial removal of all the studied physical parameters, except in the study by Romero et al. (2010), where a small increase of 1.2 times in TDS was observed in FW (result not tabulated). The turbidity removal was more expressive, on average 68%, with Romero et al. (2010) achieving 84% removal of this parameter. (Table 11 source: adapted by the authors from Michelan (2010), Romero et al. (2010) and Michelan et al. (2011).)
Among the chemical parameters analyzed, according to Table 12, the roughly 4-fold average increase in total iron concentration in FW stands out (MICHELAN, 2010; ROMERO et al., 2010; MICHELAN et al., 2011; GUEDES, 2018), a factor also discussed by Michelan et al. (2010). Regarding microbiological parameters, Table 13 shows the efficiency of BF in the removal of total coliforms (MICHELAN, 2010; ROMERO et al., 2010) and E. coli (MICHELAN, 2010; ROMERO et al., 2010; MICHELAN et al., 2011). In another study, Michelan et al. (2011) mention the removal of 2 logs of total coliforms. (Table notes: RW - raw water (source, Itajaí do Sul River); FW - filtered water (production well); TOC - total organic carbon; DO - dissolved oxygen. Source: adapted by the authors from Michelan (2010), Romero et al. (2010), Michelan et al. (2011), and Guedes et al. (2018).) For more details regarding carbofuran removal using BF technology, Michelan (2010) carried out a test involving filtration columns, whose results showed a travel time of 25 days for a carbofuran removal on the order of 80%. The half-life of the compound in the system corresponded to 10.5 days at neutral pH. Belo River - Orleans (SC): Study site characterization Orleans is a municipality located in southern Santa Catarina, as shown in Fig. 3, and is 196 km away from the state capital, Florianópolis. The BF system studied by Guedes (GUEDES, 2018; GUEDES et al., 2019) corresponded to the structure located on the Belo River bank, about 2.16 km from the source, in the rural area of Orleans, Santa Catarina, with UTM coordinates 667449.00 m E and 6831854.00 m S, zone 22. In addition to the abstraction well, the system included pumping performed using photovoltaic energy. The geological profile of the land presented clay and silt (up to 2 m deep), basalt rocks and sand (between 2 and 5 m), and coarse sand (5 to 10 m deep), with the corresponding groundwater level at an average depth of 2.5 m. The hydraulic conductivity of the soil averaged 5.2×10⁻⁵ m/s, typical for sediments composed of coarse and medium sand (GUEDES, 2018; GUEDES et al., 2019). The production well, called P1, was built with a diameter of 8 inches and a depth of 15 m, and was located 17 m from the source. The travel time of the water to the well fluctuated between 16 and 32 days, depending on the lowering of the aquifer and the pump used, the pumping being intermittent (GUEDES, 2018). Subsequently, Guedes et al. (2019) built a second production well (P2), 1 m in diameter and 5 m deep, 25 m from the source, which was also pumped intermittently, as the energy supply system was based on photovoltaic cells. It was also possible to identify the hydraulic connection between the water in the wells and the source (GUEDES, 2018; GUEDES et al., 2019). Water quality characterization: physical, chemical and biological parameters The results in Tables 14, 15 and 16 represent the physical, chemical and biological parameters, respectively, obtained in the production wells P1 (GUEDES, 2018) and P2 (GUEDES et al., 2019). Table 14 shows the excellent efficiency of BF in removing the studied physical parameters, both in P1 and in P2, with removal percentages above 95%. (Table 14 notes: RW - raw water (source, Belo River, Orleans); FW - filtered water (production well); NTU - nephelometric turbidity unit. Source: adapted by the authors from Guedes (2018) and Guedes et al. (2019).)
In Table 15, it is possible to observe increases of 1.3 and 1.2 times in the electrical conductivity (GUEDES, 2018; GUEDES et al., 2019) due to the chemical characteristics of the sediments, which influence the composition of the leached materials, such as the release of soil ions. The efficiency of BF in total iron removal was also observed in both wells. There was a reduction in DOC, showing the degradation of organic compounds during filtration (GUEDES, 2018), a decrease in DO, and a decrease in pH, which was much greater in P2. The system's location in a rural area explains the presence of microbiological contaminants, owing to animal husbandry in the region and the possibility of illegal effluent discharges along the source path. In the data in Table 16, the absence of E. coli after BF is observed in both wells, without, however, the complete elimination of total coliforms (GUEDES, 2018; GUEDES et al., 2019). Nonetheless, a removal of 99.99% was obtained. (Table 16 notes: RW - raw water (source, Belo River, Orleans); FW - filtered water (production well); MPN - most probable number.) FINAL CONSIDERATIONS From the content prepared for Part 1 of this review, for the correct application of BF, sustainable surface and groundwater sources must be available, in hydraulic connection with a local aquifer formed by alluvial or unconsolidated materials. The hydrogeology of the aquifer, the hydrology of the water body, the morphology of the river or lake, the composition of its bottom and its quality, in addition to the temperature of the surface water and groundwater, are other local characteristics to consider when applying BF (ESQUIVEL et al., 2016). As discussed by Burgardt (2017), and as observed in all the studies carried out, the applied flow rates were too low to be representative of the real scale of a public supply system. In the future, it would be interesting to carry out a study that analyzed quality and quantity (higher flows) in order to understand what would happen to the travel time and the final water quality. As can be seen in this review, among the parameters involved in the studies presented, the BF technique demonstrates efficiency in reducing parameters such as turbidity, coliforms (total and fecal), pesticides and toxins. However, the technique showed low capacity to reduce parameters such as salinity and true color. Parameters such as pH and dissolved oxygen do not change significantly. It is also important to note that, since the technique depends on the soil composition inherent to local geological conditions, parameters such as iron, manganese, fluorine, alkalinity, hardness and chlorides can be added to the treated water as a function of the redox conditions during infiltration, as well as of flow variations, which change the relative contributions of the source and the aquifer. In summary, for public supply systems, BF can be used as a pretreatment in already consolidated situations, that is, in systems in operation (water treatment plants), so that the raw water reaching the beginning of treatment has the same quality standards (or better ones, for example lower turbidity) as when the current treatment systems were conceived (start of operation), giving them a longer useful life or even a greater flow capacity. The final considerations will be presented in the second part of this review.
7,925.4
2021-11-10T00:00:00.000
[ "Biology" ]
A New Oren–Nayar Shape-from-Shading Approach for 3D Reconstruction Using a High-Order Godunov-Based Scheme 3D shape reconstruction from images has been an important topic in the field of robot vision. Shape-From-Shading (SFS) is a classical method for determining the shape of a 3D surface from a single intensity image. Lambertian reflectance is a fundamental assumption in conventional SFS approaches. Unfortunately, when applied to characterize the reflection attribute of diffuse reflection, the Lambertian model has been shown to be inexact. In this paper, we present a new SFS approach for 3D reconstruction of diffuse surfaces whose reflection attribute is approximated by the Oren–Nayar reflection model. The partial differential Image Irradiance Equation (IIR) is set up with this model under a single distant point light source and an orthographic camera projection whose direction coincides with the light source. Then, the IIR is converted into an eikonal equation by solving a quadratic equation that includes the 3D surface shape. The viscosity solution of the resulting eikonal equation is approximated by using a high-order Godunov-based scheme that is accelerated by means of an alternating sweeping strategy. We conduct experiments on synthetic and real-world images, and the experimental results illustrate the effectiveness of the presented approach. Introduction 3D shape reconstruction from images has been an important topic in the field of robot vision. Shape-From-Shading (SFS), initiated by Horn [1,2], is a classical method for determining the shape of a 3D surface from a single intensity image. He described the SFS problem by the so-called Image Irradiance Equation (IIR), which is a first-order nonlinear Partial Differential Equation (PDE) and expresses the relationship between the 3D surface shape and the corresponding 2D image intensity variations. There are primarily two steps when processing an SFS problem. The first step is that of setting up a partial differential IIR under certain assumptions. The assumptions are used to model the image formation process and depend on the reflection attribute of the surface, the light source, and the camera. The second step is that of building a numerical scheme to acquire a solution of the IIR, which is the 3D shape corresponding to the known intensity image. Since the pioneering work of Horn, the majority of SFS works focus on how to build a numerical scheme with the simple Lambertian model; thus, many SFS algorithms have been reported (one can see the extensive list of references in the three surveys [3][4][5]). According to Zhang et al. [4], Durou et al.
[5], and Tozza and Falcone [6], these algorithms are roughly divided into two categories: PDE-based approaches and optimization approaches. The early work of Horn [1], based on the method of characteristics, and the methods based on viscosity solution theories [7][8][9][10][11] all belong to the first category. It is worth mentioning that Rouy and Tourin [7] first formulated the SFS problem as a Hamilton-Jacobi-Bellman (HJB) PDE, whose solution is obtained through viscosity solution theories. Kimmel and Sethian [8] converted the partial differential IIR with the Lambertian reflection model into an eikonal PDE. They adopted the fast-marching method [12] to compute its viscosity solution. These approaches [1,7,8] are based on an orthographic camera projection. For a perspective camera projection, Prados and Faugeras [9] associated the partial differential IIR with a Hamiltonian and approximated its viscosity solution using an optimal control strategy, building on the work in [7]. Breuß et al. [10,11] studied analytically and numerically the perspective SFS model introduced by Prados and Faugeras [9] and the related Hamilton-Jacobi (HJ) PDE. Also, they proved the convergence of a finite-difference scheme and a semi-Lagrangian scheme for the HJ PDE. The second category includes the minimization methods for the variational problem. Lee and Kuo [13] presented an iterative SFS method under the perspective camera projection. They approximated a smooth surface by the union of triangular surface patches involving only the height variables. The shape was recovered by linearizing the reflectance map and minimizing a quadratic cost functional. Courteille et al. [14] introduced some form of regularization by formulating a PDE-based weak-perspective or perspective IIR in terms of a spline basis. We want to mention the work by Quéau et al. [15], who combined the advantages of PDE-based approaches and optimization approaches, and created a generic variational solution based on PDEs that is suitable for SFS under natural illumination and can handle a variety of scenarios for the camera and the lighting. Finally, in addition to these works, we should also mention a recent work by Scheffler et al. [16], who presented a novel graph-theoretical framework built on cycle bases. They proposed a linear program for realizing the approach to solve the disambiguation problem for SFS.
As for the modeling process of the surface reflection, few works are reported in the SFS literature. Generally, the Lambertian reflectance is a fundamental assumption in conventional SFS approaches for approximating the reflection attribute of diffuse reflection. Unfortunately, the Lambertian model has been shown to be inexact, especially for rough diffuse surfaces [17]. Samaras and Metaxas [18] adopted Deformable Models and the Oren–Nayar model [17] to construct an approach for non-Lambertian SFS and light-direction estimation. Ragheb and Hancock [19] also reported a non-Lambertian SFS with the Oren–Nayar model and presented two solutions: an analytic solution and a lookup table. Ahmed and Farag [20] gave several non-Lambertian SFS models and solved the partial differential IIR using the Lax–Friedrichs sweeping scheme [21]. Tozza and Falcone [6] proposed a unified formulation for several non-Lambertian SFS models, solved it by a semi-Lagrangian approximation scheme, and proved a convergence result. In these non-Lambertian SFS approaches, a distant point light source and an orthographic camera projection are adopted; because of their simplicity, our current study is based on these models. Of course, recent research focuses on more realistic imaging models, including a nearby point light source and a perspective camera projection. Ahmed and Farag [22] used the model proposed by Prados and Faugeras [9], where the point light source is ideally attached to the camera's projection center, and a light attenuation term 1/r² (r denotes the distance between the position of the light source and the 3D surface point) is considered in order to eliminate the convex/concave ambiguity that makes the non-Lambertian SFS problem ill-posed. Moreover, Ju et al. [23] extended the work of Galliani et al. [24]: they applied a spherical surface parameterization to the Oren–Nayar reflection model and could thus handle an arbitrary position of the point light source. Nevertheless, the solution process requires transforming the fast-marching method, described in Cartesian coordinates [12], into spherical coordinates.

In this paper, we present a new Oren–Nayar SFS approach for 3D reconstruction of diffuse surfaces by extending our work [25]. The Oren–Nayar reflection model is again applied to approximate the reflection attribute of the diffuse reflection. Then, a partial differential IIR with this model is set up under a single distant point light source and an orthographic camera projection whose direction coincides with the light source. Although several works [6,20,26] have been reported in this situation, our main contribution is that we convert the IIR into a standard eikonal PDE by solving a quadratic equation which includes the 3D surface shape, after which we obtain the viscosity solution of the resulting eikonal equation by using the high-order Godunov-based scheme.
IIR for Oren–Nayar Reflectance

Assuming that the image plane of the camera is the x–y plane and that the camera's optical axis is the z-axis, the SFS problem can be expressed as reconstructing a 3D surface z(x, y) that satisfies the partial differential IIR [1]:

I(x, y) = R(p(x, y), q(x, y)),    (1)

where I(x, y) is the image irradiance, which is considered to be equal to the image intensity, p(x, y) and q(x, y) respectively stand for the first partial derivatives of the surface z(x, y) with respect to x and y, and R is the reflectance map derived from the reflection model. Assuming that the camera performs an orthographic projection of a surface irradiated by a single point light source placed distantly from the surface, the normal vector n at a 3D surface point (x, y, z(x, y)) can be written as

n = [p, q, −1]^T.    (2)

The reflected radiance of an ideal diffuse surface obeys Lambert's law, which can be expressed by

L_r = (ρ/π) I_0 cos θ_i,    (3)

where θ_i, as illustrated in Figure 1, is the angle between the normal vector n and the incident direction of the point light source L, ρ is the albedo of the diffuse surface, which is constant in this paper, and I_0 is the radiant intensity of the light source. However, for a typical real diffuse surface, the Lambertian reflection model has been shown to be an inexact approximation of the diffuse component of the surface reflection [17,25]. To eliminate this inexactness, Oren and Nayar [17] produced an advanced reflection model for rough diffuse surfaces based on a roughness model. The roughness model considers the surface to be constituted of extended symmetric V-shaped cavities, where every V-cavity consists of two planar facets that are assumed to obey Lambert's law. The roughness of the surface is determined by a probability function of the planar facet orientations. Using a Gaussian facet-slope distribution, with the reflection geometry shown in Figure 1, the surface reflected radiance L_r is given by

L_r = (ρ/π) I_0 cos θ_i (A + B max[0, cos(ϕ_r − ϕ_i)] sin α tan β),    (4)

where

A = 1 − 0.5 σ²/(σ² + 0.33),  B = 0.45 σ²/(σ² + 0.09),  α = max(θ_i, θ_r),  β = min(θ_i, θ_r).

The parameter σ is a measure of the surface roughness and represents the standard deviation of the Gaussian distribution.
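As a concrete illustration of the qualitative Oren–Nayar radiance in Equation (4), the short Python sketch below evaluates it for given incidence and viewing angles. The function name and default parameter values are our own illustrative assumptions; the authors report that their actual experiments were implemented in Matlab with C mex functions.

```python
import numpy as np

def oren_nayar_radiance(theta_i, theta_r, phi_i, phi_r, sigma, albedo=1.0, I0=1.0):
    """Qualitative Oren-Nayar reflected radiance of a rough diffuse surface.

    theta_i, theta_r : incidence and viewing slant angles (radians)
    phi_i, phi_r     : incidence and viewing tilt angles (radians)
    sigma            : roughness (std. dev. of the Gaussian facet-slope distribution)
    """
    A = 1.0 - 0.5 * sigma**2 / (sigma**2 + 0.33)
    B = 0.45 * sigma**2 / (sigma**2 + 0.09)
    alpha = np.maximum(theta_i, theta_r)
    beta = np.minimum(theta_i, theta_r)
    return (albedo / np.pi) * I0 * np.cos(theta_i) * (
        A + B * np.maximum(0.0, np.cos(phi_r - phi_i)) * np.sin(alpha) * np.tan(beta)
    )

# For sigma = 0 we get A = 1 and B = 0, so the model reduces to the Lambertian case (3).
```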
Considering that the direction of the point light source L coincides with the camera's direction V, we have ϕ_i = ϕ_r, θ_i = θ_r, and α = β. Consequently, Equation (4) can be simplified to

L_r = (ρ/π) I_0 cos θ_i (A + B sin θ_i tan θ_i).    (5)

Generally speaking, Equation (5) can only be established under the assumption of a single distant point light source and an orthographic camera projection whose direction coincides with the light source, or of a nearby point light source attached to the projection center of a camera performing a perspective projection. If the camera's direction does not coincide with the light source, our approach is no longer suitable, in which case one can refer to the work of [6,23]. Defining the unit vectors of L and V to be [0, 0, −1] and noting that θ_i is the angle between the normal vector n and L, we have

cos θ_i = 1/√(p² + q² + 1) = 1/√(‖∇z(x, y)‖² + 1).    (6)

According to [1], substituting Equation (6) into Equation (5) yields the reflectance map derived from the Oren–Nayar model, and the partial differential IIR for the Oren–Nayar reflectance becomes

I(x, y) = (ρ I_0/π) [ A/√(‖∇z(x, y)‖² + 1) + B ‖∇z(x, y)‖²/(‖∇z(x, y)‖² + 1) ].    (7)

A similar IIR for the Oren–Nayar SFS has been reported in [26] and can be solved by a semi-Lagrangian approximation scheme. Here we convert the IIR into a standard eikonal PDE, which will be solved by the high-order Godunov-based scheme. The formulation (7) can be characterized as a quadratic equation in the variable √(‖∇z(x, y)‖² + 1). Applying the change of variable g = √(‖∇z(x, y)‖² + 1), Equation (7) is formulated as

(I(x, y) − ρ I_0 B/π) g² − (ρ I_0 A/π) g + ρ I_0 B/π = 0.    (8)

Solving Equation (8) and keeping the root that satisfies g ≥ 1, we obtain g = g(x, y). Thus, the SFS problem (7) has been reformulated as a standard eikonal PDE:

‖∇z(x, y)‖ = √(g(x, y)² − 1)  for (x, y) ∈ Ω,  z(x, y) = ϕ(x, y)  on ∂Ω,    (10)

where ϕ(x, y) is a boundary condition and Ω is a given image domain.

Figure 1. Reflection geometry model of a 3D surface with the normal vector n. L and V are the incident direction of the point light source and the camera's direction, respectively. ϕ_i, ϕ_r and θ_i, θ_r are their corresponding tilt and slant angles, respectively.
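The conversion of the IIR (7) into the right-hand side of the eikonal PDE (10) can be sketched in a few lines. The coefficient layout below mirrors the quadratic in g = √(‖∇z‖² + 1) as reconstructed above, so it inherits the same assumptions (constant albedo ρ and source intensity I_0); function and variable names are our own.

```python
import numpy as np

def eikonal_rhs(I, sigma, albedo=1.0, I0=1.0):
    """Return f(x, y) such that |grad z| = f, from the Oren-Nayar image intensity I.

    Solves (I - k*B) g^2 - k*A g + k*B = 0 for g = sqrt(|grad z|^2 + 1),
    with k = albedo * I0 / pi, keeping the admissible root g >= 1.
    Assumes I > k*B, i.e. points away from the degenerate darkest intensities.
    """
    A = 1.0 - 0.5 * sigma**2 / (sigma**2 + 0.33)
    B = 0.45 * sigma**2 / (sigma**2 + 0.09)
    k = albedo * I0 / np.pi
    a, b, c = I - k * B, -k * A, k * B
    disc = np.maximum(b**2 - 4.0 * a * c, 0.0)        # clip tiny negatives from image noise
    roots = np.stack([(-b + np.sqrt(disc)) / (2.0 * a),
                      (-b - np.sqrt(disc)) / (2.0 * a)])
    g = np.where(roots >= 1.0, roots, np.inf).min(axis=0)   # admissible root, g >= 1
    g = np.clip(g, 1.0, None)
    return np.sqrt(g**2 - 1.0)
```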
Numerical Algorithm for Solving the Resulting Eikonal PDE

In this section, we employ the high-order Godunov-based scheme [27], accelerated by the alternating sweeping strategy [28,29], to compute the viscosity solution of the resulting eikonal PDE (10).

High-Order Godunov-Based Scheme

Consider a discrete grid (x_m, y_n), m = 1, ..., r, n = 1, ..., c, of the given image domain Ω, whose resolution and grid size are r × c and s × s, respectively. Our goal is to obtain a discrete solution z_{m,n} = z(x_m, y_n) of the unknown 3D surface z(x, y). A first-order Godunov-based scheme proposed in [7,29] is employed to discretize the resulting eikonal PDE (10) (Equation (11)), where the one-sided differences p^−_{m,n}, p^+_{m,n} and q^−_{m,n}, q^+_{m,n} approximate the partial derivatives p and q at the grid point (m, n). Substituting these differences into Equation (11) gives Equation (13); hence, the first-order viscosity solution of Equation (13) can be obtained in closed form. To build a higher-order scheme, p_{m,n} and q_{m,n} need to be approximated with higher-order accuracy. According to [27], since the third-order Weighted Essentially Non-Oscillatory (WENO) scheme has uniform high-order accuracy and is more efficient and robust than other schemes (e.g., the Essentially Non-Oscillatory (ENO) scheme), we also choose to employ the third-order WENO approximations of p^−_{m,n} and p^+_{m,n}, in which µ is a very small number keeping the denominator from getting too close to 0. In a similar way, q^−_{m,n} and q^+_{m,n} are specified. Thus, the higher-order viscosity solution of Equation (13) is obtained from the update rule (19).

Alternating Sweeping Strategy for the Godunov-Based Scheme

To speed up the convergence of the high-order Godunov-based scheme, the philosophy of the alternating sweeping strategy and Gauss–Seidel iterations [28,29] is used as follows. The newest available values of z are adopted when we calculate the derivatives p^−_{m,n}, p^+_{m,n} and q^−_{m,n}, q^+_{m,n}. Simultaneously, the iterations of the scheme do not sweep in a single direction only, but in the following four alternating directions: (a) from upper left to lower right; (b) from lower left to upper right; (c) from lower right to upper left; (d) from upper right to lower left. Clearly, different values z_{m±2,n}, z_{m±1,n} and z_{m,n±2}, z_{m,n±1} are taken into account depending on the current sweeping direction within the preceding cycle.

The high-order Godunov-based scheme accelerated by an alternating sweeping strategy is summarized as follows. (1) Initialization: on the boundary ∂Ω we set the grid points to the exact values, i.e., z^0_{m,n} = ϕ_{m,n}, which remain unchanged during the iterations of the scheme; the approximate solution from the first-order Godunov-based scheme is used as the initial guess for all other grid points. (2) Iterations with alternating sweeping orderings: we calculate z^new_{m,n} at the (i + 1)th iteration, updated by the rule (19), using Gauss–Seidel iterations with the four alternating sweeping orderings (a)–(d) above. (3) Convergence test: if the change between two successive iterations is no larger than δ, where δ > 0 is a known threshold value, the scheme stops and is considered converged; otherwise it returns to step (2). In this paper, δ = 10^{−5} is taken.
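To make the sweeping machinery above concrete, the following sketch implements the first-order Godunov update with four alternating Gauss–Seidel sweeps, i.e., the scheme used here as the initial guess; the high-order WENO correction is omitted for brevity, the function and variable names are ours, and the sign and orientation conventions of the SFS height may differ from those of the paper.

```python
import numpy as np

def fast_sweep_eikonal(f, z0, fixed, h=1.0, n_cycles=20):
    """First-order Godunov fast-sweeping solver for |grad z| = f on a regular grid.

    f      : right-hand side, r x c array (e.g. from eikonal_rhs above)
    z0     : initial guess; non-fixed points are typically set to a large value
    fixed  : boolean mask of Dirichlet points whose values are kept unchanged
    h      : grid spacing
    """
    z = z0.astype(float)
    r, c = z.shape
    orders = [(range(r), range(c)),                         # four alternating sweep directions
              (range(r), range(c - 1, -1, -1)),
              (range(r - 1, -1, -1), range(c)),
              (range(r - 1, -1, -1), range(c - 1, -1, -1))]
    big = 1e10
    for _ in range(n_cycles):
        for rows, cols in orders:
            for m in rows:
                for n in cols:
                    if fixed[m, n]:
                        continue
                    a = min(z[m - 1, n] if m > 0 else big, z[m + 1, n] if m < r - 1 else big)
                    b = min(z[m, n - 1] if n > 0 else big, z[m, n + 1] if n < c - 1 else big)
                    fh = f[m, n] * h
                    if abs(a - b) >= fh:                     # only the smaller neighbour contributes
                        z_new = min(a, b) + fh
                    else:                                    # both directions contribute
                        z_new = 0.5 * (a + b + np.sqrt(2.0 * fh * fh - (a - b) ** 2))
                    z[m, n] = min(z[m, n], z_new)            # Gauss-Seidel: keep the smaller value
    return z
```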
Experimental Results

The effectiveness of the presented approach is assessed using several experiments on synthetic and real-world images. All the experiments were conducted on a general-purpose personal computer with an Intel Core i5-3570 CPU and 4 GB of DDR3 memory. We implemented all the approaches in Matlab, using C mex functions under Microsoft Windows operating system environments.

Experimental Results on Synthetic Images

A synthetic image of a sphere, produced using formulation (20), was employed in our experiments:

z(x, y) = √(max(0, R² − x² − y²)),    (20)

where (x, y) ∈ [−63, 64] × [−63, 64] and R = 50 is the sphere's radius. Another synthetic image of a vase, produced using formulation (21), was also employed, where f(y) = −0.025(6y − 1)(2y + 1)(2y − 1)²(3y + 2)² + 0. The two images are generated by the model in this paper with the direction vectors of the point light source and the camera both being [0, 0, −1]^T. The variant approach of Kimmel and Sethian [8] and the approach of Ahmed and Farag [20] are used for comparison with our presented approach. It should be noted that the approach of Ahmed and Farag [20] applied the Lax–Friedrichs sweeping scheme to solve the Oren–Nayar SFS, and that the existing approach of Kimmel and Sethian [8] used the first-order Godunov-based scheme to solve the Lambertian SFS and therefore cannot directly solve the Oren–Nayar SFS; however, it is able to solve our partial differential IIR (10) directly. We define the first-order Godunov-based scheme accelerated by an alternating sweeping strategy for the Oren–Nayar SFS as the variant approach of Kimmel and Sethian [8].

The experimental results of the sphere are illustrated in Figure 3. Figure 3a shows the synthetic intensity image adopted for the 3D reconstruction with the parameter set σ = 0.2. Figure 3b demonstrates the reconstructed height map using the variant approach of Kimmel and Sethian [8] and the known Dirichlet boundary condition z = ϕ on ∂Ω, while the error map with respect to the original height map is shown in Figure 3c. Figure 3d demonstrates the reconstructed height map using the approach of Ahmed and Farag [20], while the error map with respect to the original height map is shown in Figure 3e. Figure 3f demonstrates the reconstructed height map using our presented approach with the known Dirichlet boundary condition z = ϕ on ∂Ω, while the error map with respect to the original height map is shown in Figure 3g. Finally, Figure 4 illustrates the corresponding experimental results of the synthetic vase.
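For reference, the synthetic sphere input can be regenerated with a few lines of code. The grid layout and the shading follow formulation (20) and the coincident-source Oren–Nayar model of Equation (7); the function names are our own, and the original experiments were implemented in Matlab rather than Python.

```python
import numpy as np

def sphere_height_map(radius=50, half=63):
    """Height map z = sqrt(max(0, R^2 - x^2 - y^2)) on the grid [-half, half + 1]^2."""
    x, y = np.meshgrid(np.arange(-half, half + 2), np.arange(-half, half + 2))
    return np.sqrt(np.maximum(0.0, radius**2 - x**2 - y**2))

def shade_oren_nayar(z, sigma=0.2, albedo=1.0, I0=1.0):
    """Intensity image of a height map under the coincident-source Oren-Nayar model (7)."""
    p, q = np.gradient(z)                        # finite-difference partial derivatives
    g = np.sqrt(p**2 + q**2 + 1.0)               # g = sqrt(|grad z|^2 + 1), so cos(theta_i) = 1/g
    A = 1.0 - 0.5 * sigma**2 / (sigma**2 + 0.33)
    B = 0.45 * sigma**2 / (sigma**2 + 0.09)
    return (albedo * I0 / np.pi) * (A / g + B * (g**2 - 1.0) / g**2)

intensity_image = shade_oren_nayar(sphere_height_map())     # synthetic input akin to Figure 3a
```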
Figures 3 and 4 show that the variant approach of Kimmel and Sethian [8], the approach of Ahmed and Farag [20], and our presented approach can all obtain satisfactory reconstructed results for the diffuse surfaces. Moreover, we can further see that our presented approach achieves higher accuracy and smaller errors than the variant approach of Kimmel and Sethian [8] and the approach of Ahmed and Farag [20].

To further demonstrate the effectiveness of our presented approach, a quantitative comparison between the variant approach of Kimmel and Sethian [8], the approach of Ahmed and Farag [20], and our presented approach is carried out using the CPU time, the Mean Absolute Error (MAE), and the Root Mean Square Error (RMSE). Table 1 lists the comparisons of the three approaches for the synthetic sphere and vase images. As is directly seen, our presented approach shows better performance in the reconstruction errors than the variant approach of Kimmel and Sethian [8] and the approach of Ahmed and Farag [20] on all the images on which we performed it. The MAE and RMSE of our proposed approach are about one-fiftieth and one-twentieth of those of the variant approach of Kimmel and Sethian [8] for the sphere, and about one-seventh and one-fourth for the vase, respectively. The approach of Ahmed and Farag [20] shows a slightly worse performance; it may be difficult to find a good estimate for the artificial viscosity term. However, our presented approach requires longer CPU time than the variant approach of Kimmel and Sethian [8], because the former adopts a complex high-order approximation scheme.
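The two error measures used in Table 1 are straightforward to reproduce; the following minimal sketch (with hypothetical array names) computes them between a reconstructed and a ground-truth height map.

```python
import numpy as np

def mae_rmse(z_reconstructed, z_true):
    """Mean Absolute Error and Root Mean Square Error between two height maps."""
    diff = np.asarray(z_reconstructed, dtype=float) - np.asarray(z_true, dtype=float)
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    return mae, rmse
```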
Experimental Results on a Real-World Image

In order to evaluate the performance of our presented approach on real diffuse surfaces, we applied it to a real-world image and compared the reconstructed result with the existing approach of Kimmel and Sethian [8], the variant approach of Kimmel and Sethian [8], and the approach of Ahmed and Farag [20]. The image is the real-world vase used in the surveys [4,5], which is shown in Figure 5a. Figure 5b gives the ground truth referred to in [5]. Figure 5c–f illustrate the reconstructed height maps using the approach of Kimmel and Sethian [8] with exact height values at each singular point, the variant approach of Kimmel and Sethian [8] with the known Dirichlet boundary condition z = ϕ on ∂Ω, the approach of Ahmed and Farag [20], and our presented approach with the known Dirichlet boundary condition z = ϕ on ∂Ω, respectively. Similarly, we calculated the MAE and RMSE errors of these approaches, and the comparison results are listed in Table 2.
From the reconstructed results illustrated in Figure 5c–f, we can see that our presented approach and the variant approach of Kimmel and Sethian [8] are more vivid than the approach of Kimmel and Sethian [8]. The same conclusion can be drawn from Table 2: our presented approach shows the best performance, since it employs the advanced reflection model for diffuse surfaces and a high-order accurate numerical scheme. Recall that the approach of Kimmel and Sethian [8] assumes the Lambertian reflection model for diffuse surfaces. As previously, the approach of Ahmed and Farag [20] shows a slightly worse performance, as can be seen from Figure 5e.

Conclusions

This paper reports a new Oren–Nayar approach for 3D reconstruction of diffuse surfaces. We employed the Oren–Nayar reflection model instead of the Lambertian model to approximate the reflection attribute of the diffuse reflection. The partial differential IIR for the Oren–Nayar reflectance was set up under the assumption that the direction of the distant point light source is the same as that of the camera performing an orthographic projection. A standard eikonal PDE was derived from the IIR by solving a quadratic equation in a variable that includes the 3D surface shape, and we obtained the viscosity solution of the resulting eikonal PDE using the high-order Godunov-based scheme accelerated by the philosophy of the alternating sweeping strategy and Gauss–Seidel iterations. Experiments were conducted on synthetic and real-world images, and the experimental results illustrate that our presented approach is effective and can achieve higher accuracy than the first-order Godunov-based scheme and the Lax–Friedrichs scheme.

Frankly speaking, the approach presented in this paper can only be used in the situation of a single distant point light source and an orthographic camera projection whose direction coincides with the light source. Nevertheless, we believe that the idea of solving the quadratic IIR can also be employed to solve the SFS problem with a nearby point light source attached to the projection center of a camera performing a perspective projection. At the same time, a light attenuation term can be considered in order to eliminate the convex/concave ambiguity that makes the SFS problem ill-posed. This work will be explored in the future.

Figure 1. Reflection geometry model of a 3D surface with the normal vector n. L and V are the incident direction of the point light source and the camera's direction, respectively. ϕ_i, ϕ_r and θ_i, θ_r are their corresponding tilt and slant angles, respectively.

Figure 2. The original height maps: (a) the sphere; (b) the vase.

Figure 3. Experimental results of the synthetic sphere: (a) intensity image generated by Figure 2a with the parameter set σ = 0.2; (b) reconstructed height map using the variant approach of [8]; (c) error map of (b) with Figure 2a; (d) reconstructed height map using the approach of [20]; (e) error map of (d) with Figure 2a; (f) reconstructed height map using our presented approach; (g) error map of (f) with Figure 2a.
Figure 4. Experimental results of the synthetic vase: (a) intensity image generated by Figure 2b with the parameter set σ = 0.2; (b) reconstructed height map using the variant approach of [8]; (c) error map of (b) with Figure 2b; (d) reconstructed height map using the approach of [20]; (e) error map of (d) with Figure 2b; (f) reconstructed height map using our presented approach; (g) error map of (f) with Figure 2b.

Figure 5. Experimental results of the real-world vase: (a) real intensity image; (b) ground truth referred to in [5]; (c) reconstructed height map using the approach of [8]; (d) reconstructed height map using the variant approach of [8]; (e) reconstructed height map using the approach of [20]; (f) reconstructed height map using our presented approach.

Table 1. Comparisons of approaches for the synthetic images.

Table 2. Comparisons of approaches for the real-world vase.
7,205.8
2018-05-18T00:00:00.000
[ "Engineering", "Computer Science", "Physics" ]
Xylose fermentation efficiency of industrial Saccharomyces cerevisiae yeast with separate or combined xylose reductase/xylitol dehydrogenase and xylose isomerase pathways Background Xylose isomerase (XI) and xylose reductase/xylitol dehydrogenase (XR/XDH) pathways have been extensively used to confer xylose assimilation capacity to Saccharomyces cerevisiae and tackle one of the major bottlenecks in the attainment of economically viable lignocellulosic ethanol production. Nevertheless, there is a lack of studies comparing the efficiency of those pathways both separately and combined. In this work, the XI and/or XR/XDH pathways were introduced into two robust industrial S. cerevisiae strains, evaluated in synthetic media and corn cob hemicellulosic hydrolysate and the results were correlated with the differential enzyme activities found in the xylose-pathway engineered strains. Results The sole expression of XI was found to increase the fermentative capacity of both strains in synthetic media at 30 °C and 40 °C: decreasing xylitol accumulation and improving xylose consumption and ethanol production. Similar results were observed in fermentations of detoxified hydrolysate. However, in the presence of lignocellulosic-derived inhibitors, a positive synergistic effect resulted from the expression of both XI and XR/XDH, possibly caused by a cofactor equilibrium between the XDH and furan detoxifying enzymes, increasing the ethanol yield by more than 38%. Conclusions This study clearly shows an advantage of using the XI from Clostridium phytofermentans to attain high ethanol productivities and yields from xylose. Furthermore, and for the first time, the simultaneous utilization of XR/XDH and XI pathways was compared to the single expression of XR/XDH or XI and was found to improve ethanol production from non-detoxified hemicellulosic hydrolysates. These results extend the knowledge regarding S. cerevisiae xylose assimilation metabolism and pave the way for the construction of more efficient strains for use in lignocellulosic industrial processes. Background The depletion of fossil fuel reserves, the economic problems associated with their use and the growing environmental concerns related with greenhouse gas emissions have led to a search for new renewable energy sources [1]. The use of lignocellulosic biomass for the production of bioethanol and value-added products has emerged as a sustainable alternative to fossil sources, as lignocellulose is one of the most abundantly available renewable biomass sources on earth and its utilization does not compete with the use of land for food production [2][3][4]. The complex and recalcitrant structure of lignocellulosic biomass comprises cellulose (a crystalline, linear homopolymer of glucose), hemicellulose (a branched, amorphous heteropolymer of hexoses and pentoses) and lignin [1]. The pre-treatment and hydrolysis steps, required to obtain fermentable sugars from lignocellulosic biomass, also release inhibitory compounds, such as 5-hydroxymethylfurfural (HMF), furfural and acetic acid [5]. These compounds strongly affect microbial growth and ethanol fermentation [6,7]. Glucose and xylose are the most abundant monosaccharides present in lignocellulosic hydrolysates, representing between 60-70% and 30-40% of their sugar composition, respectively [8]. Saccharomyces cerevisiae, the preferred microorganism for large-scale ethanol production, does not naturally consume xylose. 
Thus, to obtain economically viable production of bioethanol, the hemicellulosic fraction composed mainly of xylose should be efficiently converted into ethanol through development of a S. cerevisiae strain capable of consuming xylose as well as having strong resistance to inhibitory compounds. Xylose assimilation is achieved through conversion of xylose into xylulose and subsequent phosphorylation to xylulose-5-phosphate, which is further metabolized in the pentose phosphate pathway. Two different pathways have previously been expressed in S. cerevisiae to convert xylose into xylulose: the oxidoreductase and the isomerase pathway. The oxidoreductase pathway is used by many xylose-fermenting yeast species and consists of two enzymatic reactions catalyzed by xylose reductase (XR) and xylitol dehydrogenase (XDH) [9]. First, XR reduces xylose to xylitol, preferably using NADPH over NADH as cofactor; xylitol is then oxidized to xylulose by XDH, which uses only NAD + as cofactor. The XR/XDH pathway has a bottleneck caused by a cofactor imbalance between the mainly NADPH-dependent XR and the NAD + -dependent XDH, which generally causes xylitol accumulation and thus lowers ethanol production [10]. Additionally, S. cerevisiae carries the endogenous gene GRE3 that encodes an unspecific NADPH-dependent aldose reductase that can convert xylose to xylitol [11]. The expression of this enzyme, which only uses NADPH as cofactor, might aggravate the redox imbalance, therefore leading to even greater xylitol accumulation and to inefficient xylose fermentation [12]. In this sense, the deletion of GRE3 [13][14][15] and genetic modifications supporting cofactor regeneration [16] decrease xylitol accumulation and consequently increase ethanol production. The isomerase pathway is mainly found in bacteria and is a one-step reaction catalyzed by xylose isomerase (XI) that directly converts xylose to xylulose without cofactor requirement [8,17]. Initial attempts to express bacterial xylose isomerase in yeast were not very successful, except for a xylose isomerase from a thermotolerant bacterium Thermus thermophilus [18]. Expression of a xylose isomerase from the fungus Piromyces sp. resulted for the first time in efficient xylose fermentation with this pathway [19]. Later, Brat et al. [20] were able to express a highly efficient xylose isomerase from Clostridium phytofermentans in a laboratory S. cerevisiae strain, also resulting in efficient xylose fermentation. Functional expression of bacterial xylose isomerase in an industrial S. cerevisiae strain was more challenging and was accomplished only after extensive mutagenesis and evolutionary engineering [21]. Increasing inhibitor tolerance further improved the performance of this industrial yeast strain [22]. Because of the inhibitory effect of xylitol on XI activity [23], the strategies used to reduce xylitol production in XR/XDH strains can also improve the isomerization activity of XI in vivo and thus also improve xylose fermentation rate in strains with XI [24]. Industrial environments have been a resource of robust S. cerevisiae strains, exhibiting higher fermentation performance and resistance to lignocellulosic-derived inhibitors when compared to other laboratory strains [25]. Thermotolerance is another trait presented by some yeast strains isolated from industrial environments that can be desirable for bioethanol fermentation. 
As previously mentioned, economically viable production of second-generation ethanol requires efficient fermentation of the lignocellulosic-derived whole slurry (cellulosic and hemicellulosic fractions), and in this sense, the implementation of simultaneous saccharification and co-fermentation (SSCF, co-consumption of both glucose and xylose) processes is of utmost importance for this purpose [14]. Nevertheless, the discrepancy between the optimal temperature for S. cerevisiae fermentation (30-35 °C) and for saccharolytic enzyme hydrolysis (approximately 50 °C) poses a major drawback, and in this sense the process would benefit from the use of more thermotolerant strains with xylose consumption abilities. Nonetheless, previous works have shown that yeast strains isolated from different industrial environments respond differently to the same genetic modifications for xylose consumption [14,26], making the heterogeneity of genetic background an important factor to be taken into account. S. cerevisiae PE-2 and CA11 (isolated from a first-generation bioethanol plant and a cachaça fermentation process, respectively) were previously engineered for d-xylose consumption with the XR/XDH pathway, and, despite presenting different fermentation profiles, both were capable of ethanol production from different non-detoxified hemicellulosic hydrolysates [14]. Despite the importance of designing robust S. cerevisiae strains capable of efficient xylose assimilation, and even though the genetic engineering of S. cerevisiae for xylose consumption is extensively studied, few previous works have addressed the comparison of different xylose consumption pathways [27][28][29]. In a first study, both pathways were compared in a laboratory strain using a lignocellulosic hydrolysate, with the XR/XDH pathway resulting in higher ethanol productivity while the Piromyces XI expression resulted in higher ethanol yield [27]. Li and collaborators [28] obtained similar results on comparing two non-isogenic engineered yeast strains for xylose consumption (one containing the XR/XDH pathway and the other the Piromyces XI) in synthetic media. More recently, the expression of an XI from C. phytofermentans was compared to the expression of the XR/XDH pathway in an industrial strain and synthetic media, and was found to result in higher ethanol yield with similar xylose consumption abilities [29]. Taking this into account, and due to the reported specificities of each of the pathways, we hypothesized that the simultaneous expression of both XR/XDH and XI could be beneficial for xylose consumption in a lignocellulosic fermentation context. Furthermore, to the best of our knowledge, there is only one report of the simultaneous expression of both xylose consumption pathways (using an XI from a bovine rumen metagenome) in S. cerevisiae: performed in a laboratory strain and synthetic media, presenting low xylose consumption rates and ethanol yields and lacking a thorough comparison with the sole expression of each of the pathways [30]. Considering this, and the lack of comparison studies performed in more industrially relevant conditions, in this work the utilization of the XR/XDH and/or XI pathways was compared in the industrial(-derived) strains PE-2∆GRE3 and CA11. The effect of the two pathways, either separately or combined, on xylose consumption and ethanol production was evaluated under different fermentation conditions in synthetic xylose-containing medium and also in corn cob lignocellulosic hydrolysate.
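Throughout the Results below, ethanol yields are reported both in g per g of consumed xylose and as a percentage of the theoretical maximum. As a reference for those figures, a minimal sketch of the conversion is given here; the 0.511 g/g stoichiometric maximum is a standard value and the example yield is taken from the Results that follow, so this snippet is only an illustrative sanity check, not part of the original study.

```python
# Stoichiometric maximum ethanol yield from xylose: 3 C5H10O5 -> 5 C2H5OH + 5 CO2,
# i.e. 5 * 46.07 g ethanol per 3 * 150.13 g xylose ~= 0.511 g/g.
Y_MAX = 5 * 46.07 / (3 * 150.13)

observed = 0.401                  # g ethanol / g xylose (example value reported below)
print(f"{100 * observed / Y_MAX:.1f}% of the theoretical maximum")
# -> ~78%, close to the ~79% quoted in the Results (difference due to rounding of the yield)
```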
Evaluation of kinetic profiles during xylose fermentation To evaluate the effect of expressing different xylose consumption metabolic pathways, the different constructed strains were characterized for their capacity to metabolize and convert xylose during cotton stopper fermentation in synthetic medium (Fig. 1, Table 1). As seen in Fig. 1A, the strains constructed from PE-2∆GRE3 show similar xylose consumption profiles (at a 95% confidence level); however, the PE-2∆GRE3 containing only the XI pathway was capable of producing higher ethanol concentrations than the strains containing the XR/ XDH pathway (Fig. 1C, P < 0.001), reaching at 24 h of fermentation an ethanol yield from xylose of 0.401 g/g (corresponding to 79% of the theoretical maximum). On the other hand, the strains expressing the XR/XDH (alone or together with the XI pathway) accumulated higher quantities of xylitol ( Fig. 1B, P < 0.0001) with yields superior to 0.3 g/g at 24 h of fermentation, while the xylitol yield of the PE-XI strain was lower than 0.02 g/g (Table 1). Regarding the CA11-derived strains, the one containing only the XI pathway presented a slightly lower xylose consumption capacity than the strains with the XR/XDH pathway ( Fig. 1D, P < 0.05). Nevertheless, the strains expressing the XI pathway, CA11-XR/XDH + XI and CA11-XI, produced higher ethanol titers during the fermentation (Fig. 1F, P < 0.0001), both presenting similar ethanol yields (Table 1), while the CA11-XR/XDH strain accumulated higher levels of the by-product xylitol ( Fig. 1E, P < 0.0001). At the end of fermentation, the glycerol accumulation in the medium was inferior to 1 g/L for all strains. Additionally, the use of cotton stoppers allowed some oxygenation of the medium, and consequently higher amounts of xylose were used for biomass production (in comparison with oxygen-deprived conditions) resulting in lower ethanol yields (with the strain with the highest fermentation performance reaching 79% of the theoretical yield) ( Table 1). Evaluation of xylose fermentation capacity in oxygen-deprived conditions at 30 and 40 °C To further evaluate the fermentative capacity of these strains, they were tested in oxygen-deprived conditions to potentiate ethanol production, and also at higher temperature (40 °C) to evaluate their potential in more demanding ethanol fermentation processes, such as simultaneous saccharification and fermentation (SSF) (Fig. 2, Table 2). As previously observed at 30 °C, among the PE-2∆GRE3derived strains, the strain expressing solely the XI pathway presented a higher ethanol titer and yield from xylose ( Fig. 2b, Table 2, P < 0.0001), while the strains containing the XR/XDH pathway presented xylitol yields ca. 6 times higher than the PE-XI strain (Fig. 2b, Table 2, P < 0.0001). Regarding the strains derived from CA11, the one with sole expression of the XI pathway also reached a higher ethanol concentration ( Table 2, P < 0.05); as already observed previously (Table 1), CA11-XR/XDH showed higher xylitol accumulation capacity ( Table 2, P < 0.05). It should be noted that the strains with the highest ethanol production ability were also the ones capable of consuming higher xylose quantities during the time course of fermentation (~ 120 h) at 30 °C (Fig. 2, Table 2). The decrease in oxygen availability increased the amount of ethanol produced by the strains solely containing the XI pathway (yield of ethanol from xylose ~ 0.43 g/g) and by the CA11 strain with both pathways (Fig. 2b, Table 2). 
However, for the strains that have previously shown a higher potential for xylitol accumulation (PE-XR/XDH, PE-XR/XDH + XI and CA11-XR/XDH, Fig. 1, Table 1), the oxygen-deprived conditions did not improve ethanol production but instead greatly increased the yield of the by-product xylitol (Fig. 2b, Table 2). Considering the fermentations at 40 °C, it is clear that the higher temperature severely affected the performance of all strains, decreasing xylose consumption and ethanol production, as well as biomass growth (Fig. 2a, Table 2). Comparing the PE-2∆GRE3-derived strains in terms of xylose consumption, PE-XI presented the highest percentage of xylose consumed (79%, Fig. 2a) at the end of fermentation (120 h), while among the strains with CA11 background the one expressing both pathways reached a higher percentage of xylose consumption (84%, Fig. 2a). Nevertheless, the ethanol yields from xylose were not severely affected by the increase in temperature (Fig. 2b, Table 2), with the PE-XI, CA11-XR/XDH + XI and CA11-XI strains presenting the highest values (≥ 0.345 g/g). Curiously, these strains with higher ethanol yields also accumulated higher xylitol quantities at 40 °C than at 30 °C (Table 2). Additionally, in all fermentations, at both temperatures, the strains with sole expression of the XI pathway (PE-XI and CA11-XI) accumulated higher quantities of the by-product glycerol, with the CA11-XI strain accumulating 0.154 g of glycerol per gram of xylose consumed at 40 °C (Table 2, P < 0.05).

Comparison of enzyme activities in cell extracts of xylose fermentations in oxygen-deprived conditions at 30 and 40 °C

To fully understand the role played by the different pathways in the xylose metabolism of each strain, the activity levels of the XR, XDH and XI enzymes were evaluated in yeast cell extracts after 24 h of xylose fermentation at 30 and 40 °C (Fig. 3). At 30 °C, the PE-XR/XDH strain showed the highest activity among all the strains for the XR (using both NADH and NADPH cofactors) and XDH enzymes (Fig. 3a-c), while the PE-XI strain showed the highest activity of the XI enzyme (Fig. 3d). The PE-XR/XDH + XI strain presented XR and XDH activity levels more than twofold lower than those presented by the PE-XR/XDH strain. Additionally, the XI activity level in this strain was considerably lower (0.07 U/mg of protein) than that of the PE-XI strain (0.50 U/mg of protein) and of the other CA11 strains containing this enzyme (Fig. 3d). Regarding the CA11-derived strains, the XR activity was higher in the strain with both pathways than in the one solely expressing XR/XDH (Fig. 3a, b), while the XDH activity at 30 °C was higher in the CA11-XR/XDH strain (Fig. 3c). The CA11 strain with both pathways also presented a slightly higher XI activity than the CA11-XI strain (Fig. 3d) at 30 °C. It should also be noted that, while the XR gene introduced in these strains carries a mutation for NADH preference, the activity of the XR enzyme is still much higher (~ 10 times) when using the cofactor NADPH (Fig. 3a, b). Also, and as expected, the strains without heterologous expression of an enzyme presented only residual levels of the corresponding enzymatic activity. With the increase of temperature to 40 °C, the activity levels of the enzymes were severely decreased (Fig. 3), with the exception of the XDH activity in the strains PE-XR/XDH, PE-XR/XDH + XI and CA11-XR/XDH + XI (Fig. 3c).
Effect of different metabolic pathways on the fermentation capacity of S. cerevisiae PE-2∆GRE3 and CA11 in corn cob hydrolysate

Evaluation of fermentation capacity in corn cob hydrolysate

Corn cob is a lignocellulosic biomass with a high percentage of xylan, which, when submitted to appropriate pre-treatment and hydrolysis, results in a xylose-enriched liquor. In this sense, this biomass was selected to test the performance of these xylose-consuming strains in lignocellulosic hydrolysates. After treatment, the corn cob hydrolysate comprised ca. 28 g of xylose per liter. In detoxified hydrolysate, the fermentation profiles were similar to those observed with synthetic medium at 30 °C (Fig. 2a, Table 2), with the strains solely expressing the XI pathway (PE-XI and CA11-XI) consuming higher amounts of xylose. Similarly, the PE-XI and CA11-XI strains were the ones presenting higher ethanol titers, reaching yields of 0.44 g of ethanol per gram of consumed sugars (Table 3, P < 0.01), while the PE-XR/XDH, PE-XR/XDH + XI and CA11-XR/XDH strains accumulated higher amounts of xylitol, with yields higher than 0.427 g of xylitol per gram of xylose (Table 3, P < 0.01). When comparing the strains with both pathways, it should be noted that the CA11-derived strain presented higher ethanol yield and lower xylitol accumulation levels than the PE-derived strain (Table 3), as already observed in synthetic medium (Table 2, Fig. 2b). On the other hand, the presence of inhibitory compounds such as acetic acid, furfural and HMF in the non-detoxified corn cob hydrolysate severely affected the fermentative capacity of all the strains, reducing xylose consumption and ethanol production (Fig. 4A, Table 3). The strains only containing the XI pathway, PE-XI and CA11-XI, were essentially incapable of xylose consumption (less than 10% of the initial xylose in 96 h of fermentation) (Fig. 4A), producing only residual amounts of ethanol (Table 3). Of the PE-2∆GRE3-derived strains, the one with only the XR/XDH pathway consumed higher amounts of xylose during the time course of fermentation (Fig. 4A, P < 0.01), but produced low amounts of ethanol while accumulating xylitol (yields of 0.272 and 0.264 g/g, respectively) (Table 3). On the other hand, the CA11-XR/XDH + XI strain, which consumed approximately the same amount of xylose as PE-XR/XDH, presented the highest ethanol yield (0.384 g/g) of all the strains tested in non-detoxified corn cob hydrolysate (an improvement of more than 38% in comparison with the other strains) (Table 3, P < 0.05) with low xylitol production (corresponding to a xylitol yield of 0.0716 g/g) (Table 3).

Evaluation of furfural and HMF detoxification capacity

To further elucidate the association between the xylose-consuming pathway and the yeast capacity for detoxification of furan compounds, the different strains were used for fermentation of xylose-containing synthetic media supplemented with 4 g/L of furfural and 0.5 g/L of HMF (Fig. 4B, C). For both furfural and HMF detoxification, the strains expressing the XR/XDH pathway showed improved capacity when compared to the ones with high activity of the XI enzyme (Fig. 4B, C, P < 0.05). However, while the strains without or with low XI activity were capable of furfural depletion in approximately 8 h, the CA11-XR/XDH + XI strain only completely detoxified furfural after 24 h (Fig. 4B).

Discussion

The economically viable production of second-generation bioethanol must comprise an effective conversion of the xylose present in the lignocellulosic biomass. Normally, xylose consumption in S. cerevisiae is obtained by the heterologous expression of the XR/XDH or the XI pathways.
While several xylose-consuming S. cerevisiae strains have been constructed using either strategy, the reported ethanol yields from xylose are still far below the theoretical yield [31][32][33][34]. On the other hand, there is a lack of consensus in the few studies comparing the use of the different xylose-consuming pathways: while all describe the strains containing the XI pathway as capable of reaching higher ethanol yields than the ones containing the XR/XDH pathway, the role of XI in xylose consumption rates and ethanol productivity is not clear [27][28][29].

Table 2. Fermentation parameters in YPX medium at 30 and 40 °C in oxygen-deprived conditions.

Considering the role that different yeast backgrounds may play in the final xylose consumption capacity, in this work two different industrial S. cerevisiae strains were used to systematically compare the efficiency of the two pathways, and also of the combination of both pathways, for xylose consumption in the absence or presence of lignocellulosic-derived inhibitors. The first evaluation of the performance of the different strains, in synthetic media and in the presence of oxygen, revealed a clear advantage of the strains expressing XI for ethanol production, while the strains containing the XR/XDH pathway accumulated higher xylitol levels. Nevertheless, these conditions favored biomass production, resulting in lower ethanol yields. As expected, when tested in oxygen-deprived conditions, the strains containing solely the XI pathway achieved higher ethanol yields, of ca. 0.43 g/g of xylose, which are in accordance with the values previously obtained in synthetic xylose medium using an industrial S. cerevisiae strain expressing the same XI [20]. Interestingly, at 30 °C, the strains with higher XI activity (PE-XI, CA11-XR/XDH + XI and CA11-XI) consumed more xylose during the time course of fermentation. The source of the XI enzyme may explain this difference in performance, as in previous comparative investigations, using the XI from the fungus Piromyces sp. in S. cerevisiae strains, the XI-containing strains showed lower xylose consumption rates than the ones containing XR/XDH [27,28], while a more recent comparative study using the XI from the anaerobic bacterium C. phytofermentans (also used in this work) showed similar xylose consumption profiles [29]. As expected, the strains with low XI activity levels (PE-XR/XDH, PE-XR/XDH + XI and CA11-XR/XDH), which metabolize xylose mainly through the XR/XDH pathway, presented higher xylitol yields. In fact, fermentation in oxygen-deprived conditions also resulted in higher accumulation of xylitol in the XR/XDH-utilizing strains, as the decrease in aerobic respiration reduces the oxidation of NADH, and the consequent low NAD+/NADH ratio favors xylitol production [35]. It should be noted that even with the deletion of the GRE3 gene, the PE-2 strain containing the XR/XDH pathway produced similar, and sometimes higher, xylitol levels than the CA11-derived strain, as a result of the already described natural predisposition of PE-2 for accumulation of this compound [13,14,36].
Moreover, the differences between the strains PE-XR/XDH + XI and CA11-XR/XDH + XI clearly corroborate the significant role played by the XR/XDH pathway in xylitol accumulation and the association between the XI pathway and higher ethanol production: in the CA11-derived strain, there is an increase in XI activity when both pathways are used, achieving higher ethanol yields, while in the PE-XR/XDH + XI strain only residual levels of XI activity could be observed, resulting in xylitol yields similar to those of the PE-2∆GRE3 strain containing only the XR/XDH pathway. The performance of the xylose-consuming strains was also evaluated in fermentations at higher temperature, which, as expected, resulted in an accentuated reduction in their xylose consumption and ethanol and xylitol production capacity, in accordance with the significant decrease in the activity levels of almost all xylose consumption enzymes at 40 °C. Nevertheless, and as observed at 30 °C, the PE-XI and CA11-XI strains presented higher ethanol yields than the XR/XDH-containing strains. Interestingly, the PE-2∆GRE3-derived strains containing the XR/XDH pathway were able to maintain a high activity level of the XDH enzyme at 40 °C, probably explaining their reduced xylitol accumulation at this temperature. The S. cerevisiae CA11 strain has previously been reported to have thermotolerant fermentation capacity [14,37], and although the CA11-derived strains generally did not show a better performance when compared with the PE-2∆GRE3-derived strains, the CA11-XR/XDH + XI strain showed the highest xylose consumption capacity among all the strains at 40 °C. To test the applicability of these strains in conditions closer to real industrial conditions, they were tested in hemicellulosic corn cob hydrolysate. When first submitted to fermentation with detoxified hydrolysate, the performance of the strains was similar to that obtained in synthetic media: higher ethanol yields obtained from the strains with high XI activity, and more xylitol accumulation from the strains mainly using the XR/XDH pathway. In non-detoxified hydrolysate, and despite the robust characteristics of the industrial S. cerevisiae chassis used, the presence of inhibitors such as acetic acid, furfural and HMF greatly hampered the fermentative capacity of all strains, but mainly of the strains containing only the XI pathway. The lack of the XR/XDH pathway may explain the low xylose consumption and lower level of ethanol production presented by the PE-XI and CA11-XI strains. In fact, the XR enzyme from P. stipitis has been described as having the capacity to convert HMF into less toxic compounds [38], and in this work, the presence of the XR/XDH pathway was found to greatly increase the detoxification rate, not only of HMF, but also of furfural. Additionally, the presence of furfural has been described previously as being advantageous for ethanol production from xylose through the XR/XDH pathway, as its NADH-dependent detoxification regenerates NAD+, relieving the redox imbalance and reducing xylitol accumulation [39,40]. In turn, the conversion of xylitol into xylulose regenerates NADH that can be used for further detoxification. In this sense, as a result of the redox equilibrium between furan bioconversion and the XR/XDH pathway (Fig. 5), the strains containing this pathway were capable of xylose fermentation in the non-detoxified hydrolysate.
This effect can be observed clearly in the furfural detoxification pattern of the CA11-XR/XDH + XI strain (which has high enzymatic activity of both the XR/XDH and XI pathways): as xylose conversion is divided between the two pathways, less NADH is regenerated by the XR/XDH pathway, resulting in slower furfural detoxification than in the strain mainly using the XR/XDH pathway, but still faster than in the one containing only the XI pathway. In fact, the CA11-XR/XDH + XI strain achieved the highest ethanol production and yield, indicating a clear advantage of expressing both pathways for fermentation of non-detoxified hydrolysates: XR/XDH, which facilitates inhibitor detoxification with concomitant decreased xylitol accumulation; and XI, which also reduces xylitol accumulation and allows high conversion yields of xylose into ethanol. Accordingly, the strain PE-XR/XDH + XI, despite presenting high detoxification capacity (derived from the presence of the XR/XDH pathway), presents a low enzymatic activity of XI, which results in higher xylitol accumulation, lower xylose consumption and lower ethanol titers in non-detoxified hydrolysate (in comparison with CA11-XR/XDH + XI). However, the accumulation of xylitol by this yeast, despite being considerably smaller than in the absence of inhibitors, may still prevent the attainment of higher ethanol yields. Regarding xylitol accumulation, the use of the XI from C. phytofermentans clearly shows an advantage, as this enzyme was found to be far less inhibited by xylitol than other XIs already expressed in S. cerevisiae [20]. Also worth mentioning is the fact that, despite the reported role of GRE3 in the detoxification of lignocellulosic-derived aldehydes, such as furfural, HMF and methylglyoxal [41,42], the deletion of this gene in the PE-2 strain did not seem to negatively influence its detoxification capacity, which remained similar, or even superior, to that of the CA11-derived strains.

Conclusions

In this work, we have compared the performance of two industrial S. cerevisiae strains containing XI and/or XR/XDH for xylose consumption and ethanol production in synthetic medium and corn cob hydrolysate. It is, to the best of our knowledge, the first study involving the simultaneous expression of both xylose consumption pathways in comparison with the single expression of one of the pathways. In synthetic medium, the XI-carrying strains showed the highest ethanol titer and yield, with low xylitol production, and a similar pattern was obtained when the strains were tested in detoxified corn cob hydrolysate. In non-detoxified corn cob hydrolysate, the CA11-XR/XDH + XI strain presented the highest ethanol yield, which can be explained by the high enzymatic activity of the XR/XDH and XI pathways, allowing furfural and HMF detoxification and high ethanol production with low xylitol formation. In this sense, a fine-tuning of the expression of the XR/XDH and XI pathways in the same yeast strain, possibly coupled with additional strategies to decrease xylitol accumulation, would be beneficial to increase the efficiency of ethanol production from undetoxified lignocellulosic hydrolysates.

Yeast strains and plasmid constructions

The yeast strains, plasmids and primers used in this study are listed in Table 4.
cerevisiae PE-2∆GRE3 strain (with the GRE3 deleted to avoid the natural ability of PE-2 for xylitol accumulation) [13] and the industrial isolate CA11, a flocculent strain [43,44], were used as chassis strains for comparison of the two xylose consumption pathways. The plasmid for simultaneous expression of the XR/XDH and XI pathways was constructed by insertion of the xylA gene from C. phytofermentans (with HXT7 promoter and CYC terminator, amplified from pBED-CpXI-NATMX plasmid with the primer pair XI_FW/XI_RV) in the PvuII restriction site of pMEC1049 by in vivo homologous recombination (pMEC1049 + XI). For the sole expression of the XI pathway, the XYL1/XYL2 transcriptional units were removed by in vivo homologous recombination of the pMEC1049 + XI plasmid (digested with EcoNI) with the PGItp-containing fragment (amplified from pYPKa_Z_ PGI1tp with the primer pair 577/567), resulting in the plasmid pMEC_XI. The resulting plasmids were extracted and transformed to Escherichia coli NZY5α (NZYtech) for propagation and confirmation by restriction analysis, followed by insertion into the PE-2∆GRE3 and CA11 strains using the LiAC/SS carrier DNA/PEG method [45]. The transformants were selected on YPD plates (10 g/L yeast extract, 20 g/L peptone, 20 g/L glucose and 20 g/L agar) containing 300 μg/mL of hygromycin and were further cultured in YPX medium (10 g/L yeast extract, 20 g/L peptone, 20 g/L d-xylose) until capable of xylose consumption. PE-2∆GRE3 and CA11 strains carrying the pMEC1049 plasmid for expression of the XR/XDH pathway have been described previously [13,14]. The recombinant strains were preserved at 4 °C on YPX plates containing 300 μg/mL of hygromycin to maintain selective pressure for the plasmids. Non-detoxified corn cob hydrolysate Corn cob was collected, milled and submitted to hydrothermal treatment (autohydrolysis) under non-isothermal conditions (T max of 205 °C, corresponding to a severity of 3.85) based on previous works [48,49]. Autohydrolysis treatment was carried out in a 2 L stainless steel reactor (Parr Instruments Company) equipped with Parr PDI temperature controller (model 4848) at liquid to solid ratio of 8 g distilled water/g of dry corn cob. After autohydrolysis, the resulting solid and liquid phases were separated by filtration. Liquid phase or autohydrolysis liquor was submitted to acid posthydrolysis (1.5% H 2 SO 4 , 165 min at 121 °C) to hydrolyze the oligomers into monomer sugars, based on previously optimized conditions [49]. The composition of non-detoxified hydrolysate (sugars, acetic acid and furan compounds) was determined by HPLC analysis. The hydrolysate was neutralized using CaCO 3 until pH 5 and sterilized by filtration (0.2 µm). Detoxified corn cob hydrolysate To remove inhibitors (phenolic compounds, weak acids and furans), corn cob hydrolysate was detoxified using activated charcoal and ionic exchange resins. Corn cob hydrolysate was mixed with activated charcoal at liquor to solid ratio of 10 g of hydrolysate per gram of activated charcoal with agitation for 1 h [50] to remove phenolic compounds. After that, the hydrolysate was treated with ion exchange resins as described by Rodríguez-López Inoculum preparation Yeast cells for inoculation were grown in YPX medium (10 g/L yeast extract, 20 g/L peptone, 20 g/L d-xylose) at 30 °C for 24 h, with orbital shaking (200 rpm). 
The cell suspension was collected by centrifugation for 5 min (at 4000g and 4 °C) and suspended in 0.9% (w/v) sodium chloride solution to obtain a suspension with 200 mg of fresh yeast/mL. The fermentation assays were inoculated with 5 mg of fresh yeast/mL (approximately, 1 mg of dry cell weight (DCW)/mL). Fermentation assays Fermentations were performed in 100 mL Erlenmeyer flasks, with cotton stopper or glycerol lock to prevent the entrance of oxygen (creating oxygen-deprived conditions) [25], and with a working volume of 30 mL and carried out at 30 or 40 °C in an orbital shaker at 150 rpm. Fermentations with cotton stopper were monitored by sample collection for HPLC analysis; oxygen-deprived fermentations were monitored by measurement of mass loss resulting from CO 2 production (data not show) and samples for HPLC were collected at the end of fermentation. Fermentation media consisted of YPX medium (10 g/L yeast extract, 20 g/L peptone, 50 g/L d-xylose) and corn cob hydrolysate (detoxified and non-detoxified) supplemented with YP (10 g/L yeast extract, 20 g/L peptone) or YPX medium supplemented with 4 g/L of furfural and 0.5 g/L of HMF. Enzymatic activities Cells for enzyme assays were collected at 24 h of fermentation in YPX medium at 30 and 40 °C. Crude cell extracts were prepared with Y-PER reagent (Thermo Fisher Scientific) and the protein content was determined by Bradford assay (Bio-Rad) [51]. XR, XDH and XI enzymatic activities in each of the cell extracts were assayed (in triplicate) by measuring the decrease/increase of NAD(P)H at 30 °C in a reaction mixture using a microplate reader spectrophotometer (at 340 nm). The reaction mixtures (containing appropriate dilutions of cell crude extract) were adapted from previously reported assays and their compositions were as follows: triethanolamine (100 mM, pH 7.0), NADH or NADPH (0.2 mM) and xylose (350 mM) for XR [52]; glycine (100 mM, pH 9.0), MgCl 2 (50 mM), NAD + (3.0 mM) and xylitol (300 mM) for XDH [52]; Tris-HCl (100 mM, pH 7.5), MgCl 2 (10 mM), NADH (0.15 mM), sorbitol dehydrogenase 2 U/mL and xylose (500 mM) for XI [53]. Specific activity is expressed as units per milligram of protein (U/ mg protein), in which units (U) is defined as micromol NAD(P)H reduced or oxidized per min. Determination of fermentation parameters Ethanol yield from xylose (Y et/x ) was calculated as the ratio between the ethanol concentration at a defined time of fermentation and the xylose consumed in that period of time. Ethanol yield from sugars (Y et/sug ) was calculated as the ratio between the ethanol concentration at a defined time of fermentation and the sugar (xylose and glucose) consumed in that period of time. Xylitol yield from xylose (Y xyol/xyl ) was calculated as the ratio between the xylitol concentration at a defined time of fermentation and the xylose consumed in that period of time. Analytical methods Samples from corn cob treatment (non-detoxified and detoxified hydrolysates) and from fermentation assays were analyzed for quantification of glucose, xylose, acetic acid and ethanol by HPLC using a Bio-Rad Aminex HPX-87H column, operating at 60 °C, with 0.005 M H 2 SO 4 and at a flow rate of 0.6 mL/min. The compounds were detected using a Knauer-IR refractive index detector. Furfural and HMF were quantified by ultra-highperformance liquid chromatography (UHPLC) using a Shimadzu Nexera X2 UHPLC chromatograph equipped with diode array detector (Shimadzu, SPD-M20A). 
Separation was performed on a reversed-phase Acquity UHPLC BEH C18 column (2.1 mm × 100 mm, 1.7 μm particle size; from Waters) at 40 °C. The flow rate was 0.4 mL/min. The HPLC-grade solvents used were water/ formic acid (0.1%) as solvent A and acetonitrile as solvent B. The elution gradient for solvent B was as follows: from 0.0 to 5.5 min at 5%, from 5.5 to 17 min a linear increase to 60%, from 17.0 to 18.5 min a linear increase to 100%, then column equilibration from 18.5 to 30.0 min at 5%. Statistical analysis Statistical analyses were carried out in GraphPad Prism for Windows version 6.01. Differences between strain performance in terms of ethanol and xylitol production, sugar consumption and furan detoxification profiles were tested by repeated measures two-way ANOVA, followed by Tukey post hoc test. Differences in kinetic parameters were determined using repeated measures one-way ANOVA, followed by Tukey post hoc test.
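As a worked illustration of the fermentation parameters defined above (Y et/x, Y et/sug and Y xyol/xyl), the minimal Python sketch below computes the three yields from product and consumed-sugar concentrations; it is not the authors' analysis script, and the numbers in the example call are placeholders rather than measurements from this study.

```python
# Minimal sketch of the yield calculations defined above; all concentrations
# are in g/L and the example values are placeholders, not measured data.

def yields(ethanol, xylitol, xylose_consumed, glucose_consumed=0.0):
    """Return (Y_et/x, Y_et/sug, Y_xyol/xyl) as g/g ratios for one time point."""
    sugars = xylose_consumed + glucose_consumed
    y_et_x = ethanol / xylose_consumed if xylose_consumed else float("nan")
    y_et_sug = ethanol / sugars if sugars else float("nan")
    y_xyol_xyl = xylitol / xylose_consumed if xylose_consumed else float("nan")
    return y_et_x, y_et_sug, y_xyol_xyl

# Hypothetical example: 15.2 g/L ethanol and 3.1 g/L xylitol produced while
# consuming 42 g/L xylose and 5 g/L glucose over the same period.
print(yields(ethanol=15.2, xylitol=3.1, xylose_consumed=42.0, glucose_consumed=5.0))
```

In practice these ratios would be evaluated from the HPLC concentrations measured at the chosen fermentation time point.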
8,205.4
2019-01-28T00:00:00.000
[ "Biology" ]
A Novel Hydroxyapatite/Vitamin B12 Nanoformula for Treatment of Bone Damage: Preparation, Characterization, and Anti-Arthritic, Anti-Inflammatory, and Antioxidant Activities in Chemically Induced Arthritic Rats The usage of nanomaterials for rheumatoid arthritis (RA) treatment can improve bioavailability and enable selective targeting. The current study prepares and evaluates the in vivo biological effects of a novel hydroxyapatite/vitamin B12 nanoformula in Complete Freund’s adjuvant-induced arthritis in rats. The synthesized nanoformula was characterized using XRD, FTIR, BET analysis, HERTEM, SEM, particle size, and zeta potential. We synthesized pure HAP NPs with 71.01% loading weight percentages of Vit B12 and 49 mg/g loading capacity. Loading of vitamin B12 on hydroxyapatite was modeled by Monte Carlo simulation. Anti-arthritic, anti-inflammatory, and antioxidant effects of the prepared nanoformula were assessed. Treated arthritic rats showed lower levels of RF and CRP, IL-1β, TNF-α, IL-17, and ADAMTS-5, but higher IL-4 and TIMP-3 levels. In addition, the prepared nanoformula increased GSH content and GST antioxidant activity while decreasing LPO levels. Furthermore, it reduced the expression of TGF-β mRNA. Histopathological examinations revealed an improvement in joint injuries through the reduction of inflammatory cell infiltration, cartilage deterioration, and bone damage caused by Complete Freund’s adjuvant. These findings indicate that the anti-arthritic, antioxidant, and anti-inflammatory properties of the prepared nanoformula could be useful for the development of new anti-arthritic treatments. Introduction Rheumatoid arthritis (RA) is an inflammatory disease characterized by chronic inflammation and destruction of bone and cartilage in the ankle joint, accompanied by swelling and pain, ultimately leading to joint deformation [1,2]. RA triggers recurrent synovitis, the production of autoantibodies, and bone and cartilage destruction of the ankle joint, as well as chronic pain and impaired joint function [3]. RA affects approximately 1-2% of the global population, with onset usually in people's 40s and 50s, though onset is possible at any age [4]. The specific etiology of RA is still unknown. However, it has been demonstrated that a range of effector cell activities may contribute to RA pathogenesis [5,6]. Reactive oxygen species (ROS) contribute to the pathophysiology of RA and several other diseases [7,8]. Evidence of joint inflammation and degradation because of oxidative stress was found in arthritis animal models and in RA patients [9]. In arthritic animals, levels of ROS indicators of protein and lipid oxidation are elevated. Moreover, oxidative status changes have been observed in RA patients' serum, as well as in experimentally arthritis-induced rats [10]. Several serum growth factors, including chemokines and cytokines, have been implicated in RA development. Because of their roles in promoting inflammation and other biological processes, cytokines are crucial for RA progression. In rheumatic joints, pro-inflammatory/anti-inflammatory cytokine imbalance leads to an autoimmune response that destroys joints [11]. Currently, some disease-modifying antirheumatic drugs in combination with physiotherapy retard the progression of RA [12]. Nonsteroidal and steroidal anti-inflammatory medications are used as first-line therapy for RA [13]. Unfortunately, an effective novel RA treatment is still required. 
The application of nanotechnology in medical treatment is a hot research area. Such a therapeutic strategy would be applied in alternative and integrative medicine and would accelerate the development of treatments for several pathological disorders, including RA [14]. Hydroxyapatite (HAP), an inorganic material, has good biocompatibility, bioactivity, and safety. HAP can serve as a nanodrug delivery carrier, as it degrades effectively under acidic conditions [15]. HAP has been used for the sustained delivery of antibiotics and drugs for tissue regeneration and cell differentiation [16]. Nanohydroxyapatite (nano-HAP) has a beneficial outcome on biomaterial biocompatibility and bioactivity [17]. Minimal tissue reaction and more pronounced activity are achieved upon using nano-HAP, as it has a larger surface area relative to micro-HAP [18]. Nano-HAP can permeate through osseous tissue and deliver calcium phosphates required for bone regeneration after disease injury [3,19,20]. Consequently, Vit B12-loaded HAP might afford a nanoformula useful for RA treatment. In this context, the current work addressed the assessment of the anti-arthritic effects of nano-HAP/Vit B12 in rats with CFA-induced arthritis, considering anti-inflammatory and antioxidant activities as well as the roles of ADAMTS-5, TIMP-3, and TGF-β. Characterization of Nano-HAP and Nano-HAP/Vit B12 The XRD of the prepared samples is illustrated in Figure 1A. Nano-HAP showed typical XRD peaks for the HAP phase. Peaks at 25.9°, 31.…, 49.7°, 53.2°, and 64° can be indexed to the plane families (002), (211), (300), (202), (212), (222), (132), (213), (004), and (323), which also support the presence of an admixture phase identified as Ca4Mg5(PO4)6 calcium magnesium phosphate (ICDD Card Number: 00-011-0231). According to ICDD Card Number 01-073-0293, HAP crystallized in a hexagonal form. No significant difference was observed in the XRD pattern of Vit B12-loaded HAP; only increases in the intensity of some peaks, such as the (211) plane, slight peak shifts, and a decrease in d-spacing were observed after Vit B12 loading, as shown in Figure S1. FTIR analysis of the prepared samples is shown in Figure 1B. Absorption bands at 3450 cm−1 and 1648 cm−1 correspond to hydroxyl groups' stretching and bending. Because of carbon dioxide absorption during the synthesis of nano-HAP in alkaline media, bands corresponding to carbonate ions are visible at 1430 cm−1 and 869 cm−1. Bands at 1045 cm−1 and 575 cm−1 are typical peaks for the asymmetric stretching and bending of PO4 3− ions, respectively. The presence of dipole-dipole hydrogen bonding is confirmed by FTIR, which showed a decline in the intensity and a shift of the OH group band toward minimally higher wavenumbers at approximately 3423.85 cm−1. Additionally, the intensities of the OH and PO4 3− bands at 1640.14 and 1035.78 cm−1 are substantially reduced and shifted to 1643.03 and 1031.14 cm−1 in nano-HAP/Vit B12, indicating the existence of a coordination mechanism between the two substances (Figure 1B). The assigned spectrum of Vit B12 is also shown in Figure 1B. The N2 adsorption-desorption isotherms of nano-HAP and nano-HAP/Vit B12 samples are shown in Figure 1C. With a type H3 hysteresis loop, both isotherms fall under the type IV category.
They have a pore size up to 55 nm, which likely corresponds to intra-particle pores formed upon nanoparticle aggregation, as indicated by the inset's pore size distribution. BET surface areas for nano-HAP and nano-HAP/Vit B12 were 83.6 and 73.9 m2/g, respectively. The decrease in surface area after the loading of Vit B12 confirms its adsorption. This was consistent with SEM observations (Figure 1D) and supported by the wide distribution of pore size (Tables S1 and S2). Specific surface areas (BET analysis) were 67.184 and 55.816 m2/g, while maximum pore volumes were about 0.48918 and 0.476947 cc/g for nano-HAP and nano-HAP/Vit B12, respectively. The specific surface area results from the porous structure of the nanoparticles (Figure 1C,D). Such a porous structure is favorable for medical applications. Hydrodynamic (HD) sizes of nano-HAP and nano-HAP/Vit B12 are shown in Figure 1E. Nano-HAP has a large HD of about 665 nm, which indicates particle agglomeration in solution. On the other hand, nano-HAP/Vit B12 had a smaller HD of about 312 nm, which can be attributed to the capping action by Vit B12 that inhibited particle agglomeration. Both nano-HAP and nano-HAP/Vit B12 showed an insignificant zeta potential (almost zero mV), indicating uncharged surfaces prior to and post-Vit B12 loading and, hence, a null capability for electrostatic attraction of counter-ions in solution. As shown in Figure 2A, scanning electron microscopy (SEM) revealed variable sizes and positions and a consistently rough surface with numerous pores. Collections of rod-shaped nanoparticles were obvious and piled on each other. The observed morphological porosity was in good agreement with the BET study. Two dominant morphologies were clear. The first is layered-type aggregates forming 2D structures. The other is also a layered structure but with much shorter widths than lengths and, hence, forming 2D rod-like aggregates. EDX analysis (Figure 2B) reflected the purity of the sample, with dominant peaks originating from calcium and phosphorous in addition to a weak sodium peak, probably originating from sodium hydroxide traces. In the nano-HAP/Vit B12 TEM image (Figure 2C), both layered and rod morphologies were visible. Layered structures showed width sizes ranging from 45 to 125 nm, while rods had much shorter widths of about 25 nm. Meanwhile, some large layers were formed (Figure 2D) with a width of about 800 nm and a length of 1660 nm (1.6 µm). Molecular Modeling of HAP/Vit B12 Monte Carlo (MC) simulation was addressed to understand and visualize the loading mechanism of Vit B12 on the (0 1 0) HAP surface. The calculated adsorption energy was 121 kcal/mol, indicating favorable interactions between Vit B12 and the HAP surface. The lowest-energy structure of the adsorbed Vit B12 is displayed in Figure 3. The interaction between Vit B12 and the (0 1 0) HAP surface was through hydrogen bond formation between the hydrogens of the Vit B12 amide groups and oxygens of the HAP surface, as well as electrostatic interactions between Vit B12 oxygen atoms and HAP calcium atoms.
Effect of Nano-HAP/Vit B12 on Gross Lesions of the Paw and Ankle Joints An in vivo arthritis model was established to evaluate the effects of nano-HAP/Vit B12 on CFA-induced arthritis. Compared with the negative (normal) control, the right hind paws and ankles of the positive control (CFA-induced arthritic rats) showed clear swelling, edema, and redness (Figure 4A,B), which are external signs of arthritis. Arthritic rats that received nano-HAP/Vit B12 had significantly fewer visible lesions (Figure 4C). These findings suggest anti-inflammatory effects of nano-HAP/Vitamin B12. Effect of Nano-HAP/Vit B12 on Right Hind Paws Volume Digital calipers were used to assess paw edema in various groups, as well as the diameter of the right hind leg in the paw region to assess swelling rates. Compared with the negative control group, there was a substantial (p < 0.05) increase in paw size (edema) of CFA-induced arthritic rats (Figure 5). Meanwhile, nano-HAP/Vit B12-treated rats had less swollen paws relative to the positive control group.
Figure 5. Effect of nano-HAP/Vit B12 on right hind paw size in CFA-induced arthritic rats (symbols indicate significant difference at p < 0.05). * Significant compared to arthritic rats; # significant compared to arthritic rats treated with nano-HAP/Vit B12. Effect of Nano-HAP/Vit B12 on LPO, GSH Content, and GST Activity Nano-HAP/Vit B12 administration significantly (p < 0.05) reduced LPO levels and increased GSH content and GST activity in comparison to the positive control group, which had significantly (p < 0.05) higher LPO levels and lower GSH content and GST activity than negative control rats (Figure 6A-C). Data are expressed as mean values ± SEM (* p < 0.05: significant difference relative to the arthritic group; # p < 0.05: significant difference relative to arthritic rats treated with nano-HAP/Vit B12). Histopathological Changes and Arthritic Score Histopathological examination demonstrated no inflammation in the right ankle joint sections of normal control rats' hind leg (Figure 9A). Conversely, the synovium of CFA-induced arthritic rats showed hyperplasia, substantial inflammatory cell infiltration, and substantial cartilage destruction (Figure 9B). Differently, nano-HAP/Vit B12-treated rats possessed nearly normal articular surfaces and synovial membranes free from inflammation (Figure 9C).
In addition, nano-HAP/Vit B12 reduced cartilage deterioration, suppressed pannus development, and decreased synovitis. The joint damage's histology lesion scores are displayed in Table 1. Discussion XRD ( Figure 1A) showed that nano-HAP/Vit B 12 retained the typical diffraction peaks of the nano-HAP structure [21]. No significant difference was observed in XRD pattern peaks with the increasing intensities of some peaks and shifted peak positions and decreased despacing after the adsorption of Vit B12 ( Figure S1). FTIR analysis ( Figure 1B) featured absorption bands for the stretching and bending vibration of hydroxyl groups at 3450 cm −1 and 1648 cm −1 [22,23]. Bands at 1430 cm −1 and 869 cm −1 indicated the existence of carbonate ions due to carbon dioxide adsorption during the synthesis of nano-HAP from alkaline media [24]. Bands at 1045 cm −1 and 575 cm −1 are typical for the asymmetric stretching and bending of PO 4 −3 ions, respectively [25]. There was no discernible difference between nano-HAP and nano-HAP/Vit B 12 in terms of pore size distribution. This indicates that Vit B 12 was adsorbed on the surface of HAP nanoparticles rather than inside the pores [26]. The decrease in surface area post-Vit B 12 loading confirmed adsorption of Vit B 12 . This result was consistent with the SEM observations ( Figure 1D). In the SEM pictures, various sizes and positions were observed, which showed a consistently rough surface with numerous pores [27]. To explore the therapeutic potential of nano-HAP/Vit B 12 on CFA-induced arthritis, an in vivo arthritis model was used. To evaluate the anti-inflammatory effects of RA, paw edema was measured [28]. The findings revealed that the treatment of arthritic rats with nano-HAP/Vit B 12 reduced the size of the swollen hind paw compared with arthritic rats. Such a paw size reduction indicates a slower swelling rate, which may be due to the suppression of inflammatory processes [29]. This might be attributed to nano-HAP/Vitamin B 12 's anti-inflammatory effects. Oxidative stress occurs when oxidant production exceeds the cell's antioxidant defenses, resulting in oxidative damage to the cell. Oxidative stress plays a role in the etiology of rheumatoid arthritis (RA) and other diseases. Increased reactive oxygen species (ROS) concentrations beyond the physiological standards induce oxidative stress [30]. ROS are unique signal mediators that play vital roles in cell growth and differentiation [31]. Oxidative stress triggers free radical cytotoxicity, which is associated with several diseases [32]. The results demonstrated that CFA-induced arthritic rats had significantly higher LPO levels and lower GSH content and GST activity. These data are consistent with Ahmed et al. [33] and Saleem et al. [34]. An elevated level of LPO indicates the production of ROS. In arthritic rats, excessive ROS production may have lowered the antioxidant defense system, increased LPO levels, and rendered synovial fluid and collagen susceptible to oxygen radicals-mediated damage [35]. The administration of nano-HAP/Vit B 12 to arthritic rats resulted in a significant increase in GSH content and GST activity, as well as a reduction of LPO levels to near negative control group levels, indicating scavenging of ROS or antioxidant potential, implying the effectiveness of nano-HAP/Vit B 12 for the treatment of RA. These results are in parallel with Ain et al. [3]. 
Accordingly, it may be deduced that oxidative stress is one of the primary mechanisms for inhibiting inflammatory cytokine in RA [34]. Additionally, it has been shown that individuals with RA encounter significant levels of oxidative stress in addition to elevated inflammation [36]. These findings suggested that the nano-HAP/Vit B 12 had a strong antioxidant activity against ROS because of CFA arthritis induction. CRP and RF are markers of systemic inflammation and the generation of antibodies against RA [37]. In parallel with Yang et al. [38], serum RF and CRP concentrations were significantly higher in arthritic rats than in normal rats. Serum RF and CRP were used to evaluate joint activity in RA-affected rats as they are two key indicators of systemic inflammation in RA [39]. Treating arthritic rats with nano-HAP/Vit B 12 significantly reduced both RF and CRP serum levels, suggesting that nano-HAP /Vit B 12 has anti-inflammatory effects. Several studies have shown that inflammation is a primary mechanism and an important function in RA rats [40][41][42][43]. Consistent with our findings, the arthritic positive control group revealed significantly increased serum levels of IL-1β, TNF-α, and IL-17 and a significantly decreased level of IL-4 in arthritic rats. Our results are in harmony with Yang et al. [38], Li et al. [44], and Setiadi and Karmawan [45]. Additionally, synovial inflammation and cartilage destruction are caused by the overproduction of pro-inflammatory cytokines such as TNF-α, IL-1β, and IL-17, as well as decreased anti-inflammatory fac-tors such as IL-4, which have been associated with RA [40][41][42][43]. In RA, TNF-α, IL-1β, and IL-17 all play significant and cooperative roles in cartilage degradation and synovial inflammation [34,40,46]. TNF-α overproduction also results in the production of matrixdegrading enzymes and higher amounts of IL-1β [40]. IL-1β is the most crucial cytokine in the formation of pathogenic arthritis and has been connected to signs such as morning stiffness. This cytokine, which is mainly produced by macrophages, has a vital role in the invasion of inflammatory cells, as well as the deterioration of cartilage and bone [47]. Additionally, IL-17 is essential for RA development, as it induces the overproduction of pro-inflammatory cytokines, osteoclast activation, and angiogenesis [40]. According to these data, therapeutic agents that effectively reduce the release of TNF-α, IL-1β, IL-6, and IL-17 could be beneficial RA treatments [40,46,48]. IL-4 is essential for regulating the levels of endogenous pro-inflammatory cytokines during RA [34,40]. According to studies, RA patients have higher levels of several cytokines than healthy individuals. These cytokines are released by immune cells to enhance cell interaction during inflammation. Since these cytokines are RA modulators, current standard RA treatments aim to reduce inflammation by inhibiting their release [49,50]. According to our data, nano-HAP/Vit B 12 treatment obviously reduced TNF-α, IL-1β, and IL-17 levels and elevated IL-4 expression, suggesting that the anti-inflammatory impact of nano-HAP/Vit B 12 might happen through the suppression of pro-inflammatory cytokines and the elevation of antiinflammatory cytokines. Therefore, anti-rheumatic drug delivery using nano-HAP has received attention. HAP is an efficient carrier for RA due to its anti-inflammatory [51,52] and osteoinductive characteristics [53,54]. 
ADAMTS-5 is the main aggrecanase recently discovered and has been found to be expressed in a variety of tissues, including cartilage. It is reported that the ADAMTS-5 level is regulated by many metabolic factors, such as TNF-α, IL-1β, and other cytokines [55][56][57]. ADAMTS-5 level increases under inflammatory conditions [58]. TIMP-3 is a member of the TIMP family, which are intrinsic moderators of matrix metalloproteinases (MMPs) and crucial for preserving the nearby extracellular matrix [59]. TIMP-3 may have the ability to stop cartilage loss [60,61]. TIMP-3 stands out as a significant regulator of inflammation due to its precision in blocking pro-inflammatory cytokines and damage to joint tissue [62]. Numerous researchers hypothesized that the imbalance between ADAMTS-5 and TIMP-3 may be because of the rapid degradation of the cartilage matrix in RA [63]. In the present study, nano-HAP/Vit B 12 exhibited a significant reduction of ADAMTS-5 and elevation of TIMP-3 in arthritic rats. Additionally, in this study, nano-HAP/Vit B 12 was prepared because Vit B 12 is important for bone health. Previous research has linked Vit B12 deficiency and RA, most likely as a result of deficient nutrition and eventual malabsorption brought on by autoimmune mechanisms [64]. Maintaining healthy amounts of vitamin B 12 , required for cells to protect themselves from the damaging effects of free radicals, might reduce inflammation [65]. According to our findings, nano-HAP/Vit B 12 had a strong anti-arthritic and anti-inflammatory activity. TGF-β is a crucial regulator of cellular and physiological processes, such as cell survival, migration, proliferation, differentiation, angiogenesis, and immunosurveillance [66]. It is crucial for the growth and homeostasis of many tissues. In addition to regulating ECM formation and degradation, it also affects cell migration, differentiation, apoptosis, and proliferation. Additionally, these factors regulate immune response and modulate how cells and tissues respond to damage [67]. TGF-β is one of the most well-known immunosuppressive cytokines that is released by practically all immune cells, including T, B, dendritic cells, macrophages, and fibroblasts, and it has a role in immune response [68] and plays a crucial role in preventing abnormal reactions that result in autoimmunity [69]. Increased TGF-β expression in RA is parallel with Lu et al. [70] and Raafat et al. [71]. Since several studies have confirmed the presence of TGF-β in the synovial tissues and synovial fluids of patients with RA, it has been suggested that TGF-β plays a role in the pathogenesis of RA [72]. TGF-β can increase inflammation and joint damage in RA by causing synovial fibroblasts and releasing several pro-inflammatory cytokines such as TNF-α, and IL-1β [73]. Treatment with nano-HAP/Vit B 12 reduced TGF-β expression. In support of this attribution, it was stated by Soriente et al. that nano-HAP was able to decrease inflammation by reducing the pro-inflammatory cytokine TGF-β [74]. According to our findings, nano-HAP/Vit B 12 had a strong anti-arthritic and anti-inflammatory activity. Histopathological examination demonstrated that the synovium of CFA-induced arthritic rats displayed hyperplasia, substantial inflammatory cell infiltration, and substantial cartilage destruction. These results are in harmony with Logashina et al. [75] and Shaaban et al. [76]. Rats treated with nano-HAP/Vit B 12 showed reduced cartilage deterioration, suppressed pannus development, and decreased synovitis. 
Therefore, our results clearly showed that nano-HAP/Vit B 12 is an effective inhibitor of inflammatory responses inside tissues, as well as in regulating inflammatory responses and paw edema induced by arthritis. These results also showed the anti-inflammatory activity of nano-HAP/Vit B 12 as a therapeutic agent for RA due to its effects in repressing inflammation inside tissues and reducing the swelling features of arthritis. Overall, this study demonstrated the effectiveness of nano-HAP/Vit B 12 as an anti-arthritic and anti-inflammatory treatment for RA. Chemicals CFA was purchased from Sigma Chemical Co., St Louis, MO, USA. Vitamin B12 was obtained from Pharma Swede Pharmaceutical Company, Cairo, Egypt. Other compounds were of analytical grade. Preparation of Nano-HAP and Nano-HAP/Vit B 12 HAP nanoparticles were prepared following a previous report by Chen et al. [77]. Briefly, a solution of Ca(NO 3 ) 2 (0.167 mol) and (NH 4 ) 2 HPO 4 (0.1 mol) in distilled water was adjusted to approximately the pH of 3.0 and volume of 1 L using urea (1 mol) as a buffer, diluted nitric acid, and distilled water. Then, the mixture was moved to a hydrothermal vessel made of stainless steel and Teflon, which was heated for 24 h at 95 • C. The precipitate was then filtered, thoroughly washed, dried, and kept for further use. For the synthesis of nano-HAP/Vit B 12 , Vit B 12 solution (1000 ppm) was prepared by dissolving in DDW water/(DMSO) dimethylsulfoxide (1:1 volume %). The prepared VB 12 solution was then mixed with 300 mg of HAP and stored in the dark at room temperature for 24 h. The resultant suspension was centrifuged at 13,000 rpm for 30 min. Vit B 12 -loaded HAP particles were washed with DDW water and dried under vacuum for 12 h at 60 • C. The supernatant from centrifugation was collected and filtered using 0.45 micron nylon filters, and then Vit B 12 concentration was determined by a UV-VIS spectrophotometer at 360 nm [78] to calculate Vit B 12 loading efficiency. The amount of VB 12 loaded onto the HA NPs was determined by a quantitative spectrophotometric method. VB 12 has maximum absorption at 360 nm. A calibration curve was prepared by measuring the absorbance at 360 nm of standard solutions of VB 12 (5-50 ppm) using a UV-VIS spectrophotometer. Characterization of Nano-HAP and Nano-HAP/Vit B 12 The prepared nano-HAP and nano-HAP/Vit B12 were evaluated using XRD (PANalytical Empyrean, Malvern, United Kingdom) using a scan angle range of 5 to 80 degrees, a scan step of 0.05 degrees, a current of 30 mA, and a 40 KV accelerating voltage. FTIR was acquired using a Bruker vertex 70 FTIR-FT Raman. Scanning electron microscopy and EDX analysis for morphological examination and ascertaining elemental composition were performed using SEM-EDX Quanta FEG250 (Elecmi, Zaragoza, Spain). An automatic surface analyzer employing the N 2 adsorption-desorption method was used to assess the BET-specific surface area, pore volume, and pore size distribution of nano-HAP and nano-HAP/Vit B 12 (TriStar II 3020, Micromeritics, Norcross, GA, USA). For the determination of microstructures, HRTEM (high-resolution transmission electron microscopy) (JEOL-JEM 2100, Akishima, Tokyo, Japan) was used. Molecular Modeling Monte Carlo simulation was performed using an adsorption locator in BIOVIA Materials Studio 2020. The HAP structure from the materials project was used [79]. 
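As an illustrative aside only: the authors prepared the surface model with the BIOVIA Materials Studio Adsorption Locator, but the same kind of preparation step, cleaving a (0 1 0) HAP slab and building a lateral supercell with a vacuum gap before docking an adsorbate (as described below), can be sketched with the open-source ASE toolkit. The input file name "HAP.cif", the layer count, and the repeat counts in this sketch are assumptions and not taken from the paper.

```python
# Illustrative ASE sketch (not the authors' BIOVIA Materials Studio workflow)
# of building a HAP (0 1 0) slab with vacuum for an adsorption calculation.
from ase.io import read, write
from ase.build import surface

bulk = read("HAP.cif")                                   # bulk hydroxyapatite structure (hypothetical file name)
slab = surface(bulk, (0, 1, 0), layers=3, vacuum=30.0)   # cleave (0 1 0); vacuum added on both sides (~60 A total)
slab = slab.repeat((6, 3, 1))                            # lateral supercell, cf. the 6 x 3 x 3 cell described below
write("HAP_010_slab.cif", slab)                          # slab ready for docking an adsorbate
```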
The geometry of Vit B12 loaded onto the HAP (0 1 0) surface was obtained using an adsorption locator employing the Metropolis Monte Carlo simulation with a simulated annealing method. The Bravais lattice of the HAP structure was triclinic. The Vit B12 and HAP structures were optimized using the Forcite module. The optimized α, β, and γ angles were 90°, 119.990°, and 90°. Lattice lengths of A, B, and C were 9.50281 Å, 6.93592 Å, and 18.9932 Å, respectively. A 6 × 3 × 3 supercell of this crystal was built, and the (0 1 0) surface was cleaved with a 60 Å vacuum above the surface. The COMPASSIII forcefield was used [80]. The electrostatic interaction was calculated using the Ewald method. An atom-based summation was used to calculate the van der Waals interaction. Experimental Animals A total of 30 male Wistar rats weighing 120-150 g were obtained from the Egyptian Organization for Biological Products and Vaccines (VACSERA) (Helwan, Cairo, Egypt). Prior to the experiment, animals were monitored for two weeks to assure that they were infection free. Rats were housed in precisely calibrated polypropylene cages and kept in climate-controlled environments (55.5% humidity levels at RT: 20-25 °C), with an ad libitum diet and water and a daily light cycle (10-12 h/day). The experimental work was approved by the Faculty of Science Experimental Animal Ethics Committee (ethical approval number: 022-387), Beni-Suef University, Egypt. Induction of Arthritis and Animal Grouping For arthritis induction, 0.1 mL of CFA solution was intrapedally injected into the right hind paw footpad once a day for two days [33]. Following RA induction, the 30 Wistar rats were split into three groups (ten rats per group) that were treated as shown in Figure 10. Group 1 (Normal negative control rats): Normal rats in this group were orally given an equivalent amount of vehicle (saline) every day for 21 days. Group 2 (Arthritic positive control group): CFA-induced arthritic rats were treated for 21 days with equivalent amounts of vehicle (saline) taken orally every day.
Group 3 (Arthritic + Nano-HAP/Vit B12): CFA-induced arthritic rats were given a safe oral dose of nano-HAP/Vit B12 at a daily dose of 30 mg/kg body weight [81] for 21 days. Assessment of Paw Edema In all groups, paw edema, swelling rate, and paw volume were evaluated to monitor the course of arthritis. Rats were anesthetized using 1:2:3 ACE mixtures of alcohol, chloroform, and ether for measurement. Day 0 was the first day of CFA injection, and measurements were taken on Days 3, 7, 14, and 21 post-arthritis induction. The volume of the hind paw was calculated using an electronic caliper. Collection of Blood and Tissue After mild anesthesia with diethyl ether, animals were sacrificed on the 21st day. Blood samples were collected and centrifuged (3000 rpm for 15 min), and sera were collected in sterile tubes and stored at −20 °C for measurement of RF, CRP, IL-1β, IL-17, IL-4, TNF-α, ADAMTS-5, and TIMP-3 levels. The posterior ankle joints of the rats' right leg from each group were removed. Phosphate buffer was mixed with the tissue samples before centrifugation (3000 rpm for 15 min at 4 °C). Supernatants were kept at −30 °C until used for investigation of antioxidant activities and oxidative stress. Ankle joint tissues were stored at −70 °C and utilized for molecular investigations. After one day of fixation in 10% neutral buffered formalin, a third tissue piece was prepared for cutting into sections and stained for histological analysis. Histopathological Investigation On the 21st day post-arthritis induction, rats were sacrificed, and the posterior ankle joints of the right leg were removed and placed in 10% buffered formalin for 48 h. Bones were decalcified using 10% formic acid for two weeks, changing the solution twice a week. A surgical blade was used to determine when the decalcification process was complete. Following decalcification, tissues were washed with PBS, dried with ethanol, and embedded in paraffin wax. Sagittal slices (5 mm thick) were created and stained with hematoxylin and eosin (H&E). A blind histological examination was performed by the center for pathology, covering synovitis, cartilage, and bone damage. Sections were classified for cartilage degeneration, bone erosion, synovial hyperplasia (pannus development), and inflammation (infiltration of mononuclear cells) using the system described by Sancho et al. [82]. Each characteristic was rated from 0 to 3, with 0 representing normal, 1 (+) representing mild inflammation, 2 (++) representing moderate inflammation, and 3 (+++) representing severe inflammation. Blood vessels, inflammatory infiltrates, articular cartilage, pannus, and menisci were found in the joint cavity space. q-RT-PCR Analysis Using Trizol (Invitrogen; Life Technologies, Waltham, MA, USA), total RNA from the tissue was isolated employing Sthoeger et al.'s method [83]. Following the isolation procedure and in line with the manufacturer's recommendations, the 260:280 ratios were assessed to evaluate the RNA's purity. The production of complementary DNA (cDNA) was carried out using a high-capacity cDNA reverse transcription kit (Applied Biosystems, USA). The primers used are displayed in Table 2.
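A minimal sketch of the 2^−ΔΔCt relative-quantification step applied in the next paragraph is given below; the Ct values shown are hypothetical and the function is an illustration, not the authors' analysis script.

```python
# Minimal sketch of the 2^-ddCt (Livak) relative-quantification step; the Ct
# values are hypothetical and this is not the authors' analysis script.

def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Fold change of a target gene (e.g., TGF-beta) normalized to a reference
    gene (e.g., GAPDH) and expressed relative to the control group."""
    dct_sample = ct_target_sample - ct_ref_sample       # delta-Ct in the treated sample
    dct_control = ct_target_control - ct_ref_control    # delta-Ct in the control sample
    ddct = dct_sample - dct_control                      # delta-delta-Ct
    return 2.0 ** (-ddct)

# Hypothetical example: target Ct 24.1 vs reference Ct 18.0 in a treated sample,
# and 26.5 vs 18.2 in the control group -> roughly 4.6-fold up-regulation.
print(ddct_fold_change(24.1, 18.0, 26.5, 18.2))
```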
Using the 2^−ΔΔCt method, the expression of TGF-β was normalized to the expression of rat GAPDH. Statistical Analysis Results are presented as mean ± standard error (SE). The statistical variances across the groups were compared using a one-way analysis of variance (ANOVA). Using SPSS version 20 software, Duncan's post-hoc test was applied to compare the various groups, with significance set at p < 0.05. Conclusions The current study revealed that the novel synthesized nano-HAP/Vit B12 has promising anti-arthritic, anti-inflammatory, and antioxidant effects on CFA-induced arthritic rats. It caused oxidative stress reduction, inflammation suppression, and enhancement of the antioxidant defense system; additionally, it increased anti-inflammatory markers such as TIMP-3 and IL-4 and decreased inflammatory markers such as IL-1β, TNF-α, IL-17, ADAMTS-5, and TGF-β. All these observed effects indicate the contribution of nano-HAP/Vit B12 as an anti-arthritic agent (Figure 11). Furthermore, nano-HAP/Vit B12 improved the histopathologic implications of CFA-induced arthritic rats' articular joints. Finally, we can say that nano-HAP/Vit B12 might serve as an effective treatment for RA. More clinical investigations are needed to determine the efficacy and safety of nano-HAP/Vit B12. Supplementary Materials: The following supporting information can be downloaded at: www.mdpi.com/xxx/s1, Figure S1: the XRD of HAP and HAP/Vit B12 showing the shift of each band, Table S1: BJH pore distribution adsorption results for HAP, and Table S2 BJH Pore
8,737.4
2023-04-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Conceptual analysis studies on Turkish texts and suggestions for resolving a computer network terms lexicon Conceptual inference on texts with natural language processing is one of the most important topics of artificial intelligence. The meaning extraction process with artificial intelligence algorithms is frequently carried out for foreign languages, and in recent years studies for Turkish have been increasing. In this study, concept-meaning extraction studies made for Turkish have been reviewed, and extraction algorithms that can be applied to a computer network concept dictionary have been investigated. Introduction Natural language processing (NLP) is one of the lower branches of artificial intelligence and works with aspects of language such as form and meaning. Together with developing information projects, work in the field of NLP is increasing rapidly. The increase in these studies made it possible to compute meaning relations in a language through concept relations. When the language structure is resolved and the semantic properties are extracted, the largest conceptual dictionary created is WordNet (Miller, Beckwith, Fellbaum, Gross & Miller, 1993). After its manual creation, many attempts were made to associate WordNet concepts, and the first experiments with automatic association were done by Hearst (Hearst, 1998). WordNet was originally developed for the English language. As its usage increased, it began to be developed for various languages (Vossen, 1998). The Balkanet project has been developed as a WordNet subproject for the Balkan languages and Turkish (Bilgin, Cetinoglu & Oflazer, 2004). This study has two main aims. Firstly, studies conducted on Turkish WordNet and related texts are examined, together with studies on the Turkish sound structure and automation algorithms. Secondly, the study presents methods that can be used on the ontological (conceptual) dictionary created in another study by extracting concepts from Turkish WordNet, the Turkish Informatics Society dictionary, the Turkish Language Association dictionary, and computer network books. Turkish can create new words with every suffix it adds (Eryigit & Oflazer, 2006). In studies conducted for the Turkish language (Oflazer, Say, Hakkani-Tur & Tur, 2003; Oflazer, 2003; Hakkani-Tur, Tur & Oflazer, 2002), words are grouped into inflectional groups. These studies aim to separate the words in a given text and to work on their roots. Wordnet Analysis Studies One of the works done for Turkish WordNet used automatic translation and template methods. For translation, WordNet was translated into Turkish. As a template method, one template was created for each relationship between the concepts; whether the words around the templates belonged to that class was examined, and the obtained samples were divided into two classes (Amasyali, 2011). In his work, Gungor used a dictionary-based algorithm and achieved successful results. Not just a single feature of the WordNet dictionary was exploited; many features were examined. Most of the developed algorithms extract the concept by removing irrelevant words from the definition of the entry (Aydin, Erkan, Gungor & Takci, 2014).
In a study prepared considering the conceptual relations of the Turkish language, words are divided into suffixes and roots. The suffixes and roots, their endings, numbers, and ordering, were studied on a limited set rather than on a large dictionary, which makes the approach difficult to use. After separating the words into their suffixes and roots, the suffixes formed pairs among themselves. Since there is more than one sequential sound rule, a basic statistics-based study was carried out. For error correction, the suffixes and roots of each word were scanned in their resource pools, and the roots were replaced with entries from the appropriate resource set. In this context, fuzzy logic simulations, previously made errors, and the co-occurrence rates observed during resource training were used (Dilsiz, 2005). In another spell-checking study, it was first checked whether the word was spelled correctly. Words that cannot be analyzed are not accepted as Turkish and are added to a dictionary created by the user. Words that can be analyzed are resolved according to the generated hyphenation algorithm. This analysis is based on the Turkish sound rules; a word that violates at least one rule is accepted as a word that entered Turkish from a foreign language. For words that have corresponding Turkish equivalents, the dictionary suggests those equivalents. Words that cannot be analyzed are either not Turkish or are accepted as erroneous, and suggestions are listed. The flow diagram of the word suggestion algorithm of this work is shown in Figure 1 (Delibas, 2008). Computer Network Terms and Resolution Recommendations In a previous study, the merging of the knowledge dictionaries was done in a prepared TXT file; Turkish WordNet, the Turkish Language Association TDK synonyms dictionary (Turk Dil Kurumu Ana Sayfası, 2015), the Turkish Informatics Society information dictionary (Bilisim Sozlugu, 2014), the Es Anlam-Yakın Arama Sozlugu (mythes-tr, 2014), and the created dictionary were used. Because the generated dictionary is ontology-based, the use of Turkish concept mining and analysis algorithms on it can increase the success of such projects (Aktas, Yilmaz, Cakir & Kutlu, 2016). In Figure 2, the dictionaries used and their word counts are shown. The dictionary structure, unlike the other dictionaries, consists not only of synonyms or semantics but covers multiple relations: each word carries synonym, side-meaning, opposite-meaning, and definition tags. The side-meaning relation of a word can be explained as follows: for example, "network" is linked to its synonym, while "unix" is connected to "network" through a side (related) meaning. Starting from the created dictionary, it can be analyzed using ontological analysis algorithms. One of the analyses performed on the Turkish Language Association dictionary looks at a 30-word concept sequence to derive the meaning of a word: a context set is extracted from the 15 words to the right and 15 words to the left of the word. For example, when the word "RAM" appears in a text, it is deduced that the text is related to "computer" (Aydin, Erkan, Gungor & Takci, 2014). By moving up the concept tree and checking a concept's synonym and near-meaning labels, the main theme, its sub-concepts, or a higher branch of the tree can be identified. With the latent semantic indexing method (GAD), an n-gram-supported study in which terms were formed from 2-gram and 3-gram letters obtained more meaningful results than the plain GAD method. The generated terms are accepted as words, and meaningful associations are established (Guven, 2007).
In Hung's work, additional semantic category information was used in the SOM (self-organizing map) model to improve text clustering performance. Three new vector representation approaches were proposed, namely the extended significance vector model, the hypernym significance vector model, and the hybrid vector space model. Clustering performance was improved by incorporating symbolic information from the WordNet ontology through the hypernym significance vector model, which emerged as the best of these three models. Information from an ontology, such as the WordNet ontology, has thus been shown to improve SOM clustering performance (Hung & Wermter, 2004). A concept can be deduced by studying the relationship between the concept and the concept map through the semantic tag. In the Gultepe study, course planning and implementation concept maps were created on an ontology basis. For this, the Protégé ontology development editor was used. Using the graphical interface of this ontology, the concept maps of mechanics topics for grades 6-8 were visualized (Gultepe & Memis, 2014). The ordering of these classes on the ontology-based concept map is shown in Figure 3. Figure 3. Class hierarchy of the ontology-based concept map (Gultepe & Memis, 2014). In a study tested on the ODTU-Sabanci (METU-Sabanci) Treebank, statistical results for Turkish dependency analysis were obtained using support vector machines as the classifier and pairwise dependency probabilities (Eryigit, Adali & Oflazer, 2006). Results and Recommendations In this study, the meaning and concept extraction studies on Turkish WordNet and conceptually related dictionaries were examined. Firstly, considering these studies, it is seen that the algorithms constructed in the field of NLP are basically the same in their conceptual inference over synonym and semantic labels, and that the definition label is used effectively when used as a side label. Secondly, two approaches that can be used for concept inference have been investigated using the ontological dictionary, which is the largest such dictionary previously created for Turkish. Future work will apply artificial intelligence algorithms to concept inference for computer networks, and the success of existing algorithms will be examined. Figure 2. Dictionaries and word counts used.
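To make the context-window idea discussed above concrete, the hypothetical Python sketch below collects the 15 words to the left and right of a target term and counts concept tags from a small invented dictionary; the real lexicon described in this work is far larger and also carries synonym, side-meaning, opposite-meaning, and definition tags.

```python
# Hypothetical sketch of the +/-15-word context-window idea: words around a
# target term are looked up in a concept dictionary and their concept tags are
# counted.  The tiny dictionary below is invented for illustration only.
from collections import Counter

concept_dict = {
    "ram": {"computer"},
    "unix": {"network", "computer"},
    "router": {"network"},
}

def infer_concepts(tokens, index, window=15):
    """Count concept tags for the words around tokens[index]."""
    left = tokens[max(0, index - window):index]
    right = tokens[index + 1:index + 1 + window]
    votes = Counter()
    for word in left + right:
        votes.update(concept_dict.get(word.lower(), ()))
    return votes

tokens = "the ram and the router were configured on the unix host".split()
print(infer_concepts(tokens, tokens.index("ram")))  # Counter({'network': 2, 'computer': 1})
```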
2,112.6
2017-06-27T00:00:00.000
[ "Computer Science", "Linguistics" ]
Hyper-Spectral Speckle Imaging for Space Situational Awareness

When observing targets in the near-Earth space environment using ground-based telescopes, the Earth's turbulent atmosphere and the telescope's aperture combine to act as a low-resolution spectrometer. The light at disparate wavelengths from a given point in the object lands at different spatial locations in the focal plane. Here we investigate the feasibility of capitalizing on this effect to provide snapshot hyper-spectral imaging of targets in the near-Earth space environment through ultra-broadband speckle imaging. In particular, we focus on the potential for using ultra-broadband speckle images for high-resolution, high-contrast imaging of closely spaced objects with a large contrast in brightness. It was found that this technique is capable of distinguishing closely spaced objects down to 0.14 arc-seconds and providing estimates of the spectra. The primary component of the closely spaced objects had very good recovery, while the secondary's recovery was dependent on separation and contrast ratio.

Introduction

To comprehensively characterize a satellite for space situational awareness (SSA), we need to know its type, location, orientation, purpose, and health (i.e., level of deterioration and functionality). Obtaining this information is challenging, especially for satellites that are faint due to their size or distance away (or both). For example, we have limited surveillance capability for both the smallest objects in Low Earth Orbit (LEO) and satellites in the 4π-steradian volume of space from Geosynchronous Equatorial Orbit (GEO) out to cislunar orbit: a region of increasing concern. Currently, there is no way to methodically monitor satellites' health, even the largest and closest ones. Objects are characterized by their spatial, spectral, and temporal signatures. The ultimate goal for an object's characterization is a series of hyperspectral images that capture the object's full 3D surface area in fine spatial and spectral detail. The latter data allow us to identify the object's materials, shedding light on the object's capability. However, we are currently far from this ideal situation. Although instruments for hyperspectral imaging exist, they typically require scanning in wavelength or one of the spatial dimensions. This limits the usefulness of these instruments to targets that are not rapidly changing pose. To the best of our knowledge, "snapshot" hyperspectral imaging of resolved targets, i.e., simultaneous acquisition of the spatial and spectral information, is still in its infancy, and there are no techniques in routine use for SSA. Here we investigate the potential for a novel snapshot hyperspectral imaging technique that leverages the imaging instrument's aperture's diffractive properties and the dispersive properties of the atmosphere. Here we define snapshot to be an exposure time that is shorter than, or commensurate with, the atmospheric coherence time [10]. Diffraction and dispersion both cause light at different wavelengths from a point source to land on spatially different locations in the image. That is, they produce a spectrally smeared signal. If we can unscramble the spectral information on the target that is inherently encoded in a broadband image, then we have a way to perform snapshot hyperspectral imaging.
In addition to the benefits discussed above, we note that snapshot hyperspectral imaging over a wide wavelength range offers an improved capability for detecting a faint object in proximity to a much brighter object. Accurate knowledge of the speckle structure in a short-exposure image due to the brighter object requires knowledge of the target's spectrum and how the light at different wavelengths is redistributed across the image. We can remove the bright object's spectral contribution to the overall signal to facilitate the fainter object's detection. We note that in this context, bright means in relation to the faint object. The absolute brightness of the primary object can still be faint. As an example, we may want to look for debris near a high-value asset in GEO. Here the brighter of the two objects is still dim. In this paper, we focus on a proof-of-concept for the proposed atmosphere-instrument spectrometer concept. Is it even feasible to decode the target's spectral information from a series of broadband speckle images? To answer this question, we investigate the relatively simple problem of two closely-spaced point-source objects of different brightness. With this type of object, the signal's spectral spreading is not as smeared as it is for an extended object. The latter is a significantly more challenging problem to solve. In this feasibility study, we ignore photon noise and camera read noise.

Phase Screen Generator

We simulate the turbulence in the Earth's atmosphere using Kolmogorov phase screens and the split-step beam propagation method [12]. This approach models the distributed turbulence along the propagation path utilizing 15 discrete, infinitely thin phase screens equally spaced over the path. The variation in the phase across each screen represents the fluctuations induced by changes of the refractive index in the atmosphere over the volume of space half-way to adjacent screens. The phase screen code [12] takes the initial wavefront and propagates it through a vacuum until it reaches a phase screen, and repeats this process for the entire propagation path until every phase screen has perturbed the wavefront. The propagation between the phase screens also gives rise to amplitude fluctuations in the wavefront. The turbulence in each phase screen is defined by the refractive index structure parameter (Cn²), which we model using the Hufnagel-Valley approximation [9], in which the a1 coefficient represents the strength of the ground layer of the atmosphere and H1 is the height of its 1/e decay; a2 and H2 are similarly defined for the turbulence in the troposphere, and a3 and H3 are related to the turbulence peak located at the tropopause. We select the values of the coefficients such that the Cn² profile simulates the conditions at Mount Haleakala on Maui. We then make minor adjustments to the coefficients to obtain the desired Fried parameter [11]. The resulting turbulence and physical parameters are listed in Table 1. At the end of the propagation process, the wavefront is in the optical system's aperture plane, and the values are wrapped between −π and π. To unwrap the phase, we use the Goldstein branch cut phase unwrapping algorithm [2]. We note that we have ignored the variation of the refractive index (n) with wavelength, i.e., dispersion [8]. The effects of dispersion will be included in future studies.
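A minimal numerical sketch of the Hufnagel-Valley profile and the Fried parameter mentioned above is given below. It assumes the standard three-term HV form and the standard plane-wave Fried-parameter integral; the coefficient values and path length are generic placeholders, not the ones used for the Mount Haleakala simulations in Table 1.

```python
import numpy as np

def hufnagel_valley(h, a1=1.7e-14, H1=100.0, a2=2.7e-16, H2=1500.0, w=21.0):
    """Standard-form Hufnagel-Valley Cn^2(h) profile [m^(-2/3)], h in metres.
    a1, H1: ground-layer strength and its 1/e decay height; a2, H2: tropospheric
    term; the wind-speed term w controls the tropopause peak.  All values here
    are generic placeholders, not the paper's coefficients."""
    return (a1 * np.exp(-h / H1)
            + a2 * np.exp(-h / H2)
            + 0.00594 * (w / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0))

def fried_parameter(h, cn2, wavelength=500e-9, zenith=0.0):
    """Plane-wave Fried parameter r0 = [0.423 k^2 sec(z) * integral(Cn^2 dh)]^(-3/5)."""
    k = 2.0 * np.pi / wavelength
    integral = np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(h))   # trapezoid rule
    return (0.423 * k ** 2 / np.cos(zenith) * integral) ** (-3.0 / 5.0)

h = np.linspace(0.0, 20e3, 2001)     # 0-20 km path (placeholder geometry)
cn2 = hufnagel_valley(h)
print("r0 at 500 nm ~ %.3f m" % fried_parameter(h, cn2))
```

In practice one would tune the a1, a2 and wind-speed terms until the integrated profile reproduces the desired r0, which is the adjustment step described in the text.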
Image Generation

In the case of incoherent light, the model for the noise-free intensity of an image g(xi, yi) is [3]

g(xi, yi) = ∫∫ f(x0, y0) h(x0, y0; xi, yi) dx0 dy0,

where (x0, y0) and (xi, yi) are the coordinates in the object plane and the image plane respectively, f(x0, y0) is the object intensity distribution and h(x0, y0; xi, yi) is the space-varying point spread function (PSF) of the imaging system. For our simulations we assume that the field-of-view for our images is less than the size of the isoplanatic angle (θ0) for the atmosphere during the observations. In this case the PSF becomes spatially invariant over the field-of-view and Eq. 3 simplifies to a convolution [8]. Invoking the linear superposition principle for electromagnetic fields [3], which allows us to model a broadband image as a summation of monochromatic images, we then generate an observed broadband image as the sum over wavelength of the object color planes convolved with the corresponding monochromatic PSFs, where ⊛ denotes convolution and λ2 − λ1 is the spectral bandwidth for the observation. The spectral bandwidth is sampled at Nλ discrete wavelengths such that Δλ = (λ2 − λ1)/Nλ. We use λ1 = 400 nm and λ2 = 1000 nm. This is the wavelength range over which silicon-based cameras have sensitivity. We set Δλ = 15 nm and generate monochromatic images for Nλ = 40 color planes. This sampling represents the change in wavelength needed to provide a subtle visible difference in the speckle morphology of the PSFs created using the wavefronts generated at two adjacent wavelengths near 400 nm. For the object, we use point sources to simulate two unresolved, closely spaced objects. For simplicity, these sources are given by blackbody spectra with different temperatures. We then generate a range of objects for our validation tests by varying the separation and difference in the objects' brightness. We model the monochromatic point spread functions as the squared modulus of the inverse Fourier transform of the complex pupil function, where FT⁻¹ denotes the inverse Fourier transform operator. For this work we use wavefronts that represent turbulence conditions of D/r0 = 33. This simulates observations with the AEOS telescope on Mount Haleakala through median daytime seeing conditions during the Fall (see Table 1 of [1]). Figure 1 shows example images of a turbulence-degraded binary star object at selected monochromatic color planes of 400 nm, 700 nm, and 1000 nm, along with the ultra-broadband image integrated over 40 color planes covering 400 nm to 1000 nm (left to right), for four different realizations of the atmosphere (top to bottom). The contrast in the secondary component's brightness to that of the binary's primary component is 10⁻². The images' pixel scale is set so that the data at 400 nm are Nyquist sampled (i.e., 0.014 arcsec/pixel).

Image Reconstruction

For our initial investigation into the feasibility of recovering the original object from a set of broadband images, we assume we have a high-quality measurement of the PSF and that we only need to recover the object from a time series of observed broadband images. We then minimize a least-squares cost function in which r is the residual difference between the observed and modeled images. The modeled image is built from M color planes; that is, we estimate the object at fewer color planes than were used to generate the broadband images. The pixels in the M color planes of f are the variables in the minimization. We note the "monochromatic" PSF in Eq. 8 represents an integral of Nλ/M of the monochromatic PSFs used in the generation of the broadband images. We minimize the cost function using the variable metric limited memory optimization algorithm.
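A minimal numpy sketch of this forward model and of the least-squares data term and its gradient is shown below. It is not the authors' code: the grid size, pupil, the random stand-in OPD map, and the function names are assumptions for illustration, and the real simulations use split-step-propagated Kolmogorov wavefronts rather than a single random wavefront.

```python
import numpy as np

def monochromatic_psf(pupil, opd, wavelength):
    """h_lambda = |FT^-1{ pupil * exp(i * 2*pi*OPD/lambda) }|^2, normalised to unit flux."""
    field = pupil * np.exp(1j * 2.0 * np.pi * opd / wavelength)
    psf = np.abs(np.fft.fftshift(np.fft.ifft2(field))) ** 2
    return psf / psf.sum()

def broadband_image(obj_planes, psfs):
    """g = sum over the color planes of f_l convolved with h_l (FFT-based, circular)."""
    g = np.zeros(obj_planes.shape[1:])
    for f, h in zip(obj_planes, psfs):
        g += np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(np.fft.ifftshift(h))))
    return g

def cost_and_gradient(g_obs, obj_planes, psfs):
    """Least-squares data term 0.5*||r||^2 with r = g_obs - sum_l f_l (*) h_l, and its
    gradient with respect to each color plane, -(h_l correlated with r)."""
    r = g_obs - broadband_image(obj_planes, psfs)
    grads = np.array([
        -np.real(np.fft.ifft2(np.conj(np.fft.fft2(np.fft.ifftshift(h))) * np.fft.fft2(r)))
        for h in psfs])
    return 0.5 * np.sum(r ** 2), grads

# Toy usage: 40 color planes over 400-1000 nm, a circular pupil, and a crude
# random OPD map standing in for a propagated turbulent wavefront.
n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (np.hypot(xx, yy) < n // 4).astype(float)
opd = 50e-9 * np.random.randn(n, n) * pupil
wavelengths = np.linspace(400e-9, 1000e-9, 40)
psfs = np.array([monochromatic_psf(pupil, opd, w) for w in wavelengths])
obj = np.zeros((40, n, n))
obj[:, n // 2, n // 2] = 1.0                  # primary point source
obj[:, n // 2, n // 2 + 6] = 1e-2             # secondary, 10^-2 contrast
g_obs = broadband_image(obj, psfs)
cost, grads = cost_and_gradient(g_obs, 0.9 * obj, psfs)
```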
The variable metric limited memory algorithm is gradient-based, and we calculate the gradient of the cost function with respect to the variables by correlating each monochromatic PSF with the residual image, where ⋆ denotes correlation. For our validation tests, we generate broadband data sets for two separations between the two unresolved objects (0.69 arc-seconds and 0.14 arc-seconds) and two contrast ratios for each separation (the brightness of the secondary component is 10⁻² and 10⁻³ of the brightness of the primary component). For all the reconstructions presented in the paper, we use k = 11 broadband images and allow 1,000 iterations in the minimization routine. The number of iterations used was chosen to minimize computation time.

Results

For our initial proof-of-concept tests, we assume we have perfect knowledge of the wavefront for each of the M color planes of every broadband image. That is, ĥ = h. The initial guess for our object is a shift-and-add of our 11 data frames. Our first two data sets represent the two unresolved objects at 0.69 arc-second separation. The spectral recoveries for the two components of the object are shown in Fig. 2. The top row of the figure shows the results for the bright (primary) object, and the bottom row shows the results for the dim (secondary) object. The left-hand column represents a secondary:primary contrast in brightness of 10⁻² and the right column represents a contrast of 10⁻³. In both cases, the spectrum of the bright component is well recovered. This validates our hypothesis that spectral information on a target can be recovered from a series of ultra-broadband speckle images.

Fig. 2 The recovered (solid symbols) and truth (solid line) spectra for a binary system with primary and secondary objects separated by 0.69 arc-seconds, 10⁻² contrast ratio (Left), and 10⁻³ contrast ratio (Right).

Interestingly, the secondary component's recovered spectrum is a reasonable representation of the truth spectrum when the contrast is 10⁻². This suggests the spectral recovery is robust over a useful dynamic range. When the brightness of the secondary decreases to 10⁻³ of the primary object, the recovered spectrum for the former has little similarity to the truth spectrum. If we are just interested in detecting the faint companion, however, then we can integrate the recovered object over all the color planes. In this case, we find for both contrast scenarios that 99% of the counts for the primary component are recovered, while the recovery of the secondary components varies slightly between the two contrast ratios: at 10⁻² contrast, 90% of the counts are recovered, while at 10⁻³, 88% are recovered. That is, the secondary component is clearly seen in the restored images, even at the lowest contrast of 10⁻³. To assess the applicability of ultra-broadband speckle imaging for high-resolution, high-contrast imaging, we ran some additional tests where we reduced the separation between the two unresolved objects to 0.14 arc-seconds. The results are shown in Fig. 3 with the same format as in Fig. 2. Figure 3 shows that the primary component is recovered with similar accuracy to the earlier results for the 0.69 arc-second separation. However, the secondary component loses some accuracy compared to its counterpart in Fig. 2. Despite this, 95% of the total counts are still recovered. Finally, at 10⁻³ contrast, although the secondary component's recovered spectrum is a poor representation of the truth spectrum, its integrated flux still contains 67% of the true flux.
Fig. 3 The recovered (solid symbols) and truth (solid line) spectra for a binary system with primary and secondary objects separated by 0.14 arc-seconds, 10⁻² contrast (Left), and 10⁻³ contrast (Right).

Looking at the spectrally integrated recovered object in Fig. 4, which is shown in fourth-root scale to emphasize the dimmer secondary component, we see the secondary component start to become visible to the right of the primary at redder wavelengths. In the final image, which is integrated over the broadband range, the center of mass of the object becomes more clear. This suggests the potential for high-contrast, high-resolution imaging. For our last test, we relax the requirement on the fidelity of the PSF. Again, we use a binary target, this time with a separation of 0.69 arc-seconds and a contrast of 10⁻¹ and 10⁻² between the two components. We replace the truth PSFs at each wavelength with PSFs generated from wavefronts with a root-mean-square phase error of 0.2 radians with respect to the truth wavefronts. We did this by smoothing the truth phases with a Gaussian kernel, which removes the wavefront's high spatial frequencies. This level of wavefront error at D/r0 = 33 is commensurate with the performance obtained with an imaging Shack-Hartmann wavefront sensor and multi-aperture phase retrieval of the WFS data [4,6]. We note the impact of the missing high-spatial-frequency information on the morphology of the PSFs will increase as the wavelength decreases. We also ignore variations in the wavefront amplitude when generating ĥ(k, λ). The results in Fig. 5, which has a similar format to Fig. 2, are similar to the results obtained with perfect knowledge of the PSFs, with the most deviation found in the continuum of the secondary component. This suggests that the restoration's quality is not highly sensitive to the fidelity of the PSF estimates (ĥ(k, λ)). This characteristic offers hope for the practical realization of ultra-broadband speckle imaging. We also note that with the addition of photon noise we would require more data images to compensate.

Conclusion

Our research has provided two advances. First, we have shown that for the idealized case of an unresolved binary object observed through D/r0 = 33 turbulence, without photon and read noise, it is feasible to recover the spectral information on the target, on a pixel-by-pixel basis, from a series of ultra-broadband images when we have high-quality estimates of the atmospheric PSFs and their variation with wavelength.

Fig. 4 The recovered image of two point-sources separated by 0.14 arc-seconds with 10⁻³ contrast. The three images, from left to right, are images of the recovered object at individual wavelength planes at 400, 700, and 1000 nm respectively. The final image on the right is the integrated object over the entire broadband range.

Fortunately, recent advances in wavefront sensing provide such estimates. In addition, we can improve any measured estimates of the wavefronts using the blind deconvolution technique during the image restoration process. In this case, we note that broadband images are inherently a source of wavelength-diverse information. As the wavefront phases at different wavelengths are related to the optical path difference W via φλ = 2πW/λ, this inherent wavelength diversity provides a strong lever for the blind deconvolution process. We only need to estimate the values of W in the pupil in order to estimate all h(k, λ) for a given frame k.
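The role of the single OPD map W can be sketched as follows, continuing the illustrative helper above (hypothetical names; not the authors' blind-deconvolution code):

```python
# One pupil OPD map W gives the phase at every wavelength, phi_lambda = 2*pi*W/lambda,
# so all of the per-color-plane PSFs for a frame follow from the same few unknowns.
def psfs_from_opd(pupil, opd, wavelengths):
    return np.array([monochromatic_psf(pupil, opd, w) for w in wavelengths])

# In a blind-deconvolution step, only the pupil pixels of `opd` would be treated as
# free parameters for frame k; psfs_from_opd() then supplies every h(k, lambda).
```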
The second advance from our research is that we have demonstrated that ultra-broadband speckle imaging has the potential to better resolve and detect closely spaced objects when one object is significantly brighter than the other, over what can be achieved using narrow-band speckle imaging. We note that all of the above results were obtained using 11 image frames. In practice, hundreds of frames will be available.

Fig. 5 The recovered (solid symbols) and truth (solid line) spectra for a binary system with primary and secondary objects separated by 0.69 arc-seconds, 10⁻¹ contrast (Left), and 10⁻² contrast (Right), using degraded PSFs for the restoration.

Increasing the number of photons available, by increasing the spectral bandwidth or increasing the number of frames, will improve the recoveries' signal-to-noise ratio [7] and allow us to reach high contrast values. As an example, [5] showed that using 2,000 frames of monochromatic speckle data with Δλ = 40 nm obtained on an 8-meter telescope at a cadence of 1 kHz, they can detect a secondary object whose brightness is 10⁻⁴ of the brightness of the primary object. This is the current state-of-the-art for speckle imaging. Extending the spectral bandpass to Δλ = 400 nm, as used in our simulations, offers the potential to improve the contrast to 3.3 × 10⁻⁵. In closing, we emphasize that although these initial studies are incredibly encouraging, we need to perform additional research on ultra-broadband speckle imaging before claiming it is a viable approach for snapshot hyper-spectral imaging for SSA. Objects in GEO would be the obvious first step, as these would be point-source-like objects, while more adjustments would need to be made for observing extended objects in LEO.
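A rough check on that figure, on the reading that the achievable contrast improves with the square root of the number of detected photons, and hence of the spectral bandwidth at fixed frame count and cadence: 10⁻⁴ × sqrt(40 nm / 400 nm) ≈ 3.2 × 10⁻⁵, consistent with the quoted 3.3 × 10⁻⁵.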
4,296.6
2022-04-01T00:00:00.000
[ "Engineering", "Physics", "Environmental Science" ]
Threonine 680 Phosphorylation of FLJ00018/PLEKHG2, a Rho Family-specific Guanine Nucleotide Exchange Factor, by Epidermal Growth Factor Receptor Signaling Regulates Cell Morphology of Neuro-2a Cells* Background: RhoGEFs play important roles in regulation of the actin cytoskeleton. Results: FLJ00018/PLEKHG2 was phosphorylated and activated by EGF receptor signaling; whether this reflects a direct effect on its GEF activity is not established. Conclusion: The phosphorylation of FLJ00018 was involved in the regulation of cell morphology. Significance: This study suggests that FLJ00018 receives and integrates different stimuli as a relay point in cell signaling. FLJ00018/PLEKHG2 is a guanine nucleotide exchange factor for the small GTPases Rac and Cdc42 and has been shown to mediate the signaling pathways leading to actin cytoskeleton reorganization. The function of FLJ00018 is regulated by the interaction of heterotrimeric GTP-binding protein Gβγ subunits or cytosolic actin. However, the details underlying the molecular mechanisms of FLJ00018 activation have yet to be elucidated. In the present study we show that FLJ00018 is phosphorylated and activated by β1-adrenergic receptor stimulation-induced EGF receptor (EGFR) transactivation in addition to Gβγ signaling. FLJ00018 is also phosphorylated and activated by direct EGFR stimulation. The phosphorylation of FLJ00018 by EGFR stimulation is mediated by the Ras/mitogen-activated protein kinase (MAPK) pathway. Through deletion and site-directed mutagenesis studies, we have identified Thr-680 as the major site of phosphorylation by EGFR stimulation. FLJ00018 T680A, in which the phosphorylation site is replaced by alanine, showed a limited response of the Neuro-2a cell morphology to EGF stimulation. Our results provide evidence that stimulation of the Ras/MAPK pathway by EGFR results in FLJ00018 phosphorylation at Thr-680, which in turn controls changes in cell shape. The binding of a ligand to its cognate GPCR causes a conformational change in the receptor, leading to the intracellular release of activated Gα and Gβγ subunits, which in turn signal the downstream effectors (1). GPCR signaling is involved in many cellular responses, such as proliferation, differentiation, and cytoskeletal regulation. Cytoskeletal regulation in particular is governed largely by the precise temporal and spatial modulation of the small G-proteins of the Rho family (Rho GTPases), RhoA, Rac, and Cdc42 (2). The Rho GTPases function as molecular switches. They are converted from the GDP-bound inactive form to a GTP-bound active state by a reaction catalyzed by Rho GTPase-specific guanine nucleotide exchange factors (RhoGEFs). RhoGEFs are large multidomain proteins that are tightly regulated to control their function. RhoGEFs can be subdivided into two main subfamilies. First, there are those that possess a Dbl homology (DH) domain that is found in tandem with a pleckstrin homology (PH) domain. This subfamily is currently represented by 70 members in mammalian genomes (3). Second, there are Dock180-related proteins containing the Dock homology region-2 domain (also known as the Docker-ZH2 domain), which form a subfamily of 11 mammalian members (4). The DH domain is responsible for catalytic activity, and the PH domain directs subcellular localization and can modulate the DH domain function.
A number of the DH domain-containing RhoGEFs, including PSD-95/Dlg/ZO-1 (PDZ)-RhoGEF, leukemia-associated RhoGEF (LARG), and p115-RhoGEF, have a regulator of G-protein signaling (RGS) domain in addition to a DH domain and PH domain. PDZ-RhoGEF and LARG also have a PDZ domain. These RhoGEFs are regulated by activated G␣ 12/13 subunits through their interaction with the RGS domain to activate GDP/GTP exchange activity for RhoA (5,6). On the other hand, P-Rex1 and P-Rex2 are regulated by G␤␥ subunits and poly-phosphoinositide through direct interaction to activate the GDP/GTP exchange activity for Rac. We previously reported that one novel RhoGEF, FLJ00018/PLEKHG2, was activated by direct interaction with G␤␥ subunits and regulated cell spreading through the activation of Rac1 and Cdc42 (7). We also reported that FLJ00018 interacts with ␤-actin and ␥-actin. Both ␤and ␥-actin act as negative regulators of FLJ00018 (8). However, the details underlying the molecular mechanisms of FLJ00018 activation have yet to be elucidated. In contrast to these RhoGEFs, the regulation of FLJ00018 by phosphorylation has not been studied. In this report, we investigated the possibility that phosphorylation of FLJ00018 by ␤ 1 -adrenagic receptor (␤ 1 -AR) stimulation might regulate the function of FLJ00018 in cells. We found that phosphorylation of FLJ00018 by EGFR-activated Ras/MAPK signaling caused the activation of FLJ00018 and controlled the cell morphological change in Neuro-2a cells independently of interaction with G␤␥ subunits. EXPERIMENTAL PROCEDURES Materials-pcDNA3.1-␤ 1 -AR, pcDNA3.1-G␤ 1 , pcDNA3.1-G␥ 2 , pcDNA3.1-H-Ras G12V, pcDNA3.1-H-Ras T17N, pcDNA3.1-RhoA T19N, pcDNA3.1-Rac1 T17N, and pCDNA3.1-Cdc42 T17N were purchased from Missouri S&T cDNA Resource Center. These plasmids were subcloned into a pF4A-CMV or pF5A-CMV-neo Flexi vector (Promega) using polymerase chain reaction (PCR) amplification and a Flexi system (Promega). A complementary DNA (cDNA) clone for FLJ00018 genes was isolated during the Kazusa human cDNA project, which aimed to accumulate information on the coding sequence of long cDNAs for unidentified human genes (19). To construct the Myc-and Halo-tagged forms, native-form protein expression clones, the open reading frame of FLJ00018 was subcloned into a pFN21A-Myc vector or the pFN21A-Halo vector using PCR amplification and the Flexi system. The cDNA-encoding deletion mutants of FLJ00018 indicated in the corresponding figures were generated by restriction enzyme digestion or PCR amplification using pFN21A-Myc-FLJ00018. Monomeric Azami Green (mAG) is a fluorescent protein (20); an expression vector coded with this protein, phmAG1-MCLinker, was purchased from MBL. The pFN21K-mAG vector was made by PCR amplification using phmAG1-MCLinker and restriction enzyme digestion and ligation. pFN21K-mAG-FLJ00018 was made by restriction enzyme digestion of pFN21A-Myc-FLJ00018. pF1K-EGFR and pF1K-Vav3 were subcloned into pFC21A-Halo and a pFN21K-Halo vector using the Flexi system. The pSRE.L-luciferase reporter plasmid was purchased from Stratagene, and pRL-SV40 was purchased from Nippon Gene. Isoproterenol, KN-93, and SB202190 were purchased from Calbiochem, EGF was from Peprotech, and AG1478, U-0126, and Phos-tag AAL107 were from Wako. SP600125 was purchased from Sigma. Phosphatase inhibitor mixture tablet (PhosSTOP) and protease inhibitor mixture tablet (Complete mini) were purchased from Roche Applied Science. -Phosphatase and alkaline phosphatase were purchased from New England Biolabs. 
Cell Culture and Transfection-NIH3T3, HEK293, and Neuro-2a cells were grown in DMEM supplemented with 10% calf serum (NIH3T3) or 10% FBS (HEK293 and Neuro-2a) at 37°C. Transient transfection was performed using Lipofectamine Plus reagent according to the manufacturer's instructions (Invitrogen). The amount of plasmid DNA was 400 ng for the serum response element (SRE)-dependent gene transcription assay and 1000 ng for immunoblot analysis. Cells were transfected with DNA for 3 h then treated with 1ϫ insulin-transferrin-selenium-X (Invitrogen) for 6 -8 h. The cells were washed twice with serum-free DMEM and incubated for 16 -18 h in DMEM. Assay of SRE-dependent Gene Transcription-Cells seeded in 24-well plates were co-transfected with the indicated expression plasmids together with the pSRE.L-luciferase reporter plasmid and the pRL-SV40 control reporter plasmids. After transfection, the cells were treated with inhibitors before stimulation at the indicated times and concentrations. Cells were washed once with ice-cold PBS and lysed with passive lysis buffer. Luciferase activities were determined with a dual-luciferase reporter assay system (Promega). The activity of the experimental reporter was normalized against the activity of the control vector. Dephosphorylation of FLJ00018 by Phosphatase-Transfected cells were stimulated at the indicated times and concentrations. Then cells were washed once with ice-cold PBS and lysed with lysis buffer (50 mM Tris-HCl, pH 7.5, 100 mM NaCl, 0.5% Nonidet P-40, and protease inhibitor solution). Equal amounts of cell lysate were incubated with alkaline phosphatase solution (10 units/dish) or -phosphatase solution (400 units/dish) for 2 h at 30°C. The reaction was terminated by adding 2ϫSDS sample buffer. Then, the phosphorylation status of the reaction mixture was tested by immunoblot analysis as described below. Immunoblot Analysis-Transfected cells were treated with inhibitors before stimulation at the indicated times and concentrations. Cells were washed once with ice-cold PBS and lysed with 0.5% (w/v) SDS in distilled water. The protein amount of each lysate was quantified using a bicinchoninic acid protein assay kit (Thermo Scientific) with BSA as the standard. The same amounts of protein were subjected to 7.5 or 10% SDS-PAGE and immunoblotted with various antibodies, employing a chemiluminescence reagent (PerkinElmer Life Sciences). Densitometry was performed using an LAS-4000 image analyzer and LAS-4000-mini image analyzer (GE Healthcare). Mn 2ϩ Phos-tag SDS-PAGE-To visualize the levels of FLJ00018 phosphorylation, a newly developed immunoblotting technique was performed (21)(22)(23). In Phos-tag gel, phosphoprotein-binding molecules called AAL-107 Phos-tag ligands retain phosphorylated proteins, thereby separating them from their unphosphorylated counterparts during electrophoresis. This results in two or more distinct protein bands representing the phosphorylated and unphosphorylated protein. Cell lysates were resolved by 6% SDS-polyacrylamide gel containing 15 M Phos-tag and 30 M MnCl 2 or 3% SDS-polyacrylamide gel containing 13 M Phos-tag, 26 M MnCl 2 , and 1.5% agarose. After electrophoresis, the gels were washed in transfer buffer containing 1 mM EDTA for 10 min and then transferred to PVDF membranes. Immunofluorescence Analysis-After stimulation with EGF (20 ng/ml, 24 h), transfected cells cultured on coverslips were washed once with ice-cold PBS and fixed with 4% paraformaldehyde for 30 min. 
The cover slips were then mounted with Perma Fluor. Fluorescence images were acquired using a fluorescence microscope (BZ-9000; KEYENCE). Ten fluorescence images were taken from random fields, and the total neurite length of 50 cells was measured in each sample. The total neurite lengths were then analyzed using Image J software. Statistical Analysis-Data are expressed as the means Ϯ S.D. of values from at least three independent experiments. The significance of group differences was analyzed by Student's t test. A p value of Ͻ0.05 was considered significant. RESULTS FLJ00018 Is Activated by ␤ 1 -Adrenergic Receptor Stimulation through a G␤␥-dependent and -independent Pathway-We previously reported that FLJ00018 was activated by G␤␥ and m2 muscarinic acetylcholine receptor stimulation (7). To examine whether FLJ00018 is activated by stimulation of other GPCRs, we used a ␤-adrenergic receptor, ␤ 1 -AR, that is known to couple with multiple signaling pathways via G s protein. NIH3T3 cells were transiently transfected with ␤ 1 -AR and FLJ00018, and then luciferase assays were performed to measure FLJ00018-induced SRE-dependent gene transcription. Several reports have suggested that ␤ 1 -AR signaling participates in multiple pathways including the 1) G s /adenylate cyclase (AC)/cAMP/CNrasGEF/Ras/MAPK pathway, 2) Gs/ AC/cAMP/PKA/G i /G␤␥/Ras/MAPK pathway, and 3) GRK5 or GRK6/␤-arrestin/src/EGFR pathway (24 -26). As shown in Fig. 1A, stimulation with isoproterenol (20 M) enhanced FLJ00018-induced SRE-dependent gene transcription in NIH3T3 cells expressing FLJ00018 and ␤ 1 -AR. However, the protein expression levels or protein modification state of FLJ00018 was not changed by the ␤ 1 -AR stimulation in FLJ00018 and ␤ 1 -AR co-expressed cells (Fig. 1B). To investigate the involvement of G␤␥ subunits in the enhancement of SREdependent gene transcription, we used the C-terminal region of ␤-adrenergic receptor kinase (␤-ARKct), which is thought to interact with G␤␥ subunits and attenuate G␤␥-dependent signaling. Co-expression of ␤-ARKct attenuated the enhancement of FLJ00018-induced SRE-dependent gene transcription. However, co-expression of ␤-ARKct did not completely inhibit the transcription (Fig. 1A). This result suggests that ␤ 1 -AR signaling may be involved in both G␤␥-dependent and -independent FLJ00018 activation pathways, but only the former is attenuated by ␤-ARKct. FLJ00018 Is Activated by ␤ 1 -AR-mediated Transactivation of EGF Receptor-It was previously shown that EGFR is activated through GPCR signaling, including those from ␤ 1 -AR signaling (27,28). Some papers have shown that ␤ 1 -AR and ␤ 2 -AR also induce the transactivation of EGFR (26, 29 -31). This transactivation induces the activation of EGFR tyrosine kinase. To investigate the involvement of the EGFR transactivation in ␤ 1 -AR-stimulated FLJ00018-induced SRE-dependent gene transcription, we tested the autophosphorylation of EGFR by ␤ 1 -AR stimulation in cells co-expressing EGFR, ␤ 1 -AR, and FLJ00018. As shown in Fig. 2A, after stimulation with isoproterenol (20 M), tyrosine phosphorylation of EGFR was increased in a time-dependent manner, and the levels of phosphorylation peaked at 15 min. In general, it is known that constitutive (agonist-independent) activity is observed with the overexpression of many GPCRs (32). In Fig. 
2A, it was thought that EGFR tyrosine phosphorylation in the absence of ␤ 1 -AR stimulation may have been caused by the agonist-independent activation by overexpression of ␤ 1 -AR, although the reason is not clear. To examine the involvement of transactivation in ␤ 1 -AR-stimulated FLJ00018-induced SRE-dependent gene transcription, we used AG1478, a specific inhibitor of EGFR tyrosine kinase activity. Stimulation of isoproterenol (20 M) increased FLJ00018-induced SRE-dependent gene transcription in cells expressing FLJ00018, ␤ 1 -AR, and EGFR. However, treatment with AG1478 (10 M) before isoproterenol stimulation decreased the enhancement of FLJ00018-induced SRE-dependent gene transcription (Fig. 2B). To examine the levels of expression of FLJ00018 in these cells, we performed immunoblot analysis. In isoproterenol-stimulated cells, the mobility of the FLJ00018 band was slower than that of the FLJ00018 band in unstimulated cells (Fig. 2C). However, the FLJ00018 band in cells co-expressing G␤␥ and FLJ00018 did not exhibit this mobility shift. These results suggested the possibility that FLJ00018 is modified in response to ␤ 1 -AR stimulation-induced EGFR transactivation. Recently, it was reported that the functions of several RhoGEFs, including Vav, Tiam-1, ␤-Pix, and Asef, are regulated through phosphorylation by extracellular stimulation (10 -18). To investigate whether FLJ00018 is phosphorylated by ␤ 1 -AR stimulation-induced EGFR transactivation, we used alkaline phosphatase and -phosphatase. As shown in Fig. 2D, both of the phosphatases attenuated the mobility shift of FLJ00018, whereas the mobility shift was visible in the cells not treated with phosphatase. A mobility shift of FLJ00018 by ␤ 1 -AR stimulation-induced EGFR transactivation was also observed in HEK293 cells (Fig. 2E). In the same set of experiments, we also performed a Phos-tag gel analysis using the HEK293 cells. The Phos-tag gels contain AAL-107 Phos-tag FIGURE 2. FLJ00018 is activated by ␤ 1 -AR-mediated transactivation of EGF receptors. A, NIH3T3 cells were co-transfected with expression vectors for EGFR, ␤ 1 -AR, and FLJ00018 (F018) as indicated. Transfected cells were stimulated with isoproterenol (20 M) for 0 -30 min. Equal amounts of protein were resolved by 7.5% SDS-PAGE. Immunoblotting (IB) was performed with antibodies against phosphotyrosine (P-Tyr) or EGFR. B, NIH3T3 cells were co-transfected with pSRE.L-luciferase, pRL-SV40 plasmid DNAs, and expression vectors for FLJ00018, ␤ 1 -AR, and EGFR as indicated. Transfected cells were stimulated with 20 M isoproterenol for 5.5 h before treatment with 10 M AG1478 for 1 h. Luciferase activity was determined by a dual-luciferase reporter assay. Luciferase activity obtained with mock cells was taken as 1.0, and relative activities are shown. Values are the means Ϯ S.D. from at least three experiments. #, p Ͻ 0.05. C, NIH3T3 cells were co-transfected with expression vectors for EGFR, ␤ 1 -AR, Myc-tagged FLJ00018, and G␤ 1 ␥ 2 as indicated. Transfected cells were stimulated with isoproterenol (20 M) for 15 min. Equal amounts of protein were resolved by 7.5% SDS-PAGE. To detect Myc-tagged FLJ00018, immunoblotting was performed with antibodies against Myc (Myc). D, NIH3T3 cells were co-transfected with expression vectors for EGFR, ␤ 1 -AR, and Myc-tagged FLJ00018 as indicated. Transfected cells were stimulated with isoproterenol (20 M) for 15 min. 
After lysis, the cell lysates were incubated with alkaline phosphatase (AP; 10 units) and -phosphatase (; 400 units) for 2 h at 30°C. The reaction was terminated by the addition of SDS sample buffer. Equal amounts of the reaction mixture were resolved by 7.5% SDS-polyacrylamide gel electrophoresis. To detect Myc-tagged FLJ00018, immunoblotting was performed with antibodies against Myc (Myc). ligands, which are phosphoprotein-binding molecules (21)(22)(23). AAL-107 Phos-tag ligands retain phosphorylated proteins, which thereby separate from their unphosphorylated counterparts during electrophoresis. In theory this will result in two or more distinct protein bands representing the phosphorylated and unphosphorylated status of the protein. Using this method, the mobility shift of FLJ00018 was found to be enhanced by isoproterenol stimulation in cells expressing ␤ 1 -AR, EGFR, and FLJ00018 compared with cells expressing FLJ00018 alone (Fig. 2F). Even though a band shift was not observed in SDS-PAGE (Fig. 1, B and E), when we used Phos-tag gel, ␤ 1 -AR stimulation caused a slight shift in the FLJ00018 band (Fig. 2F). In addition, the observed FLJ00018 mobility shift was attenuated in AG1478-treated NIH3T3 cells (Fig. 2G). These results suggest the possibility that FLJ00018 is phosphorylated and activated by ␤ 1 -AR stimulation-induced EGFR transactivation and ␤ 1 -AR stimulation. FLJ00018 Is Activated and Phosphorylated by Direct EGF Stimulation-As shown in Fig. 2, FLJ00018 was phosphorylated by EGFR transactivation via GPCR stimulation. To investigate whether FLJ00018 is phosphorylated and activated by direct EGFR stimulation, we measured EGF-induced SRE-dependent gene transcription in cells co-expressing EGFR and FLJ00018. After stimulation with EGF (20 ng/ml) for 6 h, FLJ00018-in- APRIL 4, 2014 • VOLUME 289 • NUMBER 14 JOURNAL OF BIOLOGICAL CHEMISTRY 10049 duced SRE-dependent gene transcription was greatly enhanced (Fig. 3A). Previously, we showed that FLJ00018 acts as a RhoGEF for Rac1 and Cdc42 (7). To investigate which types of Rho are involved in EGF-stimulated FLJ00018-induced SREdependent gene transcription, we used constitutively inactive forms of RhoA, Rac1, and Cdc42. When the cells were stimulated with EGF (20 ng/ml, 6 h), FLJ00018-induced SRE-dependent gene transcription was blocked by the inactive forms of Rac1 and Cdc42 (Fig. 3B). From these results it has been suggested that EGF-stimulated FLJ00018 activated Rac1 and Cdc42. When the cells were stimulated with EGF (20 ng/ml) for 15 min, the mobility of the FLJ00018 band was slower than that of the FLJ00018 band from unstimulated cells in the immunoblot analysis (Fig. 3C). However, the FLJ00018 band in cells co-expressing G␤␥ and FLJ00018 did not exhibit this mobility shift. In addition, the mobility shift of FLJ00018 was inhibited by alkaline phosphatase (10 units for 2 h) and -phosphatase (400 units for 2 h) treatment (Fig. 3D). Furthermore, the mobility shift of FLJ00018 also took place in HEK293 cells (Fig. 3E), and this mobility shift was enhanced in Phos-tag gel (Fig. 3F). These data suggest that FLJ00018 is phosphorylated by EGFR signaling. In addition, we performed immunoblot analysis and confirmed a mobility-shifted band of FLJ00018 at various time points. As we have shown in Fig. 3G, the shifted band of FLJ00018 was increased in a time-dependent manner and peaked at 5 min. Compared with EGFR transactivation ( Fig. 2A), EGFR stimulation by EGF is a rapid response and may cause differences in cell function. 
Because EGFR is a receptor tyrosine kinase, we examined whether FLJ00018 is tyrosinephosphorylated by EGFR stimulation. As shown in Fig. 3H, a FLJ00018 tyrosine phosphorylation band was not detected in cells expressing EGFR and FLJ00018 under our experimental conditions (Fig. 3H, dotted triangle), whereas Vav3 tyrosine phosphorylation in response to EGF stimulation was observed in cells expressing EGFR and Vav3 (Fig. 3H, open triangles). These results suggest that EGFR stimulation mainly induced the serine/threonine phosphorylation of FLJ00018. FLJ00018 Is Phosphorylated through H-Ras-dependent Signaling Pathways-Next, we investigated which signaling pathway of EGFR is involved in FLJ00018 phosphorylation. For this purpose, cells were co-transfected with FLJ00018 and a consti- FLJ00018 Is Activated through the Ras/MAPK Pathway-To determine which kinase phosphorylates FLJ00018, we examined the effect of the Ras/MAPK pathway inhibitor on FLJ00018 phosphorylation. In this experiment cells were pretreated with U-0126 (10 M), a MEK inhibitor, for 30 min before EGF stimulation (15 min, 20 ng/ml). No mobility shift of FLJ00018 bands was detected in U-0126-treated cells, whereas an FLJ00018 mobility shift was observed in cells treated with KN-93 (10 M), a calmodulin-dependent protein kinase II (CaMKII) inhibitor (Fig. 5A). These results suggest that FLJ00018 is phosphorylated by MEK or downstream effectors of MEK. We also measured FLJ00018-induced SRE-dependent gene transcription in cells treated with U-0126 (10 M, 1 h) or KN-93 (10 M, 1 h). Both inhibitors blocked FLJ00018-induced SRE-dependent gene transcription (Fig. 5, B and C). These results appeared to be in conflict with the immunoblot analysis showing that U-0126 inhibited the FLJ00018 mobility shift, whereas KN-93 did not. As a possible explanation for these disparate results, MEK or downstream effectors may regulate FLJ00018-induced SRE-dependent gene transcription through the phosphorylation of FLJ00018, whereas CaMKII may regulate FLJ00018-induced SRE-dependent gene transcription in an FLJ00018 phosphorylation-independent manner. Because the CaMKII inhibitor attenuated SRE-dependent gene transcription, we tested the possibility that the other MAPKs are also involved in FLJ00018 activation. However, neither the p38-MAPK inhibitor, SB202190 (10 M, 30 min), nor the JNK inhibitor, SP600125 (10 M, 30 min), affected the EGF-induced FLJ00018 mobility shift (Fig. 5D). These inhibitors failed to attenuate SRE-dependent gene transcription in FLJ00018transfected cells (Fig. 5E). From these results, it is suggested that FLJ00018 is phosphorylated by the MAPK signaling pathway after EGF stimulation, although FLJ00018-induced SREdependent gene transcription may be regulated by multiple signaling pathways including the MAPK and CaMKII signaling pathways. Identification of Phosphorylation at Threonine 680 in FLJ00018 via EGFR Stimulation-To identify phosphorylation sites of FLJ00018, we co-expressed EGFR with several FLJ00018 mutants. The structures of these mutants are indicated in Fig. 6A. Fig. 6B shows that mobility shifts were observed for the FLJ00018 WT and the FLJ00018 P1 mutant but not for the P1.5, P2, P3, and ⌬N mutants. These results suggest that FLJ00018 is phosphorylated between amino acid residues 615 and 970. To confirm that there is no phosphorylation site in the FLJ00018 P1.5, P2, P3, and ⌬N mutants, we used Phos-tag gel and performed immunoblot analysis. 
The result showed a weakly shifted band in FLJ00018 P1.5 and ⌬N mutants in response to EGF stimulation (15 min, 20 ng/ml), whereas FLJ00018 P2 and P3 had no shifted bands (Fig. 6C). These results suggest that the main FLJ00018 phosphorylation site exists between amino acid residues 615 and 970. However, it is likely that there is more than one phosphorylation site in FLJ00018. One of the amino acid residues in the region from 615 to 970 of FLJ00018, threonine 680, consists of the consensus sequence of extracellular signal-regulated kinase (ERK) phosphorylation (PXTP) (33). We prepared and used a FLJ00018 T680A mutant that mimics the non-phosphorylated state of FLJ00018. The mobility of FLJ00018 WT shifted in response to EGF stimulation, whereas only a slightly shifted band was observed in the FLJ00018 T680A mutant (Fig. 7A). To confirm that threonine 680 of FLJ00018 is phosphorylated, we used a Phos-tag experiment. As in the NIH3T3 cells (Fig. 7A), the mobility shift of the FLJ00018 T680A mutant was modest compared with that of the FLJ00018 WT in HEK293 cells (Fig. 7, B and C). Next, we examined whether the phosphorylation of threonine 680 affects the activity of FLJ00018. The FLJ00018 T680A mutant did not change EGF-stimulated SRE-dependent gene transcription (Fig. 7D). In addition, FLJ00018 T680A-induced SRE-dependent gene transcription was inhibited by Rac1 T17N and Cdc42 T17N (Fig. 7E). The levels of this transcription and the patterns of its inhibition by Rac1 T17N and Cdc42 T17N were not significantly different from those of the FLJ00018 WT (Fig. 3B). These results indicate that possibility that one of the phosphorylation sites of FLJ00018 is threonine 680, although threonine 680 phosphorylation did not affect any of the guanine nucleotide exchange activities of FLJ00018. Additional phosphorylation sites might be involved in EGFR stimulation-induced FLJ00018 activation. In this study, however, we did not examine the direct effects of GEF on Rho GTPase. Also, we did not detect another phosphorylation site(s) using a phospho-specific antibody. Further analysis is needed to detect the other exact phosphorylation site(s). Threonine 680 Phosphorylation of FLJ00018 Regulates Neurite Outgrowth via EGFR Stimulation in Neuro-2a Cells-Because some RhoGEFs have been implicated in cell morphological changes, including cell spreading and neurite outgrowth (13, 17, 34), we tested whether FLJ00018 was involved in neurite outgrowth. Neuro-2a cells were co-transfected with FLJ00018 or FLJ00018 mutants and EGFR. After stimulating the co-transfected cells with EGF, we performed immunoblot analysis and confirmed the mobility shift of FLJ00018 in the Neuro-2a cell line (data not shown). To visualize the morphology of the Neuro-2a cells, we used a fusion protein of FLJ00018 and mAG, a fluorescent protein. When the cells were stimulated with EGF, we observed neurite outgrowth and cell spreading in the cells transfected with mAG-tagged FLJ00018 and in the EGFR cells (Fig. 8A, g and h). However, no neurite outgrowth or cell spreading was observed in the FLJ00018 T680A-expressing cells (Fig. 8A, k and l). Quantitative analysis of the neurite length showed that the EGF-stimulated FLJ00018 T680A-expressing cells displayed a significant decrease in total neurite length compared with that in the EGF-stimulated FLJ00018WT-expressing cells (Fig. 8B). These results suggest that threonine phosphorylation at threonine 680 of FLJ00018 by EGFR stimulation plays a crucial role in neural outgrowth and cell spreading. 
DISCUSSION In our previous study FLJ00018 was activated by direct interaction with a G␤␥ subunit (7), and the activation was attenuated through interaction with cytosolic actin (8). In this study we also discovered another FLJ00018 activation mechanism by threonine phosphorylation at 680 through the EGFR/Ras/ MAPK pathway. It has been reported that various GPCR agonists, including endothelin, thrombin, carbachol, and lysophosphatidic acid, stimulate EGFR through GPCR activation. Because the signaling pathway from GPCR to EGFR differs by cell type and the condition of GPCR stimulation, various different mechanisms have been proposed. The most frequently described mechanisms of EGFR transactivation involve a disintegrin and metalloproteinase (ADAM)/heparin-binding EGF pathway. It has been shown that that when the lysophosphatidic acid receptor is expressed at endogenous levels, it can activate the phospholipase C␥1 and PI3K/AKT pathway through matrix metalloprotease-and heparin-binding EGF release-dependent EGFR activation. On the other hand, ␤ 2 -AR can activate the ERK pathway in a c-Src-and EGFR-dependent manner, but it is unable to activate the phospholipase C␥1 and PI3K/AKT pathways (35,36). In addition to this mechanism, some other signaling pathways have been proposed to play a role in ␤-AR stimulationmediated EGFR transactivation. Daaka et al. (25) have shown the role of the G s /AC/cAMP/PKA/G i /G␤␥/Ras/MAPK pathway in EGFR transactivation. Briefly, ␤ 2 -AR can switch the signal from G s to G i , and the G i signal of ␤ 2 -AR can activate MAPK via G␤␥, c-Src, and Ras. However, other researchers have proposed the existence of a pathway independent of G s /G i switching; this pathway requires Src activation to stimulate EGFR and downstream MAPK activation (37). In addition to these pathways, a number of other pathways have been proposed, including the GRK5 and GRK6/␤-arrestin/Src/EGFR pathway (26). These reports support the existence of an EGFR transactivation pathway, which activates Ras/MAPK from ␤-AR stimulation. However, further analysis is needed to elucidate which mechanism activates FLJ00018. Protease-activated receptors 1 activated by thrombin have been shown to induce EGFR transactivation in vascular smooth muscle cells. After EGFR is activated by thrombin, LARG is activated, which in turn activates RhoA (38). This result indicates that RhoGEF can be activated in an EGFR transactivationdependent manner. On the other hand, another researcher hypothesized that PI3K can directly activate the MAPK pathway via ␤ 2 -AR stimulation (31). Also, it was shown that the ␤ 2 -AR/Gi pathway activates platelet-derived growth factor receptor/PI3K signaling through Src activation (39). Moreover, another study has described the dependence of the interaction between ␤ 1 -AR and EGFR on ␤-arrestin, which causes differences between the signal from direct EGF stimulation and EGFR transactivation signal (30). Indeed, when we co-transfected FLJ00018 and ␤ 1 -AR with ␤-ARKct, ␤-ARKct could not completely inhibit the ␤ 1 -AR stimulation-induced SRE-depen-FIGURE 6. Identification of the structure involved in FLJ00018 phosphorylation by EGFR signaling. A, structure of FLJ00018 mutants. The WT, P1, P1.5, P2, P3, and ⌬N constructs code for amino acid (aa) residues 1-1386, 1-970, 1-615, 1-464, 1-310, and 965-1386 of FLJ00018, respectively. B and C, NIH3T3 cells were co-transfected with expression vectors for EGFR and Myctagged FLJ00018 mutants as indicated. 
Transfected cells were stimulated with 20 ng/ml EGF for 15 min. Equal amounts of proteins were resolved by 7.5% or 10% SDS-PAGE (B) or 6% SDS-polyacrylamide gel containing 15 M Phos-tag and 30 M MnCl 2 (C). Immunoblotting (IB) was performed with antibodies against Myc. Closed triangles, shifted bands; Open triangles, basal levels of proteins. dent gene transcription (Fig. 1). Also, the band shift of FLJ00018 T680A was not completely abolished (Fig. 7). These facts might suggest that the function of FLJ00018 is regulated by multiple signaling pathways after ␤-AR stimulation. There are several RhoGEFs that are mainly activated through tyrosine phosphorylation by receptor-tyrosine kinase signaling (10). On the other hand, ␤-Pix is phosphorylated by FGF stimulation via Ras/MAPK (13). P-Rex1 is phosphorylated at serine 1169 by insulin-like growth factor receptor signaling via the kinase related to AKT phosphorylation (40). These reports indicate that RhoGEF is phosphorylated by many signaling pathways downstream of receptor-tyrosine kinase. Considering that RhoGEF phosphorylation by receptor-tyrosine kinase signaling occurs via not only the Ras/MAPK pathway but also many other pathways, including the PI3K/AKT pathway, our results that FLJ00018 T680A mutants showed reduced but still existing band shifts compared with FLJ00018 WT (Fig. 7) might indicate phosphorylation by another pathway of Ras/MAPK. Some of the RhoGEFs, including Tiam1, p115-RhoGEF, and ␤-PIX, are also phosphorylated by Ca 2ϩ signaling or PKC (15,16,(41)(42)(43). Tiam1 is phosphorylated on threonine by PKC in lysophosphatidic acid-treated cells (15). Tiam1 is also phosphorylated by platelet-derived growth factor receptor signaling via CaMKII activation (41). It has also been reported that ␤-Pix was phosphorylated on Ser-516 by CaMKI in rat hippocampal neurons. The phosphorylation of ␤-Pix enhanced its RhoGEF activity for Rac1 and then promoted spinogenesis and synaptogenesis (43). In the present study we observed that a CaMKII inhibitor, KN-93, inhibited FLJ00018-induced SRE-dependent gene transcription but did not affect the electrophoretic mobility shift. These results suggest that CaMKII was not involved in FIGURE 7. FLJ00018 is phosphorylated by EGFR at amino acid residue threonine 680. A and B, NIH3T3 cells (A) or HEK293 cells (B and C) were co-transfected with expression vectors for EGFR and the Myc-tagged FLJ00018 or FLJ00018 T680A mutant as indicated. Transfected cells were stimulated with 20 ng/ml EGF for 15 min. Equal amounts of proteins were resolved by 7.5% (A and B) or 3% SDS-polyacrylamide gel containing 13 M Phos-tag, 26 M MnCl 2 , and 1.5% agarose (C). Immunoblotting (IB) was performed with antibodies against Myc). D and E, NIH3T3 cells were co-transfected with pSRE.L-luciferase, pRL-SV40 plasmid DNAs, expression vectors for FLJ00018 (F018), and EGFR, RhoA T19N, Rac T17N, and Cdc42 T17N as indicated. Transfected cells were stimulated with 20 ng/ml EGF for 6 h. Luciferase activity was determined by a dual-luciferase reporter assay. Luciferase activity obtained with mock cells was taken as 1.0, and relative activities are shown. Values are the means Ϯ S.D. from at least three experiments. WT, FLJ00018 WT; TA, FLJ00018 T680A mutant. the phosphorylation of FLJ00018 and that CaMKII participated in the FLJ00018-induced SRE-dependent gene transcription mechanism. The details of these phenomena are unknown at this time. It has been reported that neurite outgrowth is mediated by some RhoGEFs, such as Tiam1 and ␤-Pix. 
Tiam1 is activated by TrkA and TrkB signaling. When TrkB is activated by NGF, Tiam1 interacts with Ras and itself becomes activated. This in turn results in neurite outgrowth through Rac1 activation (17). Although Tiam1 receives intracellular signals from TrkA/B, ␤-Pix is phosphorylated through the Ras/ERK/PAK2 pathway upon FGF stimulation. This phosphorylation also induces neurite outgrowth in PC12 cell lines (13). Furthermore, Burridge and co-workers (44) have shown that RhoG is activated independently of Rac1 by EGF stimulation. Vav family RhoGEFs and PLEKHG6, a RhoG-specific RhoGEF, have been shown to be involved in RhoG activation. On the other hand, another paper has shown that a RhoGEF, Trio, activates neurite outgrowth in a RhoG-dependent manner (45). In line with these findings, our results suggest that FLJ00018 phosphorylated by Ras/MAPK signaling participated in morphological changes in Neuro-2a cells. From these results, FLJ00018 might also be physiologically involved in neurite outgrowth in neural cells. However, FLJ00018-induced SRE-dependent gene transcription was not related to threonine 680 phosphorylation of FLJ00018 in our study (Fig. 7D). A previous report suggested that G-protein signaling modulator-3 was spatiotemporally regulated by interaction with the 14-3-3 protein through its serine phosphorylation site (46). It is thus thought that the phosphorylation of FLJ00018 may be related to the spatiotemporal regulation of FLJ00018. These points will need to be studied in detail. FIGURE 8. FLJ00018 activation causes neurite outgrowth via EGFR activation. A, Neuro-2a cells were transfected with expression vectors for EGFR and the mAG-tagged FLJ00018 (F018) or FLJ00018 T680A mutant as indicated. Transfected cells were stimulated with 20 ng/ml EGF for 24 h. After stimulation, cells were fixed by 4% paraformaldehyde. Fluorescence images were acquired using a fluorescence microscope. Scale bar, 20 m. B, 10 fluorescence images were taken from random fields and analyzed using Image J software. The total neurite length of 50 cells was measured in each sample. Total neurite lengths were analyzed using Image J software. Values are the means Ϯ S.D. from at least three experiments. #, p Ͻ 0.05. In conclusion, our experiments revealed an additional activation mechanism of FLJ00018, distinct from the G␤␥ binding activation mechanism, by the phosphorylation at threonine 680 of through EGFR stimulation-dependent Ras/MAPK pathways. This threonine phosphorylation induces cell morphological change in Neuro-2a cells. These insights suggest new mechanisms of neurite outgrowth (Fig. 9). In the future it will be necessary to examine whether the signaling pathway operates under more than one physiological condition. Further analysis may allow understanding of the development of neurites into nerve systems and may uncover promising pharmacological targets to reduce aberrant neural development.
7,743.4
2014-02-19T00:00:00.000
[ "Biology" ]
The influence of medical insurance and social security cards on the floating population's settlement intention Background Medical insurance and social security cards are an important incentive for the floating population to live a stable life in their current residence, but there has been little studies on their effect on settlement intentions. Therefore, the purpose of this paper was to study the impact of basic medical insurance for urban employees and application for personal social security cards on the settlement intentions of the floating population. With the increase of the desire to settle, the health management and the development of public health will be improved. Methods Based on the 2017 survey data from the dynamic monitoring of China's floating population, we explored the influence of basic medical insurance for urban employees and social security cards on the floating population's settlement intentions. Additionally, this study also examined the comprehensive causal relationship, with social integration as the mediator variable. We used SPSS 21.0 software. The input method was used to analyze the influence of the above variables by binary logistic regression. Then we used AMOS22.0 software to establish the structural equation model of the relationship between the above three independent variables. Finally, we used bootstrapping method to analyze the direct effect, indirect effect and total effect of independent variables on settlement intention. Results The settlement intention of members of the floating population after participating in basic medical insurance for urban employees was 23.2% higher than that of those who did not participate. The decision as to whether to apply for a personal social security card is related to their settlement intention. The standardized regression coefficients among social insurance and security, social integration, and settlement intention were positive values, and the Z values of the overall effect, indirect effect, and direct effect were all greater than 1.96; the confidence interval of the indirect effect did not include 0. We found that this model is a partial intermediary model, with an intermediary ratio of 10.66%. Conclusions This article highlights the important impact of basic medical insurance for urban employees and individual social security cards on the floating population. The conclusions of this study provide suggestions for the government to use when designing policies to enhance the settlement intentions of the floating population and to improve the development of public health undertakings. Almanac > (2019), the floating population was 244 million in 2017 and 241 million in 2018 [1]. The "Statistical Bulletin of National Economic and Social Development 2019" released by the National Bureau of Statistics declared that there was a floating population of 236 million in 2019 [2]. Based on the current situation, the floating population in the future will maintain this considerable size [3]. The floating population is defined as individuals whose registered permanent residence is their original residence, and they live and work in a current residence that is not their registered permanent residence [4][5][6]. Settlement intention is defined as the thoughts of the floating population about their future relocation arrangements after they have been in their current residence for some time. 
Medical insurance is a social insurance system established to compensate workers for economic losses caused by disease risks, which comprises three schemes: Basic Medical Insurance for Urban Employees (BMIUE), the Rural New Cooperative Medical Scheme (RNCMS) and Basic Medical Insurance for Urban Residents (BMIUR) [7][8][9]. BMIUE was the medical insurance introduced in China in 1988. It is jointly paid by the unit and the employee; RNCMS was a form of community-based health insurance, was established and offered cover to rural residents in 2003. Its premiums are mainly subsidized by the local government; BMIUR, a local government-subsidized medical insurance scheme for the unemployed in urban areas, was launched on a pilot basis in 2007. In 2016, RNCMS and BMIUR were merged into medical insurance for urban and rural residents [10]. In this article, medical insurance refers to BMIUE. Social security cards are electronic certificates that provide labor security for workers who work in the fields. All people who participate in social insurance can apply for a social security card. The main items of social insurance include endowment insurance, medical insurance, unemployment insurance, work-related injury insurance and maternity insurance. Researchers have shown that the individual characteristics of the floating population and the characteristics of the place of origin and current residence can influence the floating population's settlement intention, such as whether to purchase urban housing and housing conditions [11,12], family migration [13], environment and regional differences [14,15], education level, work status, and social integration [5]. However, little attention has been paid to the impact of medical insurance and social security cards on the floating population's settlement intentions. The research on the medical security of the floating population is of great significance and value to the field of public health. If the floating population is not sure whether to participate in medical insurance at their current place of residence, this will pose a major challenge to the prevention and treatment of infectious diseases in the field of public health at their current residence [16,17]. There is also research evidence that providing medical insurance to the floating population in their current residence will significantly improve the stability of the floating population's life and work [18,19]. Ultimately, this will improve access to health services and the subjective well-being of the floating population [20]. In terms of BMIUE, BMIUE is jointly paid by units and individuals, which can reduce the economic burden of the floating population and help them settle down in the place where they live. Social security cards can be used to verify the identity of the patient when they purchase medicine or medical treatment, to store personal account funds and to record the medical treatment of the insured, which will encourage migrants to settle where they live [21]. It can also handle job-seeking registration and unemployment registration procedures; Claiming unemployment insurance benefits; Applying for employment training; Apply for labor ability appraisal and apply for and enjoy treatment of industrial injury insurance; Deal with related labor and social security affairs on the net. It is clear that these elements of social security and medical services are closely related to the social integration of migrants in their places of residence [20,22,23]. 
Social integration refers to the process of integrating into a new environment, which is a multidimensional concept [24][25][26][27]. It is a hot spot in the fields of public health and social science, and it not only promotes the resettlement intentions of the floating population [28,29] but also has health benefits [30][31][32]. Research on the willingness of the floating population to settle down will not only promote the development of urbanization [33] but also contribute to the control of infectious diseases and chronic noncommunicable diseases in their place of residence [34]. Finally, it will improve the level of public health services and accelerate social and economic development. Therefore, the objective of this article is to use the 2017 survey data about the dynamic monitoring of China's floating population (Volume A) to analyze the influence of BMIUE and social security cards on the floating population's settlement intentions. To examine the comprehensive causal relationships, "social integration" was introduced as a mediator variable. Data sources and Variables The data in this paper are from the existing questionnaire survey in China floating population data platform [35]. The questionnaires were collected from a total of 169,989 floating population from 31 provinces (autonomous regions and municipalities directly under the Central Government) and Xinjiang Production and Construction Corps. All of them came from the inflow areas where China's floating population is relatively concentrated [36]. After deleting part of the missing data and replacing the mean value, 154,586 people were finally included in the analysis. The proportion is 90.9%. The core independent variables of this article were BMIUE and social security cards, including the decision whether to participate in BMIUE and whether to apply for a personal social security card. The dependent variable was the settlement intention. This was measured according to whether the individual was willing to move their household registration to their current residence. The mediator variable was social integration. Social integration includes social, economic, cultural and other aspects of integration [37]. It is measured in terms of whether migrants agree to become part of the local population. The control variables were as follows: (a) demographic characteristics (i.e., age, gender, marital status, the household registration system, and education level), (b) economic characteristics (i.e., average monthly total local expenditure over the past year and whether a labor contract has been signed), (c) flowing characteristics (i.e., flowing range and flowing time), (d) health education (i.e., whether to receive health education on occupational disease prevention, whether to receive health education on STD and AIDS prevention, and whether to receive health education on the prevention and treatment of chronic diseases). Studies have proven that the factors influencing the settlement intentions inflow factors, outflow factors, barriers between the inflow and outflow areas, and the floating population's self-factors [37]. Therefore, these variables were also selected in this study. Descriptive statistics and logistic regression analysis Firstly, we used SPSS 21.0 software to describe the demographic indicators of the floating population, the participation in BMIUE, the application for social security card and the residence intention of the floating population by frequency and percentage. 
Then the input method was used to analyze the influence of BMIUE, social security cards and social integration on settlement intention by binary logistic regression. Structural Equation Model (SEM) We used AMOS 22.0 software to analyze the influence paths and effects of the various factors on settlement intention. The goodness of fit of the model was then evaluated by the Goodness-of-Fit Index (GFI), the Adjusted Goodness-of-Fit Index (AGFI), the Comparative Fit Index (CFI) and the Root Mean Square Error of Approximation (RMSEA). The criteria for model fit are GFI > 0.9, AGFI > 0.9, CFI > 0.9 and RMSEA < 0.05 [38][39][40]. If the model fit is poor, the model needs to be revised in combination with professional knowledge. Finally, we used the bootstrapping method [41][42][43] to analyze the direct effect, indirect effect and total effect of the independent variables on settlement intention. This method provided direction for interventions [44,45]. Descriptive analysis According to Table 1, 39.9% of the floating population expressed their willingness to move their household registration to the local area and settle there. This result is similar to the findings of previous research [46]. Of the respondents, approximately 77.7% did not participate in BMIUE, and 50.5% of the floating population had applied for a personal social security card. In addition, the survey found that the floating population had good social integration at their current residence (93.7%). Regarding demographic characteristics, the majority of the respondents were rural residents, married, and young or middle-aged men (95.2%), and nearly 43.5% of the respondents stopped their education at junior high school. In addition, 51.5% of the floating population moved within the province of their registered residence, and their migration time was less than 8 years. Nearly 58% of them had signed labor contracts, and their average monthly expenditure on health care in the past year was 1000-3000 yuan. In terms of health education, the floating population received the most health education on STD and AIDS prevention, but their overall acceptance of health education was poor. Table 2 shows the results of the binary logistic regression, which indicate that participation in basic medical insurance for urban employees (BMIUE) has an impact on the settlement intention of the floating population. Specifically, the settlement intention of individuals in the floating population participating in BMIUE was 23.2% higher than that of those who did not participate. In the single-factor analysis, applying for a personal social security card was found to be related to settlement intention. Moreover, the settlement intention of members of the floating population who agreed that they were already part of the local society was 2.026-fold that of those who did not agree. This demonstrated that social integration has a positive impact on the settlement intentions of the floating population. Structural Equation Model and the mediating effect of social integration The structural equation model is shown in Fig. 1. The SEM is composed of three latent variables: social insurance and security, social integration and settlement intention. The standardized regression coefficients in Table 3 are positive, and each variable is statistically significant. Table 4 shows the goodness-of-fit indices. The fit indexes (GFI/AGFI/CFI) were all greater than 0.9, and the RMSEA was less than 0.05. This shows that the model has a good fit. 
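As an illustration only (the study itself used SPSS 21.0 and AMOS 22.0), the following minimal Python sketch shows how a logistic-regression-based mediation analysis with bootstrapped direct and indirect effects could be set up; the column names bmiue, ssc_card, integration and settle are hypothetical placeholders rather than variables from the survey dataset, and the product-of-coefficients approach on the logit scale is a simplification of the full SEM.

```python
# Minimal sketch of a bootstrap mediation analysis with binary mediator and outcome.
# df is expected to be a pandas DataFrame of 0/1 indicator columns (hypothetical names).
import numpy as np
import pandas as pd  # noqa: F401  (df is a pandas DataFrame)
import statsmodels.api as sm

def path_effects(df):
    # Path a: insurance variables -> social integration (the mediator)
    Xa = sm.add_constant(df[["bmiue", "ssc_card"]])
    a = sm.Logit(df["integration"], Xa).fit(disp=0).params["bmiue"]
    # Paths b and c': mediator and insurance variables -> settlement intention
    Xb = sm.add_constant(df[["bmiue", "ssc_card", "integration"]])
    fit = sm.Logit(df["settle"], Xb).fit(disp=0).params
    return a * fit["integration"], fit["bmiue"]   # indirect (a*b) and direct (c') effect

def bootstrap_mediation(df, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    draws = np.array([path_effects(df.sample(len(df), replace=True,
                                             random_state=int(rng.integers(1 << 31))))
                      for _ in range(n_boot)])
    indirect, direct = draws[:, 0], draws[:, 1]
    ci = np.percentile(indirect, [2.5, 97.5])   # mediation if this interval excludes 0
    z = indirect.mean() / indirect.std(ddof=1)  # compare with 1.96, as in the text
    return indirect.mean(), direct.mean(), ci, z
```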
Mediation analysis is an important tool for statisticians to study causality. The intention is to study whether, or to what extent, the independent variable acts on the dependent variable via the mediator variable, and to clarify the direct effect, indirect effect, and total effect [41,42]. To further clarify the causal relationship, the study analyzed the mediating effect using the bootstrapping method. The reliability of the direct effect, indirect effect, and total effect was 0.600, 0.804, and 0.823, and the validity was 0.500, 0.695, and 0.719, respectively. The results of the mediation effect obtained with the bootstrapping method are presented in Table 5. It was found that the Z values of the overall effect, indirect effect, and direct effect were all greater than 1.96, and the confidence interval of the indirect effect does not include 0, indicating that both direct and indirect effects exist in this model. This model is a partial mediation model, with an intermediary ratio of 10.66%. Discussion After the binary logistic regression analysis and the structural equation model mediation effect analysis, it was concluded that BMIUE, personal social security card application and social integration not only directly promote the settlement intentions of the floating population but also promote the settlement of the floating population in their current residence through the intermediary variable of social integration. Table 3. Regression weights. A1: BMIUE; A2: Apply for personal medical insurance cards; A3: Health education in occupational disease prevention; A4: Health education on prevention and treatment of chronic diseases; A5: Health education on STD and AIDS prevention; Q503A: Do you agree with the statement "I like the city/place where I live now"? Q503C: Do you agree with the statement "I would like to be a part of the local people"? Q503D: Do you agree with the statement "I think the local people are willing to accept me as a member"? ***p < 0.01. BMIUE, social security cards, and settlement intentions Participating in BMIUE in the current residence is very important for the future settlement intentions of the floating population. Participation in BMIUE has a positive impact on the settlement intentions of the floating population, and its impact on settlement intention is greater than that of participating in other types of insurance [47], which is consistent with the findings of previous studies [48,49]. In addition, researchers previously found that the participation of the floating population in BMIUE in big cities would contribute to their intentions to settle down [50]. More importantly, participating in BMIUE can improve the risk resistance of the floating population at their current residence, reduce their cost of living and reduce their economic burden to a certain extent [51][52][53]. BMIUE is paid for by both the employer and the employee [54], and the employer pays more than each individual. Employees only need to pay for a certain number of years to enjoy higher rates of medical insurance reimbursement for life. Therefore, this will enhance the floating population's intention to settle down to some extent. Even without considering the preferences of migrant workers, participation in medical insurance still has a positive impact on the willingness of the floating population to settle down [55][56][57]. 
In 2009, the state promulgated the Interim Measures on the Continuity of the Basic Medical Security for Floating Employed Persons, which clearly stipulate that local governments should not set up barriers to the participation of the floating population for reasons such as the location of their registered permanent residence. If rural household registration personnel are employed in urban units and have stable labor relations, the employing units shall go through registration procedures in accordance with the provisions of the Interim Measures for the Administration of Social Insurance Registration and enrol them in the basic medical insurance for urban workers in the place of employment. Other floating workers can voluntarily choose to participate either in the new rural cooperative medical scheme in their place of household registration or in urban basic medical insurance in their place of employment, and should complete the registration procedures with the corresponding new rural cooperative medical agency or social (medical) insurance agency according to the relevant provisions. Persons insured under urban basic medical insurance who move across pooling areas for employment and have no receiving unit should, within 3 months of suspending their original basic medical insurance relationship, register with the social (medical) insurance agency in the new place of employment and participate in BMIUE or BMIUR in accordance with local regulations [58]. However, according to the survey results of this paper, there are still some people who do not participate in BMIUE. Possible reasons for this are the following: members of the floating population are insured in their place of origin, and their insurance is interrupted after they change jobs and cannot be paid in time; the floating population is not stable in their current residence and often works in the informal sector; reimbursement procedures are complicated and the reimbursement ratio is lower than that in the place of origin; in violation of the Labor Contract Law, some units do not sign labor contracts with the floating population and refuse to pay insurance premiums [59]; and the provinces into which the floating population flows have weak economic strength and insufficient medical insurance funds [60]. Social security cards had a positive effect on the settlement intentions of the floating population, which is consistent with previous studies [61,62]. It has been documented that the issuance of and application for a social security card is an important push to relieve the worries of the floating population and promote the construction of new urban areas. The social security card is an important guarantee for the floating population that encourages them to integrate into their current residence and live a stable life [63]. Migrants can receive a social security card as long as they are insured in their place of residence. These cards are used for the identification of their medical insurance, and they are also used to record the basic information of the insured persons, their payment status, treatments, payment of medical funds and other information [64]. 
In addition, the card can also be used to handle job-seeking and unemployment registration procedures, to claim unemployment insurance benefits, to apply for employment training, to apply for labor-ability appraisal and for work-related injury insurance benefits, and to handle related labor and social security affairs online. Therefore, even though the social security card had an effect on settlement intention in the single-factor analysis but not in the regression analysis, this study still included it in the final model. However, we have to note that half of the respondents still did not have a social security card. Possible reasons are that the insured period was shorter than 3 months [65], that the card has to be reissued after a change of name, photo or service bank [66], or that the card was not renewed before it expired [67]. Social integration and settlement intention Social integration has become an important factor affecting the settlement intentions of the floating population [68]. With the development of urbanization, the floating population has paid increasing attention to their acceptance in the local society [50]. The "people-centered" urbanization policy put forward by the Chinese government in 2014 shows that urbanization is the urbanization of people, and the degree of social integration is closely related to the degree of acceptance of the floating population by the local people [69][70][71]. This is consistent with the selection of the social integration measurement criteria in this paper, because the current residence provides abundant social resources and conditions for the floating population, which will help them integrate into local life as soon as possible [72]. The longer migrants have lived in the place of inflow, the stronger the positive effect of social integration on settlement intentions [73]. The mediating effect of social integration Participation in BMIUE in the place of residence is an important indicator of social integration [74]. Participation in BMIUE will promote the social integration of the floating population in the local area [75], which is consistent with the results of this study. This may be because eligibility for BMIUE indicates that the floating population has a fixed source of income and a stable work unit in their current place of residence, and that their living conditions are relatively good, which can meet their basic living needs. Participating in social insurance is the only requirement for obtaining a social security card. Therefore, BMIUE and social security cards, which are closely related to social security, are important factors affecting the social integration of the floating population. They will increase the sense of belonging of the floating population in their current place of residence and ultimately improve their willingness to settle down [76]. Settlement intention and public health An increase in permanent residence will not only improve the health management level of the floating population in their place of residence but also promote the improvement of public health. Residents' health records are an important part of the twelve contents of the national basic public health service, which takes personal health as the core and supports residents' self-care and health management [77]. 
Studies have shown that long-term settlement intention can significantly promote the service utilization of the health records of the floating population [78], which is beneficial to the physical health of the floating population. At the same time, settlement intention will also improve the level of public services [79][80][81]. As their willingness to stay increases, so does the prevention of communicable and noncommunicable diseases in the place of residence. This is because the willingness to stay promotes urbanization, which leads to an increase in global risk factors for infectious and noncommunicable diseases [82][83][84]. Specifically, China's public health service system for the floating population is not sound, and there is a lack of disease data from the floating population on infectious diseases and chronic noncommunicable diseases, making it difficult to establish a disease surveillance system [84][85][86]. This leads to the absence of prevention and the control of disease in vulnerable groups such as the floating population in China, and makes infectious diseases such as tuberculosis an important threat to public health [84,85]. For example, in the COVID-19 epidemic in 2019, the floating population will be at increased risk of COVID-19 infection due to their high rates of chronic disease comorbidities [87][88][89]. Additionally, because of their different medical insurance coverage, some vulnerable groups will not be able to timely treatment [84,88]. As a result, some countries, including China, have taken measures to control migration flows to control the COVID-19 epidemic [23]. To sum up, BMIUE and social security cards, as part of social security, have a direct positive impact on the floating population. Social integration, which is closely related to social security and public services, also has a direct positive impact on the resettlement intentions of the floating population. This study proved the mediating effect of social integration through constructing a model; BMIUE and social security cards can promote the settlement intentions of the floating population through social integration. The increase in permanent residence will eventually improve the health management level of the floating population and promote the development of public health. However, this study also has some limitations. The results of this study are based on mining of existing data. Due to the limited variables in the original data, the reliability of some indicators and the intermediary ratio is low. If one can add new effective variables in the future, the explanation of the influence of medical insurance and social security cards on the settlement intentions of the floating population will be more complete. Conclusions In this paper, the dynamic monitoring data of China's floating population in 2017 were used to study the influence of BMIUE and social security cards on the settlement intentions of the floating population, taking social integration as a mediating variable. According to the mediating effect method of the structural equation model, BMIUE and social security cards not only have a direct positive impact on the floating population but also indirectly affect their settlement intentions through social integration. Therefore, the government should further improve the insurance policies of the floating population in their current place of residence, encourage the floating population to integrate into society, and further enhance their intention to settle down. 
At the same time, settlement intention will also promote the development of public health and economic development. Although the data used in this paper were obtained from developing countries, this result also provides a theoretical basis for improving population migration policies internationally.
5,849.2
2020-08-24T00:00:00.000
[ "Economics" ]
A Survey of Automotive Radar and Lidar Signal Processing and Architectures : In recent years, the development of Advanced Driver-Assistance Systems (ADASs) is driving the need for more reliable and precise on-vehicle sensing. Radar and lidar are crucial in this framework, since they allow sensing of the vehicle's surroundings. In such a scenario, it is necessary to master these sensing systems, and knowing their similarities and differences is important. Due to the intrinsic real-time performance requirements of ADASs, it is almost mandatory to be aware of the processing algorithms required by radar and lidar to understand what can be optimized and what actions can be taken to approach the real-time requirement. This review aims to present state-of-the-art radar and lidar technology, mainly focusing on modulation schemes and imaging systems, highlighting their weaknesses and strengths. Then, an overview of the sensor data processing algorithms is provided, with some considerations on what type of algorithms can be accelerated in hardware, pointing to some implementations from the literature. In conclusion, the basic concepts of sensor fusion are presented, and a comparison between radar and lidar is performed. Introduction In recent years, Advanced Driver-Assistance Systems (ADASs) have taken many steps forward, making autonomous driving a reality. Going toward full driving automation (Table 1), it is necessary to study sensors and acquisition systems that guarantee proper performance, mainly in terms of accuracy and resolution.
Table 1. ADAS levels and related automation level.
ADAS Level | Automation Level
Level 0 | No Automation
Level 1 | Driver Assistance
Level 2 | Semi-Automated
Level 3 | Conditional Automation
Level 4 | High Automation
Level 5 | Full Automation
Nowadays, the most promising sensor systems to support the development of ADASs are based on radar and lidar. Both permit detection and ranging but, while the former exploits radio-frequency (RF) signals, the latter exploits optical signals. The debate on which one will be the genuinely enabling technology for autonomous driving is still open. In fact, many of the solutions that are available on the market adopt either radar, lidar or a combination of them. Automotive applications of radar are well known, and have been highlighted during the last fifty years. An overview of the status during its first years can be found in [1]. Many review articles have presented automotive radar focusing on the related signal processing algorithms [2] or the required hardware technology [3]. More recent works report and discuss signal processing, modulation aspects and higher-level processing, such as tracking and classification [4,5]. Lidar's first applications were far from the automotive world, but today, this detection technique has become essential to develop ADASs and autonomous driving features. An overview of the principal modulation schemes and integration techniques can be found in [6], with a particular remark on electronic-photonic integration. An overview of the imaging systems with some technological information is available in [7]. Some recent reviews highlight how lidar data processing is mainly based on machine learning [8][9][10]. Li et al. [11] give an overview of how to use lidar as a detection system and of the related detection and recognition algorithms. In recent years, lidar systems have become available as plug-and-play detection systems, so Roriz et al. [12] give an overview of the devices available on the market. More recently, Bilik et al. 
[13] give a comparative overview of the radar and lidar systems, which focuses on differences and similarities of these two systems. This review mainly presents an overview of state-of-the-art radar and lidar algorithms and architectures. Then, the concept of sensor fusion is briefly outlined, and some architectures are shown as examples of hardware acceleration of radar- and lidar-related computation. In conclusion, a short comparative analysis is reported to highlight differences and similarities of lidar and radar. Radar At the beginning of the 20th century, radio detection and ranging (radar) systems were proposed to detect and locate aircraft [14]. With the consolidation of technology and its availability at a relatively low cost, many different application domains have been found for radar systems. Nowadays, radar represents a well-established technology in the automotive industry [1]. Applications of radar systems on vehicles are numerous and extensive, mainly related to tasks in which the distance to a target vehicle and its relative velocity have to be measured, e.g., the autonomous emergency braking (AEB) and adaptive cruise control (ACC) systems [15]. Operating Principles The basic idea behind radar is to send a radio-frequency signal in free space and collect the echo generated by the presence of an obstacle. It is possible to determine the distance to the reflecting object by measuring the delay between the transmitted and the received signals. Figure 1 shows a conceptual example of how a radar system works. Equation (1) permits us to calculate the distance d of an object by measuring τ, the delay of the received signal, and c, the speed of light in the air [16]: d = c τ / 2 (1) 
Even if the main idea has remained substantially unchanged, many algorithms and systems have been developed during the last years to increase the performance and resolution of radar systems. The following sections present and analyze the main architectures and algorithms available nowadays for automotive radars. Radar Technology and Modulations The primary accepted and most widespread technology for radar systems relies on frequency-modulated continuous wave (FMCW) signals [17]. The key idea is to send a sequence of analog chirps; then, the transmitted signal is mixed with the received one, and the resulting beat frequency will be proportional to the target distance. Figure 2 reports the basic scheme of an FMCW radar. In particular, on the transmitter side (TX in Figure 2), the saw-tooth waveform generator produces a wave that then feeds a voltage-controlled oscillator producing a chirp signal with a linear frequency sweep. On the receiver side (RX in Figure 2), a mixer demodulates the received signal, and the resulting beat frequency f_if contains information related to both the range and the velocity of the target [18]. Both the delay τ introduced by the reflection and the frequency shift introduced by the Doppler effect [19] will change the demodulated frequency, as stated by Equation (2): f_if = f_R + f_D = 2 f_sw R / (c T_chirp) + 2 v_r / λ (2) where f_R is the frequency shift introduced by the presence of a target at a distance R, while f_D is the shift introduced by a target moving at velocity v_r due to the Doppler effect. The other terms in the equation are f_sw, which is the sweep bandwidth of the chirp signal, T_chirp, which is its duration, and λ, which is the carrier wavelength [18]. Equation (2) shows how it is not possible to use a single chirp to estimate the range and radial velocity of a target directly. Indeed, R and v_r are two coupled unknowns and cannot both be determined by measuring f_if alone. So, many different solutions have been proposed for automotive applications over the years, such as using up and down chirps to decouple range and velocity information [20]. As explained by Winkler in [20], detection can be performed by sampling the baseband signal of an FMCW radar and applying the Fast Fourier Transform (FFT) to the sampled signal. The idea is to perform the FFT on L chirps and stack the obtained spectra in the columns of a matrix; then an FFT of every row of the matrix is performed. This leads to a map of target distance and velocity. Figure 3 shows an example of the resulting velocity/range map. As reported in [20], this processing technique can introduce some ambiguities in the obtained map, leading to the detection of a false positive target. This can be mitigated by applying further processing steps, like Constant False Alarm Rate (CFAR) algorithms [21], or by modifying the modulation type [20]. 
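The two-dimensional FFT processing just described can be sketched in a few lines of numpy. The toy example below (with illustrative radar parameters, not those of any specific device) builds a synthetic beat signal for a single target, with chirps along the rows and fast-time samples along the columns, and recovers the target range and radial velocity from the resulting range/Doppler map.

```python
# Minimal numpy sketch of FMCW range/Doppler processing (illustrative parameters).
import numpy as np

c = 3e8
f_sw, T_chirp, fc = 300e6, 50e-6, 77e9   # sweep bandwidth, chirp duration, carrier
N, L = 512, 128                          # samples per chirp, chirps per frame
fs = N / T_chirp
lam = c / fc

# Synthetic beat signal for a target at range R and radial velocity v_r (cf. Eq. (2)).
R, v_r = 40.0, 12.0
f_R = 2 * R * f_sw / (c * T_chirp)       # range-induced beat frequency
f_D = 2 * v_r / lam                      # Doppler shift
t = np.arange(N) / fs
chirp_idx = np.arange(L)[:, None]
beat = np.exp(1j * 2 * np.pi * ((f_R + f_D) * t[None, :] + f_D * T_chirp * chirp_idx))

# 2D FFT: fast-time FFT per chirp (range), then slow-time FFT across chirps (velocity).
range_fft = np.fft.fft(beat, axis=1)
rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

rng_bin = np.argmax(np.abs(rd_map).max(axis=0))
dop_bin = np.argmax(np.abs(rd_map).max(axis=1))
range_est = rng_bin * c / (2 * f_sw)                     # range resolution c/(2B)
vel_est = (dop_bin - L // 2) * lam / (2 * L * T_chirp)   # velocity resolution lam/(2 L T)
print(range_est, vel_est)   # close to 40 m and 12 m/s
```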
An alternative to FMCW radar is based on digital modulation, such as Phase Modulated Continuous Wave (PMCW) [22] and Orthogonal Frequency Division Multiplexing (OFDM) [23].This technique consists of generating arbitrary digital waveforms and applying matched filter processing at the receiver. PMCW radar transmits bi-phase modulated waveforms with duration τ.The transmitted signal phase is encoded in a binary sequence denoted as ϕ:[0, τ] → {0, π}, which is repeated every τ.The resulting waveform is reported in Equation (3), where f c is the frequency of the modulated carrier and ϕ(t) is the aforementioned binary sequence [24]: Figure 4 describes the principle scheme of a simplified, single input single output, PMCW radar.On the transmitter side, a bit stream, which represents the modulation code, is fed to the modulator.This generates the transmitted signal with constant envelope and phase changing between 0 and π (Equation (3)).While at the receiver side, the signal represented by Equation ( 4) is received: where x(t − t d ) is the transmitted signal delayed of t d by the target reflection and f D is the Doppler effect frequency [24].The target range is estimated by calculating the discrete cyclic correlation of the received signal with the transmitted one.This can be performed by digital matched filtering the received signal [25].At the same time, the target velocity can be extracted from the signal phase by means of an FFT-based Doppler processing.Different modulation codes (i.e., ϕ(t) in Equation ( 3)) have been proposed during years [26,27]; they can guarantee different properties of autocorrelation and suppress the effect of possible range ambiguities.An alternative to PMCW digital radar is the OFDM radar.Its waveform is composed of a set of orthogonal complex exponentials, whose complex amplitude is modulated by radar modulation symbols [28].Therefore, the inverse-FFT (IFFT) of the modulation symbol can be used to generate the OFDM waveforms.Having orthogonal symbols guarantees an efficient digital demodulation of the received signal, enabling digital and flexible processing.At the transmitter side, it is sufficient to calculate the IFFT of the modulation symbols to generate the waveform and then to modulate it via quadrature modulation.In a specular way, to reconstruct the transmitted symbol at the receiver side, a quadrature demodulator and the computation of the FFT are required.Then, to obtain the range-Doppler matrix (Figure 3), the modulation symbols are removed from the demodulated signal and the usual two-dimensional FFT is calculated [29].Interference problems can be prevented by adding a cyclic prefix to the OFDM symbol, which has to be removed at the receiver side [30]. Another advantage of the digital radar is the possibility of encoding information in the generated waveform, embedding vehicle to vehicle/infrastructure communication inside the sensing task [30].It is possible to use a communication symbol as a modulation symbol of the radar, as in the OFDM case.Using only one waveform for the two applications, i.e., communication and sensing, permits not only occupying of the available spectrum very efficiently, but also guarantees the continuous operation of both the functionalities of communication and sensing. 
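As a sketch of the PMCW processing described above, the range of a single static target can be recovered by computing the cyclic correlation of the received samples with the transmitted binary phase code via the FFT (digital matched filtering). The code length, chip rate and target range below are illustrative assumptions, not parameters of a specific radar.

```python
# Minimal sketch of PMCW range estimation by FFT-based cyclic correlation.
import numpy as np

rng = np.random.default_rng(1)
c = 3e8
n_chips, chip_rate = 1024, 2e9                 # code length and chip rate (assumed)
code = rng.choice([0.0, np.pi], size=n_chips)  # binary phase sequence phi(t) in {0, pi}
tx = np.exp(1j * code)                         # one period of the baseband PMCW waveform

# Echo: the transmitted code cyclically delayed by the round-trip time, plus noise.
true_range = 30.0
delay_chips = int(round(2 * true_range / c * chip_rate))
rx = np.roll(tx, delay_chips) + 0.1 * (rng.standard_normal(n_chips)
                                       + 1j * rng.standard_normal(n_chips))

# Cyclic correlation via FFT: corr = IFFT( FFT(rx) * conj(FFT(tx)) ), peak at the delay.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx)))
lag = np.argmax(np.abs(corr))                  # lag (in chips) of the correlation peak
range_est = lag / chip_rate * c / 2
print(range_est)                               # close to 30 m
```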
In conclusion, it is possible to state that analogue radars (e.g., FMCW) and digital one (e.g., OFDM) can achieve comparable performances in terms of accuracy and resolution.Digital radar guarantees a sufficiently high level of flexibility, but at a higher hardware cost, which is mainly due to the need for high-performance ADCs.On the other hand, analogue radar represents a relatively low-cost solution [4].However, unlike digital radar, it does not permit performing of communication tasks [31]. The target direction of arrival, both in elevation and azimuth, can be estimated by applying electronic beam forming on the receiver side of the radar system [32].In recent years, multiple-input, multiple-output (MIMO) radars have been addressed as a valid solution to increase the angular resolution.The main idea is that when using an array of M t transmitting antennas and an array of M r receiving antennas, using orthogonal waveforms, by exploiting time [33], frequency [34] and Doppler [35] division multiplexing, it is possible to obtain a synthetic array of M r M t antennas [36].For example, Doppler division multiplexing requires to add a different phase shift to the waveform transmitted by every antenna.This is performed by multiplying it by a complex constant, which can be selected to tune the parameters of the MIMO system. In conclusion, an extension of the 2D range-velocity matrix can be defined adding the direction of arrival of a target and its elevation.Thus, it is possible to define a point-cloud in a four-dimensional space defined by the range, velocity, direction of arrival and elevation of a target [13].If the sensitivity of the measuring system is high enough, it is possible to obtain an image of the surroundings of the vehicle with a quite high resolution, which can be used to extrapolate targets characteristics, and not only its presence [37].This is one of the main trends in automotive radars [38]. Signal Processing and Algorithms for Radars The radar data acquisition permits to obtain the range/Doppler matrix (Figure 3) or, if the information on azimuth and elevation is available, a 4D point-cloud, but to effectively detect a target, it is necessary to test and find the points of the matrix in which the normalized power is high enough to signal the presence of a target.To do so, a threshold can be defined. Considering the example reported in Figure 5, where a power spectrum obtained by a range/Doppler matrix is reported, two of the four highlighted bins represent actual targets: the green one and the yellow one.If the detection threshold is set to t 1 , the yellow target will not be detected, causing a target miss.On the other hand, if the threshold is set to t 3 , the two red spikes in the power spectrum will also be declared as detected targets (despite the fact that they are not), causing a false alarm.Indeed, threshold t 2 is the one that permits to detect both the green and yellow targets, ignoring the red ones.From the presented simple example, it is evident that, to effectively detect and confirm the presence of a target, it is necessary to estimate the noise power introduced by reflections on non-relevant targets, like parked vehicles, and adapt the threshold level accordingly [21].This permits us to reduce the probability of false alarms or to avoid ignoring relevant targets. 
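The virtual-array idea can be illustrated with a short numpy sketch: assuming the common half-wavelength receive spacing and a transmit spacing of M_r·λ/2, the pairwise sums of transmit and receive element positions form a uniform synthetic array of M_t·M_r elements, on which a simple FFT beamformer estimates the direction of arrival. All element counts, spacings and angles are illustrative assumptions, not a specific MIMO radar design.

```python
# Minimal sketch of MIMO virtual-array formation and FFT-based angle estimation.
import numpy as np

lam = 3e8 / 77e9
M_t, M_r = 3, 4
rx_pos = np.arange(M_r) * lam / 2             # dense receive array
tx_pos = np.arange(M_t) * M_r * lam / 2       # sparse transmit array
virt_pos = (tx_pos[:, None] + rx_pos[None, :]).ravel()   # M_t*M_r virtual elements

# Snapshot from a target at angle theta, then a simple FFT beamformer.
theta = np.deg2rad(20.0)
snapshot = np.exp(1j * 2 * np.pi * virt_pos * np.sin(theta) / lam)
n_fft = 256
spectrum = np.abs(np.fft.fftshift(np.fft.fft(snapshot, n_fft)))
u = np.fft.fftshift(np.fft.fftfreq(n_fft))    # spatial frequency in cycles per element
theta_est = np.rad2deg(np.arcsin(2 * u[np.argmax(spectrum)]))  # d = lam/2 -> sin(theta) = 2u
print(theta_est)                              # close to 20 degrees
```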
Many solutions to this problem have been proposed; in particular, three will be briefly discussed here: cell-averaging CFAR (CA-CFAR) [21], ordered-statistic CFAR (OS-CFAR) [39] and deep-learning-based CFAR (DL-CFAR) [40].Figures 6 and 7 report a visual comparison of the presented algorithms.As can be clearly seen, the main and only difference is in the way the threshold is estimated. CA-CFAR estimates the detection threshold upon the range/Doppler matrix cells' average value surrounding the target cell: if its power is higher than the average power of the surrounding cells, the target is declared [21].This algorithm is quite simple, but in the case of two near targets, it is possible to lose the presence of one of the two.OS-CFAR estimates the threshold by ordering the surrounding cells and selecting the value of the k-th ordered cell as the threshold.This overcomes the problem of the multiple target detection introduced by CA-CFAR algorithm, but requires us to perform the sorting of the reference cells, which is a computationally intensive task.Different sorting techniques can be used to reduce the overall computational complexity of this algorithm, as the one proposed in [41].A different approach to CFAR problem is to use deep learning based techniques.The solution proposed by [40] is to train a neural network to recognize and remove targets from the range/Doppler map in order to estimate the noise level more precisely, without the target's influence in the map.As reported by [40], this approach guarantees the best detection rates with respect to other CFAR algorithms, maintaining a computational cost comparable to that of the OS-CFAR. The CFAR algorithm is used to perform the detection task.Once a target is detected, it is essential to estimate its possible trajectory.For example, to perform adaptive cruise control (ACC), it is necessary to track the velocity and position of a target vehicle and set it as a target for the host vehicle. One of the most widespread tracking techniques relies on applying Kalman filtering [42].This type of filter is represented by a recursive set of equations that efficiently estimates the state of a process by minimizing the mean of the squared error between the prediction and the actual measurement [43].It is possible to directly apply this concept to tasks like Adaptive Cruise Control (ACC), where the state to be estimated is the target vehicle velocity and position [44]. The current trend in radar data processing is to adopt a deep-learning-based approach to perform object detection, recognition and tracking.Comprehensive reviews on deep learning applications to radar processing can be found in [45,46]; here, some noticeable examples are reported. Cheng et al. [37] proposed the Radar Points Detector network (RPDnet) as a deep learning solution to the detection problem, showing promising results and a better accuracy when compared to a classical CFAR approach.As presented by Kim et al. [47], combining a Support Vector Machine (SVM) and a You Only Look Once (YOLO) model, it is possible to efficiently recognize and classify targets in radar points; in particular, they propose to identify the boundaries of the targets with the YOLO model and then use the SVM to classify them, obtaining an accuracy of 90%.Zheng et al. [48] propose Spectranet as a possible deep-learning-based approach to moving object detection and classification with an accuracy of 81.9%. 
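The CA-CFAR and OS-CFAR rules described above can be summarized by the following one-dimensional sketch; the numbers of guard and training cells, the ordered-statistic index k and the scaling factors are illustrative choices rather than values taken from the cited works.

```python
# Minimal 1D sketch of cell-averaging and ordered-statistic CFAR detection.
import numpy as np

def _training_cells(power, i, n_train, n_guard):
    left = power[i - n_guard - n_train : i - n_guard]
    right = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
    return np.concatenate([left, right])

def ca_cfar(power, n_train=16, n_guard=4, alpha=8.0):
    """CA-CFAR: threshold = alpha * mean power of the training cells."""
    detections = np.zeros(len(power), dtype=bool)
    for i in range(n_train + n_guard, len(power) - n_train - n_guard):
        noise = np.mean(_training_cells(power, i, n_train, n_guard))
        detections[i] = power[i] > alpha * noise
    return detections

def os_cfar(power, n_train=16, n_guard=4, k=24, alpha=6.0):
    """OS-CFAR: threshold = alpha * k-th smallest training cell."""
    detections = np.zeros(len(power), dtype=bool)
    for i in range(n_train + n_guard, len(power) - n_train - n_guard):
        noise = np.sort(_training_cells(power, i, n_train, n_guard))[k - 1]
        detections[i] = power[i] > alpha * noise
    return detections

# Example: two targets buried in exponentially distributed noise power.
rng = np.random.default_rng(0)
spectrum = rng.exponential(scale=1.0, size=512)
spectrum[100] += 40.0
spectrum[300] += 25.0
print(np.flatnonzero(ca_cfar(spectrum)), np.flatnonzero(os_cfar(spectrum)))
```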
Lidar Light-based measuring was initially proposed as a technique to measure the cloud-base height. This was performed by counting the time taken by a light pulse to travel to a cloud and back to the ground [49]. Middleton et al. proposed light detection and ranging (lidar) [50] as a range measuring technique. However, with the invention of the laser [51], the lidar technique and principle became the ones known nowadays: measuring the round-trip time of a laser pulse traveling to the target and back. During recent years, lidar has become an essential component of ADASs; indeed, lidar systems can guarantee automotive safety requirements [52]. Operating Principles Lidar's working principle is very similar to that of radar. While the idea behind them is essentially the same, the main difference is that lidar transmits a laser signal instead of a radio-frequency one. The simplest possible lidar is composed of a laser and a photodiode. The delay measured between the transmitted and the received light is directly proportional to the target range; the basic ruling equation is the same as Equation (1). It is possible to construct a 3D point-cloud representing the vehicle surroundings by measuring the time of flight (ToF), i.e., the delay τ of Equation (1), in different directions [12], defining a point-cloud in either spherical or Cartesian coordinates depending on the type of processing and acquisition system. The 3D point-cloud can also be mixed with GPS data to perform Simultaneous Localization and Mapping (SLAM) [53]. While the main idea has remained almost unchanged, many techniques to determine the ToF and the point-cloud have been proposed, mainly working on the light modulation and on the way the laser direction is changed to build the 3D point-cloud. Lidar Technology and Modulations Currently, lidar is mainly operated with three different modulation schemes [7]: pulsed (direct time-of-flight), frequency-modulated continuous-wave (FMCW) and amplitude-modulated continuous-wave (AMCW). Pulsed lidar relies on the direct measurement of the round-trip delay between the transmitted and the reflected signals. This is performed by means of high-resolution timers or time-to-digital converters (TDCs) [56]. In particular, the transmitted pulse fires the start of a counter, while the sensing of the reflected signal stops it. 
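A minimal numeric sketch of the pulsed time-of-flight measurement follows: Equation (1) converts the measured delay into a range, and when the delay is obtained from a TDC the counter value and equivalent clock frequency fix the achievable resolution. The clock frequency used below is an assumed, illustrative value, not that of a specific TDC.

```python
# Minimal sketch of pulsed-ToF range arithmetic with a TDC-style counter.
C = 3e8  # speed of light (m/s)

def range_from_tof(tau):
    """Round-trip delay tau (s) -> target distance (m), d = c * tau / 2 (Eq. (1))."""
    return C * tau / 2.0

def tdc_range(counts, f_clk=10e9):
    """Range from a TDC count at equivalent clock frequency f_clk (assumed value)."""
    return C * (counts / f_clk) / 2.0

def tdc_resolution(f_clk=10e9):
    """Single-count range resolution: c / (2 * f_clk)."""
    return C / (2.0 * f_clk)

# A 100 ns round trip corresponds to 15 m; a 10 GHz-equivalent TDC gives 1.5 cm per count.
print(range_from_tof(100e-9), tdc_resolution(), tdc_range(4000))
```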
Figure 8 reports a pulsed ToF lidar scheme. First, the laser source produces a light pulse, which is detected by the start receiver that starts the count in the time-to-digital converter. Then, the target reflects the light pulse, which is sensed back by the stop receiver, which stops the time counting. At the end, the round-trip delay is measured, so Equation (1) can be applied directly to calculate the range. This technique is widely diffused, being a simple and low-cost solution [12]. Moreover, adopting the high-performance TDC presented in [56], it is possible to achieve a spatial resolution in the order of a few centimeters with a range of almost two kilometers. In frequency-modulated continuous-wave lidars, the emitted light field is frequency modulated and an interferometric technique is applied [6] to estimate the frequency difference between the transmitted and the received signals. From the frequency difference, it is possible to estimate the distance to the target; indeed, they are proportional, as reported in Equation (5) [7]: R = c f_r T / (2 B) (5) where c is the speed of light, f_r is the measured frequency difference, T is the duration of the modulation and B is its bandwidth. Using a coherent detector to estimate the frequency shift introduced by the target, it is possible to obtain a system resilient to interference from external light sources, like the sun or lamps. Moreover, FMCW lidar introduces many other advantages, such as the possibility of estimating the target velocity by means of the Doppler shift introduced by a moving target and, operating in continuous wave, it avoids high peak power emissions and the related eye-safety issues. The main problem related to the use of this modulation is that it requires the use of high-performance laser and optical systems, making the whole system expensive and complex [12]. In fact, the laser has to be able to perform a wide frequency span and, at the same time, it has to maintain coherency over a wide bandwidth. In the amplitude-modulated continuous-wave lidar, intensity modulation of a continuous light source is exploited; in particular, the round trip to the target and back introduces a phase shift proportional to the distance from the target, as reported in [7]: ΔΦ = 4 π f_M R / c, where R is the target range, c is the speed of light, f_M is the amplitude modulation frequency and ΔΦ is the phase shift introduced by the target. Figure 9 reports a scheme of principle of such a system. The modulation frequency heavily affects both the range resolution and the maximum detectable range, but it is not possible to increase the resolution without reducing the maximum range [7]. Indeed, this type of system is mainly used in near-range, high-resolution applications. 
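For a quick numerical feel of the two coherent schemes, the following sketch evaluates the FMCW relation of Equation (5) and the AMCW phase-to-range relation quoted above, together with the unambiguous-range limit that drives the resolution/range trade-off. All parameter values are illustrative.

```python
# Minimal sketch of FMCW and AMCW lidar range relations (illustrative parameters).
import numpy as np

C = 3e8

def fmcw_range(f_r, T, B):
    """Eq. (5): R = c * f_r * T / (2 * B)."""
    return C * f_r * T / (2.0 * B)

def amcw_range(delta_phi, f_m):
    """delta_phi = 4*pi*f_m*R/c  ->  R = c * delta_phi / (4*pi*f_m)."""
    return C * delta_phi / (4.0 * np.pi * f_m)

def amcw_unambiguous_range(f_m):
    """The phase wraps every 2*pi, so the maximum unambiguous range is c / (2*f_m)."""
    return C / (2.0 * f_m)

# FMCW: a 1 GHz sweep over 10 us with a 2 MHz beat frequency corresponds to 3 m.
print(fmcw_range(2e6, 10e-6, 1e9))
# AMCW: raising f_m from 10 MHz to 100 MHz improves resolution but shrinks the
# unambiguous range from 15 m to 1.5 m; a quarter-cycle shift at 10 MHz is 3.75 m.
print(amcw_unambiguous_range(10e6), amcw_unambiguous_range(100e6))
print(amcw_range(np.pi / 2, 10e6))
```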
In conclusion, the pulsed lidar seems to be the more feasible and widespread solution [12], representing a good compromise in terms of accuracy, if supported by a performing TDC, and cost, being a relatively cheap system. On the other hand, coherent detection, both FMCW- and AMCW-based, makes it possible to mitigate the effects of sensing light from external sources [11]. The presented measuring methods can be used to implement different lidar imaging systems, which can be categorized into rotor-based mechanical, scanning solid-state and full solid-state systems. Rotor-based mechanical lidar is the most mature imaging technique used in automotive applications [12]. This can provide a 360° horizontal field of view (FoV) by mechanically rotating the scanning system, i.e., the laser source and the receiver, while imaging along the vertical dimension is obtained by tilting the entire acquisition system or parts of it, like mirrors or lenses [11]. This solution is widely used and chosen by many manufacturers, since it represents a simple but effective solution [13], even if the rotating system can be bulky and it adds some inertia to the vehicle. An alternative to rotor-based mechanical lidar is the scanning solid-state lidar. While the former relies on rotating mechanical elements, the latter does not present any spinning mechanical parts. This permits a cheaper system with a reduced FoV. An optical phased array (OPA) [57] can be used to steer the laser beam and illuminate a specific spot at a time. On the receiver side, a similar technique [58] permits collecting only the light arriving from the illuminated spot. OPA-based imaging is collecting growing interest; indeed, not only can it be fully integrated on a chip, obtaining a smaller system, but the lack of inertia also permits an increase in the scanning frequency. The full solid-state lidar is a system in which the laser source flashes and illuminates [59] all the surroundings of the vehicle. Then, an array of photodetectors collects the reflected light, and their number and density define the spatial resolution of the imaging system. This type of system can be very expensive, due to the large number of receivers. Moreover, having to illuminate the entire FoV, the laser source requires more power than in other scanning systems. Due to these limitations, full solid-state lidar is applied mainly in short-range scenarios like blind-spot detection of big targets [52]. The current trend is to move in the direction of solid-state lidar [60]; this is mainly due to its smaller size compared to the mechanically rotating lidar and the absence of inertial components. 
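Independently of the scanning technology, each fired beam ultimately yields a range at a known azimuth and elevation; the short sketch below shows the conversion of such spherical returns into the Cartesian 3D point-cloud used by the subsequent processing. The angles and ranges are made-up example values.

```python
# Minimal sketch: spherical lidar returns (range, azimuth, elevation) -> Cartesian points.
import numpy as np

def to_cartesian(r, azimuth, elevation):
    """Ranges (m) and angles (rad) in the sensor frame -> stacked (x, y, z) points."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

# Example: five returns at 15 m, spread over a 120-degree horizontal scan, 2 degrees up.
az = np.deg2rad(np.linspace(-60, 60, 5))
el = np.deg2rad(np.full(5, 2.0))
print(to_cartesian(np.full(5, 15.0), az, el))
```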
Bogoslavskyi proposed an efficient clustering algorithm [61], which avoids explicitly generating the 3D point-cloud as a set of points in space, and instead exploits the range image generated by a scanning rotating lidar, i.e., an image in which every pixel contains the distance to the target and corresponds to a particular elevation and azimuth angle. The idea behind this clustering algorithm is to consider two neighboring points as belonging to different clusters if their depth difference is substantially larger than their displacement in the range image [61]. Some ground-filtering or weather noise removal algorithms [62] can be applied in order to remove noise from the point-cloud and perform better clustering of the points. Following the flow in Figure 10, semantic information is added to the objects by a first feature extraction step with a subsequent classification step based on them. Many features can be used as descriptors of the detected objects [63], such as the object volume computed on its bounding box, the mean object intensity [64] or features based on the statistics of the point-cloud [65,66]. Then, the extracted features are used to train and evaluate machine-learning based classifiers, like support vector machines or evidential neural networks (ENNs) [67]. In particular, an ENN makes it possible to label objects whose features were not in the training set as unknown, reducing the labeling error rate [67]. After that, the object tracking task is performed. This can be performed by correlating the current object position with the previous one. Lee proposed a Geometric Model-Free Approach with a Particle Filter (GMFA-PF) [68] that can be executed during the sensing period on a single CPU and is agnostic to the shape of the tracked object; an alternative to this is the use of Kalman filtering [11]. In conclusion, intention prediction is performed. Indeed, in an autonomous driving context, it is fundamental to predict the behavior of the detected object in order to take the correct decisions depending on the possible future behavior of the other road users. The prediction is performed mainly by means of machine-learning methods that estimate the maneuvers of the detected object from its current state. Many driving models have been developed to predict surrounding vehicles' behavior [69], such as theory-based, physics-based and data-driven models. The behavior modeling will not be presented here, since it is beyond the interests of this review. What is important to understand is that the lidar data can be used in such a context to provide real-time inputs to these models and support the decision-making process of autonomous driving vehicles. The current trend is to adopt deep learning methods to extract information from the lidar point-cloud [10]. Zhou et al. [70] propose VoxelNet as a deep neural network (DNN) for 3D point-cloud target detection and classification. Other examples of DNN application to lidar point-clouds are HVNet [71] and SECOND [72]. 
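A minimal sketch of range-image clustering in the spirit of the criterion quoted above is given below: neighbouring pixels are merged only when their depth difference is small compared with the lateral displacement implied by one angular step. The angular step and threshold factor are illustrative assumptions, and this is not a faithful re-implementation of the algorithm in [61].

```python
# Minimal sketch: connected-component clustering on a lidar range image.
import numpy as np
from collections import deque

def cluster_range_image(rng_img, d_angle=np.deg2rad(0.4), factor=5.0):
    """rng_img: 2D array of ranges in m; returns one integer label per pixel (0 = none)."""
    h, w = rng_img.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sr in range(h):
        for sc in range(w):
            if labels[sr, sc] or rng_img[sr, sc] <= 0:
                continue
            next_label += 1
            labels[sr, sc] = next_label
            queue = deque([(sr, sc)])
            while queue:                      # breadth-first growth of the cluster
                r, c = queue.popleft()
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if not (0 <= nr < h and 0 <= nc < w):
                        continue
                    if labels[nr, nc] or rng_img[nr, nc] <= 0:
                        continue
                    d1, d2 = rng_img[r, c], rng_img[nr, nc]
                    # same object only if the depth step is comparable to the lateral
                    # displacement implied by a single angular step
                    if abs(d1 - d2) < factor * min(d1, d2) * np.tan(d_angle):
                        labels[nr, nc] = next_label
                        queue.append((nr, nc))
    return labels
```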
Sensor fusion can be performed at different levels [74]; in particular, low-level sensor data can be passed unprocessed to a fusing algorithm [75], mid-level features extracted by every sensor can be fused to define a global object list with fused features, or the high-level object lists of every sensor can be fused together into a global one. The overall effect of sensor fusion algorithms is to obtain an object list containing objects whose features and presence have been detected by different sensors. To do so, many algorithms have been proposed in the literature [76,77]. Radar and camera data can be mixed at mid-level [78]. For example, it is possible to use the radar to detect the presence of a target in a particular area and then use the visual data from the camera to recognize the object in the surroundings of the detected target, avoiding the exploration of the entire image. The lidar point-cloud can be mixed with camera images [79] to add depth information to the images.

Digital Hardware Architectures for On-Vehicle Sensing Tasks

Many sensor-related computational tasks can be accelerated in hardware. In recent years, many solutions have been proposed to address different problems and algorithms, with both FPGA and ASIC implementations. Here, some examples are reported. Subburaj et al. [80] proposed a single-chip solution for radar processing that contains an FFT accelerator for the sensing task, a high-performance DSP core for digital filtering and an MCU for the detection task. The mixed software/hardware approach guarantees a higher level of flexibility but, on the other hand, power consumption and computational efficiency are limited. The main advantage of the single-chip solution lies in the fact that the same platform can drive and sample the analogue front-end, perform the detection tasks and communicate with the rest of the vehicle. Other noticeable implementations of single-chip radar processing systems that include hardware acceleration of computationally intensive tasks are presented in [81][82][83].

Many aspects of radar signal processing can be accelerated in hardware [84]. One of the heaviest computational tasks is the CFAR algorithm, since it needs to perform many operations on large matrices. Zhang et al. [85] report an FPGA implementation of the CA-CFAR algorithm obtaining real-time performance, while [86] reports an implementation of ML-CFAR, i.e., a CFAR technique based on the estimation of the signal mean level. A more flexible and adaptive approach to the CFAR problem is presented in [87]. A highly parametrizable and configurable implementation is reported in [88], which features 7 different CFAR algorithms and guarantees real-time performance. Other noticeable hardware implementations of the CFAR algorithm are discussed in [89][90][91].

The literature provides other examples of hardware accelerators for radar data processing. Damnjanović et al. [92] present a CHISEL model to generate 2D-FFT accelerators, both for basic range-Doppler processing and specialized for range-angle processing; they also present a custom memory controller to deal with radar data. Saponara [84] proposes a set of hardware macrocells to perform detection starting from the base-band signal. In particular, range, Doppler and azimuth processing is performed with a set of hardware FFTs, while a hardware implementation of CA-CFAR is adopted to perform peak detection.
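To make the cell-averaging CFAR task concrete, the sketch below runs a basic software CA-CFAR over a 1-D power profile, such as one column of the range/Doppler matrix. The window sizes and false-alarm rate are illustrative assumptions; the cited hardware designs pipeline and parallelize this same computation rather than looping over cells.

```python
# Minimal cell-averaging CFAR (CA-CFAR) sketch on a 1-D power profile.
import numpy as np

def ca_cfar(power: np.ndarray, guard: int = 2, train: int = 8, pfa: float = 1e-3) -> np.ndarray:
    """Return a boolean detection mask the same length as `power`."""
    n_train = 2 * train                                   # training cells on both sides
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)     # threshold factor for the target Pfa
    detections = np.zeros_like(power, dtype=bool)
    for i in range(train + guard, len(power) - train - guard):
        lead = power[i - train - guard:i - guard]         # training cells before the CUT
        lag = power[i + guard + 1:i + guard + train + 1]  # training cells after the CUT
        noise_level = (lead.sum() + lag.sum()) / n_train
        detections[i] = power[i] > alpha * noise_level
    return detections

# Example: exponential noise with one strong cell at index 100
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 256)
profile[100] += 40.0
print(np.flatnonzero(ca_cfar(profile)))   # expected to include index 100
```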
A speed-up in the overall processing can be introduced by performing the interference mitigation task on the range-Doppler matrix in hardware; for example, Hirschmugl et al. [93] propose a quantized Convolutional Neural Network (CNN) implementation to perform this task directly in the frequency domain before the object detection task, and the hardware implementation is 7.7 times faster than the presented firmware implementation.

Another hardware-compliant task is the early detection and recognition of obstacles; indeed, this is a priority task, since it is necessary to rapidly stop the vehicle if an obstacle is present, and different alarm signals can be produced depending on the type of target. A system that can perform this task in real time is presented in [94]. In [95], it is possible to find an early-detection hardware system that guarantees a valid detection in 41.72 µs, giving optimal performance to the emergency braking system. A deep learning approach based on quantized YOLO CNNs can be found in [96]. The problem of direction-of-arrival estimation is addressed in [97]; the authors present a hardware architecture to perform maximum-likelihood direction-of-arrival estimation that is 7 times faster than the proposed software version. Tables 2-4 report a brief overview of the presented hardware accelerators, mainly focusing on the implemented tasks and the main reported features. Operating frequency (f_op) and implementation information are also reported.

A commercial lidar (for example the Velodyne VLS-128) can produce a point-cloud at a rate of 9.6 Mpoints/s [98]. This can be quite a high data rate to handle in real time, so a hardware-accelerated data compression strategy, such as the one presented in [98], can be adopted to meet real-time performance requirements. In the context of lidar sensing, many tasks can be accelerated in hardware; they are mainly related to point-cloud data processing, which is performed by adopting machine learning techniques. Therefore, the acceleration of neural networks is required. Some implementations of this type of accelerator, specialized for lidar processing, are present in the literature [99,100]. Another computationally heavy task is the weather denoising of the point-cloud; this can also be accelerated in hardware, as proposed in [101]. Real-time lidar localization can also be accelerated; in particular, Bernardi et al. [102] present a hardware accelerator for a particle filter on an FPGA platform to perform this task in real time.

A completely different approach to accelerating on-vehicle sensor processing is the use of GPUs [103]; indeed, many lidar point-cloud processing algorithms can be parallelized and efficiently executed on GPUs. Table 5 summarizes the implemented tasks and main features of the presented accelerators for lidar-related computational tasks; the number of LUTs used in the reported FPGA implementations is also reported.
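A quick back-of-the-envelope check of the point rate quoted above helps explain why compression and hardware acceleration become attractive. The per-point payload assumed below (three coordinates plus intensity as 32-bit values) is an illustrative assumption, not the sensor's actual packet format.

```python
# Rough bandwidth implied by the 9.6 Mpoints/s figure cited in the text.
points_per_s = 9.6e6
bytes_per_point = 4 * 4            # assumed: x, y, z, intensity as float32
throughput_MBps = points_per_s * bytes_per_point / 1e6
print(f"~{throughput_MBps:.0f} MB/s of raw point-cloud data")   # roughly 150 MB/s
```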
Comparison and Conclusions

In conclusion, the ideas behind radar and lidar are very similar, and many common aspects have been highlighted. However, some differences have to be considered. Table 6 presents a qualitative comparison between radar and lidar. First of all, the transmitted signal is different: radar uses an RF signal, while lidar uses a laser. Lidar produces a temporally consistent 3D point-cloud, while it is almost impossible to guarantee time consistency in a radar detection array. Moreover, the DSP/data processing flow is quite different [13]. From the performance point of view, lidar is more suitable when short-range and high-resolution capabilities are required, for example in a city-like scenario, while radar is more suitable when it is necessary to detect distant, fast-moving targets, as in a highway scenario [74]. One of the main challenges for radar development is its low angular resolution. In fact, to achieve the sub-degree resolution of lidar, a large radar aperture, or a high number of antennas, is required, making actual on-vehicle integration unfeasible. On the other hand, lidar is a very expensive system, and its operational range remains quite limited due to eye-safety restrictions [13]. Both sensing systems present many open challenges, and research is still active from both the academic and industrial points of view. This review's aim was to summarize the main aspects and the state of the art of these two important technologies and to highlight their differences and the open points relative to these two sensing systems.

Figure 3. Range-Doppler matrix with a target at distance d_t and velocity v_t.
Figure 5. Illustration of the power spectrum along a column of the range/Doppler matrix.
Figure 9. Scheme of the principle of an AMCW lidar system.
Table 2. Examples of full radar detection systems.
Table 3. Examples of radar hardware accelerators for CFAR.
Table 4. Examples of radar hardware accelerators.
Table 5. Examples of lidar hardware accelerators.
Table 6. Differences between radar and lidar.
8,808.8
2023-10-08T00:00:00.000
[ "Engineering", "Computer Science" ]
Synergistic Effects of Magnetic Z-Scheme g-C3N4/CoFe2O4 Nanofibres with Controllable Morphology on Photocatalytic Activity The rational design of interfacial contacts plays a decisive role in improving interfacial carrier transfer and separation in heterojunction photocatalysts. In Z-scheme photocatalysts, the recombination of photogenerated electron–hole pairs is prevented so that the redox capacity is maintained. Here, one-dimensional graphitic carbon nitride (g-C3N4)/CoFe2O4 fibres were synthesised as a new type of magnetic Z-scheme visible-light photocatalyst. Compared with pure g-C3N4 and CoFe2O4, the prepared composite photocatalysts showed considerably improved performance for the photooxidative degradation of tetracycline and methylene blue. In particular, the photodegradation efficiency of the g-C3N4/CoFe2O4 fibres for methylene blue was approximately two and seven times those of g-C3N4 and CoFe2O4, respectively. The formation mechanism of the Z-scheme heterojunctions in the g-C3N4/CoFe2O4 fibres was investigated using photocurrent spectroscopy and electrochemical impedance spectroscopy. We proposed that one of the reasons for the improved photodegradation performance is that the charge transport path in one-dimensional materials enables efficient photoelectron and hole transfer. Furthermore, the internal electric field of the prepared Z-scheme photocatalyst enhanced visible-light absorption, which provided a barrier for photoelectron–hole pair recombination. Introduction Environmental pollution and sustainable development of energy are among the most important issues in today's society; research in these areas has, therefore, become increasingly focused on cost-effective and efficient solutions [1][2][3][4][5]. In particular, effective methods are required for the removal of organic pollutants that are rich in aromatic ring structures, such as antibiotics, as they are typically toxic, carcinogenic, and mutagenic. Tetracycline (TC) is commonly studied as a model contaminant because it is used extensively in livestock, fisheries, and human medicine and as a feed additive [6][7][8][9][10]. However, conventional degradation and biodegradation methods have limited efficiency in eliminating TC, thus resulting in residues in surface water, mud, and drinking water that pose a threat to both the natural environment and human health [11][12][13][14][15]. Pioneering work on anatase titanium dioxide (TiO 2 ) semiconductors has provided a new avenue for addressing these challenges by employing photocatalysis [16][17][18]. However, TiO 2 only absorbs ultraviolet (UV) light with a utilisation rate of only 2%; thus, new methods need to be developed to improve visible-light absorption rates [19,20]. Graphitic carbon nitride (g-C 3 N 4 ), which is a non-metallic conjugated polymer, has recently attracted attention as a photocatalytic material owing to its satisfactory electronic energy band structure (forbidden band width of 2.7 eV) and physicochemical stability [21,22]. g-C 3 N 4 has the advantages of being both inexpensive and easily available owing to the abundance of carbon-and nitrogen-containing raw materials in nature. However, g-C 3 N 4 alone as a photocatalyst suffers from rapid charge recombination. To overcome this problem, g-C 3 N 4 has been integrated with various materials, including metal co-catalysts [23], metal oxides [24], chalcogenides [25], metal-sulfur compounds [26], and spinel ferrites [27]. 
CoFe2O4, a typical bimetallic spinel ferrite, is characterised by its high stability and low cost. Moreover, CoFe2O4 is more biocompatible than monophase cobalt catalysts because the presence of Fe3+ reduces Co2+ spillover, thus inhibiting toxicity and secondary contamination arising from cobalt ion leaching from the material surface. Although CoFe2O4/g-C3N4 binary nanomaterials have been reported for visible photocatalytic pollutant degradation, these materials have been applied in block (three-dimensional) or sheet (two-dimensional) forms. In contrast, one-dimensional fibrous structures have rarely been investigated [28]. One-dimensional fibres can confine charge transfer in space, thus improving the transport efficiency of photogenerated electrons and holes and providing large surface areas and high porosity. This increases the number of active sites for the catalytic reaction and promotes photocatalytic activity. Conventional heterojunctions, in which electrons are separated from holes by charge transfer, often have low redox potentials because active sites with high redox potentials are not retained [29]. In contrast, in Z-scheme heterojunctions, internal interactions between the materials cause a specific fraction of electrons and holes to recombine, thus preserving the active sites with the highest redox potentials [30,31]. Zhang et al. used mixing and programmed heating to synthesise Z-scheme KBiO3/g-C3N4 heterojunction photocatalysts, which exhibited considerably improved photooxidative activity for the degradation of crystal violet and phenol. Javad et al. designed and demonstrated a Z-scheme g-C3N4/BiVO4 heterostructure, which achieved a four-fold improvement in performance compared with pure BiVO4. Inspired by these findings, we synthesised a series of one-dimensional Z-scheme g-C3N4/CoFe2O4 heterojunction catalysts using an electrostatic spinning technique combined with high-temperature calcination (Scheme 1). The prepared catalysts were found to have good physicochemical properties, including increased surface area, enhanced visible-light absorption, and high recyclability. These photocatalysts were shown to effectively enhance electron transfer efficiency and concomitantly prolong the involvement of photogenerated carriers over the degradation period. In addition, the charge transfer mechanism of the synthesised Z-scheme heterojunction structure was proposed. The active substances that play a leading role in the removal of organic pollutants were identified by conducting quenching experiments and electron paramagnetic resonance (EPR) spectroscopy.

Scheme 1. Synthesis of the composite photocatalysts used in this study.

Synthesis of the One-Dimensional Z-Scheme g-C3N4/CoFe2O4 Heterojunction Catalysts

The g-C3N4/CoFe2O4 binary catalysts were prepared using electrostatic spinning and high-temperature calcination. First, pure g-C3N4 and CoFe2O4 (denoted as CN and CFO, respectively) were prepared as described in the Supporting Information. Then, CoFe2O4 was mixed with dried electrostatically spun g-C3N4 fibre films to the desired mass ratio (0.01, 0.05, 0.1, 0.15, or 0.2 g-C3N4:CoFe2O4), soaked in a cuvette for 3 h, and dried at 60 °C. The one-dimensional g-C3N4/CoFe2O4 nanofibre films were then prepared by placing the dried mixture in a tube furnace under an argon atmosphere and ramping the temperature to 600 °C (1 °C/min) for 2 h.
The products were denoted as CN/CFO-0.01, CN/CFO-0.05, CN/CFO-0.1, CN/CFO-0.15, and CN/CFO-0.2 based on the g-C3N4 mass ratio.

Photocatalytic Degradation Experiments

To study its photocatalytic degradation activity, the selected catalyst (25 mg) was placed in a quartz glass tube containing 50 mL of contaminant solution (TC, 10 mg L−1) or methylene blue (MB, 50 mg L−1). The mixture was then subjected to a dark reaction for 60 min to bring the catalyst to a stable adsorption state. The system was then irradiated with a xenon lamp (300 W) with a 420 nm cut-off filter to initiate the photodegradation reaction. Every 30 min, 3 mL of the solution was removed with a 5 mL pipette for concentration measurements.

Structure and Morphology of CN/CFO

Scanning electron microscopy (SEM) images of the calcined CN/CFO composite fibre films are shown in Figure 1a, with the corresponding transmission electron microscopy (TEM) images shown in Figure 1b. These images show that the intricate fibrous morphology of the CN/CFO-0.05 composite fibrous film consists of lamellar g-C3N4 with granular CoFe2O4 uniformly distributed through the nanofibres. The fibres, which had diameters of 300-400 nm, formed a rough mesh structure. Such a microstructure greatly improves the transport efficiency in solution, increases the number of active reaction sites, and improves the stability of the catalyst. As shown in Figure 1d, the observed lattice widths of 0.25 and 0.48 nm correspond to the (311) and (111) crystal planes of CoFe2O4, respectively, in agreement with the obtained X-ray diffraction (XRD) results (Figure 2b). Energy-dispersive spectroscopy (EDS) element mapping (Figure 1f-j) also reveals that the elements are evenly distributed, indicating the successful formation of a composite between the two catalytic materials. The chemical structure and bonding of the composite fibre films were studied by Fourier transform infrared spectroscopy (FT-IR) (Figure 2a).
All the composite catalysts exhibit characteristic absorption bands at approximately 1628, 1538, 1394, 1318, and 1240 cm−1, which correspond to the stretching vibrations of the triangular C-N(-C)-C and C-NH-C units in the C-N heterocyclic system. The absorption band at approximately 801 cm−1 can be attributed to the breathing-mode vibration of the triazine unit, and the band at approximately 572 cm−1 is derived from Fe-O bonding at the octahedral site of the spinel CoFe2O4 unit. The characteristic absorption bands of both CFO and CN correspond to those observed in the spectra of the various CN/CFO photocatalysts, indicating successful composite formation. In addition, the broad bands at 3000-3500 cm−1 may be attributed to the stretching vibrations of hydroxyl (-OH) groups and hydrogen bonds resulting from the adsorption of water from the environment [33]. The effect of composite formation with CN on the structure of CFO was investigated using XRD (Figure 2b). Here, crystal diffraction peaks corresponding to g-C3N4 (100) and (002) are visible at 13.0° and 27.4°, respectively [34]. The (100) and (002) crystal planes are formed by interlayer stacking of the repeating structure of the tri-s-triazine unit and the laminar structure of the conjugated aromatic ring, respectively. Conversely, CFO exhibited characteristic diffraction peaks at 18. [35]. The composite catalysts exhibited the characteristic peaks of both CN and CFO to higher degrees, indicating that the material is relatively crystalline. These results also indicate that composite formation has little effect on the lattice structure. Figure 2c displays the N2 adsorption-desorption isotherms of CFO, CN/CFO-0.05, and CN/CFO-0.01. Each isotherm exhibits an adsorption hysteresis loop, which corresponds to a system in which capillary condensation in the porous adsorbent occurs. These features are consistent with those of type IV isotherms, according to the latest IUPAC classification, and are characteristic of mesoporous materials [36]. As shown in Figure 2d, the catalyst materials exhibit pore size distributions of 4-15 nm. The similar porous structure provides more active sites for the adsorption process, thus enabling the enhancement of photocatalytic performance in this material.
Table 1 shows the specific surface area values of CFO, CN/CFO-0.05 and CN/CFO-0.01. CN/CFO-0.05 has the largest specific surface area and provides the most reaction sites for the redox reactions in the photocatalytic process. Figure 3a shows the atomic composition of CN/CFO-0.05 as determined by X-ray photoelectron spectroscopy (XPS). Here, peaks corresponding to Co 2p, Fe 2p, O 1s, C 1s, and N 1s are observed, indicating the successful preparation of the Z-scheme heterojunctions. Figure 3b-f shows the high-resolution Co 2p, Fe 2p, O 1s, C 1s, and N 1s spectra of the catalysts. The observed peak values in the Fe 2p spectrum are 710.47 and 723.77 eV, which can be attributed to Fe 2p3/2 and Fe 2p1/2, respectively, both of which are derived from Fe3+. These peaks were shifted towards lower binding energies by 0.05 and 0.08 eV, respectively, for the CN/CFO composites (Figure 3b). Figure 3c further shows that, because of the characteristics of the CN/CFO composites, the Co 2p3/2 and Co 2p1/2 spectral peaks of CFO (779.5 and 795.3 eV, respectively) also shift by 0.06 and 0.05 eV, respectively. In the fitted O 1s spectrum of CFO (Figure 3d), the observed lattice oxygen (529.41 eV) and hydroxyl oxygen (531.84 eV) peaks indicate the adsorption of oxygen species onto the material surface. In the C 1s spectra (Figure 3e), the peaks at 284.82 and 288.18 eV are shifted towards higher binding energies in the composite by more than 0.1 eV because of the strong interactions between the CN and CFO components. The characteristic N 1s peak, which was attributed to C-N bonds, was shifted towards higher binding energies by 0.03 eV for the CN/CFO composites (Figure 3f). This higher binding energy arises because the electrons in the CN conduction band (CB) form complexes with the holes in the CFO valence band (VB) [37]. High-performance Z-scheme photocatalysts can be prepared by edge-selective deposition coupled with long terminal junctions and interfacial heterojunctions [37].

Photocatalytic and Photovoltaic Properties of CN/CFO

The efficiency of photocatalysed pollutant degradation of different compositions of CN/CFO was investigated to determine the optimal material ratio. Photocatalytic degradation experiments were, therefore, carried out using TC and MB as the model pollutants (Figure 4a,d). To reach adsorption equilibrium, the catalyst/pollutant systems were placed in darkness for 60 min at the first stage of each reaction. The observed TC degradation rate of CN/CFO-0.05 was approximately 3 and 2.3 times those of CFO and CN, respectively, indicating that the Z-scheme heterojunction composite membrane provides a considerable improvement in photocatalytic efficiency. The degradation experiment with MB showed a similar trend, giving a maximum degradation efficiency of 95%.
For the degradation of TC and MB, the CN/CFO-0.05 photocatalyst exhibited the largest promotion effect (Figure 4b,e). The photocatalytic reaction kinetic data were fitted to a Langmuir-Hinshelwood kinetic model (ln(C0/Ct) = kt, where C0 is the pollutant concentration before the reaction was initiated and Ct is the corresponding concentration at time t (min) during the photocatalytic reaction). CN/CFO-0.05 exhibited the highest catalytic activity for TC degradation, with a reaction rate constant of 0.01207 min−1, which was 4.6 and 3.8 times higher than those of CFO (0.00265 min−1) and CN (0.00319 min−1), respectively. The same trend was observed for the degradation of MB. This substantially enhanced catalytic performance of the composite fibrous membranes may, therefore, indicate the inhibition of photoelectron-hole pair recombination by the Z-scheme heterojunction. The recoverability and recyclability of the photocatalysts were also investigated (Figure 4c,f). After three cycles, the catalytic efficiency of the CN/CFO composite fibrous membrane for TC degradation decreased by only 6%, thus indicating that the one-dimensional Z-scheme CN/CFO heterojunction structure has good chemical stability.
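The sketch below shows how the pseudo-first-order rate constant quoted above can be recovered from concentration-time data by fitting ln(C0/Ct) = kt. The concentration values used are made-up placeholders, not the measured TC data.

```python
# Illustrative Langmuir-Hinshelwood (pseudo-first-order) fit: ln(C0/Ct) = k*t.
import numpy as np

t = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)   # irradiation time, min
c = np.array([10.0, 7.2, 5.3, 3.9, 2.8, 2.1, 1.5])           # concentration, mg/L (illustrative)
y = np.log(c[0] / c)
k, _ = np.polyfit(t, y, 1)                                    # slope = rate constant, min^-1
removal = 100 * (1 - c[-1] / c[0])
print(f"k = {k:.5f} min^-1, removal after 180 min = {removal:.0f}%")
```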
EPR experiments using 5,5-dimethyl-1-pyrroline N-oxide (DMPO) as a free radical trapping agent were performed to reveal the mechanism of TC photocatalytic degradation. Figure 4g shows that the high-intensity DMPO-·O2− signal, which cannot be detected in dark conditions, is observed under visible light. This indicates that superoxide radicals (·O2−) play a crucial role in the photocatalytic degradation mechanism [38]. Photocatalytic radical trapping experiments were also performed by adding free radical trapping agents in the presence of the catalyst (Figure 4h). All the trapping agents inhibit the degradation of TC, suggesting that the corresponding reactive species all contribute to the degradation process. In particular, a high degree of inhibition is observed for TC degradation following the addition of TEMPOL as a superoxide radical (·O2−) trapping agent [39]. The contribution of superoxide radicals to the catalytic process of CN/CFO is therefore dominant, as confirmed by the EPR and free radical trapping experiments. Subsequently, the energy band structures of the catalysts were plotted using the obtained XPS VB data (see Section 3.3) and the measured bandgap (Eg) of the catalysts, as shown in Figure 4i. The obtained CB potentials were −1.24 and −1.81 eV (vs. NHE) for CN and CFO, respectively, whereas the VB potentials were 1.48 and −0.41 eV (vs. NHE) for CN and CFO, respectively.

The light absorption capacities of the composite fibre films were evaluated by ultraviolet-visible (UV-Vis) absorption spectroscopy (Figure 5a). CN, which is responsive to UV and visible light, exhibits an absorption edge of approximately 440 nm, which corresponds to a bandgap of 2.93 eV (Figure 5b). Conversely, CN/CFO exhibited longer-wavelength absorption edges and larger absorbance in the visible range. This indicates that the addition of CN broadens the absorption range of CFO in visible light. The red shift in the absorption band of the CN/CFO composite may result from the formation of the Z-scheme heterojunction, which allows the effective separation of electron-hole pairs, thus maintaining a high redox potential and increasing photocatalytic activity.
The Eg values of the photocatalysts were calculated using Equation (1): (αhν)^(2/n) = A(hν − Eg) (1), where α is the absorption coefficient, h is Planck's constant, ν is the optical frequency, A is a constant, and Eg is the bandgap energy [40]. n is related to the optical transition type of the semiconductor (direct-transition semiconductor, n = 1; indirect-transition semiconductor, n = 4). CFO and CN are direct-transition semiconductors [41]; thus, the band gaps of the hybrid fibrous films, CFO, and CN were obtained by plotting (αhν)^2 vs. hν (Figure 5b) and are listed in Table 2.

To investigate the separation efficiency of the photogenerated electrons (e−) and holes (h+), photoluminescence (PL) spectra of the photocatalysts were obtained (Figure 5c). Because the PL emission peaks are caused by the recombination of photogenerated electrons and holes, a relatively low emission peak intensity corresponds to a reduced carrier recombination efficiency [42]. CN/CFO-0.05, which exhibits the largest photocatalytic activity, has the smallest PL spectral intensity, which is consistent with the results of the photocatalytic studies above. The electron-transfer efficiencies and pathways of the catalysts were further assessed using electrochemical impedance spectroscopy (EIS). In general, the smaller the radius of the arc in the electrochemical impedance spectrum, the smaller the charge-transfer resistance. Figure 5d shows that CN/CFO-0.05 exhibits the smallest radius, indicating that this catalyst has the largest electron-transfer efficiency [43]. Thus, the CN/CFO Z-scheme heterojunction structure enhanced the conductivity and mobility of the photocatalyst and contributed to the improved photocatalytic performance, as shown by the PL and UV-Vis results. Transient photocurrent response (I-t) experiments were then performed to investigate the photogenerated carrier separation efficiency and current density properties of the catalysts [44]. As shown in Figure 5e, CN/CFO-0.05 exhibits a higher photocurrent density than the other samples under cyclic light switching conditions. This behaviour may be attributed to the suppression of photogenerated carrier recombination by the Z-scheme heterojunction, which results in a higher visible-light absorption capacity, electron transport capacity, and electron-hole separation efficiency, thus leading to enhanced photocatalytic activity. In addition, the I-t curve for CN/CFO-0.05 remained constant after five on/off cycles, which demonstrates the good electrochemical stability of the catalyst. The degradation efficiencies of various previously reported catalysts for TC degradation were compared with those of the catalysts in the present study, as shown in Table 3. The one-dimensional Z-scheme heterojunction photocatalyst prepared in this study showed higher catalytic efficiency than other types of photocatalysts for the degradation of TC at the same concentration under visible-light irradiation.
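Returning to Equation (1), the short sketch below illustrates how a Tauc-plot band gap is extracted for a direct-transition semiconductor by extrapolating the linear region of (αhν)^2 vs. hν to the energy axis. The absorption data are synthetic placeholders, not the measured spectra of Figure 5.

```python
# Illustrative Tauc-plot band-gap estimate for a direct-gap material.
import numpy as np

h_nu = np.linspace(2.0, 3.5, 200)                  # photon energy, eV
eg_true = 2.93                                      # assumed gap for the synthetic curve
alpha = np.where(h_nu > eg_true, np.sqrt(h_nu - eg_true) / h_nu, 0.0)   # direct-gap model
tauc = (alpha * h_nu) ** 2                          # (alpha*h*nu)^2, linear above the edge

# fit the linear rise just above the edge and extrapolate to (alpha*h*nu)^2 = 0
mask = (tauc > 0.05 * tauc.max()) & (tauc < 0.6 * tauc.max())
slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
print(f"estimated Eg = {-intercept / slope:.2f} eV")   # ~2.93 eV for this synthetic curve
```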
Determination of the Photocatalytic Mechanism

The valence band energies (EVB) of CFO and CN were determined using XPS VB spectroscopy (Figure 6a,b). The EVB values of CFO and CN were −0.41 and 1.48 eV, respectively. The Eg values for CFO and CN were 1.4 and 2.72 eV, as calculated from Figure 5b. The conduction band energies (ECB) of CFO and CN were then calculated using Equation (2) [45]: ECB = EVB − Eg (2). The calculated ECB values for CFO and CN were −1.81 and −1.24 eV, respectively.

To understand the energy relaxation and recombination processes of photogenerated carriers in the photocatalysts, their kinetic decays were investigated using transient fluorescence lifetime measurements (Figure 6c). Fitting revealed that a ternary (tri-)exponential function best described the carrier decay kinetics [13], as follows: y = A1 exp(−t/τ1) + A2 exp(−t/τ2) + A3 exp(−t/τ3) (3), where y is the PL intensity at a given time; A1, A2, and A3 are the weighting factors of the different time components; and τ1, τ2, and τ3 are the time decay constants of the different time components. The weighted average lifetime of the fitted data (τavg = 0.98 ns) was calculated using Equation (4). The smaller time constants (τ2 and τ3) are attributed to the elimination of excited emission and the relaxation process associated with carrier recombination in the electric field within the Z-scheme heterojunction. The larger time constant (τ1) is attributed to the complexation process between carriers involving the surface states, where a longer carrier lifetime leads to higher photocatalytic activity [46].

A possible electron-transfer pathway for the Z-scheme CN/CFO heterojunction composite photocatalyst is shown in Figure 7. The photogenerated electrons in the g-C3N4 CB are complexed with holes in the CoFe2O4 VB, which results in the in situ accumulation of electrons in the CoFe2O4 CB and holes in the g-C3N4 VB, thus preserving the optimum redox potential. This effectively separates the electrons and holes and deposits them in the g-C3N4 VB and CoFe2O4 CB, respectively [47]. The pathway for the photodegradation of TC and MB by the catalyst can be expressed by the following equations: ·O2− + TC/MB → intermediates + CO2 + H2O (7); h+ + H2O → ·OH (8); ·OH + TC/MB → intermediates + CO2 + H2O (9); h+ + TC/MB → intermediates + CO2 + H2O (10).

Figure 7. Electron-transfer pathways in the Z-scheme CN/CFO heterojunction composite photocatalysts.
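The transient-PL fitting described above (Equation (3)) can be sketched as follows. The synthetic decay trace and the amplitude-weighted averaging convention used for the mean lifetime are illustrative assumptions; the paper's Equation (4) is not reproduced here.

```python
# Sketch of a tri-exponential PL decay fit and an amplitude-weighted mean lifetime.
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, t1, a2, t2, a3, t3):
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

t = np.linspace(0, 20, 400)                                    # time, ns
y = tri_exp(t, 0.5, 3.0, 0.3, 0.8, 0.2, 0.2)                   # synthetic "measured" decay
y += np.random.default_rng(1).normal(0, 0.002, t.size)         # measurement noise

p0 = (0.4, 2.0, 0.4, 1.0, 0.2, 0.3)                            # initial guess
(a1, t1, a2, t2, a3, t3), _ = curve_fit(tri_exp, t, y, p0=p0, maxfev=10000)
tau_avg = (a1 * t1 + a2 * t2 + a3 * t3) / (a1 + a2 + a3)       # amplitude-weighted average
print(f"tau1={t1:.2f} ns, tau2={t2:.2f} ns, tau3={t3:.2f} ns, tau_avg={tau_avg:.2f} ns")
```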
Based on these findings, a model of the Z-scheme heterojunction catalyst constructed from CN and CFO is proposed, as shown in Figure 7. The photogenerated electrons (e−) are transferred from the g-C3N4 CB position to the CoFe2O4 VB position, where they combine with the photogenerated holes (h+), thereby increasing the number of electrons and holes available for reaction. For the degradation of TC or MB, the electrons in the CB react with dissolved O2 to form ·O2−. The photogenerated holes in the g-C3N4 VB can then directly participate in the degradation reaction or form hydroxyl radicals (·OH) from H2O, leading to the degradation of TC or MB.

Conclusions

In this study, a type of one-dimensional g-C3N4/CoFe2O4 Z-scheme heterojunction photocatalyst was prepared by electrostatic spinning combined with high-temperature calcination. The Z-scheme g-C3N4/CoFe2O4 composites exhibited good physicochemical properties, including high specific surface areas, visible-light absorption, and recyclability. The prepared composites exhibited clear photocatalytic activity for TC and MB degradation: in particular, the degradation efficiency of the g-C3N4/CoFe2O4-0.05 catalyst for TC degradation was increased 4.6- and 3.8-fold compared with those of CoFe2O4 and g-C3N4, respectively. Additionally, this catalyst achieved the highest photocatalytic efficiency of the composites synthesised in this study. Furthermore, the catalytic efficiency of g-C3N4/CoFe2O4-0.05 for MB degradation was 3.3 and 2.4 times larger than those of CoFe2O4 and g-C3N4, respectively. The dominant contribution of superoxide radicals in this process was confirmed by quenching experiments and EPR spectroscopy. Finally, on the basis of the experimental results and theoretical calculations, the core mechanism of charge transfer in the Z-scheme heterojunction structures was proposed.
7,225.4
2023-03-23T00:00:00.000
[ "Chemistry" ]
A benchmarking system for domestic water use : The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., Rainwater harvesting, RWH, and Grey water, GW) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2) and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
Introduction The World Economic Forum's Risk 2014 identified "water supply crisis" as the third highest risk impact (behind fiscal crises and structurally high unemployment) facing the world over the next twenty years [1]. Within this time period national UK demand for potable (i.e., drinkable quality) domestic mains water, ultimately requiring wastewater treatment, will increase as the population grows from 63.7 million in mid-2012 to a projected figure of 73 million in 2035 [2]. With unabated domestic demands (estimated currently to be 150 L/person/day) an extra 10 million customers would increase daily UK supply requirements by at least 1.5 billion litres. This would reduce to 1.3 billion litres if the demands were reduced to 130 L/person/day, as proposed in UK water policy [3]. When set against the context of overstretched existing supply sources, the water-energy-food nexus, vulnerability to droughts and leakage (exacerbated by an ageing water infrastructure system), the delicate balance of matching minimal demands with resource-secure supplies within a Sustainable Water Management (SWM) plan, particularly in increasingly dense city centres, becomes critical. (SWM is defined here simply as "the ability of current generations to responsibly manage water resources without detrimentally impacting upon the planet whilst taking account of the needs of both present and future users.") Clearly water demand reduction is one strategy within SWM to help reduce the 1.5 billion litre daily deficit, and one fully endorsed by UK sustainability rating systems such as the Building Research Establishment Environmental Assessment Method (BREEAM) and the Code for Sustainable Homes (CSH), the latter being aimed specifically at domestic buildings [4]. Each of these systems requires use of sustainability indicators and metrics, of which there are many, to define and measure performance, moving us steadily towards a more sustainable future [5][6][7]. Here, the performance indicator is "potable mains water use" and typically for domestic properties two water-use metrics are adopted; one specifies how many litres are consumed per person (L/person) while the other specifies litres required per square metre of property per year (L/m2). A benchmark sets a target and thereby allows us to move from where we are now to where we want to be (in terms of sustainable water-use performance). In terms of domestic water benchmarking an appropriate indicator would also show how much water is being consumed (litres), by whom (person) and over a specified time period (day). The benchmarks set currently in the Code for Sustainable Homes (CSH), the UK's premier benchmarking system for domestic water use, go beyond the 130 L/person/day by 2030 set by Defra (Department for Environment, Food and Rural Affairs); they are:
- <80 L/person/day for Levels 5 and 6 (the best performing benchmarks);
- <105 L/person/day for Levels 3 and 4 (mid-range benchmark);
- <125 L/person/day for Levels 1 and 2 (lowest performing benchmarks).
These are appropriate levels for urban city living, and the use of a "per person" measure instead of a "per m2" measure is appropriate given the human element of water use; after all, an empty city property does not of its own consume water (unless leaks ensue). Unfortunately strict criteria are given for how these levels are achieved in the short term through adoption of water-efficient appliances. They do not, however, consider long-term use once this level has been assigned. Moreover, Hunt et al.
have suggested that the amount of water resources consumed within the home is equally influenced by the technologies that are adopted (i.e., their technological efficiency) and how they are used (i.e., user behaviour) [8]. Therefore a better understanding of these influences with respect to water benchmarking should be considered. Posing two questions highlights very quickly the shortfall of existing benchmarking systems like CSH and the thinking behind this paper:
Q1. Which of (A) to (D) uses least water?
Q2. Which one is accredited under the benchmarking system "Code for Sustainable Homes"?
A. 8 min in a highly water efficient shower;
B. 4 min in a standard shower;
C. 2 min in a power shower;
D. 30 min in a quarter-filled 230 L bath.
The answer is that they all use the same volume of water; however, only A receives accreditation in CSH. Therefore benchmarking and accreditation in the UK is more readily focused on technology adoption as opposed to water use per se. This paper questions whether this goes far enough and whether it subsequently provides a robust platform for future city water considerations. CSH also provides accreditation for complementing supply-side flows with alternative locally available supply sources such as Rainwater Harvesting systems (RWH: collection of rainwater from impermeable surfaces for use in WC flushing and washing machines [9]) and Grey water recycling systems (GW: water collected from baths, showers and basins for use in WC flushing and the first rinse on washing machines [10,11]). In CSH, achievement of Levels 5 and 6 requires adoption of GW recycling (for WC flushing) and/or RWH. Some well-known UK examples (e.g., The Lighthouse [12]), in an aim to reduce potable demands further, have adopted both. Unfortunately, within current benchmarking systems like CSH no consideration is made for factors that can affect current and future performance. This means that two systems adopted in the same (or different) locations may perform very differently over time and yet receive the same amount of credits. It has been reported also that households who enjoy a "green" environment display a high interest in gardening and, as a consequence, use more water externally [13]. In CSH these demands are specified as 5 L/person/day [4]. Unfortunately, as an average figure, this takes no account of influencing factors such as time of year or garden size (e.g., the average garden size in the UK is estimated to be anywhere between 50 m2 and 100 m2, although considerably lower in new build in dense urban areas). Neither does it consider water supply (i.e., mains and/or GW and/or RW) and geographical location, which could impact considerably on water performance. All of these examples go some way toward building a case that there could be a significant shortfall within the existing 'performance' measuring systems for water. Given the context of increasing demands, perhaps a re-conceptualized approach is required.
In the energy sector, in a drive to reduce energy demands, the EU introduced an energy consumption labelling scheme, EU Directive 92/75/EC, in 1992 [14], superseded in 2010 by the Energy Performance of Buildings Directive 2010/31/EU (EPBD) [15]. These directives utilize band ratings (A best to G worst) to categorize the energy use performance (i.e., energy efficiency) of electrical appliances and the "in-use" yearly energy consumption of buildings. A legal requirement exists for all UK domestic properties to have an Energy Performance Certificate (EPC) when sold, built or rented [16]. However, this considers only the heating requirements of the dwelling, which depend upon the insulating quality of the building structure (e.g., loft and wall insulation, window glazing) and its heating system (e.g., radiators, pumps and boiler). The certification does not consider other energy-using technologies (e.g., cold and general household appliances) or user behaviour (how hot the house is kept) and takes no cognizance of actual household energy use (i.e., from a domestic bill), which may be significantly different. This is understandable, as these will vary dramatically with household. Moreover, a new band rating would be required when the household structure changes (e.g., children move away) or when new tenants/owners move in. It is difficult therefore to perceive of its feasibility beyond how it is currently applied. The same arguments could be used for water, and therefore it is not surprising that a band rating approach has not been considered, although a more inclusive assessment which aims to reduce resource use further should consider the individual and their energy (or water) use.

This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. This includes user behaviour, technology (in isolation and combination) and use of localised water supplies (i.e., RWH and GW). Section 2 describes the overarching methodology and outlines any assumptions made. The results of a sensitivity analysis are presented in Section 3, where the impact on RWH systems of roof size, location (rainfall), occupancy and garden demands on the band rating system is considered. A broader discussion is then given within Section 4. Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.

Methodology

The four-step methodology is listed below.

Step 1: Define a benchmarking system (using a band rating approach) for domestic mains water use performance of an individual (Section 2.1);

Step 2: Five design cases (high to low water use) are developed for an individual's domestic demand (Section 2.2):
- The role of "technological efficiency" and "user behaviour" is identified;
- The respective performance of an individual is plotted within the proposed band rating system and used as the baseline against which the sensitivity analysis (Step 3) is performed.
Step 3: Sensitivity analysis (Results in Section 3):
- By using the five design cases, the influence on mains water demand and water band rating from using mains, Grey water (GW) and Rainwater harvesting (RWH) supplies (in isolation and combination) is assessed;
- This includes investigating the influence of changes to: (i) rainfall in 3 geographical locations (Midlands, South East, North West); (ii) roof size (12.5 to 100 m2); (iii) occupancy rates (1 to 4); (iv) garden size (25 to 100 m2).

Step 4: Discussion in light of results (Section 4).

The following additional assumptions are made for the domestic property (unless stated otherwise):
- Internal demands only are included;
- 2.4 occupants per household [17];
- Potable demands are met through mains water alone;
- Non-potable demands (including gardening) can be met in four different ways, as listed below: (i) Option 1, mains-only supply for all non-potable needs (i.e., no RWH and/or GW); (ii) Option 2, GW for WC flushing, the first rinse on the washing machine and gardening; (iii) Option 3, RWH for WC flushing, washing machine and gardening; (iv) Option 4, GW for WC flushing and RWH for the washing machine and gardening;
- A pitched roof for rainwater collection with a 90% runoff coefficient [11];
- Tank(s) are sized according to British Standard BS 8515, 2009 [18]; this uses the lesser of 5% of annual rainfall and 5% of annual non-potable demand. It is assumed that an empty tank is installed in January, that it has been in operation for at least 12 months, and that it is filled/emptied assuming a "yield before spillage" approach [19][20][21] (a simple sketch of such a balance is given after this list);
- Rainfall and temperature data are taken directly from the UK Met Office [22] and use average monthly values of rainfall over 25 years (up to 2012) to calculate a daily average supply of rainwater. (n.b. Stored water dictates available supply whilst spare capacity dictates flash flood protection, the influences of which are reported by Hunt et al. [23,24].) Consideration of the RWH supplies in July is adopted, this being the driest average month within the UK;
- Garden watering demand is based upon a generic model developed by the Food and Agriculture Organization (FAO) [25], by which monthly climatic data are translated into a soil water balance for a given month [26,27]. The availability of water for any given plant type (assumed here to be grassland, flowers and shrubs) is a function of available soil water (the root zone for grass is relatively shallow, i.e., <50 mm), rainfall and evapo-transpiration (ET) for each plant type (calculated according to the Blaney-Criddle method, see Doorenbos and Pruitt [28]). ET is influenced by temperature, which is, like rainfall, location specific.
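The following is a simplified daily "yield before spillage" balance in the spirit of the tank-sizing assumption above. The uniform daily rainfall, tank size and demand used in the example are illustrative assumptions rather than the study's exact inputs; the 90% runoff coefficient matches the methodology.

```python
# Simplified daily yield-before-spillage RWH balance (1 mm of rain on 1 m2 = 1 litre).
def rwh_balance(daily_rain_mm, roof_m2, tank_l, demand_l_per_day, runoff=0.9, days=365):
    stored = 0.0
    mains_topup = 0.0
    for rain in daily_rain_mm[:days]:
        inflow = rain * roof_m2 * runoff
        yielded = min(demand_l_per_day, stored)     # draw the demand from storage first
        stored -= yielded
        mains_topup += demand_l_per_day - yielded   # shortfall met by mains
        stored = min(stored + inflow, tank_l)       # anything above capacity spills
    return mains_topup

# Example: 780 mm/year spread evenly, 50 m2 roof, 1500 L tank, 100 L/day non-potable demand
rain = [780 / 365] * 365
print(f"mains top-up needed: {rwh_balance(rain, 50, 1500, 100):.0f} L/year")
```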
Step 1: Define a Benchmarking System (Using a Band Rating Approach)

In this step an appropriate benchmarking system (using a band rating approach) is proposed (Figure 1). The band ratings relate to the total daily volume of mains water consumed per person and range from A+ (i.e., best performance, up to a 67% reduction compared to average UK demand) to G (i.e., worst performance, a >40% increase compared to average UK demand). The position of the marker (Level D = 147 L/person/day) shown represents the current UK average. Band ratings A, B and C (and associated performance levels) are aligned directly with levels adopted within CSH, ensuring full compatibility with an accreditation system now widely recognised and accepted by builders and end-users within the UK [7]. For example, the best performing CSH levels (i.e., 5 and 6) require 80 litres (or less) daily consumption per person; this is represented by band A in the proposed band rating system. The demands associated with band levels D to G are not present in CSH and, we contest, are a valuable addition, as demands can go up (or may already be high), as well as down. In addition, band A+ pushes the bounds of what could be achieved far beyond CSH levels; <50 L/person/day is wholly appropriate given that it is equivalent to the minimum level required to live, as specified by the UN [29].

Step 2: Development of Five Design Cases for Domestic Demand

In this step five different design cases (S1 to S5 in Figure 2) are used to represent specified levels of performance that might be achieved (in terms of litres of water consumed per person per day), ranging from high water use (S1) to low water use (S5). In this methodological approach we assume that either technological efficiency (Section 2.2.1), a more detailed description of which can be found in [7,30,31], or user behaviour (Section 2.2.2) may change. This is important as improved water-using performance is not currently rewarded in CSH. Zadeh, Rogers and Hunt [31] report on the implications of both changing concurrently; this is not included here. S2 is assumed to be equivalent to the current average water consumption levels in the UK (147 litres/person/day) and represents a baseline against which all other design cases are compared. S1 represents a 33% increase in demands compared to this design case, whereas S3 to S5 represent decreases from 30% to 53%. S2 is dependent on S2t (technological efficiency of domestic appliances) multiplied by S2u (user behaviour, i.e., the way they are used). The breakdown for each is shown in Table 1a,b respectively. This is considered to be the baseline against which changes can be made. For example, the mains water demands in S1 to S5 can be achieved by keeping "user behaviour" constant (i.e., S2u in Table 1b) and changing "technological efficiency" values alone (S1t to S5t in Table 1a). Alternatively, the same values can be brought about by keeping "technological efficiency" constant (i.e., S2t in Table 1a) and changing values of "user behaviour" alone (S1u to S5u in Table 1b). [24,30] * User behaviour included.

User Technologies

S1 can be achieved merely by changing the shower to a power shower with a flow rate of double that adopted in S2. The reduced water demands and improved band rating achieved in S3 assume a lower-flush toilet, lower-flow shower, smaller bath and a more water-efficient washing machine and dishwasher. S5 achieves the lowest demand through adoption of appliances that are more water efficient and the removal of bath(s) and dishwasher(s).
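A simple lookup version of the Step 1 band rating is sketched below. The A+/A/B/C thresholds follow the values quoted above (50/80/105/125 L/person/day) and the D marker follows the 147 L/person/day UK average; the boundaries assumed here for bands D to G are illustrative only, since Figure 1 is not reproduced in this text.

```python
# Map a daily per-person mains water demand onto the proposed band rating.
BANDS = [(50, "A+"), (80, "A"), (105, "B"), (125, "C"),
         (150, "D"), (170, "E"), (190, "F"), (206, "G")]   # D-G boundaries assumed

def water_band(l_per_person_day: float) -> str:
    for upper, band in BANDS:
        if l_per_person_day < upper:
            return band
    return "G"   # more than ~40% above the UK average stays in the worst band

print(water_band(147))   # "D", the current UK average quoted in the text
print(water_band(45.5))  # "A+", the S5 performance level reported later
```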
The water use assigned to sinks (as assumed within the CSH methodology) is unchanged (10.4 L/person/day) in all design cases. It is a small percentage of total use (<10%) but still an area where improvements could be made, or where demands may be significantly higher than the static value assumed in CSH. (n.b. The flow rate of taps (typically 4 L/min) is important and could be used here.)
User Behaviour
S1 can also be achieved through a 100% increase in water use from showering, brought about by doubling the time spent in the shower (rather than adopting a power shower). S3 can be achieved by a 25% reduction in water use for WC flushing; this is achieved through a 25% reduction in the number of flushes made per person per day, from a UK average of 4.42 to 3.31 flushes/person/day. (An estimate of user behaviour at sinks could be obtained by dividing 10.4 L by 4 L/min = 2.6 min; this could be assumed as the base case against which changes are made. Accordingly, a 1 minute change in use would result in 4 litres being either used or saved.)
Influence of Supply
Figure 3 shows the impact of water-using performance on the band rating when alternative supplies are used. (n.b. It is important to remember that the band rating presented here relates to the total daily volume of mains water consumed by an individual and not the total volume of water consumed by an individual; this is an important distinction when it comes to resource use.) When compared to a mains-only supply, an additional GW supply will reduce mains water requirements by between 20% (S1) and 25% (S5) year round, whereas an additional RWH supply will reduce mains water requirements by between 35% (S1) and 37% (S5), placing demands in the A+ band for the S5 design case. The best band ratings (i.e., lowest mains water demand) were achieved where internal demands were lowest. In all cases the RWH system out-performed the GW system in the NW due to an abundant supply of rainwater throughout the year. (n.b. Moving from S1 to S5 there are changes to GW in terms of both supply and demand: S5 has the lowest production of GW, although it also has the lowest demand. The reverse is true for S1, where the power shower produces a significant additional supply of GW compared to S2, and yet the demand placed on GW in each design case is identical.)
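As a rough sketch of how an alternative supply feeds into the band rating, the fragment below subtracts whichever non-potable end uses a GW or RWH supply is assumed to cover from the mains total; the per-use figures and the assumption of full coverage are illustrative placeholders, not the study's modelled values.

```python
# Illustrative mains-demand offset for the supply options (all figures are placeholders).
demand_l_person_day = {
    "shower/bath": 55.0, "sinks": 10.4, "kitchen/drinking": 25.0,   # potable: mains only
    "wc": 26.5, "washing_machine": 15.0, "garden": 5.0,             # non-potable end uses
}

def mains_demand(covered_by_alternative):
    """Mains L/person/day when the listed non-potable uses are met by GW or RWH."""
    return sum(v for use, v in demand_l_person_day.items() if use not in covered_by_alternative)

baseline = mains_demand(set())
options = {
    "Option 1: mains only": set(),
    "Option 2: GW (WC + garden; first-rinse saving ignored)": {"wc", "garden"},
    "Option 3: RWH (WC + washing machine + garden)": {"wc", "washing_machine", "garden"},
}
for label, covered in options.items():
    d = mains_demand(covered)
    print(f"{label}: {d:5.1f} L/p/d ({100 * (baseline - d) / baseline:4.1f}% below mains only)")
```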
Influence of Roof Size
The influence of roof size on water-using performance for each of the design cases is shown in Figure 4a. Increasing the roof size improves the band rating that can be achieved. However, a limiting value is approached once the catchment area provides sufficient supplies to meet internal non-potable demands. This means that an RWH system adopted with an undersized roof will result in a reduction in the band rating. This under-sizing may occur due to limitations of available roof space or the long-term influence of overhanging buildings or trees, for example. Similarly, over-sizing the catchment area, say to 75 m2, brought about no benefit in terms of band rating in any design case; once non-potable demands are met, any excess rainwater is surplus to requirements and is stored (where available capacity exists) or directed to the wastewater system. A surplus of stored water acts as a buffer to ensure the best band rating is achieved. A performance band of "A+" (45.5 L/person/day) could be achieved in design case S5 with a roof size of 50 m2 or 25 m2, although this dropped to band "A" (53.9 L/person/day) for a 12.5 m2 roof. For design cases S4 and S5 a catchment area of 50 m2 could be considered oversized, as the non-potable demands are low; half of this would have been more than adequate to meet the specified internal demands. All of these considerations are beyond the scope of CSH and highlight the requirement for benchmarking longer-term water-using performance. In contrast, roof size has no influence on domestic GW supplies; hence, not surprisingly, there is no influence upon the band rating previously achieved in Figure 3. This explains the absence of an additional figure for GW.
Influence of Geographical Location: Rainfall
The influence of rainfall on RWH system performance in three different UK locations (Midlands, North West and South East) is shown in Figure 4b. The best performance levels were achieved where rainfall was highest (i.e., 1268 mm/year in the NW) and the worst performance levels were achieved where rainfall was lowest (780 mm/year in the SE). In all cases RWH contributed to reducing mains water demands and improving the benchmark that could be achieved. Most notable was the fact that a performance level of 45.5 L/person/day could be achieved in all locations in design case S5. As the internal demands begin to increase (i.e., S3 to S1), location becomes more influential, as in these design cases RWH supplies can no longer meet demands. This is because the performance of any RWH system is directly related to rainfall, which is region specific, and this appears to impact directly on the band rating that could be obtained, lowest in the SE and highest in the NW. Currently this is not a consideration for CSH. Once again, GW production is not influenced by regional rainfall patterns, hence there is no impact upon the levels of performance achieved in Figure 3.
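Both influences above (roof size and regional rainfall) enter the RWH balance through the same simple relation: the usable supply is capped by roof area × rainfall × runoff coefficient and by the non-potable demand it serves. The sketch below shows this saturation effect; the rainfall figures approximate the annual totals quoted above spread evenly over the year (the Midlands value is assumed), and the demand value is an illustrative placeholder.

```python
# Illustrative RWH saturation with roof size and location (placeholder demand value).
RUNOFF = 0.9                      # pitched-roof runoff coefficient used in the study
ANNUAL_RAIN_MM = {"NW": 1268.0, "Midlands": 900.0, "SE": 780.0}   # Midlands value assumed
NONPOTABLE_DEMAND_L_DAY = 50.0    # per-household non-potable demand (placeholder)

def rwh_coverage(roof_m2, location):
    """Fraction of daily non-potable demand met by rainwater, capped at 1.0."""
    daily_supply_l = ANNUAL_RAIN_MM[location] / 365.0 * roof_m2 * RUNOFF
    return min(1.0, daily_supply_l / NONPOTABLE_DEMAND_L_DAY)

for roof in (12.5, 25, 50, 75, 100):
    row = ", ".join(f"{loc}: {rwh_coverage(roof, loc):.2f}" for loc in ANNUAL_RAIN_MM)
    print(f"roof {roof:>5} m2 -> {row}")
```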
Influence of Occupancy Rates Figure 4c shows the influence of occupancy rates (1 to 4) on band rating.When 1 or 2 occupants are present performance band ratings appear to be unaffected for all design cases.When 3 or more occupants are present the band rating of S4 and S5 are unchanged and the highest performance levels are achieved (45.5 and 51.2 L/person/day respectively).However, in S1 an 8% increase in water consumption occurs, (i.e., from 145.7 to 153.7 L/person/day) and in S2 a 12% increase in water consumption occurs (i.e., from 93.3 to 101.3 L/person/day).This worsens further when one extra occupant is added-moreover the performance in S4 worsens also.These performance levels are further decreased when five or more occupants (not shown) are included, although the % increase becomes less noticeable for each extra occupant added.As internal demands and GW supplies are measured per person and increase with occupancy rate they do not affect the water band rating that can be achieved.Occupancy rates are currently outside the scope of CSH and it appears that as long as water consumption per person is low, i.e., CSH Levels 5 and 6, occupancy is not influential, however where it is higher (i.e.CSH Levels 1 to 4) some influence will occur. Influence of Time of Year Figure 5a shows the water demands throughout the year for each design case which can be seen to increase considerably from April through to October (i.e., the summer growing season).The gardening demands are assumed to be identical in each design case, the breakdowns in July are shown in Figure 5b from where it can be seen that they amount to <10% for design case S1 and almost 20% for design case S5.This is important as it means that a domestic property might perform to CSH level 6 (i.e., 80 L/person/day-band A) during winter months and yet perform only to Level 3 and 4 (105 L/person/day or lower band B to C) in summer months-this variability is not considered in CSH.Garden demands are typically highest when temperatures are highest (evapo-transpiration is temperature dependant) and rainfall levels are lowest necessitating garden watering.This becomes particularly problematic when hosepipe bans are in place in summer months significantly reducing the ability of urban householders to successfully grow vegetables unless alternative watering measures are adopted.The difficulty for benchmarking occurs when non-potable supplies are insufficient to meet internal plus external non-potable demands in summer months.Figure 6 shows that in the presence of 50, 75 and 100 m 2 gardens performance band ratings can be improved beyond those for the base case (i.e. a domestic property without a garden supplied by mains water alone) through adoption (in isolation or in combination) of RWH and GW systems.For an RWH only option (Figure 6a) the band ratings achieved in S4 and S5 were not influenced by any changes made to garden size and matched those achievable in the absence of any garden demands (RWH option in Figure 4).However, as internal demands increased (i.e., from S3 to S1) so the mains water using performance was influenced.Most notably in S1 where a doubling in garden size (i.e., 50 to 100 m 2 ) caused a 12% increase in demands with no change in band rating E. 
For a GW-only option (Figure 6b) the band ratings achieved in S1 and S2 were not influenced by any changes made to garden size and matched those achievable in the absence of any garden demands (GW option in Figure 4). However, as internal demands decreased (i.e., from S3 to S5), mains water-using performance was increasingly affected. Most notably, in design case S5 an increase in garden size from 50 to 100 m2 resulted in a 35% increase in mains water demands, although with no change in band rating A. However, the band rating drops from A to B in design case S3 when increasing the garden size from 50 to 75 m2. The RWH and GW combined option (Figure 6c) allows the highest band ratings to be achieved in all design cases, and these are little influenced by changes to garden size.
In Figure 7 the impact of changing location from NW (UK) to SE (UK) is very noticeable. For RWH supplies, an increase in garden size increases water demands and influences band ratings for all design cases. The most noticeable impact here is in S5, where a band rating change from A to D occurs as the garden is increased from 50 to 100 m2 (Figure 7a). In all supply cases the performance is two to three band ratings lower than that achieved in NW (UK) and significantly worse than the base case. The most notable exception to this is the adoption of a GW recycling system (Figure 7b) in S1 (where approximately 20% less water use than the base case is achieved for all garden sizes considered) and S2 (where 20% and 6% less water use than the base case are achieved for 50 m2 and 75 m2 gardens respectively). The dual-approach system (Figure 7c) offers better performance than the individual approaches in S3, S4 and S5. However, the performance in S4 and S5 is worse than an individual GW recycling system, although better than an individual RWH system. These subtleties were not apparent in the NW and certainly are not considered in CSH accreditation.
Influence of Occupancy (and Location)
The band ratings for an RWH supply are marginally influenced in NW (UK) by occupancy rates (Figure 8a) in S1 (D to E) and S2 (B to C). The slight deviations from the baseline are due to subtle changes in internal non-potable demands within design cases, the influence being most noticeable in S1 followed by S2 and S3, where non-potable demands are higher than RWH supplies.
In the case of a GW supply, band ratings are not affected by occupancy numbers (Figure 8b). This is due to the fact that GW supplies increase in proportion to occupancy (i.e., four occupants produce four times as much GW as one occupant); hence water-using performance will only be impacted at low occupancy rates. For example, GW supplies from one occupant are insufficient to meet non-potable demands in S2 (marginally) and S3, S4 and S5 (most notably), and hence the performance is impacted.
When both RWH and GW are used (Figure 8c) the lowest water use, and hence the highest band ratings of all supply options, are achieved. The advantage of this dual supply in NW (UK) is that occupancy is no longer an influential factor, because there is more than enough non-potable water to meet demands.
In Figure 9 the impact of a change in location to SE (UK) can be seen.The influence of occupancy numbers is far more noticeable and in all cases performance improves as occupancy numbers increase.In Figure 9a it can be seen that the band ratings for an RWH supply will also improve only in S1 (G to F) and S5 (B to A).The impact of a single occupant on the performance of the GW system and achievable band rating (Figure 9b) is more pronounced in SE (UK) than NW (UK); most notably in S5 where the water use for a single occupant is 120% larger than for 4 occupants.For NW (UK) this was 47% larger (Figure 8b).Providing the occupancy rates are 3 or less a dual system will offer better water use performance and band rating than a RWH system in the SE (UK).However, if the occupancy rate is 4 or more the best water use performance and band ratings are achieved with a GW supply.In a mixed supply system (Figure 9c), higher levels of performance are obtained in all design cases and the influence of occupancy rates less pronounced. Discussion The analyses performed here have highlighted the sensitivity of the proposed band rating system to a range of influences and discussed the relevance with respect to the existing code for sustainable homes where credits are awarded (sometimes prior to occupation of dwellings) without consideration of longer-term performance.The proposed benchmarking system appears to fill this shortfall.This paper does not suggest that CSH should be replaced-its limitations just need to be understood.In many respects it does what it set out to do (i.e., crediting the adoption of water efficient technologies) moreover it has buy-in from house builders and home owners alike.The proposed benchmarking system can be used alongside CSH certification (the two integrate seamlessly) giving the individual user a better idea of their actual water using performance and whether it is good (requiring little or no change) or bad (requiring significant change). Whilst this goes a significant part of the way to achieving improved water performance within cities there are many wide ranging considerations.This section provides a broader discussion on the role technology and user behaviour (Section 4.1), policy (Section 4.2), economics (Section 4.3) and in light of these considers the feasibility and acceptability of the proposed benchmarking system (Section 4.4). Technology and User Behaviour There is an assumption that adoption of more water efficient technologies is the solution to reduce water consumption.Whilst this is true we need also to consider influences related to impaired functionality (i.e., the inability of the technology to deliver what it designed to do) and reduced comfort levels (i.e., the inability of the technology to deliver expected "user experience") as water efficiency is improved.These will have a strong influence on "public acceptability".As such there are many questions that need to be asked of a technology or of an individual using that technology, for example:  What is the lower limit to a showers flow rate, has it been reached? Can a very low flow rate shower (i.e., < 6 L/min) deliver the same shower experience as a 12 L/min or even a 24 L/min power shower?If not, then would acceptability and widespread adoption be inhibited? Is a (re)design (e.g., aeration technology) possible to deliver the same user-experience and function (i.e., personal washing and relaxation)? 
The equally strong influences of "duration" of use and "frequency" of use [13,30,31] should not be ignored, and yet in CSH their influences are underplayed. Whilst CSH does accredit the adoption of smaller baths, perhaps accreditation should consider the quota of water an individual uses for daily personal washing (be it through bathing or showering, the choice being theirs alone). The choice of technology and user behaviour is then accredited equally. This allows those that cannot afford a change in technologies to be accredited for more "sustainable" consumption. Likewise, it incentivises those that can afford a change in technology to consider their long-term behaviour, avoiding consumers buying a water-efficient shower that uses half the water of the old shower only then to use it for twice as long.
The human element is strongly influential in water demands within the home and can be highly variable, even within a single household [32]. Therefore, achievement of the highest performance levels for user behaviour shown in Table 1b may require incentives, help (or nudges) to achieve such a "step-change" in user behaviour. For example, 4-minute timers (Figure 10a) and information leaflets (Figure 10b) were given out freely in the South East of England by Thames Water during the drought in early 2012. Although, now that the drought has subsided, how many of these are still being used today; moreover, did users really stick to four minutes? So perhaps in this case technologies could be used to influence user behaviour more strongly. A sensor could "time-limit" or "volume-limit" bathroom water supplies, ensuring set volumes of water flow from taps, showers and baths. This does not prevent the user overriding them, nor should a democracy seek to limit free will. In a modern society this is more about making people aware of their water-using behaviour and facilitating, rather than enforcing, a change. Coin-operated showers, for example on camp sites, already operate in this way, although for domestic adoption perhaps the technology would benefit from electronic "bleeps" according to the minutes taken in the shower, or perhaps green, amber and red light gauges so that the water does not run out before all washing is completed. Even timing devices for sinks (i.e., timed for flows of around 10 seconds, adjustable by the installer) already exist in office/hotel settings, and therefore it is not beyond the realms of possibility that they could be introduced in a domestic setting.
The question remains as to "how low could one go to ensure resource security whilst maintaining user comfort and performance levels, i.e., liveability for the user and for the city of the future?" More importantly, can we think outside the box when it comes to future user behaviour; will it be significantly different from what we currently know and do? In most cases the thinking can be prompted by a "what if" question. For example, "what if" we wore clothes for twice as long between washes, facilitated by manufacturers engineering stay-clean clothes? "What if" we produced significantly less washing-up, by eating out more or by ordering ready-cooked foods? These user changes need to be considered in parallel to technology changes, and any benchmarking system should allow for this.
For some water uses it might be deemed impossible to reduce the frequency of use below UK average values, for example WC flushing, as this is related to a natural and necessary bodily function; in terms of urine we all produce around 2.5 L/person/day. Or perhaps this prompts the more pertinent question of how we reduce flushing, if it is not through a step-change in technology. In cases of drought in Brazil, a television advertisement campaign urged water users to urinate in (their own) showers whilst showering to conserve water [33].
For a western society this may, on the face of it, appear shocking, but it is not dissimilar to the mantra "when it's brown we flush it down, but when it's yellow we let it mellow" passed on to children who live in areas beset by drought and water rationing [34]. This paper does not suggest we all adopt these actions, far from it; however, it does suggest that by thinking along these lines more inventive solutions may emerge for the way we consider sanitary waste within cities. Perhaps a dual-track approach could be adopted within the home. Offices already separate urine flushing (for males) from WC flushing (for solids only). After all, the volumes of "held" water within toilets in urban areas (i.e., in a U-bend, required to keep odours out, and in a cistern, to flush) are not inconsiderable. What if, for example, the design brief for a city were to "design" these out; how might this be achieved? Perhaps some type of holding mechanism for urine that is automatically flushed once a day? This is eminently possible, not least if urine became a more valuable resource; research is already looking at it from an energy supply perspective [35]. The answer is not clear cut, although pneumatic systems are an option, and there could be numerous other options, available or yet to be thought of. One important consideration obviously would be the impact upon outward flows to the sewerage system, and this would need to be investigated further also. The point here is about moving us beyond what we now know or accept as the "norm" and providing us with a robust band-rating (including benchmarking) framework that allows for monitoring and measurement along the way. The role of user behaviour and technological efficiency (in isolation) is used here to exemplify the role of the proposed band rating system; a more in-depth analysis of these highly influential factors (in isolation and in conjunction) can be found in Zadeh, Hunt and Rogers [31]. The way in which water-using behaviour and technological efficiency adoption changes by age, gender or demographics is unclear, and perhaps this is a research challenge to be answered.
Policy
This paper does not advocate that policy should be used for water certification (using the proposed band rating system) of a building or an individual. However, where domestic properties are currently metered, it would be useful if the proposed benchmarking system (incorporating band ratings) were adopted within existing billing systems, giving the user an idea of an individual's water use. When used alongside smart metering (advocated by existing UK policy and very much aligned with CSH philosophy [36]), users might then begin to understand how each water use impacts upon the total water demand for a household.
The role of occupancy has been shown here to be influential on daily water use and band ratings or benchmarks that use the "per person" measure.Including this finding within policy is likely to be fraught with difficulty.Particularly so when considering the example of garden watering where the current value of 5 L/person/day significantly underestimates actual daily demands and the influence of location, occupancy, rainfall, temperature, plant type and garden size.Perhaps then the daily banding should be replaced with a yearly benchmark (per household) that allows for incorporation of fluctuations throughout the year. On the supply-side perhaps systems should not be accredited just for their adoption more for how non-potable needs are being met through non-potable water supplies-thereby reducing consumption of mains water (see Lombardi et al. [37] for a description of the efficacy of water resilient solutions in city designs).Policy could stipulate that all non-potable demands are met through non-potable supplies.In this case a simple banning of outside taps would not be inappropriate.In addition policy could advocate the adoption of dual approaches (RWH and GW) for meeting non-potable demands-this paper has shown the significant advantages in terms of flattening the peaks and troughs in daily water-using performance, in particular where gardens are watered in summer months.This is particularly important if homeowners are being influenced to be more self-sufficient in home-growing and "greening" city centres where roof space is at a premium.One policy (or accreditation requirement) might suggest that all internal non-potable demands are met through GW and all outdoor demands such as gardening, but also car washing, are met by harvested rainwater.Green roof gardens (not least with food crops) or green walls can then be adopted only where they do not produce any additional pressure on city centre water supplies. Economics Economics (i.e., the costs to the water company and the costs to the consumer) is an important driver within the water sector.The water companies want to supply clean water whilst maintaining a profit for shareholders, whilst the consumer wants to minimise the outlay on water resources.The role of the policy maker (and water regulator) is to track water pricing and ensure that tariffs incentivise efficient, rather than profligate, water use whilst managing to alleviate "water-poverty". 
Payback periods for investment in water efficiency measures or localised supplies (e.g., RWH/GW) depend strongly on whether the investments are small-scale (i.e., a consumer invests within their own property) or large-scale (i.e., city-scale investment). Payback at the small scale is well reported within the literature, although the feasibility of making an economic investment in a non-potable city network, allowing all non-potable demands to be met through non-potable supplies, is not. Such a network would seek to make the links between producers and users of non-potable water; for example, a producer may have a large roof space and capture rainfall yet have very low or no non-potable demands (e.g., a warehouse), whereas a user might have high non-potable demands yet low supplies (e.g., offices). Further work is needed in this area. In addition, the impact of the highly influential parameters highlighted herein should be investigated. In all cases the transport and treatment of water will have an additional cost that needs to be factored in [38]. As water efficiency measures are improved, the requirement for non-potable supplies within buildings (be they domestic or other) decreases and hence the payback period will become longer, although if external demands, in particular gardening (but also vehicle washing), are included, payback periods will improve significantly. In addition, home urban growing of vegetables could reveal economic (and health) benefits. (The water demands are broadly similar to those for the grassland and shrubs analysed here; in fact, the Blaney-Criddle method assumes grassland as the base case against which "crop" water demands are calculated.) This benefit comes from the reduced cost of going to the supermarket, in addition to the reduction in "embedded" transportation (and importing) costs, which are not insignificant; this argument strengthens further when carbon costs are factored in. However, there is the important question of whether we will, in the not too distant future, have enough water to go round within a city centre landscape for all these different "internal" and "external" demands, even with the addition of non-potable supplies. If we do not, then the "value" and "cost" of water will increase substantially. (Interestingly, in the Republic of Ireland domestic water rates were abolished in 1997 and, without them, there has been little to incentivise the "value" of water. With their imminent (re)introduction in January 2015, it will be interesting to see how this "value" changes.)
The benchmarking system presented here could have a role in improving users value of water as it could align directly with a pricing structure-rewarding "non-profligate users" (i.e.,those that reduce consumption of mains water) whilst alleviating "water poverty" (i.e.,band A + use could be for free with price tariffs increasing for each band rating).The big users (band G) would pay the most and drives home the ethos of "the polluter pays" principle-which is definitely the case when water related carbon costs are included.(Tiered rate systems operate in many countries, including parts of the USA but not so for the UK).This does assume that water use is price-elastic [39][40][41][42] and many authors report this is simply not the case [43][44][45][46][47] therefore it is debatable whether such an approach would be successful in reducing water demand.Even if peoples' water use was not changed through such an approach any increase in revenue could be used to invest in additional water supplies such as non-potable networks.However, if it were successful any loss in revenue may also be unacceptable to shareholders and lead to decreased investment in water infrastructure per se. This may produce difficulties for water companies in knowing exactly what the total daily water use is per person based on metered records alone.Although information on occupancy rates does form part of the council tax assessment system-however loopholes might then exist where occupants are registered "at home" and yet live away for substantial periods inadvertently bringing down the measured daily water usage per person.This merely illustrates the difficulties of introducing any "fit-for-purpose" measurement/charging system that is fair for all. Feasibility and Acceptability of a Benchmarking Approach (Using Band Ratings) The feasibility for adopting any daily benchmarking system for a household will always come into question given the fluctuations that have been highlighted within this paper.In many respects no two days are the same when it comes to water demands of a household or an individual.For example, seasonality is statistically significant on both water demands [48] (not least due to changes in behaviour or additional demands brought on through hotter weather, e.g., more frequent showering) and water supply availability due to variability in precipitation (although RWH tanks are sized to account for this variability so they should not run short).And whilst a yearly household band rating system might flatten out the peaks and troughs, it would be hard to perceive based on the changing dynamics of a household over a lifetime.The essential element is "feedback" systems that inform individuals of their daily use (compared to a standard) in order that behavioural and/or technological efficiency adjustments are made.This is possible through adoption of the benchmarking system (using a band rating) as advocated within this paper and its impact, acceptability and feasibility would be improved if twinned with a smart metering system (one that informs the user how much water they use in real-time). 
Whilst policy can only go so far, there are always going to be issues around public acceptability when bringing in policy to engender adoption of new approaches. For example, would the benchmarking system and associated band rating be viewed as overly intrusive? The answer is likely to be yes only if water-using performance is made public knowledge, and this might be considered intrusive for those that have something to hide but less so for those that do not. Certainly, adopting such an approach and making users/owners/operators aware of how their water use compares to the norm (perhaps through league tables) raises the more pertinent issue of accountability. And it could be argued that this is a good thing if we truly are to have sustainable water management.
The benchmarking system proposed here was developed for an individual's domestic use merely to show the principles on which a band rating could be based and the influences that could impact upon it. In essence, the methodological approach could be used by individuals within any water-using sector. The important point here is for water users to be informed of how they are performing in relation to how they could perform, in order that a step change in water use is achieved.
Conclusions
This paper introduces a new benchmarking system (using a band rating) for measuring the water-use performance of a domestic water user in the UK. It has been shown to provide significant advantages when compared to existing accreditation systems like the CSH. Its generic approach and the various band ratings adopted therein are appropriate for use in any country, and this lends itself well to ease of comparison. The novel approach to analysis, which compares five options side by side, helps significantly when making an informed assessment, not least when so many interdependencies are at play. The influences of demand-side and supply-side approaches were considered, from which it was found that user behaviour has an equally, if not more, influential role on demands than technological efficiency alone. It is suggested that any band rating (and associated accreditation) is best aligned with overall potable mains water consumption (as adopted here) and should not have a preference for either user technology or user behaviour. The choice of how reductions are achieved should be down to the end-user. Location, occupancy rates, demand profiles and supply type were all found to be highly influential on water-using performance, and yet these are ignored by accreditation systems like CSH. This is particularly true in summer months, when water demands are highest and localised supplies (e.g., RWH) are lowest. The proposed "A+" to "G" band rating system holds great potential for allowing swifter progress towards achievement of more sustainable "city" water management where "liveability" options and a valid form of water-use measurement are required. However, social, economic, political, technical and environmental influences (i.e., resource security and carbon reduction) will all have major roles to play in the achievement of this aim.
Figure 2. The influence of technologies and user behaviour on band ratings.
Figure 3. Influence of supply options on demand band ratings in NW (UK).
Figure 4. Influences of the rainwater harvesting (RWH) non-potable supply option on water band ratings in the UK (location NW (UK) and month of July unless stated otherwise).
Figure 5. Water demands when including a 50 m2 garden in NW (UK).
Figure 6. Influence of garden size on band ratings for various non-potable supply options in NW (UK).
Figure 7. Influence of garden size on band ratings for various non-potable supply options in SE (UK).
Figure 8. Influence of occupancy rates and non-potable supply options on band rating for a 50 m2 garden in NW (UK).
Figure 9. Influence of occupancy rates and non-potable supply options on band rating for a 50 m2 garden in SE (UK).
Table 1. Parameters used to calculate demand in the design cases.
11,457.6
2014-05-19T00:00:00.000
[ "Engineering" ]
Analysis of Propagation of Electromagnetic Waves in Atmospheric Hydrometeors on Low-Elevation Paths
Attenuation of electromagnetic waves in millimeter wave bands is analyzed by means of experimental measurement of received signal fluctuations on terrestrial radio links operating in the 58, 94 and 122 GHz frequency bands. Long-term time series of the received signal are processed to obtain annual and two-year cumulative distributions of attenuation due to hydrometeors. The measured statistics give attenuation higher than predicted by the model of Recommendation ITU-R P.530. Rain intensity measured simultaneously with rain attenuation is used to obtain fitted parameters of an attenuation/intensity power-law relationship. The empirical data extracted from the experiment are compared with the results of numerical simulations of attenuation due to rain and hailstones.
Introduction
Wireless communications are steadily shifting toward the utilization of higher frequency bands that can satisfy increasing transmission capacity demands. Terrestrial and, in particular, low-elevation links operating in millimeter wave bands are expected to be utilized more frequently in the foreseeable future. Electromagnetic waves of millimeter wavelengths propagating in the atmosphere are impaired mostly by atmospheric hydrometeors [1] due to scattering/absorption of mm waves on the droplets forming a hydrometeor. In order to estimate the quality and availability of designed mm-wave radio links, statistics of attenuation are thus needed [2]. Nevertheless, long-term empirical statistics of attenuation are not widely available in the literature so far, especially for frequencies around and above 100 GHz.
Attenuation due to atmospheric hydrometeors is known to be proportional to the intensity of precipitation. However, the incidence of precipitation is locally specific and has to be determined experimentally [3][4][5]. The goal of this work is therefore to obtain so-far-unavailable empirical local statistics of attenuation that can be used 1) to assess propagation models and 2) to design mm wave links in the area. Furthermore, the dependence of mm wave attenuation on the physical parameters of hydrometeors, such as rain intensity, is studied. The experimental measurement of received signal fluctuations has been carried out simultaneously on three mm-wave links [6], [7]. The statistics of attenuation have been obtained from the long-term dataset.
Experimental Setup
Experimental observation of path attenuation fluctuations is carried out continuously on several millimeter wave links with the receiver site located at the Czech Metrology Institute (CMI) building (Hvozdanska street) in Prague, see Fig. 1. The two links operating at 58 GHz and 122 GHz [8] on the same path, with the transmitters located on the rooftop of the Senohrabska street (Sen) building, are almost perpendicular to the 94 GHz link, whose transmitter is located on the building of the Geophysical Institute (GFU) (Bocni II street). Table 1 summarizes the basic parameters of the mm links. An automatic gain control (AGC) voltage is measured using an A/D converter on a PC card, and the received signal level (RSL) is calculated via calibration curves obtained during calibration before radio link deployment. The 58 and 94 GHz links were calibrated at the CMI by means of precise attenuators connected between a transmitter and receiver via a rectangular waveguide. The 122 GHz link was calibrated by its manufacturer [8].
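Since the receivers log AGC voltage rather than signal level directly, the conversion step described above amounts to interpolating each voltage sample on the link's calibration curve and then referencing it to a nominal level to obtain attenuation. The sketch below illustrates this; the calibration points and samples are invented for illustration.

```python
import numpy as np

# Hypothetical calibration curve: AGC voltage (V) versus received signal level (dBm).
cal_voltage = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
cal_rsl_dbm = np.array([-80.0, -70.0, -61.0, -53.0, -46.0, -40.0])

def to_rsl(agc_voltage):
    """Map AGC voltage samples to RSL (dBm) by interpolating the calibration curve."""
    return np.interp(agc_voltage, cal_voltage, cal_rsl_dbm)

agc_samples = np.array([2.4, 2.3, 1.1, 0.9, 2.4])   # invented 1-s samples; the dip mimics a fade
rsl = to_rsl(agc_samples)
nominal = np.median(rsl)                # a monthly median would serve as the nominal level
attenuation_db = nominal - rsl          # positive values = fading below the nominal level
print(attenuation_db.round(1))
```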
The nominal received signal level was determined predominantly as a monthly median value of the RSL, such that the nominal RSL is taken as the zero-attenuation level. In some particular cases of fading events on the 122 GHz link, the zero level was obtained from the RSL immediately before and after the individual attenuation event.
Besides other relevant meteorological quantities, rain intensity is measured simultaneously near the receivers by means of two tipping-bucket heated rain gauges with sensitivities of 0.1 mm per tip and 0.2 mm per tip, respectively [6]. Two additional microwave links in the 58 GHz frequency band with V and H polarizations are observed on the GFU-CMI path. These links provided rain attenuation data as presented below in Fig. 2.
Attenuation Statistics
Attenuation statistics from the period from May 2013 to April 2014 were presented in [7]. More than 100 hydrometeor events were individually selected, and cumulative distributions (i.e., percentages of time when attenuation exceeded a given level) of (specific) attenuation given in (dB/km) were obtained. Annual cumulative distributions obtained from the period from May 2014 to April 2015 are shown in Fig. 2. The cumulative distributions of attenuation obtained from the whole two-year period from May 2013 to April 2015 are shown in Fig. 3. The annual distributions are compared with those predicted using Recommendation ITU-R P.530 [9]. P.530 applies the value of rain intensity R01 exceeded for 0.01% of the time at a particular location to predict the rain attenuation statistics at that location. The two sets of distributions depicted are predicted using the particular values R01 = 24 mm/h and R01 = 32 mm/h. The former value is obtained from ITU-R world maps [10]; the latter is obtained from long-term local measurements of rain in the climate of the Czech Republic. A reasonable agreement between the measured and predicted distributions is seen, but the measured attenuation seems to be systematically higher than the predicted one. One must also take the year-to-year variability into account.
Attenuation vs Rain Intensity
Figure 7 shows a scatter plot of two-year data of simultaneously observed attenuation and rain intensity. The data depicted are average values over 20-second time intervals. This averaging length serves as a compromise between the typical attenuation sampling (1 s) and the usual rain intensity averaging interval (1 min). Figure 8 shows interval statistics obtained from the scatter plot (see above) by dividing the full rain intensity range into subintervals and calculating statistics of the attenuation data falling into each subinterval. Furthermore, power-law models of the form
A = k R^q, (1)
where A (dB/km) is the specific attenuation, R (mm/h) is the rain intensity, and k, q are the parameters of the model, are fitted to the interval median values (i.e., 50% percentiles) of attenuation.
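A fit of this kind can be reproduced with a standard nonlinear least-squares routine, as sketched below on invented (rain intensity, attenuation) pairs; following the paper's approach, only samples with R > 1 mm/h are used, and per-bin medians could replace the raw points.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented 20-s averaged samples: rain intensity R (mm/h) and specific attenuation A (dB/km).
R = np.array([1.5, 2.0, 3.5, 5.0, 8.0, 12.0, 20.0, 35.0, 50.0])
A = np.array([0.9, 1.2, 1.9, 2.6, 3.8, 5.2, 7.9, 12.5, 16.0])

def power_law(r, k, q):
    return k * r ** q        # model (1): A = k * R^q

mask = R > 1.0               # low intensities excluded, as in the paper
(k, q), _ = curve_fit(power_law, R[mask], A[mask], p0=(1.0, 1.0))
print(f"k = {k:.3f}, q = {q:.3f}")
```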
Scattering Modeling
The measurement results, specific to the three millimeter wave bands and presented in the previous section, are to be confronted with numerical modeling predictions. Specifically, the frequency dependence of the attenuation due to scattering on water particles (rain) and on ice particles (hailstones) is investigated in the following. The specific attenuation A (dB/km) is obtained by integrating the droplet extinction cross section C_ext weighted by the drop size distribution (DSD) (3) [11], [12] as follows:
A = ∫ C_ext(r, λ, m) n(r) dr, (2)
where the cross section is a function of the droplet radius r, the wavelength λ and the complex refractive index of water or ice m. Consistent units are assumed in (2). The extinction cross section C_ext is calculated using Mie scattering formulas [11], [13]. A gamma distribution, commonly used to describe the DSD of real hydrometeors, is adopted:
n(r) = a r^α exp(−b r), (3)
where the parameters a, α and b depend on the physical parameters of a particular hydrometeor. Several models have been proposed for the DSD of rain parametrized by the rain intensity. The widely used classical Marshall-Palmer (MP) model [14] is defined by an exponential distribution, which is a special case of (3) with α = 0. A slightly modified MP model is used here, with the parameters of the gamma model (3) set to a = 16000 mm^−1 m^−3, α = 0.5 and b = 8R^0.2 mm^−1. In this case, n(r) has units of mm^−1 m^−3 and the drop radius is expressed in millimeters.
The attenuation calculated using numerical integration of (2) is shown in Fig. 9. The results of the theoretical calculation seem to be fully consistent with the results of the measurements in the three mm wave bands presented in the previous sections. Modeling of the scattering on ice particles shows that, with the same DSD, the attenuation due to hailstones is lower than the rain attenuation, but only for frequencies lower than 100-150 GHz.
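A numerical evaluation of an integral of the form (2) can be sketched as below. The Mie extinction efficiency is taken from the third-party miepython package (assumed available; any Mie routine returning Q_ext would do), the refractive index is a rough placeholder rather than a tabulated value, and the classical Marshall-Palmer DSD is used in place of the paper's modified form, since its parameters are standard in the literature.

```python
import numpy as np
import miepython   # third-party Mie routine (assumed): mie(m, x) -> Qext, Qsca, Qback, g

FREQ_GHZ = 94.0
WAVELENGTH_MM = 299.792458 / FREQ_GHZ        # free-space wavelength in mm
M_WATER = 3.4 - 1.9j                         # rough placeholder refractive index of water near 94 GHz

def mp_dsd(d_mm, rain_mm_h):
    """Classical Marshall-Palmer DSD N(D) in mm^-1 m^-3 (D is the drop diameter in mm)."""
    return 8000.0 * np.exp(-4.1 * rain_mm_h ** (-0.21) * d_mm)

def specific_attenuation_db_km(rain_mm_h, n_pts=300):
    d = np.linspace(0.1, 7.0, n_pts)                             # drop diameters in mm
    x = np.pi * d / WAVELENGTH_MM                                # Mie size parameter
    qext = np.array([miepython.mie(M_WATER, xi)[0] for xi in x])
    c_ext_m2 = qext * np.pi * (0.5e-3 * d) ** 2                  # extinction cross section in m^2
    # A (dB/km) = 4.343e3 * integral of C_ext [m^2] * N(D) [mm^-1 m^-3] dD [mm]
    return 4.343e3 * np.trapz(c_ext_m2 * mp_dsd(d, rain_mm_h), d)

for r in (5, 20, 50):
    print(f"R = {r:2d} mm/h -> A = {specific_attenuation_db_km(r):.1f} dB/km at {FREQ_GHZ:.0f} GHz")
```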
Rain and snow attenuation were not specifically distinguished in our experiment, but the attenuation events estimated to be due to snow covering the antennas were removed from the processing. However, pure snow events usually caused attenuation of only several dB, and the strongest attenuation events were caused by heavy rain, as was checked by comparison with concurrent on-site meteorological measurements.
Fig. 1. Map of the experimental microwave path orientation.
Fig. 9. Frequency dependence of rain (solid) and hailstone (dashed) attenuation obtained by Mie scattering calculations for different rain intensities.
Scatter plot of rain attenuation vs. intensity.
Table 2 summarizes the fitted parameters of the model. Note that only R > 1 mm/h values were actually fitted, since the region of low attenuation/intensity is relatively less important in practical applications. There is also a larger relative measurement error in this region.
Tab. 2. Parameters of the fitted power-law models (1).
2,273.6
2018-04-13T00:00:00.000
[ "Physics" ]
μ-Fuzzy Filters in Distributive Lattices
In this paper, we introduce the concept of μ-fuzzy filters in distributive lattices. We study the special class of fuzzy filters called μ-fuzzy filters, which is isomorphic to the set of all fuzzy ideals of the lattice of coannihilators. We observe that every μ-fuzzy filter is the intersection of all prime μ-fuzzy filters containing it. We also topologize the set of all prime μ-fuzzy filters of a distributive lattice. Properties of this space are also studied. We show that there is a one-to-one correspondence between the class of μ-fuzzy filters and the lattice of all open sets in X_μ. It is proved that the space X_μ is a T_0 space.
Introduction
In 1970, Mandelker [1] introduced the concept of relative annihilators as a natural generalization of relative pseudocomplements, and he characterized distributive lattices with the help of these annihilators. The concept of coannihilators and μ-filters in a distributive lattice with greatest element "1" was introduced by Rao and Badawy [2], who characterized μ-filters in terms of coannihilators. For a filter F in L, μ(F) = {(x)++ : x ∈ F} is an ideal in the set A+(L) of all coannihilators, and conversely μ←(I) = {x ∈ L : (x)++ ∈ I} is a filter in L when I is any ideal in A+(L). A filter F of L is called a μ-filter if μ←(μ(F)) = F.
In 1965, Zadeh [3] mathematically formulated the fuzzy subset concept. He defined a fuzzy subset of a nonempty set as a collection of objects with grades of membership in a continuum, with each object being assigned a value between 0 and 1 by a membership function. Fuzzy set theory was guided by the assumption that classical sets were not natural, appropriate, or useful notions in describing real-life problems, because every object encountered in the real physical world carries some degree of fuzziness. A lot of work on fuzzy sets has come into being, with many applications to various fields such as computer science, artificial intelligence, expert systems, control systems, decision-making, medical diagnosis, management science, operations research, pattern recognition, neural networks, and others (see [4][5][6][7]). Alaba and Norahun [29] studied the concept of α-fuzzy ideals of a distributive lattice in terms of annulets. They also studied the space of prime α-fuzzy ideals of a distributive lattice.
In this paper, we introduce the dual of the concept of α-fuzzy ideals, called μ-fuzzy filters, in a distributive lattice with greatest element "1." We study the special class of fuzzy filters called μ-fuzzy filters. We prove that the set of all μ-fuzzy filters of a distributive lattice forms a complete distributive lattice isomorphic to the set of all fuzzy ideals of A+(L). We also show that there is a one-to-one correspondence between the class of prime μ-fuzzy filters of L and the set of all prime ideals of A+(L). We prove that every μ-fuzzy filter is the intersection of all prime μ-fuzzy filters containing it. Moreover, we study the space of all prime μ-fuzzy filters in a distributive lattice. The set of prime μ-fuzzy filters of L is denoted by X_μ. For a μ-fuzzy filter θ of L, an open subset of X_μ is of the form X(θ) = {η ∈ X_μ : θ ⊈ η}, and V(θ) = {η ∈ X_μ : θ ⊆ η} is a closed set. We also show that the set of all open sets of the form X(x_β) = {η ∈ X_μ : x_β ⊈ η}, x ∈ L, β ∈ (0, 1], forms a basis for the open sets of X_μ. The set of all μ-fuzzy filters of L is isomorphic with the set of all open sets in X_μ.
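For reference, the two correspondences quoted above from [2], together with the closed-element characterizations used throughout the paper, can be displayed compactly as follows (the displayed form is ours; the content restates the definitions described in the prose above).

```latex
\[
\mu(F) = \{\, (x)^{++} : x \in F \,\} \subseteq A^{+}(L),
\qquad
\mu^{\leftarrow}(I) = \{\, x \in L : (x)^{++} \in I \,\} \subseteq L,
\]
\[
F \text{ is a } \mu\text{-filter} \iff \mu^{\leftarrow}\bigl(\mu(F)\bigr) = F,
\qquad
\theta \text{ is a } \mu\text{-fuzzy filter} \iff \mu^{\leftarrow}\bigl(\mu(\theta)\bigr) = \theta .
\]
```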
Preliminaries We refer to Birkhoff [30] for the elementary properties of lattices. Definition 1 (see [2]). For any set S of a lattice L, define S + as follows: Lemma 1 (see [2]). For any x, y ∈ L, the following conditions hold. e set of all coannihilator denotes A + (L). Each coannihilator is a coannihilator filter, and hence, for two coannihilators (x) + and (y) + , their supremum and infimum in respectively. In a distributive lattice L with 1, the set of all coannihilators A + (L) of L is a lattice (A + (L), ∩ , ∨ ) and a sublattice of the Boolean algebra of coannihilator filters of L. For a filter F in L, is an ideal in A + (L) and the set Definition 2 (see [3]). Let X be any nonempty set. A mapping μ: X ⟶ [0, 1] is called a fuzzy subset of X. e unit interval [0, 1] together with the operations min and max form a complete lattice satisfying the infinite meet distributive law. We often write ∧ for minimum or infimum and ∨ for maximum or supremum. For any collection, μ i : i ∈ I of fuzzy subsets of X, where I is a nonempty index set, and the least upper bound ∪ i∈I μ i and the greatest lower bound ∩ i∈I μ i of the μ i 's are given for each x ∈ X, respectively. For each t ∈ [0, 1], the set is called the level subset of μ at t [3]. Definition 4 (see [27]). A fuzzy subset μ of a lattice L is called a fuzzy ideal of L if; for all x, y ∈ L, the following conditions are satisfied: Definition 5 (see [27]). A fuzzy subset μ of a lattice L is called a fuzzy filter of L if, for all x, y ∈ L, the following condition is satisfied: In [27], Swamy and Raju observed the following: (1) A fuzzy subset μ of a lattice L is a fuzzy ideal of L if and only if (2) A fuzzy subset μ of a lattice L is a fuzzy filter of L if and only if μ(1) � 1 and μ(x ∧ y) � μ(x) ∧ μ(y), for all x, y ∈ L. Let μ be a fuzzy subset of a lattice L. e smallest fuzzy filter of L containing μ is called a fuzzy filter of L induced by μ and denoted by [μ) and Lemma 2 (see [23]). For any two fuzzy subsets μ and θ of a distributive lattice L, we have e above result works dually. For any two fuzzy subsets μ and θ of a distributive lattice L, we have e binary operations "+" and "·" on the set of all fuzzy subsets of a distributive lattice L are as follows: If μ and θ are fuzzy ideals of L, then μ · θ � μ ∧ θ � μ ∩ θ and μ + θ � μ ∨ θ If μ and θ are fuzzy filters of L, then μ + θ � μ ∧ θ and μ · θ � μ ∨ θ e set of all fuzzy filters of L is denoted by FF(L). μ-Fuzzy Filters In this section, we introduce the concept of μ-fuzzy filters in a distributive lattice with greatest element "1." We study some basic properties of the class of μ-fuzzy filters. We prove that the class of μ-fuzzy filters forms a complete distributive lattice isomorphic to the class of fuzzy filters of A + (L). We also show that there is a one-to-one correspondence between the set of all prime μ-fuzzy filters of L and prime fuzzy ideals of A + (L). Finally, we observe that every μ-fuzzy filter is the intersection of all prime μ-fuzzy filters containing it. roughout the rest of this paper, L stands for the distributive lattice with greatest element "1" unless otherwise mentioned. Theorem 1. Let θ be a fuzzy filter of L. en, the fuzzy subset On the other hand, Hence, μ(θ) is a fuzzy ideal of A + (L). Theorem 2. e set FI(A + (L)) of all fuzzy ideals of A + (L) forms a complete distributive lattice, where the infimum and supremum of any family θ j : j ∈ J of fuzzy ideals are given by Corollary 1. For any two fuzzy filters θ and η of Proof. For any Lemma 6. 
For any fuzzy ideal θ of us, μμ Lemma 7. For any fuzzy filter θ of L, the map θ ⟶ μ ← μ(θ) is a closure operator on FF(L). at is, , for any two fuzzy filters θ, η of L Now, we define μ-fuzzy filter. us, μ-fuzzy filters are simply the closed elements with respect to the closure operator of Lemma 7, and μ ← μ(θ) is the smallest μ-fuzzy filter containing θ, for any fuzzy filter θ of L. Theorem 4. For a nonempty fuzzy subset θ of L, θ is a μfuzzy filter if and only if each level subset of θ is a μ-filter of L. Proof. Let θ be a μ-fuzzy filter of L. en, en, (x) ++ ∈ μ(θ t ), and there is y ∈ θ t such that (x) ++ � (y) ++ . us, μ Conversely, assume that each level subset of θ is a μ-filter. en, θ is a fuzzy filter and θ ⊆ μ Corollary 2. For a nonempty subset F of L, F is a μ-filter if and only if χ F is a μ-fuzzy filter of L. Theorem 5. Let θ be a fuzzy filter of L. en, θ is a μ-fuzzy filter if and only if, for each x, y ∈ L, (x) + � (y) + implies θ(x) � θ(y). Theorem 6. A fuzzy filter θ of L is a μ-fuzzy filter if and only if Proof. Suppose a fuzzy filter θ of L is a μ-fuzzy filter. en, by eorem 4, every level subset is a μ-filter of L. Let x ∈ L such that θ(x) � t. Since θ t is a μ-filter of L, then (x) ++ ⊆ θ t , which implies a ∈ θ t for all a ∈ (x) ++ . us, θ(a) ≥ θ(x) for each a ∈ (x) ++ . So, Suppose conversely that the condition holds. To prove θ is a μ-fuzzy filter, it suffices to show that μ Let us denote the set of all μ-fuzzy filters of L by FF μ (L). Proof. Clearly, (FF μ (L), ⊆ ) is a partially ordered set. For η, θ ∈ FF μ (L), define en, clearly η ∧ θ, η ∨ θ ∈ FF μ (L). We need to show η ∨ θ is the least upper bound of η, θ . Since θ, η ⊆ η ∨ θ ⊆ η ∨ θ, η ∨ θ is an upper bound of μ, θ . Let λ be any upper bound for η, θ in FF μ (L). en, η ∨ θ ⊆ λ, which implies that μ We now prove the distributivity. Let η, θ, λ ∈ FF μ (L). en, us, FF μ (L) is a distributive lattice. Now, we proceed to show the completeness. Since 1 { } and L are μ-filters, χ 1 { } and χ L are least and greatest elements of FF μ (L), respectively. Let θ i : i ∈ I ⊆ FF μ (L). en, ∩ i∈I θ i is a fuzzy filter of L and ∩ i∈I θ Theorem 8. e set FF μ (L) is isomorphic to the lattice of fuzzy ideals of A + (L). Proof. Define Hence, f is one to one. Let λ ∈ FI(A + (L)). en, by Lemma 3, μ ← (λ) is a fuzzy filter of L. Now, we proceed to show that μ . us, by Lemma 6, we get that μμ erefore, f is an isomorphism of FF μ (L) onto the lattice of fuzzy filters of A + (L). Theorem 9. e following are equivalent for each nonconstant μ-fuzzy filter λ of L. We have proved in eorem 8 that there is an order isomorphism between the class of μ-fuzzy filters and the set of fuzzy ideals of A + (L). Now, we show that there is an isomorphism between the prime μ-fuzzy filters and the prime fuzzy ideals of the lattice of coannihilators. Theorem 10. ere is an isomorphism between the prime μ-fuzzy filters and the prime fuzzy ideals of the lattice of coannihilator. Proof. By eorem 8, the map f is an isomorphism from FF μ (L) into FI(A + (L)). Let σ be a prime μ-fuzzy filter of L. Proof. Put P � σ ∈ FF μ (L): σ ⊆ η and η ∩ σ ≤ α . Since θ ∈ P, P is nonempty, and it forms a poset together with the inclusion ordering of fuzzy sets. Let A � θ i i∈I be any chain in P. en, clearly ∪ i∈I θ i is a μ-fuzzy filter. Since By applying Zorn's lemma, we get a maximal element, say σ ∈ P; that is, σ is a μ-fuzzy filter of L such that θ ⊆ σ and σ ∩ η ≤ α. Now, we proceed to show σ is a prime fuzzy filter. Assume that σ is not prime fuzzy filter. 
Let c 1 ∩ c 2 ⊆ σ such that c 1 ⊈σ and c 2 ⊈σ, c 1 , c 2 ∈ FF(L). If we put , then both σ 1 and σ 2 are μ-fuzzy filters of L properly containing σ. Since σ is maximal in P, we get σ 1 , σ 2 ∉ P. us, σ 1 ∩ η≰α and is is a contradiction. us, σ is prime μ-fuzzy filter of L. Proof. Put P � σ ∈ FF μ (L): σ ⊆ η and η ∩ σ ≤ α . Since θ ∈ P, P is nonempty, and it forms a poset together with the inclusion ordering of fuzzy sets. Let A � θ i i∈I be any chain in P. Clearly, ∪ i∈I θ i is a μ-fuzzy filter. Since θ i (a) ≤ α for each i ∈ I, α is an upper bound of θ i (a): i ∈ I . us, ∪ i∈I θ i (a) ≤ α. So, ∪ i∈I θ i is a μ-fuzzy filter containing θ and ∪ i∈I θ i (a) ≤ α. Hence, ∪ i∈I θ i ∈ P. By applying Zorn's lemma, we get a maximal element, say σ ∈ P; that is, σ is a μfuzzy filter of L such that θ ⊆ σ and σ(a) ≤ α. Now, we proceed to show σ is a prime fuzzy filter. Assume that σ is not prime fuzzy filter. Let c 1 ∩ c 2 ⊆ σ and c 1 ⊈σ and c 2 ⊈σ, c 1 , c 2 ∈ FF(L). If we put and σ 2 � μ ← μ(c 2 ∨ σ), then both σ 1 and σ 2 are μ-fuzzy filters of L properly containing σ. Since σ is maximal in P, we get σ 1 , σ 2 ∉ P. The Space of Prime μ-Fuzzy Filters In this section, we study the space of prime μ-fuzzy filters of a distributive lattice and some properties of the space also. is shows that x, y ∉ λ * . Since λ is prime fuzzy filter, card Im λ � 2 and λ * is prime. Lemma 11. Let θ i : i ∈ I be any family of fuzzy filters of L. en, Conversely, let λ ∈ ∪ i∈I V(θ i ). en, λ ∈ V(θ i ) for each i ∈ I. is implies θ i ⊆ μ. us, for any x ∈ L, μ(x) is an upper bound of θ i (x): i ∈ I . is implies that is shows that ∪ i∈I θ i ⊆ λ and 13. e collection T � X(θ): θ { is a fuzzy filter of L} is a topology on X μ . Next, let X(θ 1 ), X(θ 2 ) ∈ T. en, by Lemma 8, we get that is shows that T is closed under finite intersection. Now, we proceed to show that T is closed under arbitrary union. Let θ i : i ∈ I be any family of fuzzy filters of L. en, by Lemma 11 we have which implies ∪ i∈I X(θ i ) � X([ ∪ i∈I θ i )). us, by Lemma 9, we get that So, T is closed under arbitrary union. erefore, T is a topology on X μ . e space (X μ , T) will be called the space of prime μ-fuzzy filters in L. In the above theorem, we proved that the family of X(θ) is a topology on X μ . In the following result, we show that the set of all open sets of the form X(x β ) is a basis for the topology on X μ . For any fuzzy subset η of L, X(θ) � η ∈ X μ : θ⊈η is an open set of X μ and V(θ) � η ∈ X μ : θ ⊆ η � X μ − V(θ) is a closed set of X μ . In the following result, we prove the closure of a fuzzy set. Theorem 18. For any family F ⊆ X μ , closure of F is given by F � V( ∩ λ∈F λ). Conclusion In this work, we studied the concept of μ-fuzzy filters of a distributive lattice. We proved that the set of all μ-fuzzy filters of a distributive lattice forms a complete distributive lattice isomorphic to the set of all fuzzy ideals of A + (L). We observed that every μ-fuzzy filter is the intersection of all μ-fuzzy filters containing it. We also studied the space of all prime μ-fuzzy filters in a distributive lattice. Our future work will focus on α-fuzzy ideals of a C-algebra. Data Availability No data were used to support this study. 8 Advances in Fuzzy Systems Conflicts of Interest e author declares that there are no conflicts of interest regarding the publication of this paper.
4,265.8
2020-12-29T00:00:00.000
[ "Mathematics" ]
Remaining Useful Life Prediction for Two-Phase Nonlinear Degrading Systems with Three-Source Variability Recently, the estimation of remaining useful life (RUL) for two-phase nonlinear degrading devices has shown rising momentum for ensuring their safe and reliable operation. The degradation processes of such systems are influenced by the temporal variability, unit-to-unit variability, and measurement variability jointly. However, current studies only consider these three sources of variability partially. To this end, this paper presents a two-phase nonlinear degradation model with three-source variability based on the nonlinear Wiener process. Then, the approximate analytical solution of the RUL with three-source variability is derived under the concept of the first passage time (FPT). For better implementation, the offline model parameter estimation is conducted by the maximum likelihood estimation (MLE), and the Bayesian rule in conjunction with the Kalman filtering (KF) algorithm are utilized for the online model updating. Finally, the effectiveness of the proposed approach is validated through a numerical example and a practical case study of the capacitor degradation data. The results show that it is necessary to incorporate three-source variability simultaneously into the RUL prediction of the two-phase nonlinear degrading systems. Introduction With the rapid development of technology, the equipment in the fields of aerospace, high-speed rail, and ship manufacturing tends to be large-scale, complex, sophisticated, and long-life, which puts forward higher requirements for system reliability [1,2]. Recently, owing to its important role in realizing predictive maintenance, improving system reliability, reducing maintenance costs, and avoiding catastrophic eventualities, prognostics and health management (PHM) has obtained extensive attention both in academia and industry [3][4][5]. As an essential part of PHM, the remaining useful life (RUL) is defined as the length from the current time to failure [6]. RUL prediction results provide sufficient information support for decision-making and maintenance scheduling. The performance of RUL prediction primarily relies on the degradation modeling approaches. Generally, the RUL prediction approaches can be systematically classified into model-based methods, data-driven methods, and hybrid methods [7]. More detailed discussions about these three types of methods can be found in [8][9][10]. As one category of data-driven methods, the stochastic model-based methods have been widely used owing to their great potential in characterizing the stochastic dynamic degradation process and providing the probability distribution of RUL [11,12]. Stochastic process models mainly include the Wiener process, the Gamma process, and the Inverse Gaussian process [13]. Among them, the Wiener process model has attracted increasing interest due to its explicit statistical interpretation and illustrious mathematical properties in describing the non-monotonic degradation process [14][15][16]. In practice, the degradation process of many engineering systems occurs in a stochastic way. Thus, the RUL is a random variable, which is difficult to estimate with certainty [6].
Therefore, in the Wiener process-based RUL prediction, the commonly used approach is to estimate the probability density function (PDF) of the system RUL by modeling the sensed degradation data.Due to some unobservable internal and external factors, the degradation process of a system is often influenced by multiple sources of variability, which leads to uncertainty in RUL prediction.Typically, the degradation process is affected by three main sources of variability: temporal variability, unit-to-unit variability, and measurement variability [17].To be specific, temporal variability depicts the inherent stochasticity of the degradation process over time, unit-to-unit variability indicates the heterogeneity of degradation trajectories among different units, and measurement variability describes the inevitable measurement error caused by noise in condition monitoring (CM) data [18].Hence, to improve the accuracy of RUL prediction, the multi-source variability should be incorporated into the degradation modeling process simultaneously [19].Over the past decade, various published works have described such variability in degradation modeling [20][21][22][23][24].However, most studies only focused on the RUL estimation with one or two sources of variability, which limits their adaptivity. Recently, for RUL prediction with three-source variability, Si et al. [19] presented a linear Wiener process-based degradation model considering the three-source variability simultaneously and derived the analytical expressions of RUL distribution.In this work, the random drift coefficient and the underlying degradation state were updated jointly via a state-space model and the Kalman filtering (KF) technique.To deal with nonlinear patterns, Zheng et al. [25] extended the work in [19] and constructed a degradation model based on the nonlinear Wiener process, integrating both nonlinearity and three-source variability into the RUL prediction.For newly developed small sample systems, to address the lack of historical data and prior information, Wang et al. [26] proposed an adaptive RUL estimation method with three-source variability based on the expectation maximization (EM) algorithm.Aiming at the unbalanced historical degradation measurements, Yu et al. [27] established a nonlinear-drift-driven prognostic model considering three-source variability and formulated the approximate analytical expressions of the RUL for both online and offline estimation scenarios.On this basis, the MLE method in conjunction with a downsampling strategy was utilized for parameter estimation and the underflow issue averting in this study.It is noteworthy that the aforementioned linear or nonlinear Wiener processbased prognostic methods with three-source variability only focused on the single-phase degradation cases.However, in practical engineering, owing to changes in the external dynamic environment and internal degradation mechanisms, the degradation trajectories of products such as batteries [28], liquid coupling devices [29], and light emitting diodes [30] exhibit obvious two-phase characteristics with evident inflection points.Under these circumstances, the traditional single-phase model is insufficient to track the dynamics of such degradation processes.Therefore, formulating two-phase degradation models to guarantee the accuracy of RUL prediction is practical and necessary. 
In recent years, researchers have reported various RUL prediction approaches considering multiple sources of variability for two-phase degradation processes.Zhang et al. [31] proposed a two-phase linear degradation model based on the Wiener process and derived the analytical forms of the RUL estimation.Chen et al. [32] extended the work in [31] and presented an extreme learning machine (ELM) algorithm to adaptively detect the random changing time of the two-phase degradation trajectory.For two-phase nonlinear degradation patterns, Lin et al. [33] formulated a nonlinear Wiener process-based degradation model and obtained an approximate analytical solution of the lifetime estimation.Hu et al. [34] constructed two RUL prediction models based on the nonlinear time-scale transformation model and the stochastic process model with purely time-dependent parameters.The commonality of the aforementioned works is that the unit-to-unit variability is considered.However, a common deficiency shared by these models is that the measurement variability is omitted.Taking into account the variability of measurement, Wang et al. [35] proposed a change-point Wiener process model with measurement errors to fit the twophase deteriorated OLEDs and adopt the hierarchical Bayesian method to implement parameters estimation, but the number of change-points was assumed to be known a priori.Guan et al. [36] established a two-phase degradation model with measurement errors and random drift terms for RUL estimation, however, the changing time of the model was fixed, and the nonlinearity was not considered in this work.To achieve RUL prediction in the case of imperfect prior degradation information, Chai et al. [37] proposed a linearnonlinear two-phase Wiener process-based degradation model considering measurement errors and the unknown degradation state at the change point simultaneously, which is a pioneering work.Nevertheless, the authors only described the unit-to-unit variability in parameter estimation, while ignoring the random effects of drift coefficients in the PDF expressions of RUL.In addition, the first phase of the proposed model is linear, which limits the method's adaptability. The above-mentioned research achieved promising results in the two-phase linear or nonlinear degradation modeling with multiple sources of variability.However, most of the existing works about two-phase patterns only considered one or two sources of variability.In contrast, the research on RUL prediction using two-phase degradation models taking into account three-source variability simultaneously is very limited.Furthermore, degrading systems exhibiting two-phase nonlinear degradation patterns are extensively encountered in practice [38,39].Therefore, determining how to achieve two-phase nonlinear degradation modeling and RUL estimation considering three-source variability simultaneously is a compelling practical issue, which motivates our research in this paper. 
To achieve this goal, several issues still need to be further investigated. First, the degradation state at the changing point of the two-phase degradation path is an unknown random variable until the changing point appears, and it is related to the changing time as well as the degradation rate of the first phase [31]. Thus, it is natural to incorporate the uncertainty of the degradation state at the changing point into RUL prediction. Currently, the state transition probability function is often applied to solve this problem in most studies [33,34]. However, these works did not consider the random degradation state at the changing point and the three-source variability simultaneously in the PDF of RUL. Second, it is noted that the changing points and the degradation rates differ across devices of the same batch owing to the unit-to-unit variability, and the degradation states remain hidden due to the measurement noise. Therefore, determining how to realize parameter identification and real-time degradation state updating in the context of three-source variability poses an interesting challenge. Motivated by these practical issues, the major contributions of this paper lie in the following aspects: (1) A two-phase nonlinear Wiener process-based degradation model is formulated, where the drift coefficient and the measurement error of each phase are assumed to be random variables to describe the three-source variability. (2) Taking into account the nonlinearity, the uncertainty of the degradation state at the changing point, as well as the three-source variability simultaneously, the approximate analytical expressions of RUL estimation are derived under the concept of the first passage time (FPT). (3) Based on the historical degradation observations of multiple units from the same batch, the offline parameter estimation is conducted by the MLE method. Subsequently, with the newly obtained degradation data of a certain operating device, the random drift coefficients and the underlying degradation states are updated in real time by combining the Bayesian rule, state-space model, and KF technique. (4) Finally, a numerical simulation and a practical case study about the degradation data of high-voltage pulse capacitors are implemented to verify the effectiveness and applicability of the proposed approach. The remainder of this paper is organized as follows. In Section 2, the two-phase nonlinear Wiener process-based degradation model with three-source variability is formulated. Section 3 presents the approximate analytical solutions of the RUL estimation considering three-source variability. The model parameter estimation is conducted in Section 4. Section 5 shows the implementation details and results analysis of the experiments. The conclusions are summarized in Section 6. Degradation Modeling Description To describe the degradation process with two-phase nonlinearity and three-source variability, the nonlinear Wiener process is employed in our work. Inspired by the single-phase nonlinear degradation model considering three-source variability discussed in Ref. [25] and the two-phase nonlinear degradation model with unit-to-unit variability presented by Refs. [33,34], a general nonlinear Wiener process-based degradation model can be described as, where X(t) denotes the actual degradation state at time t, x 0 is the initial value. For simplifying later calculation, we assume x 0 = 0.
τ represents the changing time; thus, x τ is the degradation state at the changing time. λ 1 , λ 2 and σ 1 , σ 2 represent the drift and diffusion coefficients of each phase, respectively. Meanwhile, µ 1 (ρ; ϑ 1 ) and µ 2 (ρ − τ; ϑ 2 ) are nonlinear functions of time with unknown parameters ϑ 1 and ϑ 2 , which are used to describe the nonlinear characteristics of X(t). B(t) denotes the standard Brownian motion. For simplicity, we assume that the abovementioned parameters are independent of phase; meanwhile, the two phases are independent of each other [33]. Owing to the dynamics of {B(t), t ≥ 0}, the temporal variability of Equation (1) could be described. Such a modeling approach has been widely applied to characterize the stochastic degradation process of dynamic systems [27]. Further, we assume that λ 1 ∼ N(λ 1p , σ 2 1p ) and λ 2 ∼ N(λ 2p , σ 2 2p ) to describe the unit-to-unit variability, and they are statistically independent of {B(t), t ≥ 0}. Moreover, ϑ 1 , ϑ 2 , σ 1 , σ 2 are fixed parameters used to characterize the common degradation features of all systems from the same batch. In addition, due to the influence of the dynamic environment and the non-ideal instruments, the obtained observations inevitably contain noise. Therefore, to characterize the measurement variability between observed values and the true degradation states, the degradation measurement process {Y(t), t ≥ 0} is formulated as follows [25,37], where ε 1 , ε 2 are the measurement errors of each phase, and assumed to be independent and identically distributed (i.i.d.) with ε 1 ∼ N(0, σ 2 1ε ) and ε 2 ∼ N(0, σ 2 2ε ), respectively. It is further assumed that the measurement errors are independent of X(t). Generally, the lifetime is usually defined as the FPT when the degradation process exceeds the preset failure threshold w [40]. Based on the concept of FPT, the lifetime T of a degrading system can be defined as [19]. Then, the expression of RUL at the current time t k can be defined as [25], where l k represents the time from t k to the failure time, and L k is the RUL with conditional PDF f L k |Y 1:k (l k |Y 1:k ). Initially, we only consider the temporal variability in lifetime estimation. In this case, the degradation process {X(t), t ≥ 0} described by Equation (1) could be directly observed. If the changing time and all parameters in Equation (1) are known fixed values, the PDF of the lifetime T, which only considers the temporal variability, could be summarized as follows [33,34]: (1) if 0 < t ≤ τ (Equation (5)); (2) if t > τ (Equation (6)). It is noticeable that the randomness of all parameters is neglected in Equations (5) and (6). For the two-phase degradation process with a random changing time, the degradation state at the changing point is usually unknown and could be obtained through a transition probability density from x 0 to x τ under an absorbable boundary w [31]. Thus, taking into account the unit-to-unit variability and the randomness of the degradation state at the changing point jointly, the PDF of the lifetime T could be obtained via the law of total probability as follows, where h τ (x τ | λ 1p , σ 1p ) is the transition probability density function considering the randomness of the first phase model, i.e., λ 1 ∼ N(λ 1p , σ 2 1p ). Then, according to the relationship between the lifetime and RUL [19,25], the PDF of RUL that considers both temporal variability and unit-to-unit variability could be derived as follows, Theorem 1. For the two-phase nonlinear degradation process in Equation (1) and the definition of RUL proposed in Equation (4), given the actual degradation state x k at the current time t k and λ 1 ∼ N(λ 1p , σ 2 1p ), λ 2 ∼ N(λ 2p , σ 2 2p ), the PDF of RUL
with a certain changing time τ considering the temporal variability, unit-to-unit variability and the random degradation state at the changing point jointly can be formulated as, Case 1: The current time t k is smaller than the changing time τ (i.e., t k < τ) where S = S 1 − S 2 , T = T 1 − T 2 , and Case 2: The current time t k is larger than the changing time τ (i.e., t k ≥ τ). Proof.See Appendix A. It is worth noting that the aforementioned results assume that the degradation state x k could be directly and accurately observed.However, measurement errors are inevitable in practice.Therefore, to consider the impact of measurement errors, the measurement variability should be incorporated into the RUL estimation results of Theorem 1. RUL Estimation Considering Three-Source Variability Owing to the influence of measurement errors, the true degradation state x k at the current time t k is unknown.Therefore, we assume that x k ∼ N( xk|k , P k|k ) and x k could be updated based on the degradation observations Y 1:k = {y 1 , y 2 , • • • , y k } via the updating procedure proposed in Section 4.3.Then, the PDF of RUL with three-source variability could be derived based on x k ∼ N( xk|k , P k|k ) and Theorem 1. Theorem 2. For the two-phase nonlinear degradation process in Equation ( 1) and the definition of RUL proposed in Equation ( 4), given the degradation observations Y 1:k up to the current time t k , the PDF of RUL with a certain changing time τ considering the random degradation state at the changing point and the three-source variability simultaneously can be formulated as follows, Case 1: where where and where In addition, it is worth mentioning that the changing time of the above results is a fixed value.In the case of three-source variability, considering the randomness of the changing point τ, the PDF of RUL could be derived as follows [31], where p(τ) represents the PDF of the changing time τ.Equation ( 13) could be solved by some numerical methods. Model Parameter and Degradation State Estimation Before conducting RUL prediction, the unknown model parameters should be identified first.Considering the common characteristics of devices from the same batch and the individual features of the specific operating device, the offline and online methods are adopted jointly to identify the model parameters.Firstly, based on the historical observations of devices within the same batch, the MLE method is used for offline parameter estimation.Then, the changing point detection method is constructed on this basis.Finally, according to the real-time monitoring data of one specific operating device, the online updating procedures of the model parameters are realized via the Bayesian rule and KF algorithm. Offline Parameter Estimation In this subsection, we mainly focus on the estimation of the unknown model parameters, which are denoted as ) are random variables that describe the unit-to-unit variability, and ϑ 1 , σ 1 , σ 1ε , ϑ 2 , σ 2 , σ 2ε are deterministic parameters that depict the common degradation features of devices from the same batch.It is noted that τ will be estimated via the changing point detection method in Section 4.2. 
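To make the degradation and measurement model of Equations (1) and (2) concrete before turning to estimation, the following Python sketch simulates a single noisy two-phase nonlinear Wiener path. It is a minimal illustration, not the paper's code: the power-law drift integrands, the specific parameter values, and all function and variable names are assumptions introduced here (only σ 2 = 0.08, σ 2ε = 0.4, µ τ = 50, σ τ = 2 echo the numerical example reported later).

```python
import numpy as np

def simulate_two_phase_path(tau, lam1, lam2, sig1, sig2, b1, b2,
                            sig1_eps, sig2_eps, t_end, dt, rng):
    """Simulate one noisy two-phase nonlinear Wiener degradation path.

    Assumed (illustrative) nonlinear drift integrands:
      mu1(t; b1) = b1 * t**(b1 - 1)  and  mu2(t - tau; b2) = b2 * (t - tau)**(b2 - 1),
    so the integrated drift of each phase follows a power law.
    """
    t = np.arange(0.0, t_end + dt, dt)
    x = np.zeros_like(t)                          # latent degradation X(t), with x0 = 0
    for k in range(1, len(t)):
        if t[k] <= tau:                           # phase 1 increment
            drift = lam1 * b1 * t[k - 1] ** (b1 - 1) * dt
            diff = sig1 * np.sqrt(dt) * rng.standard_normal()
        else:                                     # phase 2 increment
            s = max(t[k - 1] - tau, 1e-9)
            drift = lam2 * b2 * s ** (b2 - 1) * dt
            diff = sig2 * np.sqrt(dt) * rng.standard_normal()
        x[k] = x[k - 1] + drift + diff
    # Observation model Y(t) = X(t) + measurement error (phase-dependent variance)
    eps = np.where(t <= tau, sig1_eps, sig2_eps) * rng.standard_normal(len(t))
    return t, x, x + eps

rng = np.random.default_rng(0)
tau = rng.normal(50.0, 2.0)                       # random changing time
lam1 = rng.normal(1.1, 0.1)                       # random drift, phase 1 (unit-to-unit)
lam2 = rng.normal(1.5, 0.1)                       # random drift, phase 2
t, x_true, y_obs = simulate_two_phase_path(tau, lam1, lam2, 0.4, 0.08, 1.2, 1.1,
                                           0.4, 0.4, 80.0, 0.5, rng)
```

Repeating the last block N times with fresh draws of τ, λ 1 , λ 2 yields the kind of batch of historical paths assumed by the offline estimation described next.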
It is assumed that the historical observations of N devices within the same batch are known, i.e., Y 1:N , and the corresponding actual degradation state data is denoted as X 1:N , where m n denotes the available number of observations of the n-th device. We define the measured increment of the n-th device as ∆y n,j = y n,j − y n,j−1 . For simplicity, the time interval is assumed to be a constant, i.e., ∆t = t n,j − t n,j−1 . Suppose τ n denotes the changing time of the n-th device; it is assumed that τ̃ n = τ n /∆t ∈ {0, 1, . . . , m n } denotes the changing point location, for simplifying the later calculation. That is, the changing time τ n only appears at the measurement times {t n,0 , t n,1 , • • • , t n,m n }. As a result, ∆Y 1n = {∆y n,1 , ∆y n,2 , • • • , ∆y n,τ̃ n } represents the measured increments in the first phase, and ∆Y 2n = {∆y n,τ̃ n +1 , ∆y n,τ̃ n +2 , • • • , ∆y n,m n } represents the measured increments in the second phase. According to the definition in Equation (1) and the properties of the Wiener process, the measured increments ∆Y n = {∆y n,1 , ∆y n,2 , • • • , ∆y n,m n } follow the multivariate normal distribution with the corresponding expectation and covariance matrix. Thus, the log-likelihood function of Θ could be expressed accordingly. To facilitate calculation, after introducing suitable shorthand notation, the log-likelihood function in Equation (16) could be rewritten in a compact form. Under the framework of the MLE algorithm, the first partial derivatives of the log-likelihood ln L(Θ|Y 1:N ) with respect to λ 1p and σ 2 1p are taken, respectively. Then, the MLE of λ 1p and σ 2 1p could be obtained by setting these two derivatives to zero. Changing Point Detection For the two-phase nonlinear degradation process, changing point detection is crucial for parameter estimation and RUL prediction. It is assumed that the changing time is a random variable and follows the normal distribution, i.e., τ ∼ N(µ τ , σ 2 τ ). Then, for a specific operating device, the log-likelihood function of τ can be formulated as in Equation (22). Similar to the offline parameter estimation process, the MLE of the deterministic parameters and the expressions of λ̂ 1p , σ̂ 2 1p , λ̂ 2p , σ̂ 2 2p could be derived. Then, by substituting these MLE values and expressions into Equation (22), a log-likelihood function that is only related to the changing point location τ̃ n can be formulated. Thus, the optimal changing time τ̂ n of the n-th device could be expressed accordingly. On this basis, maximizing ln L(τ̃ n |X n ) by enumerating all possible values of τ̃ n , the optimal changing time τ̂ n could be obtained.
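The enumeration idea behind this changing point detection can be sketched in a few lines of Python. The sketch below is a simplified stand-in, not the paper's exact procedure: it assumes time-homogeneous Gaussian increments within each candidate segment (i.e., it ignores the nonlinear time functions and random effects that the full profile likelihood of Section 4.1 retains), and the function and argument names are assumptions.

```python
import numpy as np

def gaussian_loglik(seg):
    """Log-likelihood of a segment of increments under a fitted Gaussian."""
    mu, var = seg.mean(), seg.var() + 1e-12
    return -0.5 * len(seg) * np.log(2.0 * np.pi * var) - 0.5 * ((seg - mu) ** 2).sum() / var

def detect_change_point(y, dt=1.0, min_seg=3):
    """Return the changing time that maximizes a two-segment profile log-likelihood
    over all admissible split locations of the measured increments."""
    dy = np.diff(y)
    best_k, best_ll = None, -np.inf
    for k in range(min_seg, len(dy) - min_seg):      # enumerate candidate split indices
        ll = gaussian_loglik(dy[:k]) + gaussian_loglik(dy[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k * dt, best_ll                       # changing time on the sampling grid

# Example (using the simulated path above): tau_hat, _ = detect_change_point(y_obs, dt=0.5)
```

Applied to a path like the one simulated earlier, the returned estimate would typically fall close to the true changing time, mirroring the concentration around t = 50 reported for the numerical example.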
In addition, for a specific operating device, if t k > τ̂ n at the current time t k , the changing time has appeared and is a certain value. However, if t k < τ̂ n , the changing point has not appeared and remains an unknown random variable. Consequently, we need to update its distribution. In this case, the above-estimated value τ̂ n can be treated as an observation of the changing time. Then, the mean and variance of τ could be formulated as in [33]. Online Implicit State Updating For the RUL estimation with three-source variability, the random drift coefficients λ 1 , λ 2 and the underlying degradation state x k need to be updated jointly based on the observations. In practice, the posterior estimates of the drift coefficients have little effect on x k [24]. Without loss of generality, it is assumed that they are independent of each other. On this basis, the update mechanism of implicit states could be conducted through two steps: firstly, the random drift coefficients are updated by the Bayesian method, and then the KF algorithm is adopted to update x k . As to a specific operating device, if the current time is t k , the degradation observation is denoted as Y 1:k = {y 1 , y 2 , • • • , y k }, and the corresponding actual degradation state is denoted as X 1:k = {x 1 , x 2 , • • • , x k }. It is defined that the observation increment is ∆y j = y j − y j−1 , and the time interval is ∆t = t j − t j−1 . To update the drift coefficients, let λ 1p,0 , σ 1p,0 and λ 2p,0 , σ 2p,0 attained in Section 4.1 represent the prior values of λ 1 and λ 2 , respectively. Then, according to the Bayesian rule [24], the posterior hyper-parameters of λ 1 and λ 2 could be obtained. To update the underlying degradation state, i.e., x k ∼ N(x̂ k|k , P k|k ), the state and measurement formulas in Equations (1) and (2) should be converted into discrete-time equations. In this case, a general state-space model is constructed, where s = 1, 2 denotes the phase. In addition, it is assumed that {v k } k≥1 and {ε sk } k≥1 are independent of each other. Iterating the above KF process step by step, the posterior distribution of x k could be solved analytically. Based on the implicit state updating results, the RUL estimation could be conducted. Case Study To verify the feasibility of the proposed method, a numerical simulation and a practical example of the high-voltage pulse capacitor are provided. For performance evaluation, the prediction results of the proposed method are compared with the two-phase nonlinear degradation model considering unit-to-unit variability (Lin's method) [33], the two-phase linear-nonlinear degradation model with measurement errors (Chai's method) [37], and the traditional single-phase nonlinear degradation model with three-source variability (Zheng's method) [25]. The implementation details of the experiment are as follows. Numerical Example In this subsection, we focus on validating the effectiveness of our method for parameter identification and RUL estimation. For illustration purposes, a power model is applied to define the nonlinear integral term in Equation (1). This kind of power function has been widely used in the existing literature [22]. Similar to the work of Feng et al. [20], we adopt the state-space model to generate the simulation data; the parameters include σ 2 = 0.08 and σ 2ε = 0.4, together with the distribution parameters of the random changing time, i.e., µ τ = 50, σ τ = 2.
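The two-step online update of Section 4.3 can be illustrated as follows: a conjugate Gaussian (Bayesian) update of the random drift coefficient from the observed increments, followed by a scalar Kalman filter step on the discretized state-space model Y k = X k + ε k. This is a minimal sketch under simplifying assumptions (a locally linear drift over one step and known noise variances); the paper's formulation keeps the nonlinear time functions, and all names below are assumptions.

```python
import numpy as np

def bayes_update_drift(lam0, var0, dy, dt, sigma2):
    """Conjugate Gaussian update of a random drift coefficient lambda from
    increments dy ~ N(lambda*dt, sigma2*dt) (simplified, step-wise linear drift)."""
    n = len(dy)
    post_prec = 1.0 / var0 + n * dt / sigma2       # prior precision + data precision
    post_mean = (lam0 / var0 + dy.sum() / sigma2) / post_prec
    return post_mean, 1.0 / post_prec

def kalman_step(x_prev, P_prev, y_k, lam, dt, sigma2, sigma2_eps):
    """One predict/update step for X_k = X_{k-1} + lam*dt + v_k,  Y_k = X_k + eps_k."""
    x_pred = x_prev + lam * dt                     # state prediction
    P_pred = P_prev + sigma2 * dt                  # predicted variance
    K = P_pred / (P_pred + sigma2_eps)             # Kalman gain
    x_upd = x_pred + K * (y_k - x_pred)            # measurement update
    P_upd = (1.0 - K) * P_pred
    return x_upd, P_upd
```

Run over the incoming observations, the first function shrinks the prior (λ 1p,0 , σ 1p,0 ) or (λ 2p,0 , σ 2p,0 ) toward the device-specific drift, while the second tracks x k ∼ N(x̂ k|k , P k|k ), which is exactly what the RUL expressions of Theorem 2 consume.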
On this basis, we generate 100 sets of degradation trajectories with random changing time, drift parameters, and measurement errors.Figure 1 shows several typical degradation paths of the sample, and it is observable that the degradation paths exhibit obvious two-phase nonlinear deteriorating patterns.Next, the unknown model parameters are identified based on the changing time detection and parameter estimation methods proposed in Section 4. The parameter estimation results with different sample sizes are detailed in Table 1.It can be found from Table 1 that the obtained results could gradually approach the true values as the size of the degradation paths is increased.In addition, the histogram of the detected changing time based on 100 sample paths is presented in Figure 2 and it manifests that the estimated changing points are concentrated around t = 50, which are very close to the true value of µ τ .Thus, the effectiveness of the offline parameter identification method is demonstrated.For the online parameter and the underlying degradation state updating, we chose one specific degradation path from the above-presented 100 degradation trajectories and the true drift parameters are λ 1 = 1.1133, λ 2 = 1.4690, and τ = 51.4 as shown in Figure 3. Then the changing point detection procedure is applied for the online path, and the estimated changing time is 51.3, which is very close to the true value.Meanwhile, the parameter estimation results of n = 10 size in Table 1 are used as the prior information, when newly observed data are coming, the hyper-parameters of the drift coefficients and the underlying degradation state could be updated via the methods proposed in Section 4.3.Figure 4a shows the parameter updating process, it could be observed that the parameter updating curves could gradually approach the true values.Furthermore, it is clear from Figure 4a that as the observations accumulate, the values of σ 1p and σ 2p are gradually decreasing, which means the uncertainty of estimation is reduced.In addition, Figure 4b presents the comparison of the predicted underlying degradation path and the observed degradation path.It is clear from Figure 4b that our degradation state updating method could track the actual observed degradation state well.Based on the results of the parameter identification process, the PDFs of RUL estimation at different observed time points could be obtained, as shown in Figure 5a.It is worth noting that the preset failure threshold w = 570, thus the actual lifetime is T = 80.It can be observed from Figure 5a that the PDFs of RUL estimated by our model are closely distributed around the actual RUL values, verifying that our proposed method could effectively estimate the RUL of the two-phase nonlinear degradation process.For comparative purposes, we further obtained the mean RUL prediction results of our method, Lin's method [33], Chai's method [37], and Zheng's method [25], as shown in Figure 5b.It is observable from Figure 5b that the mean predicted RUL of our method is more accurate compared to the other three methods.Additionally, the mean RUL prediction curve of Lin's method [33] is relatively stable due to the degradation model they constructed considering two-phase nonlinearity.However, since they ignored the measurement variability, the deviations of their mean RUL prediction curve from the actual values are bigger than our method.Furthermore, the mean RUL curve of Chai's method [37] in the first phase has large bias, but it gradually becomes accurate in the 
second phase.Because Chai's method [37] only consider the nonlinearity of the second phase.In addition, the deviations between the mean RUL prediction curve of Zheng's method [25] and the actual RUL are the largest among the four methods in the first phase, especially near the changing point where the deviation is more pronounced.It is mainly because the degradation model they constructed is single-phase, which is insufficient to describe the two-phase nonlinear degradation process effectively.It is interesting to note that after the changing point has appeared, the deviations of Zheng's method [25] gradually decrease as the observations accumulate.The reason for this phenomenon is that the degradation trajectory is equivalent to a single-phase nonlinear degradation process after the changing point appears, thus, Zheng's method [25] that considers three-source variability simultaneously could effectively fit the degradation process.For performance evaluation, two metrics are employed to evaluate the performance of RUL prediction among different methods, including the mean square error (MSE) [22] and the absolute error (AE) [37].The MSE at each observation point could be defined as follows, where l k denotes the actual RUL at t k , and f L k |Y 1:k (l k Y 1:k ) is the corresponding conditional PDF of the RUL estimated by Theorem 2 in Section 3.2. The second metric is the AE of actual RUL and the estimated RUL at each observation point, which could be represented by where lk denotes the estimated RUL at t k . It is noted that in both criteria, the smallest value of MSE and AE corresponds to the best RUL prediction result. The following Figure 6 presents the comparison results of the four methods.It can be seen from Figure 6a that our method provides smaller MSE values than the other three methods.Additionally, the AE values of our method in Figure 6b are also the smallest of the four models.The comparison results of these two criteria indicate that our RUL prediction method could effectively improve the prediction accuracy of RUL.Moreover, it can be observed that the MSE and AE values of Zheng's method [25] are the largest of the four methods near the changing point.Because the single-phase nonlinear degradation model constructed by Zheng's method [25] did not consider the impact of the random degradation state at the changing point on RUL estimation, and cannot estimate the RUL of the two-phase nonlinear degradation process well.In addition, Figure 6c compares the PDFs of the RUL distributions derived from our model with the other three methods at time 40.It is clear from Figure 6c that our method can cover the actual RUL well, which reflects the superiority of our method.Moreover, it can also be observed from Figure 6c that the PDF curve of Zheng's method [25] cannot cover the true value of RUL well, revealing that the bias of RUL predicted by Zheng's method [25] is very large in the first phase, which is consistent with the previous results of MSE and AE.It is noteworthy that the units are omitted in Figures 1-6 since the degradation data are generated via simulation.To quantitatively compare the RUL prediction performance of our method with the other methods, we further employ three evaluation metrics.The first metric is the total MSE (TMSE) [19], which is defined as the sum of the MSE at each observation point over the whole life cycle.Videlicet, if there are m observations, the TMSE could be formulated as, The second metric is the mean absolute error (MAE) [41], which measures the 
average absolute deviation between the predicted value and the actual value.The MAE could be represented as, where AE k is defined in Equation (30).The third metric is the cumulative relative accuracy (CRA) [27]., which evaluates the relative prediction accuracy of the RUL over time, and could be defined as, For these three quantitative metrics, the smallest value of TMSE and MAE corresponds to the best RUL prediction result, in contrast, a higher CRA value that approaches to 1 indicates higher RUL prediction accuracy.Then, the subsequent Table 2 shows the comparison results of the estimated RUL based on the three metrics mentioned above.It can be found from Table 2 that compared with the other three methods, our method has smaller TMSE and MAE values, whereas a bigger CRA value, which indicates that our method has higher accuracy in RUL prediction. Practical Application In this subsection, the data of high-voltage pulse capacitors are utilized to illustrate our approach [39].High-voltage-pulse capacitors can be utilized for electrical energy storing and releasing, which are often encountered in pulse lasers, radar, and particle accelerators [34].It is well known that capacitors and batteries can form hybrid energy storage systems, which are commonly used as energy sources for hybrid electric vehicles [42][43][44].However, the unit-to-unit variability mentioned in this work is defined as the heterogeneity between different units in the same batch of identical devices [33,34].Therefore, if degradation modeling with three-source variability is based on the degradation data of different types of devices, such as capacitors and batteries, the basic premise for the unit-to-unit variability is not satisfied.In addition, if the research object is a system composed of batteries and capacitors, it is another topic of RUL prediction, namely, the RUL estimation for multi-component systems.Currently, our work mainly focuses on the RUL prediction with three-source variability of different devices from the same batch, thus, only the capacitor degradation data is considered in this research. 
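Because the same evaluation metrics reappear in the capacitor study below, a compact sketch of how they can be computed may be useful here. It is an illustrative implementation under stated assumptions: the predicted RUL density is taken on a discrete grid, CRA is computed in the common form of an averaged relative accuracy, and all function and variable names are assumptions, not the paper's notation.

```python
import numpy as np

def mse_at_point(l_grid, pdf_k, l_true):
    """MSE at one observation time: E[(l - l_true)^2] under the predicted RUL PDF,
    where pdf_k holds (possibly unnormalized) density values on the grid l_grid."""
    w = pdf_k / np.trapz(pdf_k, l_grid)            # normalize the density
    return np.trapz((l_grid - l_true) ** 2 * w, l_grid)

def summary_metrics(l_true, l_hat, mse_values):
    """Aggregate metrics over all monitoring times: TMSE, MAE, and CRA."""
    l_true = np.asarray(l_true, dtype=float)
    l_hat = np.asarray(l_hat, dtype=float)
    ae = np.abs(l_true - l_hat)                    # AE at each observation point
    tmse = float(np.sum(mse_values))               # total MSE over the life cycle
    mae = float(ae.mean())                         # mean absolute error
    cra = float(np.mean(1.0 - ae / l_true))        # cumulative relative accuracy (assumed form)
    return tmse, mae, cra
```

Smaller TMSE and MAE and a CRA closer to 1 then reproduce the ranking criterion used in the comparison tables.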
According to the usage scenario, the high-voltage pulse capacitors have a short service time and a long storage time.Hence, the storage performance is a primary factor that influences the reliability of such capacitors.Capacitance can be used as a health indicator (HI) to evaluate the RUL of the capacitors; Figure 7 shows the degradation test data of five high-voltage pulse capacitors under the storage condition.The capacitance was observed every month and the relative capacitance variability was selected as the HI.Reference [34] revealed that the degradation paths of such capacitors could be modeled based on the two-phase nonlinear method.In addition, Reference [39] mentioned that the capacitance degradation path is affected by temperature and humidity, thus, measurement errors are inevitable.Therefore, it is appropriate to use the degradation data of these capacitors to verify our proposed two-phase nonlinear degradation model.Based on the degradation data of Capacitors 2-5, the offline parameter estimation results are obtained through the parameter identification method proposed in Section 4, and are listed in Table 3.Through the estimated values of b 1 , b 2 and σ 1ε , σ 2ε in Table 3, it can be found that nonlinearity and measurement errors exist in the degradation process of capacitors.It is noticeable that the results of Table 3 are treated as the prior information.For online implementation processes, the degradation data of Capacitors 1 is selected to illustrate the parameter updating and RUL prediction processes.Figure 8 presents the parameter updating results, and it can be found that the updated values of λ 1p , λ 2p are bigger than the offline values in Table 3.The reason is that the degradation path of Capacitors 1 is steeper than the mean paths of Capacitors 2-5, which can be seen in Figure 7.According to the literature [34,39], the failure threshold of capacitor is set as w = 5%.Then, based on the results of model updates, the PDFs of RUL could be obtained as shown in Figure 9a.It is observable from Figure 9a that the estimated values of RUL are almost consistent with the actual values, indicating that our method can effectively estimate the RUL of capacitor degradation data.For comparative purposes, we further presented the estimated RUL of Lin's method [33], Chai's method [37], and Zheng's method [25] as shown graphically in Figure 9b-d, respectively.It can be observed from Figure 9 that in both phases, the PDF curves of the proposed method are sharper compared to Lin's method [33] and Zheng's method [25].Additionally, although our PDF curves are similar in steepness to Chai's method [37], the PDF curves of our method are more compact around the actual RUL and the estimated RUL values are closer to the actual values compared to the other three methods.Thus, from the overall RUL prediction results, our method that considers three-source variability and two-phase nonlinearity has higher RUL prediction accuracy compared to the other three methods.To evaluate the performance of RUL prediction, three metrics, including MSE, AE, and relative error (RE) [19], are adopted for performance evaluation.Among them, the RE of the estimated RUL at t k can be defined as, where l k and lk denotes the estimated and the actual RUL at t k , respectively.Then, we calculated the MSE, AE, and RE of the predicted RUL, as shown in Figure 10.It can be found from Figure 10a that the MES values of our method maintain a relatively low level compared to the other three methods.Additionally, it can be 
observed from Figure 10b,c that the AE as well as the RE results of the proposed method are smaller than the other three methods, indicating the higher accuracy of our proposed method.Furthermore, it can be seen from Figure 10c that the RE curves of each method gradually increase in the later stage of the degradation process.The main reason is that as time accumulates, the actual value of RUL becomes smaller and smaller.Although the AE value is gradually decreasing over time, its proportion in the actual RUL value is larger than that in the early stage of the degradation process, thus, resulting in an increase trend in the later stage.In addition, it can be seen from Figure 10a that the MSE value of Chai's method [37] is slightly better than the MES value of our method.To investigate the reasons for this phenomenon, we further obtained the PDFs of the RUL estimate in the fifth, sixth, and seventh adjacent months, as shown in Figure 11.It can be found from Figure 11 that the PDF curves of our method at the three time points can cover the actual values of RUL well, and are more compact around the actual RUL, respectively.Moreover, the estimated RUL values of our method at different time points are closer to the actual values than the other three methods. The key issue lies in the PDF curve of the estimated RUL at the sixth month in Figure 11b.As can be seen from Figure 11b, the deviation between our estimated RUL and the actual value is only slightly smaller than that of Chai's method [37].However, the PDF curve of Chai's method [37] is sharper than ours.According to the definition of MSE in Equation ( 29), the MSE value is jointly affected by the deviation between the estimated value of RUL and the actual value, as well as the corresponding PDF function.The MSE value will be smaller when the PDF of RUL is closely distributed around the actual RUL value [34].Therefore, the MSE value of Chai's method [37] is slightly smaller than ours in the sixth month.In fact, this is acceptable.Generally, as can be seen in Figure 10a, the MSE curve of our proposed method maintains a relatively low level compared to Chai's method [37].Therefore, the MSE result in the sixth month does not affect the conclusion that our RUL prediction accuracy is higher than Chai's method [37]. To quantitatively compare the predicted results based on the capacitor degradation data, we further calculated the TMSE, MAE, and CRA values of RUL estimation, as detailed in Table 4.It is observable from Table 4 that the TMSE of our method is the smallest.Thus, although the MSE value of Chai's method [37] in Figure 10a is smaller than ours in the sixth month, however, in general, the TMSE of our method is still better than Chai's method [37].In addition, among the four methods, our method has the smallest MAE value and the largest CRA value, because we simultaneously consider the two-phase nonlinearity and the three-source variability of the degradation process.The results of the above quantitative metrics indicate that our RUL prediction method has higher accuracy compared to the other three methods, which verifies the effectiveness and superiority of our method. 
Table 4. Comparison results of RUL prediction based on TMSE, MAE, and CRA for Capacitor 1 degradation data.
Method               TMSE      MAE      CRA
Our method           4.9349    0.1091   0.9685
Lin's method [33]    7.0592    0.3637   0.8862
Chai's method [37]   7.5904    0.6182   0.8647
Zheng's method [25]  16.2556   1.0455   0.7873

In summary, the experimental results show that, compared to the other three methods, the MSE, AE, and RE curves of our method maintain a relatively low level, and the RUL prediction results of our method have smaller TMSE and MAE values and larger CRA values. Based on the above results, it can be found that our work obtains competitive results, which indicates that the accuracy of RUL prediction is better than that of the other three methods, thus verifying the feasibility and effectiveness of the proposed method in practical application. Furthermore, for two-phase nonlinear degradation devices, it is necessary to consider the nonlinearity and the three-source variability simultaneously to boost the performance of RUL prediction. Conclusions In this study, a two-phase nonlinear Wiener process-based degradation model subject to the three-source variability is formulated for RUL prediction. Based on the FPT concept, the approximate analytical form of the RUL is obtained considering the three-source variability and the random degradation state at the changing point simultaneously. To incorporate the historical observations of the devices within the same batch, the MLE method is adopted to estimate the unknown model parameters. By combining the Bayesian rule and KF algorithm, the drift coefficients and the underlying degradation state are adaptively updated in real time with newly observed data. Finally, the effectiveness of the proposed method is verified via numerical and practical examples. The quantitative comparison results of MSE, AE, RE, TMSE, MAE, and CRA reveal that the two-phase nonlinear degradation model with three-source variability is more accurate than the existing methods that only partially consider the two-phase nonlinearity and three-source variability. Therefore, in RUL prediction for two-phase nonlinear degradation devices, the impact of these uncertainties on degradation modeling should be considered simultaneously to improve the accuracy of RUL prediction. However, there are several directions that are worth further study. First, the measurement error is assumed to be Gaussian, which may be inadequate when there are outliers in the degradation observations. In this case, non-Gaussian measurement errors, such as the t-distribution, need to be considered. Second, the proposed model is formulated under the assumption of a progressive degradation process. However, shocks are often encountered in practice. Therefore, determining how to construct the two-phase nonlinear degradation model considering the interaction between the internal degradation mechanism and external shocks needs to be explored. Third, at present, our research only focuses on the RUL prediction for single-component systems, without considering the impact of correlation between different components on RUL prediction. Therefore, the degradation modeling and RUL prediction of multi-component systems (such as hybrid energy systems consisting of capacitors and batteries) with three-source variability is also a valuable research direction in the future. Nomenclature The following abbreviations and notations are used in this manuscript. Appendix A. Proof of Theorem 1 Proof. To obtain the RUL estimation result considering the unit-to-unit variability and the random degradation state at the changing point, the PDF of the lifetime should be derived first, which is given by Ref. [33].
For the two-phase nonlinear degradation model proposed in Equation (1), if the drift coefficient in each phase is a random variable and follows a normal distribution, i.e., λ 1 ∼ N(λ 1p , σ 2 1p ) and λ 2 ∼ N(λ 2p , σ 2 2p ), the PDF of the lifetime T considering the unit-to-unit variability and the random degradation state at the changing point could be expressed as in Equation (A1). Based on Equation (A1), according to the relationship between the lifetime T and RUL, the PDF of RUL that considers temporal variability, unit-to-unit variability, and the randomness of the degradation state at the changing point could be obtained as follows. Case 1: The current time t k is smaller than the changing time τ (i.e., t k < τ), where S = S 1 − S 2 , T = T 1 − T 2 . Case 2: The current time t k is larger than the changing time τ (i.e., t k ≥ τ). The result of Equation (A3) is transformed from the lifetime T to the RUL based on the law of total probability, where the result of f T (t|λ 2 , x τ ) can be found in Equation (6), and the proof can be found in reference [33]. In this way, the proof has been completed. Appendix B. Proof of Theorem 2 Proof. (A) Based on f T (t|λ 1 ) in Equation (5) and λ 1 ∼ N(λ 1p , σ 2 1p ), the PDF of the lifetime considering the unit-to-unit variability could be formulated via Lemma A1. (B) Then, based on the relationship between lifetime and RUL, we can further obtain the PDF of RUL considering the unit-to-unit variability. (C) When considering the three-source variability simultaneously, it is necessary to further incorporate the measurement variability into the PDF of RUL that considers unit-to-unit variability. (D) Then, based on x k ∼ N(x̂ k|k , P k|k ) and Lemma A1, we can obtain the PDF of RUL with three-source variability when t k ≤ τ and 0 < l k + t k ≤ τ. To solve the PDF of RUL considering three-source variability simultaneously, the PDF formulas in S − T of Theorem 1 can be transformed into the form of Equation (A9). Then, based on Equation (A9) and x k ∼ N(x̂ k|k , P k|k ), the PDF of RUL with three-source variability can be formulated separately. To facilitate calculation, the following Lemmas A2 and A3 are provided. Lemma A2 ([45]). Based on Lemma A2, the S formula in Equation (A9) is expanded to S a and S b . Then, E x k |Y 1:k [T a ] in Equation (A29) could be formulated accordingly.

Figure 2. The estimated changing time of 100 degradation paths.
Figure 4. The online updating process. (a) The parameter updating. (b) The underlying degradation state updating.
Figure 5. The RUL prediction results. (a) PDFs of the RUL. (b) The mean RUL of the four methods.
Figure 6. Performance evaluation of the RUL prediction. (a) MSE of the estimated RUL. (b) AE of the estimated RUL. (c) PDFs of the estimated RUL at time 40.
Figure 7. Relative capacitance variability of the capacitors with time.
Figure 10. Performance evaluation based on capacitance degradation data of Capacitor 1. (a) MSE of the predicted RUL. (b) AE of the predicted RUL. (c) RE of the predicted RUL.
Figure 11. Comparison of the estimated RUL at different months. (a) PDFs of the estimated RUL at the fifth month. (b) PDFs of the estimated RUL at the sixth month. (c) PDFs of the estimated RUL at the seventh month.
where D 1T = d T 2 − 2a T b S and D 2T = c T d T − b S b T . (C) Then, the common item before {•} in the formula T of Equation (A9) could be simplified. On this basis, the formula T in Equation (A9) could be split into T a and T b , with T a = exp{−(D 1T /(2b S ))x k 2 + (D 2T /b S )x k } (e T x k ) Φ(g T + h T x k ). Based on x k ∼ N(x̂ k|k , P k|k ) and Lemma A3, E x k |Y 1:k [T a ] could be obtained as in Equation (A32). Whereas if t k > τ, only the observations in the second phase could be applied, i.e., Y τ:k .

Table 1. Parameter estimation values with different sample sizes.
Table 2. Comparison results of RUL prediction based on TMSE, MAE, and CRA.
Table 3. The parameter estimation results of the capacitance degradation data.
Table 4. Comparison results of RUL prediction based on TMSE, MAE, and CRA for Capacitor 1 degradation data.
11,127.2
2023-12-27T00:00:00.000
[ "Engineering", "Physics" ]
Language, Gesture, and Emotional Communication: An Embodied View of Social Interaction Spoken language is an innate ability of the human being and represents the most widespread mode of social communication. The ability to share concepts, intentions and feelings, and also to respond to what others are feeling/saying is crucial during social interactions. A growing body of evidence suggests that language evolved from manual gestures, gradually incorporating motor acts with vocal elements. In this evolutionary context, the human mirror mechanism (MM) would permit the passage from “doing something” to “communicating it to someone else.” In this perspective, the MM would mediate semantic processes being involved in both the execution and in the understanding of messages expressed by words or gestures. Thus, the recognition of action related words would activate somatosensory regions, reflecting the semantic grounding of these symbols in action information. Here, the role of the sensorimotor cortex and in general of the human MM on both language perception and understanding is addressed, focusing on recent studies on the integration between symbolic gestures and speech. We conclude documenting some evidence about MM in coding also the emotional aspects conveyed by manual, facial and body signals during communication, and how they act in concert with language to modulate other’s message comprehension and behavior, in line with an “embodied” and integrated view of social interaction. INTRODUCTION In the last years, the hypothesis of language as "embodied" in sensory and motor experience has been widely discussed in the field cognitive neuroscience. In this review, we will firstly discuss recent behavioral and neurophysiological studies confirming the essential role of sensorimotor brain areas in language processing, facing the controversial issues and reviewing recent results that suggest an extended view of embodied theories. We will discuss this hypothesis, providing evidences about the gestural origin of language, focusing on studies investigating the functional relation between manual gesture and speech and the neural circuits involved in their processing and production. Finally, we will report evidences about the functional role of manual and facial gestures as communicative signals that, in concert with language, express emotional messages in the extended context of social interaction. All these points provide evidences in favor of an integrated body/verbal communication system mediated by the mirror mechanism (MM). WHAT IS EMBODIED ABOUT COMMUNICATION? THE INVOLVEMENT OF MIRROR MECHANISM IN LANGUAGE PROCESSING It is well known that our thoughts are verbally expressed by symbols that have little or no physical relationship with objects, actions and feelings to which they refer. Knowing how linguistic symbols may have been associated with aspects of the real world represents one of the thorniest issues about the study of language and its evolution. In cognitive psychology, a classic debate has concerned how language is stored and recovered in the human brain. According to the classical "amodal approach, " the concepts are expressed in a symbolic format (Fodor, 1998;Mahon and Caramazza, 2009). The core assumption is that meanings of words are like a formal language, composed of arbitrary symbols, which represent aspects of the word (Chomsky, 1980;Kintsch, 1998;Fodor, 2000); to understand a sentence, words are led back symbols that represent their meaning. 
In other terms, there would be an arbitrary relationship between the word and its referent (Fodor, 1975, 2000; Pinker, 1994; Burgess and Lund, 1997; Kintsch, 1998). Neuropsychological studies provide interesting evidence for the amodal nature of concepts. In Semantic Dementia, for example, brain damage in the temporal and adjacent areas results in an impairment of conceptual processing (Patterson et al., 2007). A characteristic of this form of dementia is the degeneration of the anterior temporal lobe (ATL), which several imaging studies have highlighted as having a critical role in amodal conceptual representations (for a meta-analysis, see Visser et al., 2010). In contrast, the embodied approaches to language propose that conceptual knowledge is grounded in body experience and in the sensorimotor systems (Gallese and Lakoff, 2005; Barsalou, 2008; Casile, 2012) that are involved in forming and retrieving semantic knowledge (Kiefer and Pulvermüller, 2012). These theories are supported by the discovery of mirror neurons (MNs), identified in the ventral pre-motor area (F5) of the macaque (Gallese et al., 1996; Rizzolatti et al., 2014). MNs would be at the basis of both action comprehension and language understanding, constituting the neural substrate from which more sophisticated forms of communication evolved (Rizzolatti and Arbib, 1998; Corballis, 2010). The MM is based on the process of motor resonance, which mediates action comprehension: when we observe someone performing an action, the visual input of the observed motor act reaches and activates the same fronto-parietal networks recruited during the execution of the same action (Nelissen et al., 2011), permitting direct access to one's own motor representation. This mechanism was hypothesized to extend to language comprehension, namely when we listen to a word or a sentence related to an action (e.g., "grasping an apple"), allowing automatic access to action/word semantics (Glenberg and Kaschak, 2002; Pulvermüller, 2005; Fischer and Zwaan, 2008; Innocenti et al., 2014; Vukovic et al., 2017; Courson et al., 2018; Dalla Volta et al., 2018). This means that we comprehend words referring to concrete objects or actions by directly accessing their meaning through our sensorimotor experience (Barsalou, 2008). The sensorimotor activation in response to language processing was demonstrated by a large number of neurophysiological studies. Functional magnetic resonance imaging (fMRI) studies demonstrated that seeing action verbs activated similar motor and premotor areas as when the participants actually move the effector associated with these verbs (Buccino et al., 2001; Hauk et al., 2004). This "somatotopy" is one of the major arguments supporting the idea that concrete concepts are grounded in action-perception systems of the brain (Pulvermüller, 2005; Barsalou, 2008). Transcranial magnetic stimulation (TMS) results confirmed the somatotopy in the human primary motor cortex (M1), demonstrating that the stimulation of the arm or leg M1 regions facilitated the recognition of action verbs involving movement of the respective extremities (Pulvermüller, 2005; Innocenti et al., 2014). However, one of the major criticisms of the embodied theory is the idea that the motor system plays an epiphenomenal role during language processing (Mahon and Caramazza, 2008).
In this view, the activations of the motor system are not necessary for language understanding but are the result of a cascade of spreading activations caused by the amodal semantic representation, or a consequence of explicit perceptual or motor imagery induced by the semantic tasks. To address this point, further neurophysiological studies using time-resolved techniques such as high-density electroencephalography (EEG) or magnetoencephalography (MEG) indicated that the motor system is involved in an early time window corresponding to lexical-semantic access (Pulvermüller, 2005; Hauk et al., 2008; Dalla Volta et al., 2014; Mollo et al., 2016), supporting a causal relationship between motor cortex activation and action verb comprehension. Interestingly, recent evidence (Dalla Volta et al., 2018; García et al., 2019) has dissociated the contribution of the motor system during early semantic access from the activation of lateral temporal-occipital areas in deeper semantic processing (e.g., categorization tasks) and multimodal reactivation. Another outstanding question is raised by the controversial data about the processing of non-action language (i.e., "abstract" concepts). According to the Dual Coding Theory (Paivio, 1991), concrete words are represented in both linguistic and sensorimotor-based systems, while abstract words would be represented only in the linguistic one. Neuroimaging studies support this idea, showing that the processing of abstract words is associated with higher activations in the left IFG and the superior temporal cortex (Binder et al., 2005, 2009; Wang et al., 2010), areas commonly involved in linguistic processing. The Context Availability Hypothesis instead argues that abstract concepts have increased contextual ambiguity compared to concrete concepts (Schwanenflugel et al., 1988). While concrete words would have direct relations with the objects or actions they refer to, abstract words can present multiple meanings and need more time to be understood (Dalla Volta et al., 2014; Buccino et al., 2019). This implies that they can be disambiguated if inserted in a "concrete context" which provides elements to narrow their meanings (Glenberg et al., 2008; Boulenger et al., 2009; Scorolli et al., 2011, 2012; Sakreida et al., 2013). Research on action metaphors (e.g., "grasp an idea"), which involve both action and thinking, found an engagement of sensory-motor systems even when action language is figurative (Boulenger et al., 2009, 2012; Cuccio et al., 2014). Nevertheless, some studies observe motor activation only for literal, but not idiomatic, sentences (Aziz-Zadeh et al., 2006; Raposo et al., 2009). In a recent TMS study, De Marco et al. (2018) tested the effect of context in modulating motor cortex excitability during the semantic processing of abstract words. The presentation of a congruent manual symbolic gesture as a prime stimulus increased hand M1 excitability in the earlier phase of semantic processing and speeded word comprehension. These results confirmed that the semantic access to abstract concepts may be mediated by sensorimotor areas when the latter are grounded in a familiar motor context. GESTURES: A BRIDGE BETWEEN LANGUAGE AND ACTION One of the major contributions in support of the embodied cognition theory derives from the hypothesis of the motor origin of spoken language.
Comparative neuroanatomical and neurophysiological studies indicate that area F5 in macaques is cytoarchitectonically comparable to Brodmann area 44 in the human brain (IFG), which is part of Broca's area (Petrides et al., 2005, 2012). This area would be active not only in human action observation but also in language understanding (Fadiga et al., 1995, 2005; Pulvermüller et al., 2003), transforming heard phonemes into the corresponding motor representations of the same sound (Fadiga et al., 2002). In this way, similarly to what happens during action comprehension, the MM would directly link the sender and the receiver of a message (manual or vocal) in a communicative context. For this reason, it was hypothesized to be the ancestor system favoring the evolution of language (Rizzolatti and Arbib, 1998). Gentilucci and Corballis (2006) presented numerous lines of empirical evidence that support the importance of the motor system in the origin of language. Specifically, the execution/observation of a grasp with the hand would activate a command to grasp with the mouth, and vice versa (Gentilucci et al., 2001, 2004; Gentilucci, 2003; De Stefani et al., 2013a). On the basis of these results the authors proposed that language evolved from arm postures that were progressively integrated with mouth articulation postures by means of a double hand-mouth command system (Gentilucci and Corballis, 2006). At some point of evolutionary development, the simple vocalizations and gestures inherited from our primate ancestors gave origin to a sophisticated system of language for interacting with other conspecifics (Rizzolatti and Arbib, 1998; Arbib, 2003, 2005; Gentilucci and Corballis, 2006; Armstrong and Wilcox, 2007; Fogassi and Ferrari, 2007; Corballis, 2010), where manual postures became associated with sounds. Nowadays, during a face-to-face conversation, spoken language and communicative motor acts operate together in a synchronized way. The majority of gestures are produced in association with speech: in this way the message assumes a specific meaning. Nevertheless, a particular type of gesture, the symbolic gesture (i.e., OK or STOP), can be delivered in utter silence because it replaces the formalized, linguistic component of the expression present in speech (Kendon, 1982, 1988, 2004). A process of conventionalization (Burling, 1999) is responsible for transforming meaningless hand movements that accompany verbal communication (i.e., gesticulations, McNeill, 1992) into symbolic gestures, just as a string of letters may be transformed into a meaningful word. Symbolic gestures therefore represent the conjunction point between manual actions and spoken language (Andric and Small, 2012; Andric et al., 2013). This has led to great interest in the study of the interaction between symbolic gestures and speech, with the aim of shedding light on the complex question of the role of the sensory-motor system in language comprehension. A large body of research has claimed that, during language production and comprehension, gesture and spoken language are tightly connected (Gunter and Bach, 2004; Bernardis and Gentilucci, 2006; Gentilucci and Dalla Volta, 2008; Campione et al., 2014; De Marco et al., 2015), suggesting that the neural systems for language understanding and action production are closely interactive (Andric et al., 2013). 
In line with the embodiment view of language, the theory of integrated communication systems (McNeill, 1992, 2000; Kita, 2000) is centered on the idea that the comprehension and production of gestures and spoken language are managed by a single control system. Thus, gestures and spoken language are both represented in the motor domain and they necessarily interact with each other during their processing and production. By contrast, the theory of independent communication systems (Krauss and Hadar, 1999; Barrett et al., 2005) claims that gestures and speech can work separately and are not necessarily integrated with each other. Communication with gestures is described as an auxiliary system, evolved in parallel to language, that can be used when the primary system (language) is difficult to use or not intact. In this view, gesture-speech interplay is regarded as a semantic integration of amodal representations, taking place only after processing of the verbal and gestural messages has occurred separately. This hypothesis is primarily supported by neuropsychological cases reporting that disorders of skilled, learned, purposive movements (limb apraxia) and language disorders (aphasia) are anatomically and functionally dissociable (Kertesz et al., 1984; Papagno et al., 1993; Heilman and Rothi, 2003). However, limb apraxia often co-occurs with Broca's aphasia (Albert et al., 2013), and difficulty in gesture-speech semantic integration has been reported in aphasic patients (Cocks et al., 2009, 2018). Alongside clinical data, disrupting the activity of both the left IFG and the middle temporal gyrus (MTG) has been found to impair gesture-speech integration (Zhao et al., 2018). Evidence in favor of the integrated system theory came from a series of behavioral and neurophysiological studies that have investigated the functional relationship between gestures and spoken language. The first evidence of the reciprocal influence of gestures and words during their production came from the study by Bernardis and Gentilucci (2006), who showed how the voice spectra measured during the pronunciation of a word (i.e., "hello") were modified by the simultaneous production of the gesture corresponding in meaning (and vice versa, the gesture kinematics were inhibited). This interaction was found to depend on the semantic relationship conveyed by the two stimuli (Barbieri et al., 2009) and was replicated even when gestures and words were simply observed or presented in succession (Vainiger et al., 2014; De Marco et al., 2015). Neurophysiological studies have produced controversial evidence about the core brain areas involved in gesture-word integration, which include different neural substrates such as M1 (De Marco et al., 2015), IFG, MTG, and the superior temporal gyrus/sulcus (STG/S) (Willems and Hagoort, 2007; Straube et al., 2012; Dick et al., 2014; Özyürek, 2014; Fabbri-Destro et al., 2015). However, virtual lesion of the IFG was shown to disrupt the gesture-speech integration effect, in accordance with the idea of the human Broca's area (and thus the mirror circuit) as the core neural substrate of action, gesture, and language processing and interplay (Arbib, 2005). Partially in contrast, investigation of the temporal dynamics of integration processing by means of combined EEG/fMRI techniques confirmed the activation of a left fronto-posterior-temporal network, but revealed a primary involvement of temporal areas (He et al., 2018). 
Finally, further results in favor of the motor origin of language came from genetic research, since it was suggested that the FOXP2 gene is involved both in verbal language production and in the coordination of upper limb movements (Teramitsu et al., 2004), opening the question of a possible molecular substrate linking speech with gesture (see Vicario, 2013). In conclusion, a substantial body of results evidences a reciprocal influence between gesture and speech during their comprehension and production, showing overlapping activation of the MM neural systems (IFG) involved in action, gesture, and language processing and interplay (see Table 1). Further studies should consider potential integration of neuroscience research with promising fields investigating the issue at the molecular level. MOTOR SIGNS IN EMOTIONAL COMMUNICATION The majority of studies that investigated the neural mechanism of hand gesture processing focused on the overlapping activations of words and gestures during their semantic comprehension and integration. However, it has been shown that gestural stimuli can convey more than semantic information, since they can also express an emotional message. A first example came from the study of Shaver et al. (1987), which tried to identify behavioral prototypes related to emotions (e.g., fist clenching is involved in the anger prototype). More recently, Givens (2008) showed that uplifted palm postures suggest a vulnerable or non-aggressive pose toward a conspecific.
Table 1. Theoretical accounts of semantic representation and of language evolution.
Semantic representation
- Embodied/grounded view. Main references: Gallese and Lakoff, 2005; Barsalou, 2008; Casile, 2012; Kiefer and Pulvermüller, 2012. Challenges: no shared model about the dynamics and interplay between sensorimotor and temporal brain areas at different stages of semantic comprehension.
- Amodal view. Main references: Fodor, 1998; Patterson et al., 2007; Mahon and Caramazza, 2009; Visser et al., 2010. Challenges: necessity to further support the essential contribution of the sensorimotor system in abstract language processing.
Language evolution
- Gestural origin of language. Main concepts: speech evolved from arm postures that were progressively integrated with mouth gestures and vocalization by means of a double hand-mouth command system; gesture and speech necessarily interact during their processing and production. Neural systems: inferior frontal gyrus. Main references: McNeill, 1992; Rizzolatti and Arbib, 1998; Gentilucci and Corballis, 2006; Gentilucci et al., 2006. Challenges: overlapping activation of areas belonging to the mirror circuit (IFG) and linguistic areas (MTG) during gesture and speech processing.
- Independent evolution of gestures and language. Main concepts: gestures and speech evolved independently; they are functionally dissociated and processed separately, or eventually integrated as amodal concepts; communication with gestures is described as an auxiliary system. Neural systems: sensorimotor systems for gestures, temporal cortex for language. Main references: Krauss and Hadar, 1999; Barrett et al., 2005. Challenges: limited evidence about the neural dynamics of gesture and speech interplay.
- Potential fields of research: FOXP2 gene variations and communication behavior.
However, beyond hand gesture investigations, emerging research on the role of the motor system in emotion perception has dealt with the study of the mechanisms underlying the perception of body postures and facial gestures (De Gelder, 2006; Niedenthal, 2007; Halberstadt et al., 2009; Calbi et al., 2017). Of note, specific connections with the limbic circuit were found for mouth MNs (Ferrari et al., 2017), evidencing the existence of a distinct pathway linking mouth/face motor control to the communication/emotion encoding system. 
This neural evidence is in favor of a role of the MM in the evolution and processing of emotional communication through mouth/facial postures. Just as actions, gestures, and language become messages that are understood by an observer without any cognitive mediation, the observation of a facial expression (such as disgust) would be immediately understood because it evokes the same representation in the insula of the individual observing it (Wicker et al., 2003). We propose that the MM guides everyday interactions in recognizing emotional states in others, decoding bodily and non-verbal signals together with language, and influencing and integrating the communicative content in the complexity of a social interaction. Indeed, exposure to congruent facial expressions was found to affect the recognition of hand gestures (Vicario and Newman, 2013), just as the observation of a facial gesture interferes with the production of a mouth posture involving the same muscles (Tramacere et al., 2018). Moreover, emotional speech (prosody), facial expressions, and hand postures were found to directly influence motor behavior during social interactions (De Stefani et al., 2013b; Di Cesare et al., 2017). CONCLUSION AND FUTURE DIRECTIONS Numerous behavioral and neurophysiological findings are in favor of a crucial role of the MM in the origin of language, as well as in decoding the semantic and emotional aspects of communication. However, some aspects need to be further investigated, and controversial results have been found about the neural systems involved in semantic processing (especially for abstract language). Nevertheless, a limitation emerges from experimental protocols that studied language in isolation, without considering the complexity of social communication. In other words, language should always be considered in relation to the background of a person's mood, emotions, actions, and events, from which the things we are saying derive their meanings. Future studies should adopt a more ecological approach, implementing research protocols that study language in association with congruent or incongruent non-verbal signals. This will shed further light on the differential roles that brain areas play and their domain specificity in understanding language and non-verbal signals as multiple channels of communication. Furthermore, future research should consider integrating behavioral and combined neurophysiological techniques, extending sampling from typical to psychiatric populations. Indeed, new results will also have important implications for the comprehension of mental illnesses characterized by communication disorders and MM dysfunction, such as Autism Spectrum Disorder (Oberman et al., 2008; Gizzonio et al., 2015), schizophrenia (Sestito et al., 2013), and mood disorders (Yuan and Hoff, 2008). AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
4,739
2019-09-24T00:00:00.000
[ "Psychology", "Linguistics" ]
The Analysis of Process Accidents Due to Risks in the Petrochemical Industries — The Case Study of Radiation Intensity Determination Proportional to Distance from Tank Level Today, one of the basic methods for engaging the decision making of senior managers in securing organizations against the risks that these industries face is the analysis of the outcomes of process accidents. The development of process industries is always associated with an increased risk of accidents. Therefore, paying attention to this issue is a priority for the safety units of oil refineries and petrochemical plants. The aim of this study was to investigate the safe distance between gasoline storage tanks in the petrochemical plants of the South Pars Region; comparisons with the American Petroleum Institute (API) recommendations were also made. The maximum flame length and the ratio of flame length to flame radius were 20 m and 2.6, respectively. In the event of leakage and a pool fire in one of these tanks, the maximum intensity of thermal radiation was 13.7 kW/m2. Hence, it is possible to control a fire at a distance of 15 meters from the tanks with respect to the heat radiation due to a pool fire, so that a domino effect does not occur in the event of fire. The fire is controllable, but in the case of aggregated leakages and the creation of flammable pools around these tanks, the combined radiation intensities have the potential to cause immediate explosion of the tanks. It is therefore proposed that, besides increasing the distance between the storage tanks, other measures be taken, such as planning quality and safety inspections of the tank connections and control instruments and inspection of fire-fighting equipment such as hydrants, monitors, and fire hose reels, and that emergency-response maneuvers be performed, documented, and planned. 
Introduction The process industries have production, storage and use of flammable chemicals material units.Therefore, these industrials are most apt and prepared industries for facing the risks and explosions [1].Always fire and explosions have enormous implications for human resources and financial capital.For example, in 1997, the explosion of pressure pipeline in the French Vitzan Petrohemica occurred because of leakage of liquid gas and killed 57 members of the industry staff [2].Also, in 1992, release of light Nephatan from the pressurized tank resulted in the explosion in the Bambi refinery and 35 experts were killed [3].According to risk matrixes, the risks with consequence of death are the unacceptable and critical risks [4] [5].The lack of safety inspections has basic role in such accidents.So, the planning of safety inspection programs, risk assessment detonation and fire and determine preventive measures based safety is essential in the different units of the refinery.The items of inspection programs and risk assessment detonation and fire must be including identifying the warehouses of flammable materials, value and operating parameters of material, leakage paths of material, identifying places and storage locations for flammable materials, value and their operating parameters, possible leakage paths, identifying scenarios possible and consequences of accident, identifying of preventive measures fire and explosion [6].Although the processing facilities and equipment are used for productivity, profitability and also wealth production, but they have important (main) potential for damaging human being, property and the environment.Since the number of refineries and their products increased thus exposure to hazards is also higher for workers.For example, the accident of Felix borough in England in 1984 left 28 dead and 26 injured [7].Another case of chemical accidents occurred in December 1984 in Bhopal, India [8].Also, because of leakage and firing of oil (petroleum), from gas platform of BP company in Mexico gulf, 11 people died and the damage amounted to more than 40 billion $ and caused 22 percent decrease in shares and 63 percent decrease in profits [9].In order to prevent the accidents, assessment and identification of the accidents and risks in the refinery units are very important.For multiple injuries resulting from accidents in processing industries are illness cases and cause injury and death, damages to property, environmental pollutions, decreasing production and profits, loss of reputation and credibility in the organization.In the following, the common scenarios relating to fire has been investigated in oil and gas industries. 
Eruptive Fire This kind of fire usually consists of flammable materials that are exited from a pressurized source and there have been source of ignition in the environment.The speed of flammables effects greatly on the kind of fire, because initially the exit speed must be high enough to be sent to the air into the jet, and secondly the flame will be constant at a point that the flame speed will be equal to the local velocity of the gas mixture.When the exit speed of flammables increases, the air increases (in the mixture) too and as a result fuel consideration decreases in the jet, the flame distances more from the fuel exit and finally when the flame is far enough, the fuel density is lower than the low limit of flammability and the fire (flame) extinguishes.In the modeling of eruptive fire, the thermal radiation is the only factor that is studied as the sample for the damage of this type of hazard.The aim of eruptive fire modeling was assessing to the jet diameter and length and the rate of thermal radiation of burning jet at any point.It must be noted that in the modeling diagrams, the limit of radiation rate due to fire that causes severe injuries to humans is considered as 37Kw/m 2 [4] [10]. Abrupt Fire Abrupt fire is a non-explosive combustion due to gas cloud of flammable material releases in the air.In other words, if the gas is uniformly dispersed in the environment and its flame is without acceleration, the fire is non-explosive.But if the flammable gas cloud non-uniformity be concentrated in a tank, then be exposed with a spark, it will cause explosive in the gas cloud [11] [12]. Pool Fire It is possible to release a flammable liquid from equipment into the environment.Around the releasing source, a liquid pond may be formed and gradually evaporated.In case that the vapors of the pool reach to the ignition source, fire breaks out and forms pools that are flaring.This kind of fire is called pool fire, it must be noticed that the resulted vapors of pools reach to the ignition source and the resulted flames return back to the pool then caused fire.The outcome of a pool fire is assessed by the rate of radiation [11] [12].The consequences due to the thermal radiations are different from burns to death.The domino effects of thermal radiations on equipment installations of the refinery industries are clearly evident.In the case, the outcomes caused by radiation internally, the exposure time played basic roles.The English standard BSS908, 1990, has been presented for the effects of the thermal radiation dose (Table 1). Work Description Method The type and the system characteristics must be described; also the work limits that consist installations and activities and conditions must be studied, identified and defined.So, in this study, the pool fire, and the effects of thermal radiations of the gasoline storage tank were studied in the utility unit in one of the South Pars petrochemicals in Iran.Also, distances between storage tanks have been studied according to the American petroleum institute standard (API's Standards). Table 2 shows the data (information) relating the damage caused by thermal radiation on the equipment and installation of processing industries. Description of Work Methods In the following, the said tank specifications and the considered scenarios for this aim is described in details (Table 3). 
Using the following formula [14], the flame length of the pool fire is calculated. Here, L is the flame length, D is the flame diameter, M is the mass burning rate per unit area in kg/(m2·s), ρa is the air density in kg/m3, and g is the acceleration of gravity, equal to 9.8 m/s2. It is worth noting that the calculation of eruptive fire characteristics can be based on API 521 [15]. The atmospheric transmission coefficient for the thermal radiation can be calculated, according to the heat transfer from the source, from the following formula, where Pw is the partial pressure of water vapor and X is the distance from the source [16]. Also, in this study, we use the following formula to determine the maximum thermal radiation intensity at horizontal distances from the emitting source. Here, I is the thermal radiation intensity in kW/m2, SEP is the surface emissive power in kW/m2, VF is the view factor, and τ is the atmospheric transmissivity, which accounts for absorption by water vapor along the path [16]. Result On the basis of the results, the maximum flame length is 20 meters and the maximum ratio of flame length to flame radius is 2.6. To determine the thermal radiation intensity at ground level, the view factor must be calculated. The method used to determine the view factor is based on two quantities that determine the amount of radiation received at ground level: the first is L + H, where H is the height of the tank, and the second is H; the difference between these two quantities gives the view factor of the flame. The nearest distance from the first tank is 15 meters (Table 4). Table 5 shows that the maximum thermal radiation intensity at this distance is 13.7 kW/m2, and if the wind direction is toward the No. 2 storage tank, the thermal intensity will be much larger. According to the determined permissible limit for damage (37.5 kW/m2), it can be concluded that the distance determined by the API standard [16] for the tanks is sufficient. If the fire in the first tank is not extinguished quickly, or the second tank is not cooled properly, the radiant heat from the first tank evaporates the flammable material in the second tank. The vapor flux in the second tank then increases, and the second tank can in turn cause explosions in other tanks located 15 meters from each other. 
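Because the equations themselves did not survive extraction, the following sketch illustrates the kind of calculation described above using the widely used Thomas correlation for pool-fire flame length together with the I = SEP x VF x τ relation quoted in the text; the choice of correlation, the property values (pool diameter, burning rate, surface emissive power, view factor, transmissivity), and the function names are illustrative assumptions, not the exact data of Table 3 or the formulas of [14]-[16].

import math

def flame_length_thomas(d_pool, m_burn, rho_air=1.2, g=9.81):
    # Thomas correlation for pool-fire flame length (no wind):
    # L = 42 * D * (m'' / (rho_a * sqrt(g * D)))**0.61, with m'' in kg/(m^2*s)
    return 42.0 * d_pool * (m_burn / (rho_air * math.sqrt(g * d_pool))) ** 0.61

def radiation_intensity(sep, view_factor, transmissivity):
    # Received thermal radiation I = SEP * VF * tau (kW/m^2), as described in the text
    return sep * view_factor * transmissivity

# Illustrative numbers only (not the tank data of Table 3):
d_pool = 15.0    # assumed pool diameter, m
m_burn = 0.055   # typical gasoline mass burning rate, kg/(m^2*s)
L = flame_length_thomas(d_pool, m_burn)   # about 21 m for these assumed values
I = radiation_intensity(sep=60.0,          # assumed surface emissive power, kW/m^2
                        view_factor=0.3,   # assumed view factor at 15 m
                        transmissivity=0.75)
print(f"flame length = {L:.1f} m, received intensity = {I:.1f} kW/m^2")

With these assumed inputs the sketch returns values of the same order of magnitude as the flame length and radiation intensity reported above, which is the purpose of such screening calculations.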
Discussion In order to manage the firing extents in every processing industry, proportional protection and control measures must be done, and continuously elimination of hazards situation be followed up.Therefore, quality control and inspection, risk assessment, service and maintenance of process pipes, isolator pumps, and safety valves are es- sential in order to prevent equipment corrosion.This point is very important that all the safety and control systems, like the control valves, alarms, the alarming and extinguishing systems, must be tested and checked regularly and have high capability of performance in all the processing activities steps [17].All the firing sources in the industries must be performed and documented; all of the procedures of firing risk assessment must be updated.Also, in order to control static electricity in the process units of the refineries, bonding and grounding are the simplest ways [18].The phase prelaunched of systems and process industries is one of the most important phases because the nature of work has a very high risk rate of fire and explosion.So, for the risks control in this phase, the safety work policy and procedures, safe work permit systems, emergency response procedures and training programs for personnel must be considered before the implementation of this phase.Also, in order to control the spread of fire, it must be design a safe distance and the range for sources of flammable, explosive and toxic materials from other fire sources in the process industries.Determining the safe distance from the location of aid and rescue, fire-fighting units, fire extinguishing equipment and fire detection and alarm systems is very important.The water spreading system must be considered for protection against the thermal radiation to NFPA15 standard.Thus, the use of the water spreading systems for cooling of the tanks body containing flammable materials to prevent the occurrence of fire and explosion can be effective [19]. Fire Extinguishing According to the determined scenarios, the strategies and equipments of fire extinguishing like hydrants, monitors, firefighting car (vehicles), hose reels, and the reaction equipment in the emergency conditions must be inspected and assessed.These designs must be programmed according to reaction design in the emergency conditions.All fire-fighting equipment must have high reliability.All workers in refineries and process industries must have been trained about risk of fire and explosion chemical material, toxic hazards and fire extinguishing. 
The Final Results Comprehensive planning for risk control is the fundamental approach to risk management in the process industries. The aim of this process is to plan, recognize, and assess fire risks, assess the probability and severity of consequences, and thus develop guidelines and engineering controls to prioritize the control measures. In this study, current fire scenarios, the methods of determining the thermal radiation intensity arising from eruptive fires and pool fires, and the damage limit relating to a pool fire in gasoline storage tanks in one of the South Pars area petrochemical plants were described. The planning process for managing process hazards in refineries and petrochemical plants includes identification of hazards, documentation of the instructions, and control of the engineering systems. Therefore, in order to manage and eliminate the risks of fire and explosion, implementation of safety audits, regular and periodic process assessment, simulation maneuvers of the process, and review of control procedures are essential. Table 3. Characteristics of cases and scenarios. Table 4. Thermal radiation intensity according to distance from the ground. Table 5. Thermal radiation intensity based on the distance from the tank.
3,131.8
2015-05-15T00:00:00.000
[ "Environmental Science", "Engineering" ]
Boltzmann’s Six-Moment One-Dimensional Nonlinear System Equations with the Maxwell-Auzhan Boundary Conditions We prove existence and uniqueness of the solution of the problem with initial and Maxwell-Auzhan boundary conditions for nonstationary nonlinear one-dimensional Boltzmann’s six-moment system equations in space of functions continuous in time and summable in square by a spatial variable. In order to obtain a priori estimation of the initial and boundary value problem for nonstationary nonlinear one-dimensional Boltzmann’s six-moment system equations we get the integral equality and then use the spherical representation of vector. Then we obtain the initial value problem for Riccati equation. We have managed to obtain a particular solution of this equation in an explicit form. Introduction In case of one-atom gas any macroscopic system during process of its evolution to an equilibrium state passes 3 stages: initial transition period, described in terms of a full function of system distribution; the kinetic period, by means of a onepartial distribution function; and the hydrodynamic period, by means of five first moments of the distribution function.In kinetic regime the behavior of a rarefied gas in space of time and velocity is described by Boltzmann's equation.It is known from gas dynamics that in most of the encountered problems there is no need to use detailed microscopic gas description with help of the distribution function.Therefore it is natural to look for a less detailed description using macroscopic hydrodynamic variables (density, hydrodynamic velocity, temperature, etc.).As these variables are defined in terms of moments of the distribution function, we are faced with the problem of analyzing the various moments of Boltzmann's equation. Note that Boltzmann's moment equations are intermediate between Boltzmann (kinetic theory) and hydrodynamic levels of description of state of the rarefied gas and form class of nonlinear partial differential equations.Existence of such class of equations was noticed by Grad [1,2] in 1949. He obtained the moment system by expanding the particle distribution function in Hermite polynomials near the local Maxwell distribution.Grad used Cartesian coordinates of velocities and Grad's moment system contained as coefficients such unknown hydrodynamic characteristics like density, temperature, average speed, and so forth.Formulation of boundary conditions for Grad's system is almost impossible, as the characteristic equations for various approximations of Grad's hyperbolic system contain unknown parameters like density, temperature, and average speed.However, 13-and 20-moment Grad equations are widely used in solving many problems of the kinetic theory of gases and plasma. Boltzmann's equation is equivalent to an infinite system of differential equations relative to the moments of the particle distribution function in the complete system of eigenfunctions of linearized operator.As a rule we limit study by finite moment system of equations because solving infinite system of equations does not seem to be possible. 
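For readers less familiar with the moment description, the lowest-order macroscopic variables mentioned above are obtained from the one-particle distribution function f(t, x, v) in the standard way; the normalization below is the conventional kinetic-theory one and not necessarily the one adopted in [3]:
\[
n = \int f \, d\mathbf{v}, \qquad
n\,\mathbf{u} = \int \mathbf{v}\, f \, d\mathbf{v}, \qquad
\frac{3}{2}\, n\, k_B T = \int \frac{m}{2}\,\lvert \mathbf{v}-\mathbf{u}\rvert^{2} f \, d\mathbf{v},
\]
where \(n\) is the number density, \(\mathbf{u}\) the hydrodynamic velocity, \(T\) the temperature, \(m\) the particle mass, and \(k_B\) Boltzmann's constant; higher-order moments are built analogously from higher powers of \(\mathbf{v}-\mathbf{u}\).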
A finite system of moment equations replaces Boltzmann's equation for a specific task with a certain degree of accuracy. It is also necessary, at least approximately, to replace the boundary conditions for the particle distribution function by a number of macroscopic conditions for the moments; that is, there arises the problem of boundary conditions for a finite system of equations that approximate the microscopic boundary conditions for Boltzmann's equation. The question of boundary conditions for a finite system of moment equations can be divided into two parts: how many conditions must be imposed and how they should be formulated. From the microscopic boundary conditions for Boltzmann's equation an infinite set of boundary conditions can be obtained for each type of decomposition. However, the number of boundary conditions is not determined by the number of moment equations; that is, it is impossible, for example, to take as many boundary conditions as equations, although the number of moment equations affects the number of boundary conditions. In addition, the boundary conditions must be consistent with the moment equations and the resulting problem must be well posed. Boundary condition problems arise in the following tasks: (1) moment boundary conditions in rarefied gas slip-flow problems; (2) definition of boundary conditions on surfaces streamlined by a rarefied gas; (3) prediction of the aerodynamic characteristics of aerospace vehicles at very high speeds and high altitudes; and so forth. In work [3] we obtained a moment system which differs from Grad's system of equations. We used spherical velocity coordinates and decomposed the distribution function into a series of eigenfunctions of the linearized collision operator [4,5], which are products of Sonine polynomials and spherical functions. The resulting system of equations, which corresponds to a partial sum of the series and which we called Boltzmann's moment system of equations, is a nonlinear hyperbolic system with respect to the moments of the particle distribution function. The structure of Boltzmann's moment system of equations corresponds to the structure of Boltzmann's equation; namely, the differential part of the resulting system is linear with respect to the moments of the distribution function, and the nonlinearity enters through the moments of the collision integral [6]. The linearity of the differential part of Boltzmann's moment system of equations simplifies the task of formulating the boundary conditions. In work [3] a homogeneous boundary condition for the particle distribution function was approximated and the correctness of the initial and boundary value problem for the nonlinear nonstationary Boltzmann's moment system of equations in three-dimensional space was proved. In work [7] the initial and boundary value problem for the one-dimensional nonstationary Boltzmann's equation with Maxwell boundary conditions was approximated by a corresponding problem for Boltzmann's moment system of equations. The boundary conditions for Boltzmann's moment system of equations were called Maxwell-Auzhan conditions. 
In work [8] a systematic nonperturbative derivation of a hierarchy of closed systems of moment equations corresponding to any classical theory has been presented. This is a fundamental work in which closed systems of moment equations describe a transition regime. Moreover, a hydrodynamical model is used to describe charge transport in a generic compound semiconductor. Compound semiconductors have found wide use in the microelectronic industry. The evolution equations for the macroscopic variables are obtained by taking moments of the transport equation [9][10][11][12][13]. The study of various problems for Boltzmann's moment system of equations is an important and topical task in the theory of rarefied gases and in other applications of the moment system equations. The correctness of initial and boundary value problems for Boltzmann's moment system of equations with Maxwell-Auzhan boundary conditions is studied here for the first time. Existence and Uniqueness of the Solutions of the Initial and Boundary Value Problem for the Six-Moment One-Dimensional Boltzmann System of Equations with Maxwell-Auzhan Boundary Conditions In this section we prove the existence and uniqueness of solutions of the initial and boundary value problem for the six-moment one-dimensional Boltzmann system of equations with Maxwell-Auzhan boundary conditions in the space of functions continuous in time and square-summable in the spatial variable. Note that the theorem on the existence of a global-in-time solution of the initial and boundary value problem for the three-dimensional nonlinear Boltzmann equation with Maxwell boundary conditions is proved in work [14]. We write in expanded form the system of one-dimensional Boltzmann moment equations in the k-th approximation, which corresponds to the decomposition of the particle distribution function in eigenfunctions of the linearized collision operator; the moments of the nonlinear collision operator are expressed through generalized Talmi coefficients and Clebsch-Gordan coefficients [15,16], together with a normalization constant, Boltzmann's constant, and the ideal gas temperature. For the generalized Talmi coefficients a table exists [15] for each value of the quantum number from 0 to 6. Moreover, there is a computer program for IBM machines for the calculation of the generalized Talmi coefficients. It can be checked by direct calculation that the coefficient matrix of the system has three positive and the same number of negative nonzero eigenvalues. From (6)-(7) it follows that the number of boundary conditions at the left and right ends of the interval is equal to the number of positive and negative eigenvalues of this matrix, respectively. In the resulting a priori estimate the constant involved does not depend on the solution, and a particular solution of the Riccati equation (15) is obtained in an explicit form; it is not difficult to prove by direct substitution that this function satisfies the Riccati equation (15). The uniqueness of the solution of problem (5)-(7) is proved by contradiction. Suppose that problem (5)-(7) has two different solutions; denoting their differences as new unknowns, we obtain a problem of the same form for these differences, problem (44)-(46). We prove that the solution of problem (44)-(46) is trivial, and hence the uniqueness of the solution of problem (5)-(7) follows. Using the method mentioned before, we obtain (see equality (14)) a relation in which the function involved has the same value as in (15).
2,024.4
2016-07-10T00:00:00.000
[ "Mathematics" ]
Evaluation of the Design and Implementation of a Peer-To-Peer COVID-19 Contact Tracing Mobile App (COCOA) in Japan We evaluate a Bluetooth-based mobile contact-confirming app, COVID-19 Contact-Confirming Application (COCOA), which is being used in Japan to contain the spread of COVID-19, the disease caused by the novel virus termed SARS-COV-2. The app prioritizes the protection of users’ privacy from a variety of parties (eg, other users, potential attackers, and public authorities), enhances the capacity to balance the current load of excessive pressure on health care systems (eg, local triage of exposure risk and reduction of in-person hospital visits), increases the speed of responses to the pandemic (eg, automated recording of close contact based on proximity), and reduces operation errors and population mobility. The peer-to-peer framework of COCOA is intended to provide the public with dynamic and credible updates on the COVID-19 pandemic without sacrificing the privacy of their information. However, cautions must be exercised to address critical concerns, such as the rate of participation and delays in data sharing. The results of a simulation imply that the participation rate in Japan needs to be close 90% to effectively control the spread of COVID-19. Introduction As of August 23, 2020, over 23 million cumulative cases of COVID-19 and nearly 800,000 deaths from the disease have been reported worldwide [1].Since the first cases were reported in late 2019, the world has witnessed the rapid spread of the pathogen, and it has been declared a global public health crisis by the World Health Organization [2].Due to the infectiousness of the disease and the dynamics of interperson interactions, the spread of COVID-19 could advance in a way that is unnoticeable to individuals, as evidenced by subclinical presymptomatic and asymptomatic cases [3], which refer respectively to cases in which infection started before the onset of symptoms and infections without the emergence of symptoms.Research has demonstrated the risk of person-to-person transmission of COVID-19 between individuals, especially for those in close contact (ie, close proximity) [4].When infections are established where individuals are unable to readily self-triage their exposure risk, timely responses to COVID-19 will be challenging; these responses could also be weakened by delayed data sharing, impeded privacy preservation, and impaired security [3]. 
A variety of measures based on digital health have been adopted to control the spread of COVID-19 [5][6][7][8][9][10][11][12][13].These containment measures display heterogeneities in terms of their design: decentralized (ie, mostly privacy-first) versus centralized (ie, mainly data-first) deployment frameworks have emerged alongside Bluetooth-, GPS-, and quick response (QR)-based sensor technologies [5,9].Countermeasures such as contact tracing can play remarkable roles in the containment of the pandemic, including inference of exposure risk, identification of infections, and quarantining or isolation of individuals being traced [5,10,11].However, there is major divergence among nations regarding which digital health approach to employ (eg, a centralized approach that collects private data at the expense of potential illegal use vs a decentralized approach that stores data on local devices and leaves individuals in charge of their sensitive information at the cost of constrained accessibility for others) to address critical concerns such as privacy preservation, health care pressure load-balancing, speed of response, and ease of operations [9,12,13].The centralized approach highlights a data-first methodology and involves the collection of privacy-sensitive information; this approach can enhance the capacity of unified administration, but the identities of individuals can be readily inferred [6,8,9].GPS-and QR-based contact tracing apps facilitate evidence-based inference and improve the traceability of contact tracing [6,8,9,13].However, multiple crucial concerns must be addressed to achieve effective containment.The first concern is that the information gathered through GPS is not strictly equivalent to close contact; hence, bias could be undoubtedly introduced.The second concern is the unlawful use or abuse of sensitive personal information obtained using GPS or QR [5][6][7][8][9].Third, there is some debate that centralized approaches could cause discrimination, reduce confidence, and negatively impact the health of individuals if their private data are misused or breached [9,13].In contrast, the typical application of the decentralized approach includes Bluetooth-based digital health, which does not theoretically identify individuals; hence, this approach is desirable for settings where concealing users' identities and preventing accessibility of their contact information are valued by the population [5,8,10,13].Bluetooth digital health approaches rely on microwave and millimeter-wave technologies to sense the proximity between local devices, enabling the tracking of social contacts with a high degree of precision [5,10,13].Hence, infections due to close contact (ie, geographical proximity) with pre-asymptomatic or asymptomatic patients can be easily detected and recorded by a Bluetooth-based approach.It is feasible for exposed people to evaluate and self-identify their exposure risk without disclosing either their own identity or the identities of their counterparts.This approach is more rapid and efficient in close contact diagnosis, less labor-intensive, and less susceptible to human error than extant approaches.Bluetooth contact tracing is currently being adopted by numerous countries, including Japan, India, and Singapore [5,9,[13][14][15]. 
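As an illustration of how Bluetooth-based proximity sensing can flag a close contact, the sketch below converts received signal strength (RSSI) to an approximate distance with a conventional log-distance path-loss model and accumulates the time spent within a threshold; the calibration constants, sampling assumptions, thresholds, and function names are illustrative and not the actual detection logic of any of the national apps discussed here.

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    # Log-distance path-loss model: distance in metres from a single RSSI sample.
    # tx_power_dbm is the assumed RSSI at 1 m; path_loss_exp is roughly 2 in free space.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def is_close_contact(samples, max_distance_m=1.0, min_duration_s=15 * 60):
    # samples: list of (timestamp_s, rssi_dbm) pairs for one nearby device,
    # assumed to be taken at a roughly constant interval.
    if len(samples) < 2:
        return False
    interval = samples[1][0] - samples[0][0]   # assumed uniform sampling interval
    time_within = sum(interval
                      for _, rssi in samples
                      if rssi_to_distance(rssi) <= max_distance_m)
    return time_within >= min_duration_s

# Example: 16 minutes of samples taken every 60 s at roughly 1 m (RSSI near -58 dBm)
samples = [(t * 60, -58.0) for t in range(16)]
print(is_close_contact(samples))   # True under these illustrative assumptions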
The first cases of COVID-19 in Japan were reported in late January 2020; since then, numerous cases, including asymptomatic and presymptomatic infections, have been identified, and the national medical system has been overburdened, with ever-increasing risk of collapse [14,15].The excessive strain on the capacity of the medical system is expected to be alleviated.Unidentifiable discrete spreading events could lead to a later outbreak of infections; thus, it is important for individuals to gain an updated understanding of the pathogen [16].When people can locally track their exposure risk, self-triage, and make differentiated responses based on digitally provided instructions, cross-transmission (eg, cluster infections at crowded locations) and unnecessary in-person visits can be reduced [15]. In this viewpoint, we discuss a decentralized and GPS-free Bluetooth digital health approach, COVID-19 Contact-Confirming Application (COCOA).This approach is mainly used to address the issues of privacy protection, efficacy enhancement, load balancing of pressure on the health care system, population mobility, and manual operation errors, and it principally complies with the Apple and Google contact tracing technology frameworks [5,13].The major aim is to appraise how the approach can be used as a routine tool to contain the spread of COVID-19, with emphasis on privacy preservation and load-balancing.Prior research has revealed that other factors, such as the rate of participation, play remarkable roles in contributing to the effectiveness of containment [17].The results of the simulation in our study are consistent with the findings in other empirical research. Framework and Core Mechanism of COCOA The Architecture and Prototype of COCOA A schematic of the general architecture of the COCOA system is shown in Figure 1.The app automatically records close contact (ie, defined as within 1 meter of proximity for at least 15 minutes in COCOA) on Android and iOS devices by employing Bluetooth technology [18].The COCOA system consists of three major sections: two mobile terminal apps for individuals (ie, infected and potentially exposed), and an infection information sharing system maintained by public authorities and health care providers.COCOA complies with the decentralized framework; this means that the COCOA app only locally tracks close contacts and performs matching inference of exposure risk, during which no personal private information is requested or collected through COCOA. Core Mechanisms To conceptualize how COCOA optimizes its core functionality, including privacy protection of individuals and load-balancing of health care stress, in comparison with other centralized digital health approaches, we illustrate the core mechanism and diagram in Figure 2. COCOA integrates the following principal features: 1. Individuals (ie, either infected or potentially exposed) receive informed consent to participate and authorize data sharing. 2. The informed consent feature is configurable, and consent can be withdrawn at any time. 3. Prior records are erased when the user opts out. 4. No sensitive personal information that enables the app user to be identified, such as date of birth, gender, address, telephone number, email address, or location, is requested or collected through COCOA. 5. Close-contact data are encrypted, saved only on users' local devices, and automatically deleted after 14 days, which is the period generally considered to be the average incubation interval of COVID-19. 
6.If the user is infected, informed consent of the COVID-19-positive patient is required to authenticate and distribute their infection status. 7. Upon completion of verification, the process code that is used by the infected person to verify the accuracy of their infection status with the central server is eliminated from the COCOA app, the notification server, and the management system. 8. Exposure risk matching is performed on local devices.When individuals (whether symptomatic, presymptomatic, or asymptomatic) are in close contact, the COCOA app records this status by automatically exchanging generated random codes, which change periodically and thus cannot be exploited to identify either the infected or potentially exposed users.The codes are not shared with the information sharing system unless the individuals are COVID-19-positive.The codes will be saved only on local devices and erased after 14 days.As these codes are generated randomly and changed periodically, they cannot be exploited to uniquely identify any individuals; this guarantees the preservation of privacy of users' data from infected people, potentially exposed people, attackers, and public authorities.In this way, concerns regarding privacy-preserving issues inherent to other technologies (eg, GPS and QR) can be waved.The detection of close contacts can run automatically in the background without requiring COCOA to be active, all of which can be unnoticeable to the individuals in contact.This feature improves the efficacy of detection, increases the ease of operations, and reduces manual errors [15,18].The Health Center Real-time Information-sharing System on COVID-19 (HER-SYS) is operated and maintained by prefecture-level or local health care providers.It issues a process code when an individual tests positive for COVID-19 by polymerase chain reaction (PCR).These process codes are distributed to the patients through private messaging (eg, emails) and are not exploited to bind private information (eg, telephone numbers that can identify individuals).Hence, privacy protection issues during the data dissemination steps are also not of concern.The notification server is administered by the Ministry of Health, Labor and Welfare in Japan, and its functionality is considerably constrained for privacy protection.The notification server does not store the patients' infection status or any other sensitive personal information.When one individual is notified that they have been infected, they are encouraged to share their status with potential close contacts.However, informed consent and authorization are requested.To prevent malicious inquiries and to guarantee the accuracy of data, the patient must input a process code and authenticate the correctness of their data with the notification server, which transfers the request to HER-SYS.HER-SYS authenticates the accuracy and returns the outcome to the notification server, which then distributes the random code of the infected person to all potentially exposed people upon request.The COCOA apps of the exposed people then perform local matching inference based on the retrieved anonymized list of random codes for individuals infected with COVID-19.Note that asymptomatic or presymptomatic infections can be traced effectively because the detection mechanism of COCOA hinges on geographical proximity, which is generally considered to be the critical factor contributing to the transmission of highly infectious disease.If a match is found and the exposure risk is identified, health care 
instructions on the outcome and the severity of risk will be provided. The potentially exposed person can then respond appropriately according to their symptoms. For severely symptomatic individuals, urgent care must be scheduled. In contrast, asymptomatic or mild cases may choose to self-isolate at local sites. As the matching calculation is performed locally and individuals can self-triage their exposure risk, risky in-person visits and cross-transmission at health care facilities can be curtailed. This can prioritize limited health care resources for more severely ill patients and enhance the load-balancing of the pressure on medical systems, thus lowering the risk of collapse of the health care system [14,15,18]. Countries differ in their digital health participation rates [9,12,13]. Based on a report by MIT Technology Review [19], we outlined the statistics of countries adopting or partially adopting Bluetooth digital health by the end of July 2020 (Table 1). On average, the rate of participation is higher for countries using centralized Bluetooth digital health frameworks than for countries using decentralized Bluetooth frameworks; however, the latter frameworks generally outperform the former in privacy protection. Further, countries using decentralized approaches, most of which hinge on voluntary participation, mostly sustain low rates of participation. None of these countries currently has a participation rate ≥60% [19]. To illustrate how countries using Bluetooth digital health can differ in their capacity for privacy preservation, we provide details of two centralized frameworks: one voluntary (ie, Aarogya Setu in India [20]) and one involuntary (ie, TraceTogether in Singapore [21]). Emphasis is placed on the core qualitative concepts employed to clarify the major differences; detailed quantitative technical specifications are not examined in this paper (Textbox 1).
COCOA (Bluetooth-based digital health framework in Japan)
• Personal information, including names, telephone numbers, and GPS locations, is not requested or collected.
• Participation is voluntary. Informed consent to participate is requested.
• Close contact detection automatically runs in the background without requiring COCOA to be active, resulting in ease of use and low power consumption.
• Mobile phones generate and exchange periodically changing random codes with close contacts.
• Close contact information is saved only on local mobile phones for 14 days and is not transmitted. Individuals poll the central server (without sharing private information) to retrieve the list of infected people, not the reverse.
Aarogya Setu (Bluetooth-based digital health framework in India)
• Personal data, including name, gender, travel history, and telephone numbers, are requested and shared with the central server.
• GPS locations are collected and used to trace the paths of infected individuals.
• Participation is voluntary.
• There are risks of data inaccuracy and illegal data use.
• It is difficult to operate.
• Its power consumption is high.
TraceTogether (Bluetooth-based digital health framework in Singapore)
• Random tokens recording close contacts are shared with the central server, which maintains a database linking tokens and telephone numbers. There is a likelihood of linkage attacks and unlawful use.
• Infected individuals are required by law to share their infection status, including telephone and unique identification numbers.
• Individuals are notified of their exposure risk via identifiable information (eg, telephone numbers). 
• GPS location data are not tracked; however, telephone and unique identification numbers are collected by public authorities. • Participation is generally mandatory. • Inference of exposure match is performed on the authority-administered central server. COCOA differentiates from Aarogya Setu in privacy preservation in that the latter framework requests individuals' self-reported personal data, including gender, travel history, and telephone numbers; this raises concerns regarding the accuracy of data in the case where users share wrong information, as no mechanism is provided for authentication.Further, because GPS locations are collected by Aarogya Setu, it is debated that these data could be used to identify individuals without improvement of contact tracing precision (eg, individuals on different floors of the same building) [20].In contrast, in Singapore, individuals are legally required to share their infection status; hence, the rate of participation can be guaranteed.However, personal data such as telephone numbers and unique identification numbers are also collected, which creates concerns regarding the possibility of illegal use of private information.Further, inference of exposure is performed on the central server; hence, technical pressure on the medical system must be optimized [21]. Screenshots and Diagram of the COCOA App Figure 3 shows screenshots of the COCOA app.The app can be divided into components for infected people (Figure 3A), potentially exposed people (Figure 3B), and general settings (Figure 3C).The process can be described as follows: 1.An individual is tested and identified as COVID-19-positive using PCR.The individual receives a process code from HER-SYS, verifies the accuracy of their status through the notification server, and authorizes anonymized sharing of infection status (Figure 3A). 2. Potentially exposed individuals retrieve the up-to-date list of COVID-19 infections (Figure 3B). 3.If an exposure match is identified, exposure statistics and subsequent response guidance are promptly provided.Otherwise, a message indicating no exposure is promptly provided (Figure 3B). 4. Close-contact recording and COCOA participation are configurable.Records will be erased if the user opts out (Figure 3C). 
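The decentralized record-and-match logic described in the preceding sections can be sketched in a few lines: each device stores only the rotating random codes it has heard, purges them after 14 days, and later checks them against a downloaded list of codes published by confirmed cases. The rotation period, code length, class and function names below are illustrative assumptions, not COCOA's or the Apple/Google exposure-notification framework's actual cryptographic scheme.

import secrets
import time

ROTATION_S = 15 * 60           # assumed code rotation period
RETENTION_S = 14 * 24 * 3600   # codes kept for 14 days, as in COCOA

def new_random_code():
    # Generate a fresh, unlinkable random code to broadcast over Bluetooth
    return secrets.token_hex(16)

class LocalContactStore:
    # Codes observed from nearby devices, kept only on the local phone
    def __init__(self):
        self.observed = []   # list of (timestamp, code)

    def record(self, code, now=None):
        self.observed.append((now or time.time(), code))

    def purge_old(self, now=None):
        now = now or time.time()
        self.observed = [(t, c) for t, c in self.observed if now - t <= RETENTION_S]

    def match(self, published_infected_codes):
        # Local matching: True if any stored code was later published by a
        # confirmed case; no personal data ever leaves the device
        infected = set(published_infected_codes)
        return any(c in infected for _, c in self.observed)

# Example: device A broadcasts a code, device B records it; A later tests positive
code_a = new_random_code()
store_b = LocalContactStore()
store_b.record(code_a)
store_b.purge_old()
print(store_b.match([code_a]))   # True: B learns it was exposed without learning who A is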
Statistical Analysis and Simulation Results Digital health can enhance individuals' knowledge and risk perceptions of COVID-19, thus decreasing population mobility [22][23][24][25]. Reducing mobility is a controversial but effective measure to flatten the curve and control global pandemics; it decreases the generation of crowded spaces that aggravate cluster infections [23][24][25]. A simulation was conducted that implies that the spread of COVID-19 in Japan will be gradually contained by reducing population mobility and the amount of time spent in crowded spaces [22]. We observed the dynamics of population mobility at the prefecture level in Japan by comparing data from August 2020 (ie, when COCOA was deployed) with data from the same interval of the previous year (ie, when COCOA was not deployed) (Figure 4). The analysis suggests that nationwide population mobility in 2020 decreased by 20% on average (ranging from a minimum of 12% in Saitama Prefecture to a maximum of 30% in Akita Prefecture). Reducing population mobility lowers the risk of exposure to COVID-19 and the risk of infection [18,25,26]. Since its official deployment on June 30, 2020, COCOA has been used by approximately 15 million individuals as of August 27, 2020 [18]. This finding denotes that around 12.6% of the population of Japan (15 million/118.6 million people aged ≥15 years) chose to participate by August 2020 [27]. The intervention measures deployed in Japan are noncompulsory, informed consent is requested prior to participation, and no legal penalty is imposed in the case of noncompliance; these features are less aggressive than those of digital strategies in some other countries [22][23][24][25]. In Japan, 69,001 confirmed cases of COVID-19 and 1307 deaths were reported from January 14 to September 2, 2020; the numbers of cases and deaths during this time were 3,769,523 and 66,333 for India and 56,852 and 27 for Singapore, respectively [26]. The effectiveness of containment in Japan is greater than that in India in terms of both infected cases and mortality. However, countries with compulsory contact tracing measures (eg, Singapore) appear to outperform those with noncompulsory measures (eg, India and Japan). It was estimated that older people (ie, age ≥65 years) in Japan would comprise 28.7% of the total population by the end of August 2020 [18,25-27]; the rates of severe cases and mortality among older people were 2.0% and 1.9%, respectively, during the same period, which are lower than the global average mortality rate of 3.3% (852,758/25,602,665) [18,26]. A simulation performed at Oxford University suggested that digital contact tracing would fail to decrease the spread of COVID-19 if the rate of participation fell to <60% (600,000/1,000,000) [28]. Similar analyses by other researchers reinforce that varying adoption rates of peer-to-peer contact tracing apps can influence the trajectory of the pandemic [17,29]. 
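Before turning to the full simulation reported below (Figure 5), the intuition behind the high uptake requirement can be sketched with a back-of-the-envelope calculation: a transmission pair is digitally traceable only when both the infector and the exposed contact run the app, so the traceable fraction scales roughly with the square of the participation rate. The sketch below is a simplified illustration under that assumption and is not the simulation model of [30]; the quarantine-efficacy value is an assumed parameter.

def effective_reproduction_number(participation, r0=2.56, quarantine_efficacy=0.8):
    # Toy model: both parties must run the app (probability p**2) and, when a
    # contact is traced, onward transmission is reduced by an assumed efficacy
    blocked_fraction = quarantine_efficacy * participation ** 2
    return r0 * (1.0 - blocked_fraction)

for p in (0.3, 0.6, 0.9):
    print(f"participation {p:.0%}: Rt = {effective_reproduction_number(p):.2f}")
# With these assumptions Rt drops below 1 only when participation approaches 90%,
# consistent in order of magnitude with the full simulation discussed below.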
By employing the simulation model described in [30], we estimated how the rate of participation would affect the trajectory of the COVID-19 pandemic in Japan (Figure 5).The model evaluates scenarios in which the epidemic is established and countermeasures such as contact tracing are employed to control the spread of COVID-19; it can be observed how the trend of the effective reproduction number (R t ) and thus of the outbreak would dynamically change.Prior research identified that when R t , which is defined as the average number of secondary cases generated by a single infectious case, decreases to less than one (ie, R t <1), transmission of the disease will stop and the pandemic will ultimately be contained [22][23][24].The simulation was calibrated to the demographic attributes in Japan [14,22,[25][26][27]29], the basic reproduction number (ie, 2.56) found for Japan [30], and the ratios of symptomatic patients in the report by the National Institute of Infectious Diseases in Japan [31].The assumptions for the simulation are as follows: (1) the population can freely choose to opt in or opt out from COCOA; (2) there are no delays in data sharing; and (3) all the populations in households, schools, workplaces, and other scenarios can be successfully digitally traced.The simulation outcome (Figure 5) shows that when the participation rate increases starting from zero, the effective reproduction number decreases gradually from a value >1.However, the pandemic would finally be contained when a threshold was exceeded and more people chose to opt in.To meaningfully contain the spread of COVID-19 (ie, R t <1), approximately 90% participation of the population would be required, which reinforces prior findings that controlling COVID-19 requires an estimated population uptake ranging from 56%-95% for contact-tracing apps [32]. Strengths, Limitations, and Future Directions The COVID-19 pandemic has imposed unprecedented challenges upon individuals, health care providers, and public authorities; meanwhile, different countries are using differentiated digital technologies and strategies to contain the spread of COVID-19, taking into account both technical and nontechnical factors.Digital health is not a panacea that will solve all difficulties; however, it does enhance the potential to counteract the disease compared to manual contact tracing [33,34]. 
In comparison with the Bluetooth-based digital health frameworks in other countries (eg, India or Singapore) or centralized approaches, COCOA more effectively protects the privacy of individuals from their counterparts, potential attackers, and public authorities without sacrificing accuracy or efficiency.First, users are not tapped to self-report personal data (eg, names or telephone numbers) through the app; this enhances the efficacy and eliminates the need for proofreading when users input incorrect data.Additionally, concerns about malicious use or illegal breach of private data can be waived.Moreover, persons are not requested to share their private information when infected or provide sensitive data, which may be subject to linkage attacks, when notified of infection.The authentication of infection precludes malicious exploitation of or attack on accurate infection information by other individuals, preventing misinformation regarding exposure.Location details, which are an unsuitable proxy for exposure, are not collected; this ensures that the movement paths of individuals will not be tracked and the identities of the individuals will hence not be disclosed. Further, because the matching of exposure inference is performed on local sites, and subsequent provision of instructions when exposure risk is identified can curtail unnecessary in-person visits and risk of crosstransmission, the load of pressure on the health care system can be substantially balanced [9].Bluetooth digital health uses proximity to identify close contact; hence, the speed of close contact detection is faster than that of other non-Bluetooth digital approaches.COCOA can run automatically in the background without interfering with other apps, which reduces errors from manual operation and enhances its efficiency.The deployment of the app contributes to increased risk perception and the reduction of nationwide population mobility. 
Contact tracing is an essential part of transitioning back to normal economic rhythms while simultaneously managing the risk of subsequent cyclical outbreaks [9,33-35]. The benefits can be multiple for a variety of responders. For potentially exposed or infected individuals, it is possible to know whether risky exposure has been established, or to disseminate that knowledge to others, without disclosure of identity or leakage of confidential information, which can increase individuals' confidence and trust in the health care system and their self-awareness of their own behavior changes. This may facilitate appropriate and timely responses to the disease. For potential attackers, as no private information is available regarding either infected or exposed people for illegal or unauthorized exploitation, the abnormal activities inherent to centralized or other digital health frameworks can be avoided. For public authorities, the triage of patients and the cumbersome matching inference of exposure are significantly trimmed; hence, the pressure on the medical system is expected to be alleviated. Further, as public authorities do not store personal private information, the risk from any attack on or misuse of their data is minimized. Bluetooth digital health has great potential to be used as a routine and mainstream tool in future outbreaks [36]. The rate of participation is expected to increase over time. COCOA supplies a new approach that is supplemental to extant digital health frameworks that fail, either partially or completely, in these facets. It could perform or match well in contexts where the population is highly privacy-sensitive and where limited health care resources are at risk of collapse.

Although decentralized telehealth has a variety of benefits and strengths, it has disadvantages as well [33,34,37]. Multiple critical concerns must be addressed to achieve effective containment.

First, the participation rate can essentially affect the trajectory of the outbreak. Studies have shown that societal-level benefit hinges on broad and diverse user participation [19,38-43]. A low rate of participation can be associated with factors such as users' altruism, the population in rural or remote areas, wireless connectivity, availability of digital health, level of digital illiteracy, and legal regulations. In some countries, the use of private data is protected legally; whether this applies to other settings may require more study and more time [5,9]. Solutions that have performed well for some communities may not work well in other communities with different cultural norms, legal regulations, and shared perceptions of privacy. From a legislative perspective, public disclosure of individuals' protected data may be a violation of law in some contexts [39].
A higher level of participation may be achieved through mandatory legal regulations, enforcing adoption or substantial enhancement of shared public awareness.However, the rapid adoption of compulsory digital health measures without public consensus and discussions could provoke debates due to the fundamental heterogeneity in the attitudes regarding how digital health should function and, crucially, who should have access to the generated data [38][39][40][41][42]44,45].Residents' perception of privacy and trust in public authorities can vary from culture to culture, which can impact the captured definition of individual privacy preservation [39][40][41].A survey conducted in five other countries (ie, France, Germany, Italy, the United Kingdom, and the United States) found that people in settings with stronger public privacy and security concerns are relatively less supportive of app-based contact tracing, and individuals with less trust in public authorities are also less supportive [46,47]. Second, the delays in data sharing could allow the spread of COVID-19 to continue, increasing the time and effort needed to contain it [9,46,47].If infectious individuals and their close contacts could be identified with efficacy, the effectiveness of digital health could be increased remarkably, and limited health care resources could thus be prioritized for the quarantining and treatment of the most severe cases [42].However, this mechanism is compromised during a pandemic, in which delays of data sharing occur.Voluntary participation could cause noncompliance, generating a latency in responses [42,43].The spread of COVID-19 hinges partially on the efficacy of data sharing and promptness of responses, given the infectiousness of the pathogen [42].The greater the delays, the more difficult it is to contain the outbreak.Hence, timely sharing of information is critical to prevent subsequent cyclical outbreaks [43].Finally, as data are automatically erased after a periodic interval, it is difficult to evaluate the long-term effects of a decentralized Bluetooth approach [13]. Future research could examine how privacy-enabled noncompulsory Bluetooth digital health can both quantitatively and qualitatively reduce the effectiveness of contact tracing relative to compulsory interventions.It could also examine ways to improve critical factors such as participation rate and delays of data sharing in these settings to enhance the effectiveness of containment.With the combined efforts of a variety of responders, the negative impacts of these factors are expected to be minimized.Coupled with the advancement in digital technologies and scientific understanding, telehealth can be enhanced to serve as a sustainable and mainstream solution to counter the COVID-19 pandemic, and it can be simultaneously employed as a routine tool to protect the privacy and well-being of the public [16]. 
Conclusions
The balance between privacy protection, public health, and other objectives is controversial [13]. COCOA prioritizes the preservation of users' privacy more effectively than the centralized Bluetooth digital health frameworks used in some other countries. The matching inference of exposure is performed locally, and individuals can self-triage their risk of exposure, which facilitates the load balancing of pressure on the medical system; in this respect it works better than centralized frameworks. As public authorities do not collect or manage users' sensitive personal information, concerns regarding illegal use of or malicious attacks on private data can be disregarded. The detection of close contact is rapid and effective, and it reduces the likelihood of cross-transmission and in-person contacts. The background running feature enhances the efficacy of the approach and reduces errors of operation, which could be vital in the fight against highly infectious diseases such as COVID-19.

Since the deployment of COCOA, an average 20% reduction in population mobility has been observed in Japan, which has affected the trajectory of the outbreak. With the wide spread of wireless connections and advancements in digital technologies, digital health can reduce inequality in access to health resources, promote health literacy, and improve risk perceptions. The Tokyo area has observed faster growth in the number of infected cases than other prefectures in Japan [22,24]; hence, substantial improvements in the participation rate and speed of data sharing are of great concern in these densely populated communities or in places where the risk of close contact is high [22-24].

Countries diverge in their digital health frameworks and technologies. Decentralized privacy-first Bluetooth approaches can protect citizens' sensitive information, but possibly at the expense of compromised participation and impeded central surveillance. In contrast, a centralized data-first framework can guarantee traceable data but may substantially violate individuals' privacy. Cultures differ in the perception and definition of privacy. The lack of a consensus on privacy protection in contact tracing incurs risks of noncompliance, as evidenced by recent privacy scandals [42,43]. This has hindered governments' capacity to respond effectively to the pandemic. The deployment and acceptance of telehealth in specific settings reflect both technical and nontechnical factors such as regional heterogeneity, cultural conflicts, shared altruism, and legal regulations [9,44].

Given that participation and data sharing are nonbinding, the privacy-first approach could consistently generate skepticism, but ideally it will enable the implementation to mitigate current and subsequent cyclical pandemics [41]. Coupled with the efforts of a variety of responders, the rate of participation and the delays in data sharing are expected to improve over time.
Countries using the decentralized Bluetooth approach must prioritize deliberation of how currently unresolved problems can be addressed to contain the spread of COVID-19.Digital health itself cannot overcome all these challenges; however, by combining it with other countermeasures, such as social distancing, early case isolation, and hygiene practice, it is feasible to achieve meaningful containment [45].With these improvements, it could be feasible to achieve a balance between privacy preservation and public health by enabling individuals to have full control over sensitive data, identify local exposure risk, share their data in a timely fashion, and enact prompt responses [42]. This decentralized Bluetooth approach will undoubtedly upgrade its definition with advancement in digital health, digital technologies, and a more accurate scientific understanding of the disease.Lessons learned from this current deployment will play paramount roles in future pandemics, further aid the establishment of an effective routine surveillance approach, and provide meaningful insights for other countries and regions. Figure 2 . Figure 2. Core mechanisms and diagram of the COVID-19 Contact-Confirming Application (COCOA) framework.HER-SYS: Health Center Real-time Information-sharing System on COVID-19; MHLW: Ministry of Heath, Labor and Welfare. Textbox 1 . Core concepts of the COVID-19 Contact-Confirming Application (COCOA) framework and of Bluetooth digital health frameworks in other countries. Figure 3 . Figure 3. Screenshots and diagram of the use of the COVID-19 Contact-Confirming Application (COCOA) app.(A) Infected users verify their infection status and authorize data sharing; (B) potentially exposed users retrieve the infection list and triage their exposure risk; (C) users can adjust their close contact settings. Figure 4 . Figure 4. Comparison of the population mobility in Japan in August 2020 versus August 2019. Figure 5 . Figure 5. Simulation of the association between the rate of participation in the COVID-19 Contact-Confirming Application (COCOA) framework and the R t of COVID-19.R t : effective reproduction number. Table 1 . Estimated participation rates of countries employing Bluetooth frameworks. a Data for Japan as of August 27, 2020.
7,344.4
2020-09-13T00:00:00.000
[ "Medicine", "Computer Science" ]
Multi-features fusion classification method for texture image : Aiming at the problem of poor classification and recognition rate for distorted texture image based on single texture feature, a classification method of texture image based on multi-features fusion is proposed. First, the corresponding GLCM features, HOG features, and HU moment features were extracted from the segmented texture images. Then, the three feature matrices were cascaded into a new feature matrix, and the principal component analysis method was used to reduce the dimension of the new feature matrix. Finally, the fused feature matrix was inputted to the support vector machine (SVM) for training, so that the final discriminant model was obtained. The model is applied to the classification of distorted texture images and compared with the single texture feature classification method through experiment. The results show that the multi-features fusion classification method improves the classification accuracy of distorted texture images and has better real-time performance. Introduction Texture analysis and texture feature extraction of image have been the active field of image processing. The extraction of representative texture feature is the key to describing the texture image, which directly affects the accuracy of subsequent classification [1]. Researchers have proposed many different methods of texture feature extraction to describe the texture information of images, including grey-level co-occurrence matrix (GLCM) [2], Markov random field (MRF) model [3], discrete wavelet transform (DWT) [4], local binary pattern (LBP) algorithm [5]and so on. As a single texture feature cannot fully represent the texture information, the feature fusion method is proposed. The fusion algorithm of Gabor filter [6] and GLCM was proposed by Clausi [7] to extract the texture information at low, middle, and high frequencies simultaneously, which improved the ability of feature space separation. The fusion algorithm of multi-scale GLCM and semivariogram method [8] was proposed by Acqua [9], which combined multi-scale analysis and variogram to better reflect the texture features of radar images. An improved texture descriptor called ILBP (Improve LBP) was proposed by Xu Shao-Ping [10], which combined the advantages of LBP and GLCM to improve the ability of describing regional features of images effectively. There are the problems of the noise pollution, and uneven illumination in the process of filming, which cause inevitably the distortion and degradation of the image [11], and affect the classification of the texture image. A classification method based on multi-features fusion is proposed to solve the above problem. The classifier uses a support vector machine (SVM), which has the advantages to classify the data quickly and the strong ability to suppress the noise [12], and can achieve classification for distorted texture images better. Image segmentation In order to extract the texture features of the target image better, the target image needs to be segmented from the background. The segmentation method based on threshold is used, which includes boundary detection and rotating segmentation. Boundary detection In order to segment the target image from the background, the boundary of the target image is first obtained. 
Threshold-based segmentation methods require grayscale processing of the image, because a grayscale image contains only one component and does not lose any texture information, which makes it more convenient to handle. The mean value method is adopted for greying the image; its formula is

Gray(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3.

The collected image is greatly influenced by external light. If a fixed threshold is used to detect the boundary of the grey image, changes in the external light will affect the segmentation result. The dynamic threshold method was therefore used here, which finds the threshold in real time to detect the boundary contour.

Rotating segmentation The bounding matrix of the boundary contour was obtained by boundary contour detection of the target image. According to the angle between the right boundary line and the vertical direction, the transformation matrix was obtained, and the image was rotated by an affine transformation. An affine transformation maps a vector space onto another vector space through a linear transformation and a translation, without losing any valid information in the image. Its expression is

[x, y, 1] = [v, w, 1] T = [v, w, 1] [t11 t12 0; t21 t22 0; t31 t32 1]

where (x, y) is the pixel coordinate of the transformed image, (v, w) is the pixel coordinate of the image before transformation, and T is the 3 × 3 transformation matrix. The image can be rotated by the transformation matrix. With contour extraction on the rotated image, the region inside the contour can be segmented into an independent image for subsequent classification.

Texture feature extraction Feature extraction is the most critical step in texture image classification. In order to describe the features of distorted texture images better, the GLCM features, HOG features, and HU moment features of the segmented texture images were extracted simultaneously.

GLCM feature extraction GLCM obtains the co-occurrence matrix from the corresponding grey-level image, and eigenvalues calculated from the co-occurrence matrix represent the texture information contained in the image. The grey-level co-occurrence matrix can reflect the overall information of the image greyscale in all directions, adjacent intervals, and amplitude changes. Let f(x, y) be an image of M × N (in pixels) and Ng its number of grey levels; then the grey-level co-occurrence matrix that matches a specific spatial relationship can be expressed as

P(i, j) = C{ (x1, y1), (x2, y2) ∈ M × N : f(x1, y1) = i, f(x2, y2) = j }

where C denotes the number of elements in the set and P is a matrix of size Ng × Ng. If the distance between (x1, y1) and (x2, y2) is d and the angle with the horizontal coordinate is θ, grey-level co-occurrence matrices P(i, j, d, θ) with different intervals and angles can be obtained. In order to describe the texture more intuitively, four statistics, namely contrast, entropy, energy, and the deficit (inverse difference) moment, are computed from the co-occurrence matrix; they are recorded as f1, f2, f3, and f4, respectively. In their standard forms, the calculation formulas are

f1 = Σi Σj (i − j)² P(i, j)
f2 = −Σi Σj P(i, j) log P(i, j)
f3 = Σi Σj P(i, j)²
f4 = Σi Σj P(i, j) / (1 + (i − j)²).

HOG feature extraction Histogram of oriented gradients (HOG) is a feature descriptor based on local areas, computed on an image that has been partitioned into zones. The principle of the division is to form uniform small areas of the same size, called cell units, which are connected together without gaps. In order to reduce the interference of illumination and similar effects, block normalisation is also applied.
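Before turning to the details of the HOG computation, the segmentation and GLCM steps described above can be sketched in code. This is a minimal illustration (Python with OpenCV 4.x and a recent scikit-image); the adaptive-threshold block size, the GLCM distance, and the 64 × 64 crop size are illustrative choices rather than the exact settings used in the experiments.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops


def segment_tile(bgr: np.ndarray, out_size: int = 64) -> np.ndarray:
    """Grey the image, threshold it adaptively (a stand-in for the dynamic threshold),
    find the largest contour, deskew it with an affine rotation, and crop the tile."""
    grey = np.mean(bgr, axis=2).astype(np.uint8)              # mean-value greyscale
    mask = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 51, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), (w, h), angle = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)        # affine rotation matrix
    upright = cv2.warpAffine(grey, rot, grey.shape[::-1])
    x0, y0 = max(int(cx - w / 2), 0), max(int(cy - h / 2), 0)  # naive crop of the box
    tile = upright[y0:y0 + int(h), x0:x0 + int(w)]
    return cv2.resize(tile, (out_size, out_size))


def glcm_features(tile: np.ndarray) -> np.ndarray:
    """Contrast, energy, homogeneity and entropy at 0/45/90/135 degrees (16 values)."""
    glcm = graycomatrix(tile, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).ravel()
             for prop in ("contrast", "energy", "homogeneity")]
    p = glcm[:, :, 0, :]
    entropy = -np.sum(p * np.log2(p + 1e-12), axis=(0, 1))     # one value per angle
    return np.concatenate(feats + [entropy])
```

With one distance and four angles, the function returns 4 statistics × 4 directions = 16 values, matching the feature count reported in the experiments.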
The HOG algorithm divides the different gradient directions from 0° to 360° into nine sections, and then the images are divided into several blocks (16 × 16), each block is reconstructed into four cells (8 × 8). For each cell, the gradient direction and modulus of each pixel are calculated. Finally, the gradient histograms of N cells were combined into a high-dimensional vector. The gradient of the pixel point of (x, y) in the image are as follows. where the Gx(x, y), Gy(x, y), and H(x, y) represent the horizontal gradient, the vertical gradient, and the pixel value of the pixel located at (x, y) in the image, respectively. The gradient magnitude and gradient direction at pixel (x, y) are as follows, α(x, y) = tan −1 (Gy(x, y)/Gx(x, y)) (10) HU moment feature extraction The collected texture image may be rotated or scaled relative to the template image, which can affect the accuracy of the classification. The HU moment feature has in-variance in rotation and scaling. Assuming that f (x, y) is a two-dimensional image, then its (p + q) order ordinary moments and central moments are calculated as follows. In order to obtain the invariable nature of image scaling, the centre moment can be normalised with the zero-order central moment, the normalisation centre is defined as follows. It is proved that when the image changes in proportion, the normalised central moment becomes as follows. where the ρ is the scaling factor. So the calculation method of the normalised centre moment is as follows. At the same time, the formula for calculating the moments of each order can be simplified, as shown below. The seven invariant moments M1-M7 can be calculated by central moments, but the fluctuation range of the seven invariant moments is large, and it is inconvenient to compare. The fluctuation range of invariant moments can be reduced by formula (17). Multi-features fusion strategy The principal component analysis (PCA) [13] was used to realise the multi-features fusion of texture images here. The normalisation of the extracted feature matrices before fusion can effectively improve the learning speed of the classifier. Feature normalisation As of the variables with larger variance play a greater role in the overall variance. The variables with large variances in the principal component analysis are prioritised, and this will remove some representative features and affect the accuracy of classification. So before the fusion, the feature should be normalised to make it more convenience to analysis and calculate [14]. The normalised formula is shown in formula (18). where the μ and σ are the mean and variance of the feature vector, respectively. Feature fusion based on PCA The normalised matrix also has a large dimension and contains a lot of redundant information, which can cause the performance of the classifier to be worse. In order to extract fewer dimensions and representative features, the features of normalisation were analysed by principal component analysis. The specific steps are as follows. After the processing by PCA, the feature vector becomes a lowdimensional vector, and the feature recognition rate is the highest at 20 dimensions. Fig. 1 shows the classification accuracy in different dimensions of the method used here after PCA processing. Image segmentation Three different textured tiles were selected and 20 samples of each tile image were taken. A total of 60 images were used for training. 
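Before describing the test set, the feature-fusion pipeline outlined above can be made concrete with a short sketch (Python with OpenCV and scikit-learn). It reuses segment_tile and glcm_features from the previous sketch; the HOG window settings follow the 64 × 64 / 16 × 16 / 8 × 8 / 9-bin configuration given in the text (yielding 1764 dimensions), while the RBF kernel and the log scaling of the Hu moments are assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# winSize, blockSize, blockStride, cellSize, nbins -> 49 blocks * 4 cells * 9 bins = 1764
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)


def hu_features(tile: np.ndarray) -> np.ndarray:
    """Seven Hu invariant moments, log-scaled to tame their dynamic range."""
    hu = cv2.HuMoments(cv2.moments(tile)).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)


def fused_features(tile: np.ndarray) -> np.ndarray:
    """Cascade GLCM (16), HOG (1764) and Hu (7) features into one vector."""
    return np.concatenate([glcm_features(tile),
                           hog.compute(tile).ravel(),
                           hu_features(tile)])


# Normalise each feature, reduce to 20 principal components, then train an SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))

# X_train would stack fused_features of the 60 training tiles; y_train holds the tile labels:
# model.fit(np.stack([fused_features(t) for t in train_tiles]), y_train)
```

The pipeline mirrors the strategy in the text: cascade the three feature matrices, normalise them, project onto 20 principal components, and feed the result to the SVM classifier.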
Another 40 random texture images were taken as test sets to verify the classification accuracy of distorted texture images. These tiles have different levels of distortion. Take three samples of each tile in the training sample are shown as Fig. 2. The original image must be segmented from the background, which includes five steps: obtaining the original image, greying the image, threshold dynamic, the affine transformation, and cutting the image. The resulting image for each step is shown in Fig. 3. Extraction of GLCM features: The four parameters of energy, entropy, contrast, and inverse moment were calculated at 0 degree, 45 degree, 90 degree, and 135 degree, respectively. There were 16 values in total. The experimental results show that the four eigenvalues extracted from the LBP image were better than the four eigenvalues extracted from the original image. The original image and LBP image are shown in Fig. 4. The GLCM eigenvalues of six LBP images in horizontal direction are shown in Table 1. Extraction of HOG feature: In HOG of OpenCv, the detection window is 64 × 64 pixels, and the horizontal and vertical sliding steps are eight pixels. The block size is 16 × 16 pixels, and the cell size is specified as 8 × 8 pixels. The gradient histogram of nine directions in each cell is calculated. By the analysis of knowledge, a detection window has 49 blocks, each block consists of four cells, and each cell is a nine-dimensional vector, so a HOG descriptor vector is 1764 dimensions. The result images are shown in Fig. 5. Extraction of HU moment feature: On the basis of normalisation, the seven invariant moments are invariable in rotation and scaling, and they are input to the SVM as another feature of the texture image. Each class of tile image takes two samples. The experiment obtains seven invariant moments data of six images are shown in Table 2. Analysis of experimental results In order to illustrate the effect of feature fusion better, three groups of contrast experiments were designed. The multi-features' fusion method proposed here was compared with two mainstream feature operators of LBP and GLCM. All the images in the experiment were unified as 64 × 64 pixels by using bi-linear interpolation. The experimental result of image is showed in Fig. 6. The comparison of the method used here with the LBP and GLCM feature operators in feature extraction time and recognition rate is shown as Table 3. From the comparison of Table 3, the multi-features' fusion method was shorter than LBP feature in the time of extraction, slightly longer than the GLCM feature, and the method used here was higher than the other two mainstream operators in the recognition rate. Conclusion A multi-features' fusion method for distorted texture image classification was designed to solve the problem of poor recognition rate of distorted texture image. For the problem of the gradient of the texture image relative to the background, the method of dynamic threshold and affine transformation was used to realise image correction. In order to describe distorted texture images better, the HOG features, HU moment features, and GLCM features of LBP images were extracted simultaneously. In order to reduce the amount of computation, the PCA was applied to reduce the dimension of the extracted features. Of course, there are no a large number of samples be trained to enhance the robustness of the system. Therefore, further research and improvement are needed in the future. 
Acknowledgments This work was supported in part by the National Natural Science Foundation of China under Grant No. 5160411.
3,105.2
2019-03-25T00:00:00.000
[ "Computer Science" ]
Effects of Y , GdCu , and Al Addition on the Thermoelectric Behavior of CoCrFeNi High Entropy Alloys Thermoelectric (TE) materials can interconvert waste heat into electricity, which will become alternative energy sources in the future. The high-entropy alloys (HEAs) as a new class of materials are well-known for some excellent properties, such as high friction toughness, excellent fatigue resistance, and corrosion resistance. Here, we present a series of HEAs to be potential candidates for the thermoelectric materials. The thermoelectric properties of YxCoCrFeNi, GdxCoCrFeNiCu, and annealed Al0.3CoCrFeNi were investigated. The effects of grain size and formation of the second phase on thermoelectric properties were revealed. In HEAs, we can reduce the thermal conductivity by controlling the phonon scattering due to the considerable complexity of the alloys. The Y, Gd-doped HEAs are competitive candidate thermoelectric materials for energy conversion in the future. With the development of the social economy and technology, the energy consumption of the world has increased significantly, and the traditional petrochemical energy has been increasingly exhausted.At present, more than 60 percent of energy is lost in the form of heat during the use of energy [16].Thermoelectric (TE) materials are going to be the potential candidates for alternative sources of energy, because heat can be directly converted into the electrical energy and vice versa [17,18].Therefore, the TE materials are also attractive because they can contribute to realizing the sustainable utilization of resources [19].If the industrial waste heat, automobile exhaust waste heat and other waste heat are converted through thermoelectric materials, the energy efficiency will be greatly improved, and the energy crisis and environment pollution will be alleviated.The thermoelectric material is a kind of functional material which utilizes the transport and interaction of carriers and phonons in solids to achieve the direct conversion between the thermal energy and electrical energy.The thermoelectric-power generation or refrigeration device made of thermoelectric material has the advantages of being noise free, zero emission and pollution free, lacking vibration, small volume, etc.It has been applied in deep space exploration and refrigeration, and has great potential in waste heat power generation [20,21]. Conventional thermoelectric materials currently in use include Bi 2 Te 3 based, PbTe based and SiGe based, etc. Bi 2 Te 3 is currently the most commercially successful thermoelectric material near room temperature.It is widely used in the field of the thermoelectric refrigeration, and is also the thermoelectric material with the highest power generation efficiency at low temperature [22].Although Bi 2 Te 3 has been commercialized, its performance can be further improved by doping, composition adjustment, etc. 
PbTe is the medium temperature thermoelectric material that has been studied for the longest time and PbTe-based materials have been successfully used in the NASA aerospace missions many times since 1960.In recent years, PbTe-based thermoelectric materials have made great progress.The application temperature range of the Si-Ge alloy is more than 1000 K.It has been successfully applied to some deep space detectors.For example, in the radioisotope temperature difference battery of the Cassini Saturn detector, the thermoelectric conversion device is prepared by the Si-Ge alloy.However, the reserves of Te and Ge elements are scarce and very costly; Pb is toxic, pollutes the environment, and endangers people's health.Therefore, the development of environmentally friendly, new low-cost thermoelectric materials is particularly urgent. HEAs are always being studied for their mechanical properties.Maybe many new and unexpected properties remain for us to explore.In the search for TE materials, the performance of this kind of material is estimated by the dimensionless figure of merit, defined as ZT = S 2 Tσ/k [23], where S is the Seebeck coefficient, T is the absolute temperature, σ is the electrical conductivity, and k = k e + k l is the total thermal conductivity, where k e and k l are the electronic and lattice components of the thermal conductivity, respectively.In practical applications, the efficiency of the TE material depends on the average ZT over the whole working temperature range, rather than its max ZT [24].So it is our goal to raise the average ZT over the entire working temperature.The larger the ZT value, the better the performance of the TE material.We can easily understand from the equation of ZT that large S and σ, and low k will attain a satisfactory ZT value [25].In addition, the Seebeck coefficient, electric conductivity, and thermal conductivity, which measure the performance of TE materials, are coupled by the carrier concentration [26].The variation trend of the three parameters was summarized in Figure 1 as a result of the great efforts of the researchers [27].We observe that different carrier concentrations lead to various conduction properties.The materials can be divided into insulators, semiconductors, and metals according to their carrier concentration.With the increase of the carrier concentration, the Seebeck coefficient significantly decreases, while the electric conductivity and the electronic thermal conductivity increase [28].At present, semiconductor materials are the hotspot of the thermoelectric properties, and the TE properties of metallic materials are still remain to be studied. Generally, for a TE material, the reduction of the k l is an effective measure to obtain a high ZT value.And k e relates to the conductivity of the materials, whereas k l has nothing to do with other parameters.The scattering of phonons plays a crucial part on k l .The complexity, or disorder in the crystal structure leads to the scattering of phonons.So, it is a feasible method to reduce the lattice thermal conductivity by enhancing the phonon scattering [29][30][31][32][33]. 
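To make the coupling among these quantities concrete, the short sketch below evaluates ZT = S²σT/k and splits k into its electronic and lattice parts using the Wiedemann–Franz estimate k_e = LσT. The Sommerfeld value of the Lorenz number and the sample figures are illustrative assumptions for a metallic alloy, not measurements from this work.

```python
L0 = 2.44e-8  # Sommerfeld value of the Lorenz number, W*Ohm/K^2 (assumed constant)


def figure_of_merit(s_v_per_k: float, sigma_s_per_m: float, k_w_per_mk: float, t_k: float) -> float:
    """Dimensionless figure of merit ZT = S^2 * sigma * T / k."""
    return s_v_per_k ** 2 * sigma_s_per_m * t_k / k_w_per_mk


def lattice_thermal_conductivity(sigma_s_per_m: float, k_w_per_mk: float, t_k: float) -> float:
    """Split k = k_e + k_l using the Wiedemann-Franz estimate k_e = L * sigma * T."""
    return k_w_per_mk - L0 * sigma_s_per_m * t_k


# Illustrative numbers of the right order of magnitude for a metallic alloy:
S, sigma, k, T = 10e-6, 1.0e6, 12.0, 300.0   # V/K, S/m, W/(m*K), K
print(f"ZT  ~ {figure_of_merit(S, sigma, k, T):.4f}")                     # small, as expected for a metal
print(f"k_l ~ {lattice_thermal_conductivity(sigma, k, T):.1f} W/(m*K)")   # remainder after k_e = L*sigma*T
```

Because σ and k_e rise and fall together, lowering the total k in a metallic alloy mostly means attacking k_l, which is the rationale for the phonon-scattering strategies discussed above.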
In HEAs, Yeh summarized four main core effects of this new kind of alloy. One of them is the severe lattice-distortion effect [1,3,14]. The severe lattice-distortion effect is always contrasted with traditional alloys, where the lattice sites are mainly occupied by the principal element. In HEAs, each constituent element has the same possibility of occupying the lattice sites, and since the atomic sizes can differ in some cases, this can lead to severe lattice distortion [1,3,15]. Therefore, in HEAs, strategies to control phonon scattering can generally be realized through the complex nature of the materials [34]. Because of the severe lattice-distortion effect and point defects, HEAs offer a large amount of complexity, which is conducive to phonon scattering. The phase structure of HEAs is usually highly symmetrical, such as the face-centered-cubic (FCC), body-centered-cubic (BCC), and hexagonal-close-packed (HCP) phases. It is therefore possible for this new class of materials to reach a high convergence of the bands close to the Fermi level and attain high Seebeck coefficient values [35-37]. The peculiar microstructures and properties of the HEAs thus provide opportunities to obtain a low lattice thermal conductivity and an appropriate Seebeck coefficient, achieve a high ZT value, and act as a new class of thermoelectric materials.
Nevertheless, the thermoelectric properties of HEAs have not been extensively studied in the literature. Here, annealed Al0.3CoCrFeNi (annealed at 473 K, 673 K, 873 K, and 1073 K), GdxCoCrFeNiCu (x = 0, 0.3), and YxCoCrFeNi (x = 0, 0.05, 0.1) were prepared, and their thermoelectric parameters were studied in this paper.

Experimental Section The button ingots (30 g each) of the samples were prepared by the arc-melting method in a vacuum-titanium-gettered high-purity argon (99.999 volume percent, vol %) atmosphere and cooled by water in a copper crucible. The purity of the elements was greater than 99.95 weight percent (wt %). The samples were flipped and remelted at least five times in order to achieve good homogeneity. We obtained Al0.3CoCrFeNi, GdxCoCrFeNiCu (x = 0, 0.3), and YxCoCrFeNi (x = 0, 0.05, 0.1) ingots. The Al0.3CoCrFeNi was then annealed at 473 K, 673 K, 873 K, and 1073 K in a muffle furnace, respectively. The as-cast samples were cut and then polished to obtain a bright and smooth surface. The crystalline structures of all the samples were obtained with X-ray diffraction (XRD) (Rigaku, Tokyo, Japan) using Cu-Kα radiation, operating in the 2θ range of 20°–100° at a scanning rate of 4°/min. The microstructures of the samples were characterized by an SE-4800 scanning electron microscope (SEM) (Hitachi, Tokyo, Japan) operated in back-scattered electron (BSE) mode. The thermal conductivity of the samples was measured in a laser thermal conductivity analyzer (TC-9000H, Ulvac-Riko, Yokohama, Japan); the cylindrical samples were cut from the center of the ingots with a diameter of 6 mm and a thickness of 1 mm. A Seebeck coefficient analyzer (ZEM-3) (Advance-Riko, Yokohama, Japan) was used to determine the temperature-dependent Seebeck coefficient and the electrical resistivity (1/σ). All of the samples were studied from 298 K to ~873 K. The temperature difference between measurement points was 50 K.

Crystal Structure The XRD patterns of the annealed Al0.3CoCrFeNi are shown in Figure 2.
The Al0.3CoCrFeNi alloy exhibited a simple FCC crystal structure at all the mentioned annealing temperatures. Figures 3a and 4a show the XRD patterns of GdxCoCrFeNiCu and YxCoCrFeNi, respectively. Only the FCC phase was detected when x = 0. With the increase of the Y and Gd contents, new peaks appeared. Those new diffraction peaks were identified as the Laves phase. The microstructures of these HEAs are also displayed in Figures 3b,c and 4b,c. The HEAs with Y addition show the typical cast dendrite (DR), identified as the FCC phase, and interdendrite (IR), identified as the Laves phase. With the increase of the Y content, the grain is refined. Based on the XRD patterns of GdxCoCrFeNiCu, it can be concluded that when x = 0, the diffraction peaks coincide with the FCC phase. In the sample with x = 0.3, both FCC and Laves phases can be observed, while the FCC structure is still the main phase. With the increase of the Gd content, the grain is refined.
Electrical Conductivity The average electrical conductivity for the annealed Al0.3CoCrFeNi alloys is presented in Figure 5a. Electrical conductivity is closely connected with the change of temperature. In general, the electrical conductivity of alloys decreases with increasing temperature, while the electrical conductivity of a semiconductor is directly proportional to the temperature within a certain temperature range [8-11]. We observed from Figure 5a that the Al0.3-HEA followed this trend, and the electrical conductivity was inversely proportional to the temperature [14]. However, the values do not line up exactly with the trend of the annealing temperature. For the as-cast sample and the samples annealed at 473 K and 873 K, the electrical conductivity decreases with increasing annealing temperature, while the other two samples do not entirely follow this rule. It is noted that the sample annealed at 673 K shows the highest electrical conductivity among all of the mentioned Al0.3-HEAs. The conductivity of the sample annealed at 1073 K falls between those of the as-cast sample and the sample annealed at 473 K. The conductivities of GdxCoCrFeNiCu and YxCoCrFeNi are shown in Figures 6a and 7a. The electrical conductivity decreases with the increase of Y and Gd. The Gd0CoCrFeNiCu shows the highest conductivity among all tested samples.

Thermal Conductivity The thermal conductivity (k) of Al0.3CoCrFeNi is shown in Figure 5b. Thermal conductivity was measured between 298 K and 873 K. The samples were tested every 323 K. In general, the thermal conductivity of the Al0.3 HEAs was in direct proportion to the temperature except for some special points. The conductivities of GdxCoCrFeNiCu and YxCoCrFeNi also increase with increasing temperature. Furthermore, it was observed that the k value decreases as the contents of Y and Gd increase. The Y0.05CoCrFeNi was noted for the lowest thermal conductivity among all the samples studied.

Seebeck Coefficient The average Seebeck coefficient values for Al0.3CoCrFeNi are exhibited in Figure 5c. It was observed that the alloys annealed at 673 K and 1073 K reveal a negative Seebeck coefficient. The positive and negative values of the Seebeck coefficient represent different diffusion patterns of the carriers. For the Al0.3-HEAs, the absolute values of the Seebeck coefficient mostly decrease with increasing annealing temperature, except for some points. The change of S did not show an obvious relationship with the test temperature. The Seebeck coefficients of the other two HEA series are shown in Figures 6c and 7c. The absolute values, S, of the GdxCoCrFeNiCu alloys decrease with the increase of the Gd content. Nevertheless, S is in direct proportion to the test temperature. The absolute Seebeck coefficient values for the Yx-HEAs decrease with increasing x in the low-temperature range (298 K to 573 K). In the temperature range of 573 K to 723 K, S0.05 was greater than S0.1, and in the range of 773 K to 873 K, Y0.05 shows the highest absolute Seebeck coefficient.

Discussion Recently, particular interest has focused on the formation of a secondary phase in thermoelectric materials to reduce their lattice thermal conductivity [31]. From the XRD patterns of the as-cast YxCoCrFeNi, the connection between crystal structures and Y contents can be clearly observed. Only diffraction peaks corresponding to the FCC crystal structure were observed in the Y0CoCrFeNi alloy. However, reflections of the Laves phases can be found with the increase of the Y content. The Laves phase can be identified as a YNi type. With the increase of the Y content, the grain is refined. The microstructure of GdxCoCrFeNiCu, shown in Figure 3b,c, was typical of dendritic structures. The grain size decreases with changing x from 0 to 0.3. In multiphase compounds, the inhomogeneous distribution of dopants between the matrix and the secondary phase plays a crucial role in the electronic-transport properties [23]. The XRD patterns and microstructures are shown in Figure 2. All of the Al0.3-HEA samples exhibit a simple FCC structure, and with the increase of the annealing temperature, the grain size increases [38]. We can regard all of the HEA samples as similar to low-dimensional thermoelectric materials, which follow the size effect. A low-dimensional thermoelectric material reduces the mean free path of the phonons through the size effect to decrease the lattice thermal conductivity [29,39-43]. The lattice thermal conductivity decreases with the decrease of the grain size, which is due to the fact that a small grain size will enhance the scattering of long-wave phonons. The grain size and thermal conductivity of all samples conform to this rule [12-15].
From the electrical-conductivity data, we observe that the conductivity of all the HEA samples decreases with increasing temperature, which is consistent with traditional alloys. The Al0.3 HEA annealed at 673 K possesses a high electrical conductivity (~1.1 MS·m−1). The connection between the electrical conductivity and the annealing temperature was, however, not entirely systematic. Furthermore, a small variation in the composition of the main phase will consequently affect the value of the electrical conductivity [36]. With increasing contents of Y and Gd, the k value decreases. This is because the movement of the carriers (electrons in the metal) is affected by the generation of the Laves phase, leading to the decrease of the electrical conductivity.

The change in the Seebeck coefficient for the Al0.3-HEAs was not entirely systematic over the full annealing-temperature range, but S changed in a systematic order at the high annealing temperatures (873 K and 1073 K). The absolute S value decreases as the annealing temperature changes from 873 K to 1073 K, and the coefficient was nearly in direct proportion to the test temperature. The change in the Seebeck coefficient from positive to negative values can be ascribed to a change of the diffusion direction of the carriers. In general, the formation of a second phase in thermoelectric materials will introduce a barrier, which will block the low-energy carriers and thus cause a decrease of the Seebeck coefficient [44,45]. In the high test-temperature range, the Y0.05-HEA shows the greatest absolute Seebeck coefficient. Nevertheless, with the formation of the Laves phase in GdxCoCrFeNiCu alloys, the absolute S values decrease. We believe that this is because the rare-earth element Gd possesses a special electronic structure [46,47]. Moreover, due to bipolar conduction, the Seebeck coefficient is basically in direct proportion to the temperature, since noticeable minority carriers would be excited at high temperatures [21]. We therefore believe that it is necessary to combine both the composition and phase structure with band structures in order to obtain a satisfactory Seebeck coefficient. Serious efforts should be directed towards band-structure engineering to make HEAs effective TE materials. Nowadays, typically investigated TE materials include semiconducting oxides such as ZnO and Bi2O2Se [48,49]. They exhibit ZT values of 0.025 and 0.047 at elevated temperatures of 1073 and 773 K, respectively. Compared to that, the ZT of the HEA studied in the present manuscript is lower. However, high-entropy alloys are not traditional thermoelectric materials. The significance of this paper is to provide an exploration of the possibility of the high-entropy alloy as a thermoelectric material. It is found that the structure of the HEA can be changed by doping elements and heat treatment to improve the thermoelectric properties, which provides guiding significance for the study of the thermoelectric properties of high-entropy alloys. This is an important insight, useful in establishing strategies aimed at designing a new kind of thermoelectric material.

Conclusions We show the potential for the HEAs to act as thermoelectric materials even in the high-temperature range. Moreover, it is found that the investigated Al0.3CoCrFeNi reaches a ZT value of 0.008.

Figure 1. Correlation of thermoelectric parameters and carrier concentrations.
5,791.8
2018-09-29T00:00:00.000
[ "Materials Science" ]
Bayesian Evaluation of Smartphone Applications for Forest Inventories in Small Forest Holdings : There are increasingly advanced mobile applications for forest inventories on the market. Small enterprises and nonprofessionals may find it di ffi cult to opt for a more sophisticated application without comparing it to an established standard. In a small private forest holding (19 ha, 4 stands, 61 standing points), we compared TRESTIMA, a computer vision-based mobile application for stand inventories, to MOTI, a smartphone-based relascope, in measuring the number of stems (N) and stand basal area (G). Using a Bayesian approach, we (1) weighted evidence for the hypothesis of no di ff erence in N and G between TRESTIMA and MOTI relative to the hypothesis of di ff erence, and (2) weighted evidence for the hypothesis of overestimating versus underestimating N and G when using TRESTIMA compared to MOTI. The results of the Bayesian tests were then compared to the results of frequentist tests after the p -values of paired sample t -tests were calibrated to make both approaches comparable. TRESTIMA consistently returned higher N and G, with a mean di ff erence of + 305.8 stems / ha and + 5.8 m 2 / ha. However, Bayes factors (BF 10 ) suggest there is only moderate evidence for the di ff erence in N (BF 10 = 4.061) and anecdotal evidence for the di ff erence in G (BF 10 = 1.372). The frequentist tests returned inconclusive results, with p -values ranging from 0.03 to 0.13. After calibration of the p -values, the frequentist tests suggested rather small odds for the di ff erences between the applications. Conversely, the odds of overestimating versus underestimating N and G were extremely high for TRESTIMA compared to MOTI. In a small forest holding, Bayesian evaluation of di ff erences in stand parameters can be more helpful than frequentist analysis, as Bayesian statistics do not rely on asymptotics and can answer more specific hypotheses. Introduction In measuring forest stands, mobile applications have proven to be a valuable alternative to traditional field measurement devices. The number of mobile applications for forest inventories has increased significantly in the last few years, and information technology development has enabled new techniques such as virtual reality and augmented reality to be implemented in hand-held devices [1][2][3][4][5]. Some of the advantages of mobile applications are user-friendliness and fast processing of the collected data, which may be particularly useful for inexperienced surveyors such as private forest owners. Although private forest owners are not always interested in wood production [6], they manage a sizeable part of private forests in Europe and the US, and they contribute a significant share to the supply of round wood in these countries (see, e.g., [7]). Proper estimation of growing stock, stand density, and tree species composition are prerequisites for sustainable forest management and the sustainable supply of wood and other ecosystem services. One of the mobile applications for stand inventories that is attractive to inexperienced users because of its simplicity is the TRESTIMA mobile application [8]. The advantage of TRESTIMA is that the user can estimate stand parameters just by shooting photos of the desired forest area. The photos are then sent to a cloud service for automated image analysis. 
Tree trunks and tree species are automatically recognized using computer vision, and stand parameters such as basal area, stem count, volume, median diameter and stand height are calculated using predefined site-and species-specific height and volume functions. In measuring stand basal area, the application uses the proportional-to-size sampling principle of the Bitterlich relascope [9] with a dynamic basal area factor, where a single tree represents 0.6 to 1.4 m 2 /ha. Similar to the Spiegel Relaskop, stand basal area is calculated by multiplying the number of trees by the corresponding basal area factor for each tree. Stand volume is calculated by the default volume functions based on the measured median tree height or the corresponding height for each tree species from previously measured similar forests stored in the cloud. The accuracy of TRESTIMA has been tested for different tree species compositions and stand structures in Europe and Russia and was found to be similar to that of traditional angle counting methods [10][11][12]. The TRESTIMA forest inventory system has mainly been used by entrepreneurs and forest companies, but less so by private and family forest owners [8]. Among traditional dendrometers, the Spiegel Relaskop and various angle gauges have been widely utilized. The principle of the Spiegel Relaskop has been implemented in many mobile applications [13]. MOTI, for example, is an application that combines a digital relascope with stem counting and tree height measuring capabilities [14,15]. It provides estimates of stand basal area that are "as good as, if not better than, the analog Bitterlich method", with automatic correction for slope (see [16] p. vii). MOTI cannot reach the level of precision of a Vertex clinometer for tree height unless far enough from the tree [16]. It has proven to be a valuable replacement for the Spiegel Relaskop in rapid stand parameter estimations, with excellent accuracy in stands similar to ours (e.g., [17,18]). MOTI has been considered as a standard private forest stand inventory tool by several private forest enterprises in Switzerland [19], providing inexpensive stand parameter estimates needed for further simulations [20]. When a private forest owner is deciding on a new mobile application for a forest inventory, he expects the new application to be better than the previous one in at least one criterion, without losing accuracy. Accuracy in the eyes of forest owners may be expressed relative to the accuracy of the second-best option available and not in the absolute sense of the allowable tolerance of the sampling error. This means that an alternative application that does not return excessively deviating estimates but is easier, faster, or better in any other aspect may be the preferred application. In more statistical language, the interest of the layman is not to test whether the estimates obtained by the new application deviate significantly from the "true" population value (i.e., whether the null hypothesis of no differences between the "true" population value of the parameter and the value obtained by the application can be rejected) but which one of the two possible assertions about the mean value of the parameter under investigation is more likely, i.e., the means of the stand parameters from two competing applications are equal (H 0 ) or the means are not equal (H 1 ). 
Stated differently, in comparing two applications, forest owners do not treat the stand parameter under estimation as fixed with unknown bounds, depicted by the confidence interval as in the frequentist analysis, but rather as an unknown value that is highly likely within a credible interval in the Bayesian sense. This research question calls for a Bayesian approach to testing the differences between two alternative applications. In Bayesian hypothesis testing, the probability of the null (H 0 ) and the alternative hypothesis (H 1 ), based on the observed data, can be estimated (e.g., [21]), while in frequentist testing, the null hypothesis can only be rejected and a conclusion reached that there is a statistically significant difference between the two methods. There are also other advantages of Bayesian analysis when assessing a new application. In Bayesian hypothesis testing, the hypothesis of a difference between two applications does not need to be precise but can also be informative. This means that the researcher can specify the expected relations between parameters and may include effect sizes [22] (see [23] for details). In measuring stands, forest owners are not just interested in whether the data provide evidence in favor of H 1 (be it one-sided or two-sided H 1 ), but they might also be interested in testing the plausibility of several alternative hypotheses. For instance, Bayesian analysis can weigh the evidence for the hypothesis of one application returning higher means than the other application (H 2 ) versus the hypothesis of one application returning lower means than the other application (H 3 ). The dilemma is not trivial; forest owners are likely to be more concerned about overestimating versus underestimating stand parameters when using a new application than just determining whether a difference between the two applications exists. The difference can be statistically significant but small in size. Another aspect that should be carefully taken into account when testing hypotheses is the sampling method and sample size. Forest stands are usually too large to obtain all the measurements. Instead, a subset of all the measurements in the stand is used, from which the characteristics of the stand are inferred [24]. In a forest inventory, probability sampling ensures that all sampling units (e.g., trees or standpoints) have known chances of being selected. The advantage of using randomization is the absence of systematic and sampling bias. If random selection is made properly, the sample is representative of the stand, and the sample mean is expected to be an unbiased estimate of the population mean. However, probability sampling and large samples require more skill and more funds compared to nonprobability sampling. Private forest owners are likely to use nonprobability sampling because they lack knowledge on how to correctly conduct random sampling or simply because of the intuitiveness of nonprobability sampling techniques, such as purposive sampling, subjective sampling, typical case sampling, convenience sampling and various other combinations. The ability to go out in the forest with a new application, take a few photos, and instantly obtain stand parameter estimates is attractive. 
Although it is clear that such an approach does not guarantee unbiased estimates of the population values and that Type II errors are more likely to occur when sample sizes are small and frequentist statistics is used, nonprobability sampling and small samples do not prevent private forest owners from asking the following two questions before purchasing a new application: (1) Does the new application provide stand parameters that are as credible as those estimated by traditional field measurement devices? (2) What is the probability of overestimating or underestimating stand parameters when using the new application? Since both investigated applications use different standing volume calculation methods, and MOTI uses Swiss tariffs, the applications were compared only with respect to stand basal area and stem count per ha. Specifically, we used Bayesian hypothesis testing to (1) weight evidence for the hypothesis of no difference between the TRESTIMA image analysis application and field measurements with the MOTI digital relascope, relative to the hypothesis of difference; (2) weight evidence for the hypothesis of overestimating versus underestimating stand parameters when using TRESTIMA compared to MOTI, assuming that MOTI is an accurate replacement for the Spiegel Relaskop [16]. Study Area and Field Measurements The study area encompasses 19.4 ha of private forests in two forest compartments in the Kočevje forest management region, Slovenia, approximately 34 km from Ljubljana (45.81° N, 14.67° E), and consists of four stands (Z045, Z046, Z048_29, and Z048_30; Figure 1). All stands were mixed Norway spruce, silver fir, European beech, and Scots pine stands in the timber phase, with Norway spruce dominating the tree species composition. Stands were selected based on the similarity of site conditions and the dominance of Norway spruce and Scots pine, which TRESTIMA recognizes accurately [10]. The number of sampling points was 61. The number of photos taken in each stand was based on the size of the stand, stand structure, and the recommendation that at least 10 photos are needed per stand [25] and was as follows [26]: 17 photos in stand Z045 (6.5 ha), 10 photos in Z046 (2.51 ha), 19 photos in Z048_29 (5.46 ha), and 15 photos in Z048_30 (4.93 ha). All photos were taken by circling the stands, close to their borders, and shooting photos towards the center of the stand. The sampling points from which the photos were taken were chosen subjectively so that they were approximately 30 to 50 m apart. The GPS on a mobile phone, printed stand maps, and the GPS on a hand-held device were used to record the positions of the standpoints. After the stand was photographed, all photos were uploaded to the cloud. Since there were no specific tree species profiles available for Slovenia, we chose Germany as the most suitable one, in which TRESTIMA recognized Norway spruce and Scots pine, while all other species except for silver fir were classified as other species (Figure 2). The application was unable to discern between Norway spruce and silver fir and treated silver fir as spruce. The application calculated tree height for Norway spruce by utilizing the data it has from previously measured similar forests. For Scots pine and other tree species, tree height was measured by a clinometer and manually entered into the application since TRESTIMA did not determine the corresponding height automatically.
After one to two minutes, basal area, stem count, stand volume, median diameter, and diameter distribution were calculated, and a report with all parameters was received on the mobile phone. Approximately 10 m away from the TRESTIMA sampling points, in the direction the photographs were taken, sampling points for digital Bitterlich sampling with MOTI [14] were selected (Figure 3). The distance of 10 m was chosen because the mobile phone camera can only capture 60 to 70 degrees of a full circle, depending on the camera lens, while in conventional Bitterlich sampling, a full circle is surveyed. Moreover, TRESTIMA uses smaller basal area factors and, thus, compared to the relascope, picks up tree trunks that are farther away, which means that the sampling points for MOTI were chosen approximately in the middle of the circular sector photographed by TRESTIMA.
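The angle-count logic behind both applications reduces to summing basal area factors over the trees counted "in" from each sample point. The sketch below illustrates the fixed-factor case (a classic relascope or MOTI-style sweep) and the per-tree dynamic factors that TRESTIMA is described as using (0.6 to 1.4 m²/ha per recognised trunk); all numbers are hypothetical sample data, not measurements from this study.

```python
# Angle-count (Bitterlich) estimate of stand basal area in m^2/ha.
# Fixed basal area factor (BAF): every counted tree adds BAF m^2/ha.
# Dynamic factors: each recognised trunk carries its own factor.
# The counts and factors below are hypothetical illustration data.

from statistics import mean

def basal_area_fixed(baf: float, tree_counts: list[int]) -> float:
    """Mean basal area over sample points for a fixed BAF sweep."""
    return baf * mean(tree_counts)

def basal_area_dynamic(per_tree_factors: list[list[float]]) -> float:
    """Mean basal area when each counted tree has its own factor."""
    return mean(sum(point) for point in per_tree_factors)

if __name__ == "__main__":
    # Classic relascope sweep, BAF = 2 m^2/ha, four sample points.
    print(basal_area_fixed(2.0, [14, 17, 12, 16]))                 # 29.5 m^2/ha
    # Dynamic per-trunk factors for two photographed sectors.
    print(basal_area_dynamic([[1.2, 0.8, 1.4, 0.6], [1.0, 1.1, 0.9]]))  # 3.5 m^2/ha
```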
Bayesian and Frequentist Paired Sample t-Test To test the relative evidence in the stand data about whether stem number and stand basal area estimated by TRESTIMA and MOTI are equal or different, we used a Bayesian paired sample t-test. In the Bayesian paired sample t-test, the Bayes factor BF 01 indicates how likely the hypothesis that TRESTIMA equals MOTI (i.e., the null hypothesis H 0 ) is in comparison to the hypothesis of a difference (i.e., the two-sided alternative hypothesis H 1 ). For instance, if Bayes factor BF 01 equals 10, H 0 is 10 times more likely than H 1 , given the data. BF 01 between 1 and 3, between 3 and 6, between 6 and 10, and above 10 should be considered as anecdotal, mild, moderate, and strong evidence, respectively, in favor of H 0 [27,28], although the thresholds are fully arbitrary and have no statistical meaning. We used proportion wheels to visualize the strength of evidence for H 0 . In proportion wheels, Bayes factor BF 01 was transformed to a magnitude between 0 and 1 and plotted as the proportion of a circular area. In setting the probabilities of H 0 and H 1 before observing the data (i.e., prior probabilities, or priors P(H 0 ) and P(H 1 ); Equation (1)), we used a default Cauchy prior width of 0.707, which means that we assume H 0 and H 1 are equally likely and that we were 50% certain that the effect of an application would be between −0.707 and 0.707.
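A Bayes factor of this kind is commonly computed with the Jeffreys-Zellner-Siow (JZS) formulation for a paired/one-sample t-test (Rouder et al.), which uses exactly this type of Cauchy prior on the standardized effect. The sketch below is an illustration under that assumption rather than a re-implementation of JASP; fed with the study's t = 4.09, n = 4 stands, and r = 0.707, it should return a BF10 of roughly 4, in line with the BF10 ≈ 4.06 reported for the stem-number comparison.

```python
# JZS (Cauchy-prior) Bayes factor for a one-sample / paired t-test,
# following Rouder et al. (2009). BF10 > 1 favours a difference (H1).
# Sketch under the assumption that this matches the test used in JASP;
# small numerical differences from the software output are possible.

import numpy as np
from scipy import integrate

def jzs_bf10(t: float, n: int, r: float = 0.707) -> float:
    """Bayes factor BF10 for a paired/one-sample t statistic with n pairs."""
    nu = n - 1
    # Marginal likelihood under H0 (standardized effect delta = 0).
    null_like = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    def integrand(g: float) -> float:
        # delta | g ~ N(0, g*r^2), g ~ InverseGamma(1/2, 1/2) => delta ~ Cauchy(0, r)
        if g <= 0:
            return 0.0
        a = 1 + n * g * r**2
        like = a ** (-0.5) * (1 + t**2 / (a * nu)) ** (-(nu + 1) / 2)
        prior = (2 * np.pi) ** (-0.5) * g ** (-1.5) * np.exp(-1 / (2 * g))
        return like * prior

    alt_like, _ = integrate.quad(integrand, 0, np.inf)
    return alt_like / null_like

if __name__ == "__main__":
    print(jzs_bf10(t=4.09, n=4))   # roughly 4, i.e. moderate evidence for H1
```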
Bayesian error probabilities P(H 0 |data) and P(H 1 |data) (also called posterior probabilities), which quantify the support for H 0 and H 1 after observing the data, are then calculated by multiplying the Bayes factor BF 01 with the prior odds (Equation (1)). It is well known that the choice of priors affects the resulting Bayes factor. Therefore, the sensitivity of the Bayesian test was checked by changing the Cauchy prior width and inspecting BF 01 . In addition, we conducted sequential analysis, in which we inspected the strength of the evidence supporting the null hypothesis (BF 01 ) with respect to sample size and the range of plausible values for the prior. The results of the Bayesian paired sample t-test were then compared to the results of the frequentist paired sample t-test. The paired sample t-test tests the null hypothesis (H 0 ) that the population mean of the difference between paired observations equals zero. In plainer language, we tested whether the differences between paired observations converge to zero if measurements are repeated indefinitely. In frequentist analysis, the p-value is the central point answering this question; the p-value is the probability of the test statistic being more extreme than or equal to the observed results of a statistical hypothesis test. However, the p-value has frequently been misinterpreted as the probability that H 0 is true or the probability that H 1 is false in a frequentist sense (e.g., [29]). To make the Bayesian and frequentist analyses comparable, we calibrated the p-value by calculating two measures: the Vovk-Sellke maximum p-ratio (VS-MPR) and the frequentist Type I error probability α(p). The VS-MPR is obtained by choosing the shape α of the p-value distribution under H 1 such that the obtained p-value is maximally diagnostic. The VS-MPR value is then the ratio of the densities at point p under H 0 and H 1 [29]. The bound 1/(−e p ln(p)), also called the Benjamin and Berger bound [30,31], is derived from the shape of the p-value distribution when p < 1/e. Under H 0 , the p-value distribution is uniform on (0, 1), and under H 1 , it is decreasing in p, e.g., a beta(α, 1) distribution with 0 < α < 1. For example, if the two-sided p-value equals 0.05, the Vovk-Sellke MPR equals 2.46, indicating that a p-value of 0.05 is, at most, 2.46 times more likely to occur under H 1 than under H 0 . A statistical widget for calculating the VS-MPR was used [32] when the VS-MPR was not reported by the software. The frequentist Type I error probability α(p) for rejecting H 0 was calculated as α(p) = (1 + (−e p ln(p))^−1)^−1 [29]. This represents the probability of false rejection of H 0 at the two-sided p-value. For instance, if the two-sided p-value equals 0.05, the error probability in rejecting H 0 is 0.289. Alternatively, we could also say that in 28.9% of the t-tests, the p-value would be greater than 0.05. Bayesian Informative Hypothesis Evaluation (Bain) To weight evidence for the hypothesis of overestimating versus underestimating stand parameters when using TRESTIMA compared to MOTI, we used a Bayesian informative hypotheses evaluation (bain) paired samples t-test in which the following two alternative hypotheses were evaluated: H 2 : mean TRESTIMA > mean MOTI ; H 3 : mean TRESTIMA < mean MOTI . BF 23 is the ratio between how well the hypothesis of overestimation by TRESTIMA performs in explaining the data compared to the hypothesis of underestimation.
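The two calibration measures defined above are simple closed-form transformations of the p-value, so they are easy to verify; the snippet below reproduces the worked example of p = 0.05 (VS-MPR ≈ 2.46, α(p) ≈ 0.289). Note that the other VS-MPR values quoted in the Results were computed from unrounded p-values, so plugging in the rounded p-values gives slightly different numbers.

```python
# Calibration of a two-sided p-value into the Vovk-Sellke maximum p-ratio
# and the corresponding error probability alpha(p), using the bound
# 1/(-e * p * ln p) described in the text (informative only for p < 1/e).

import math

def vs_mpr(p: float) -> float:
    """Vovk-Sellke maximum p-ratio: maximum evidence for H1 over H0 at this p."""
    if not 0 < p < 1 / math.e:
        return 1.0  # the bound is uninformative for p >= 1/e
    return 1.0 / (-math.e * p * math.log(p))

def alpha_p(p: float) -> float:
    """Error probability of rejecting H0 at the observed two-sided p-value."""
    return 1.0 / (1.0 + vs_mpr(p))

if __name__ == "__main__":
    p = 0.05
    print(round(vs_mpr(p), 2), round(alpha_p(p), 3))   # 2.46 and 0.289, as in the text
```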
P(H 2 )/P(H 3 ) is the prior odds of these two hypotheses and P(H 2 |data)/P(H 3 |data) is the posterior odds for these two hypotheses (Equation (2)). All tests were performed in JASP 0.11.1 [33], based on a sample of n = 4 stands and on n = 61 paired observations of basal area. The hypotheses can be summarized as follows (Table 1). Table 1. Hypotheses about differences in the number of stems (N) and stand basal area (G) between TRESTIMA and MOTI. Paired sample t-test: H 0 , the population mean of differences in N (or G) between the two applications equals 0; H 1 , the population mean of differences in N (or G) differs from 0. Bain paired sample t-test: H 2 , mean TRESTIMA > mean MOTI ; H 3 , mean TRESTIMA < mean MOTI . Differences in the Number of Stems per Hectare between TRESTIMA and MOTI TRESTIMA estimates the number of stems only at the stand level; therefore, a comparison of the number of stems was not possible at the standing point level. At the stand level, TRESTIMA returned, on average, a higher number of stems than MOTI. The mean difference in stand-wise paired comparisons was +305.8 trees/ha (Table 2). The Bayesian paired sample t-test suggests that the data in the four stands moderately support the alternative hypothesis H 1 (Figure 4a). The Bayes factor BF 01 = 0.246 indicates that the hypothesis of no difference in the number of stems between TRESTIMA and MOTI is less likely than the hypothesis of difference. The BF 01 error of 0.002% suggests that the BF 01 estimate is accurate. The median effect size is 1.5, and the 95% credibility interval for the posterior distribution [0.289, 3.286], plotted as the line above the posterior distribution, indicates 95% certainty that the true population effect size is between 0.289 and 3.286. The fact that the circle on the posterior graph is much lower than the circle on the prior graph illustrates that the alternative hypothesis is favored. The Bayesian error probabilities of P(H 0 |data) = 0.20 and P(H 1 |data) = 0.80, calculated from Equation (1), indicate that the Bayesian error associated with a preference for H 1 is a remarkable 20%. The sequential analysis (Figure 4a, bottom panel) shows how the degree of belief that there are differences between TRESTIMA and MOTI changes with each additional measurement and the width of the priors. The analysis shows mostly anecdotal evidence for H 1 ; for n = 4, the support in the data for H 1 is moderate, irrespective of the prior width.
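The Equation (1) step used above, turning a Bayes factor and prior odds into posterior model probabilities, is a one-line computation; with the reported BF01 = 0.246 and equal priors it recovers the quoted error probabilities of about 0.20 and 0.80.

```python
# Posterior model probabilities from a Bayes factor and prior odds
# (the Equation (1) step described in the text).

def posterior_probs(bf01: float, prior_h0: float = 0.5) -> tuple[float, float]:
    """Return (P(H0|data), P(H1|data)) for Bayes factor BF01 and prior P(H0)."""
    prior_odds = prior_h0 / (1.0 - prior_h0)
    post_odds = bf01 * prior_odds          # posterior odds of H0 over H1
    p_h0 = post_odds / (1.0 + post_odds)
    return p_h0, 1.0 - p_h0

if __name__ == "__main__":
    # Stem-number comparison: BF01 = 0.246 with equal prior odds.
    print(posterior_probs(0.246))   # roughly (0.20, 0.80), as reported in the text
```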
From the Bayesian analysis, we can conclude that our strength of belief that TRESTIMA and MOTI return different stem number estimates is moderate. The frequentist paired sample t-test shows that the null hypothesis of no difference in stem number between TRESTIMA and MOTI should be rejected (t = 4.09, df = 3, p = 0.03; Table 3). Cohen's d, which indicates the size of the difference between two means in standard deviation units, is large (Table 3). However, the Vovk-Sellke maximum p-ratio of 3.83 indicates that the p-value of 0.03 is only, at most, 3.83 times more likely to occur under H 1 than under H 0 , which is clearly not strong evidence for H 1 . Moreover, the frequentist Type I error probability α(p) (i.e., the frequentist probability in rejecting H 0 ) is 0.22, which means that for 22% of the paired sample t-tests, the p-value would be greater than 0.03 and thus more in favor of H 0 . We could also say that in 78% of the t-tests, the results would be more significant than or equal to those in our case, and, in 22% of the tests, the results would be less significant. Table 3. Frequentist analysis of the differences in the number of stems and basal area between closest observations (n = 61) and the four study stands. The nonparametric version of the paired sample t-test gives the opposite result. The significance of the exact nonparametric Wilcoxon signed-rank test (p = 0.13) suggests that we should retain the null hypothesis. When the asymptotic version of the test was used, the result was marginal (p = 0.068). However, these results should be interpreted with caution due to the low statistical power of tests with small samples. Therefore, the only conclusion that we can draw from the frequentist analysis is that neither p-values nor the two calibrated measures of the p-value (i.e., VS-MPR and α(p)) provide strong evidence for the rejection of H 0 . Differences in Stand Basal Area between TRESTIMA and MOTI Bayes Factor BF 01 = 0.729 for stand basal area indicates that the hypothesis of no difference between TRESTIMA and MOTI is only marginally less likely than the hypothesis of difference (Figure 4b, top panel). BF 01 = 0.729 is classified as anecdotal evidence in favor of the alternative hypothesis that the stand basal area differs between TRESTIMA and MOTI. The error of BF 01 was below 0.001%, which suggests that the BF 01 estimate is accurate. The median of the effect size was 0.866; the 95% credibility interval for the posterior distribution [−0.179, 2.178] indicates that the effect size is between −0.179 and 2.178. The Bayesian error probabilities of P(H 0 |data) = 0.58 and P(H 1 |data) = 0.42 suggest that if H 1 is preferred, there is still a 42% chance that H 0 is true. The Bayes robustness check (Figure 4b, middle panel) shows that the degree of belief in either of the two hypotheses does not substantially change by changing prior width and that our weak belief in the alternative hypothesis should remain. The sequential analysis (Figure 4b, bottom panel) shows that a larger sample does not change our belief that neither of the two hypotheses is more likely. The frequentist paired sample t-test shows that the hypothesis of no difference in stand basal area between TRESTIMA and MOTI should be retained (t = 2.28, df = 3, p = 0.11). Cohen's d is 1.141 (Table 3). The Vovk-Sellke maximum p-ratio of 1.54 indicates that the p-value of 0.11 is at most 1.54 times more likely to occur under H 1 than under H 0 , which is clear evidence for H 0 . 
The frequentist probability for rejecting H 0 α(p) is approximately 0.39, which means that for 39% of the paired sample t-tests, the result would be less significant and thus more in favor of H 0 . The sign test, which we used as an alternative to the parametric test because of the small sample and skewness of the differences in G, also suggests retaining the null hypothesis. This result is somewhat counterintuitive since TRESTIMA returned consistently higher estimates of N and G for all stands. The Bayesian paired sample t-test on the 61 nearest observations (i.e., without considering stand boundaries) shows results similar to its stand-wise counterpart. Bayes factor BF 10 = 2.533 indicates that the odds of obtaining different stand basal area estimates with TRESTIMA are 2.533 to one ( Figure 5, top panel). Different priors do not substantially change this belief ( Figure 5, middle panel). However, since both TRESTIMA and MOTI provide point estimates, different standing point positions and camera directions may result in highly divergent estimates ( Figure 5, bottom panel), which converge with a large enough sample. The frequentist test suggests that differences in G between both applications, if subjective sampling is applied, are highly significant (t = 2.52, df = 60, p = 0.01; Table 3). This, however, indicates an uneven-aged stand structure rather than the poor accuracy of TRESTIMA. Using TRESTIMA, taking a few photos subjectively and hoping for an accurate population parameter estimate, may not be prudent unless large samples are used ( Figure 5, bottom panel). Informative Hypotheses Evidence for the hypothesis of obtaining a greater number of stems versus a smaller number of stems with TRESTIMA compared to MOTI is extremely strong. The odds of overestimating versus underestimating N for TRESTIMA compared to MOTI are 45,530 to 1 (Table 4). This means that there is a 99.9% probability of obtaining greater estimates of stand density than lower estimates with TRESTIMA compared to MOTI. The odds of obtaining a greater basal area with TRESTIMA compared to lower basal areas are 88.1 to 1. This means there is only a 1.1% probability of obtaining a lower basal area with TRESTIMA compared to MOTI. The analysis for a pooled sample (n = 61) shows roughly the same results. Discussion The performance of a mobile application can be evaluated from many perspectives. Software developers are concerned about accuracy, precision, and reliability, and the release of a new application is often accompanied by reported errors. However, inexperienced surveyors such as private forest owners may prefer to know whether a new application is as good as, better than, or worse than the established one, be it in terms of accuracy, precision, reliability, or any other performance parameter such as price, simplicity, or speed. We showed that Bayesian reasoning can be more informative than the Neyman-Pearson approach when answering these questions. It has been said that people think in a Bayesian way [34]. Our real-life example supports this.
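As a concrete illustration of how the posterior odds reported above translate into plain probabilities for a forest owner, a two-line helper suffices; assuming equal prior odds, as in the analysis, it recovers the quoted ~99.9% chance of overestimating N and the ~1.1% chance of obtaining a lower G with TRESTIMA.

```python
# Converting the bain posterior odds (overestimation vs. underestimation)
# into a probability of overestimation, assuming equal prior odds.

def prob_from_odds(odds: float) -> float:
    """P(H2 | data) when the posterior odds H2:H3 equal `odds`."""
    return odds / (1.0 + odds)

if __name__ == "__main__":
    print(prob_from_odds(45530))   # ~0.99998: TRESTIMA almost surely returns higher N
    print(prob_from_odds(88.1))    # ~0.989: only ~1.1% chance of a lower G with TRESTIMA
```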
Our private forest owner was not interested in whether the data supports the alternative hypothesis of a population difference between the two methods, but which one of the two hypotheses is more likely to be true after both applications "see" the data from his forest. In the frequentist framework, the answer to the latter question is not possible since the outcome of the statistical test cannot be that H 0 is accepted. High p-values do not mean that H 0 is true, and with a large enough sample size, H 0 will always be rejected [35]. Moreover, in the frequentist framework, one always assumes H 0 of no difference is true and subsequently seeks evidence against H 0 . This means that any other plausible pair of hypotheses or their combinations cannot be compared. Bayesian hypothesis testing does not use thresholds such as significance levels but rather Bayesian factors, which measure the belief in the hypothesis, and are therefore much more intuitive for inexperienced end-users, who would probably have difficulty understanding the p-value. On the whole, our analysis suggests that TRESTIMA estimates of the number of stems and stand basal area differ from MOTI estimates, but evidence for the differences between the applications is, at best, moderate. Conversely, there is almost no doubt that TRESTIMA overestimates stand parameters. The relative biases in basal area of + 17.1% and + 16.1% at the stand level and standing point level, respectively, is consistent with the results of Vastaranta et al. [10], who found a + 15.2% relative bias. The relative bias for the number of stems of + 49.4%, however, is inexplicably high. The reason for this could be the uneven-aged stand structure and relatively low number of trees for which tree diameter was recognized and used as an input in the stem number calculation [36]. The reason could also be that for Scots pine and other tree species, tree height was entered manually, while for spruce, it was calculated by the application. Overestimation of stand basal area leads to standing volume estimations that are too optimistic, which may have a negative impact on harvesting planning in small forest holdings. Moreover, imprecise basal area estimations due to small basal area factors may lead to bias in basal area increments [37] and subsequently to suboptimal harvesting levels. With respect to accuracy and precision, we would like to note that accuracy tests usually (and this is also the case in this paper) do not refer to the closeness of the measurements to the "true" population value of a stand parameter. Instead, estimates obtained by an alternative method are used as a reference (e.g., [11,38]), meaning that the measurement error of the reference method is not accounted for (but see [10,39,40]). Our sample of four stands was extremely small for statistical hypothesis testing, meaning that the statistical power of the tests was low compared to the power usually used in the tests. Private forest owners should be warned about this before discussing the results. However, Bayesian tests are less influenced by sample size than frequentist tests, although small samples can also produce inconclusive results in Bayesian tests. In defense of the frequentist approach to statistical hypothesis testing with small samples, we would like to remind everyone that Student [41] used small samples of four units in his original paper on t-distribution. 
Moreover, many simulation and empirical studies suggest that the t-test can be applied to an extremely small sample size (n ≤ 5) as long as the effect size is expected to be large [42]. The core problem in real-life situations, however, is knowing the true effect size to estimate the Type II error. The effect size of an application can be estimated based on similar studies in the literature or expert opinion on the smallest effect size that is different enough from zero for the user [43]. However, users may have different criteria, and reported accuracy rates may be of little help for uneducated users. The positive side of the power analysis in the Bayesian framework is that the power analysis is not based on Type I and Type II error rates but on the probability that the Bayes factor exceeds the threshold value of the Bayes factor under the null hypothesis and under the alternative hypothesis [43]. In practice, power analysis means collecting new data and inspecting the Bayes factor until the sequential analysis diagram shows no substantial change. For the number of stems, the sequential analysis diagrams (Figure 4a, bottom panel) show that a large sample is needed for decisive evidence in favor of one of the hypotheses. Conversely, new data show similar weights for the hypotheses about the basal area, i.e., the hypothesis of difference is only marginally more likely than the hypothesis of no difference (Figures 4b and 5, bottom panel), which means that the sample for achieving a sufficiently low probability of making an erroneous decision about the basal area is relatively small. The Bayesian framework enables many competitive hypotheses to be evaluated, and analysts have flexibility in their formulation. For example, with the bain ANOVA [23], we can evaluate the plausibility of several claims about applications using equality and inequality constraints: H 1 , that Application 1 provides greater estimates than Application 2, and Application 2 provides greater estimates than Application 3 (H 1 : µ 1 > µ 2 > µ 3 ); H 2 , that all three applications are equal (H 2 : µ 1 = µ 2 , µ 3 ); H 3 , that Application 3 returns the highest estimate (H 3 : µ 1 < µ 2 < µ 3 ). However, if many hypotheses are formulated and evaluated, the Bayes factor will select the hypothesis that best describes the data and not, as it should, the hypothesis that best describes the population from which the data were sampled [22]. The conclusion may be more flawed when multiple hypotheses are tested on small samples; therefore, bain should not be misused for legitimizing uncontested beliefs about the applications. Differences in stand basal area between both applications, when estimated on 61 pairs of closest observations, seem to vary substantially ( Figure 5, bottom panel). However, the differences should be interpreted with caution. First, the standing point locations for TRESTIMA and MOTI were approximately 10 m apart because TRESTIMA samples trees in circular sectors while MOTI uses traditional proportional-to-size sampling in virtual circular plots. This means that the applications surveyed different parts of the forest. The differences may also originate from an uneven-aged stand structure, where a change in sampling point position may result in a greater change in the parameter than in even-aged forests. Vastaranta et al. [10] tested TRESTIMA in mainly even-aged and single layer stands, with an average size of slightly less than 1 ha, while in our case the stands were bigger and more variable in diameter. 
Second, the point-wise estimation of a stand parameter has no usefulness for a forest inventory. Bitterlich sampling selects trees with a probability proportional to their basal areas and thus provides local estimates of stand densities. A standing point level estimate is informative only when the variation of the parameter in the stand is close to null, such as in forest plantations. However, here, inventory crews should be cautious not to sample in similar locations relative to row spacing in order not to compromise the assumption of randomly selected point locations in the tract area [44]. Despite the differences found between both applications, our analysis does not indicate that TRESTIMA is inaccurate, but rather the opposite. We showed that a sufficiently accurate estimation of the basal area can also be obtained by nonprobability sampling, assuming that stand structure is not particularly uneven. Owners of smaller forest holdings who use TRESTIMA in mixed deciduous forests may be less satisfied with the fact that TRESTIMA does not recognize all major tree species, does not estimate the number of stems per hectare for each individual standing point, and has relatively poorly documented computational processes [13]. For instance, the user has no influence on the determination of standing volume, but compared to MOTI, TRESTIMA enables the user to upload and store different types of GPS-tagged data and review the recorded photos with a computer or mobile device. It also enables stand characteristics to be shared with third parties as electronic reports or raw data. Conclusions We showed how private forest owners can use Bayesian statistics to evaluate a new mobile forest inventory application using a small sample and nonprobabilistic sampling. We believe that private forest owners or other inexperienced users are likely to have fewer problems understanding Bayesian hypothesis testing than the concept of null hypothesis significance testing, but only if the results of the Bayes tests are communicated in a simplified manner. We found differences between TRESTIMA and MOTI in the number of stems and stand basal area estimates; however, in the Bayesian sense, the differences were not large. Each private forest owner can decide how satisfied he is with the results of the application in his own forest and whether his testing contradicts our conclusions. By collecting new data, the support for the hypotheses of interest can be continuously updated, and the reliability of our conclusions can be improved. However, private forest owners can already be sure that stand parameter overestimation with TRESTIMA is many orders of magnitude more likely than underestimation. The Bayesian approach could also be used in the multicriteria evaluation of the applications, where private forest owners decide the weights for certain criteria. Finally, no single or composite measure of application performance should replace commonsense reasoning. Funding: This research was funded by the Ministry of Agriculture, Forestry, and Food of Republic of Slovenia, grant number 2330-17-000077, project FOREXCLIM, and the ARRS program P4-0059 "Forest, forestry and renewable forest resources".
10,162.6
2020-10-29T00:00:00.000
[ "Mathematics" ]
A new approach for the solutions of the fractional generalized Casson fluid model described by the Caputo fractional operator. The fractional Casson fluid model has been considered in this paper in the context of the Goodman boundary conditions. A new approach for obtaining the solutions of the Casson fluid models has been proposed: the double integral method and the heat balance integral method. These two methods constitute the integral balance method. In these methods, the exponent of the approximate solutions is an open problem, but this issue is intuitively solved by using the so-called matching method. The graphical representations of the solutions of the fractional Casson fluid model support the main results that have been presented. In our investigations, the Caputo derivative has been used. Introduction The numerical and analytical solutions are the subject of many investigations in the literature on differential and fractional differential equations. Many models in the literature admit numerical schemes; thus, numerical approximations are often used to propose the solutions of the differential equations. As numerical schemes, we can cite the implicit scheme, the explicit scheme, the predictor-corrector method, the Adams-Bashforth method, and others. Analytical solutions are not always possible due to the complexity in the structure of many differential equations. In recent years, many methods have emerged in the literature, such as the Fourier transform, the homotopy analysis method, the homotopy perturbation method, and many others. For solving the diffusion equations and the fractional diffusion equations, we also note the numerical schemes, the Fourier transform, and the double Laplace transform. Apart from the numerical schemes, where we always obtain approximations of the solutions, there exist some new methods for proposing the analytical solution of the fractional diffusion equations and of some classes of fluid models such as second-grade fluids and Casson fluids, Stokes problems [25], Rayleigh problems [12], and others. In recent years, we have noted the emergence of fractional operators in all the fields of physics, mathematical modeling, heat equations, and others [2], and many applications of fractional operators in real-world problems continue to be provided; see the investigations in [20,28,30,32,33,34]. As fractional operators, we note the derivatives with singular kernels and the derivatives with non-singular kernels [4,3,7,11,14,19,30]. Fluid models and diffusion equations with integer-order or non-integer-order derivatives occupy a great part of the literature nowadays. Many papers in the literature address fluid models [31,35]. In [29], Sene proves that the integral balance method is compatible with the second-grade fluid models; the solution has been proposed in that paper. In [26], Sene applies the integral balance method to solving Stokes's first problem, and some interesting results related to the compatibility of this method with the Stokes problem have been established. In [8], Ghosh et al. present investigations of the Casson fluid model over an exponentially stretching permeable sheet and validate their results by comparing them with the results in the literature. In [9], Hamid proposes numerical solutions of fluid models in the context of fractional-order derivatives. In [23], Sheikh et al. propose a review of the theoretical aspects of nanofluid models.
In [22], Saqib applies the fractional derivative with a non-singular kernel, namely the Atangana-Baleanu derivative, to a nanofluid model over an inclined plate. In [1], Aman et al. apply the fractional operator to fluid models by considering an operator with a singular kernel. In [24], Sheikh proposes the analytical solution of a Casson fluid defined with a fractional-order derivative using the classical Fourier sine transform. Many investigations related to fluid models, heat equations, and fractional diffusion equations exist in the literature; see more investigations in [5,13,27,28]. In this paper, a new method for getting the analytical approximations of the solutions of the fractional diffusion equations has been proposed. The technique used in this paper relies on a physical concept, namely the penetration depth. We utilize the integral balance method. We introduce, in particular, the heat balance integral method and the double integral method for solving the equations of the Casson fluid model. The integral balance method is compatible with heat equations, the second-grade fluid, and the Stokes problem. This paper shows that it is also compatible with Casson fluid models. These two methods are based on single integration and double integration of the diffusion equations. The main idea of these methods is that we stipulate that an approximate solution of the fractional diffusion equation exists, and its form depends on the penetration depth and a specific exponent. The question will be to determine the penetration depth and the exponent explicitly. In Section 2, we recall the fractional operators and some properties. In Section 3, we propose the fractional model under consideration. In Section 4, we investigate the solutions of the model using the integral balance methods. In Section 5, we represent the graphics and make the interpretations. Concluding remarks are given in Section 6. Basic calculus tools These paragraphs focus on the tools used in fractional calculus, namely the fractional operators. We utilize in this paper the Caputo derivative, the Liouville-Riemann integral, and their associated Laplace transforms. We enumerate the following definitions. Definition 2.1. [16] Assume that k : [0, +∞) −→ R; the Liouville-Riemann integral of k is given by (I^α k)(t) = (1/Γ(α)) ∫_0^t (t − s)^(α−1) k(s) ds, with t > 0, the order α ∈ (0, 1), and the gamma function denoted by Γ(·). Note that in the above equations, all the definitions are restricted to the interval α ∈ (0, 1). Still, these definitions can be extended to arbitrary non-integer orders outside of the interval considered in this section. For more information, the reader can refer to the literature. Another remark is that generalizations of all the above definitions exist; these generalizations can be found in [10]. The discrete versions of all fractional derivatives exist too. The other fractional derivatives are not mentioned in this paper because we do not use them, namely the Mittag-Leffler fractional derivative [2], the exponential fractional derivative [7], the conformable derivative [15], and many others. The list is long. We finish this section by recalling the Laplace transform of the Caputo derivative, which is utilized to solve the differential equations with non-integer order derivatives. We have the relationship L{D^α_t k(t)}(s) = s^α K(s) − s^(α−1) k(0), where L denotes the classical Laplace transformation and K(s) = L{k(t)}(s). Fractional model for generalized Casson fluid Let us present the fractional model considered in this section.
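Before turning to the model, the Caputo derivative recalled above can be illustrated numerically with the standard L1 discretisation; this is a sketch for orientation rather than part of the original solution procedure, and the test function and parameters are arbitrary.

```python
# Numerical check of the Caputo derivative via the standard L1 scheme.
# For f(t) = t^2 the exact Caputo derivative of order alpha in (0,1) is
# 2*t^(2-alpha)/Gamma(3-alpha); the scheme should approach it as the step
# size shrinks. Illustrative only.

import math

def caputo_l1(f, t: float, alpha: float, n_steps: int = 2000) -> float:
    """L1 approximation of the Caputo derivative of order alpha at time t."""
    h = t / n_steps
    grid = [f(k * h) for k in range(n_steps + 1)]
    total = 0.0
    for k in range(n_steps):
        # weight for the increment f(t_{j+1}) - f(t_j) with j = n_steps - 1 - k
        w = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        total += w * (grid[n_steps - k] - grid[n_steps - k - 1])
    return total * h ** (-alpha) / math.gamma(2 - alpha)

if __name__ == "__main__":
    alpha, t = 0.7, 1.5
    approx = caputo_l1(lambda s: s ** 2, t, alpha)
    exact = 2 * t ** (2 - alpha) / math.gamma(3 - alpha)
    print(approx, exact)   # the two values agree to a few decimal places
```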
The following equations describe the motion of the fractional generalized Casson fluid model combined with the fractional diffusion heat equation. As an innovation, we consider the Goodman boundary conditions, defined for our problem by the relationships below. The parameters κ and δ denote the penetration depths (for the temperature and the velocity, respectively) of the previous model and satisfy the assumption that, for z ≥ κ and z ≥ δ, the changes of the temperature and of the fluid motion from their initial values are considered negligible. The parameter Gr represents the Grashof number, Pr represents the Prandtl number, M denotes the Hartmann number, and µ = 1 + 1/β represents the Casson diffusion coefficient. The main novelty in this model is the consideration of the Goodman boundary conditions. In short, we will focus on the impact of the Goodman boundary conditions on the diffusion processes. Note that, in classical physics, Eq. (6) is called the fractional diffusion equation or fractional heat equation, and Eq. (5) is called the fractional diffusion equation with a reaction term. The combination of these two equations represents the fractional generalized Casson fluid model. Solution algorithms for the generalized Casson fluid model In this section, we describe the solution algorithm. For both equations we adopt mathematical methods of physics; the methods adopted are called integral balance methods: the heat balance integral method and the double integral method. The methods are described as follows. For the integral balance methods, according to the Goodman proposition, we suppose that the approximate analytical solution is described by Eq. (8), where κ is the penetration depth as described in the presentation of the model, and n is the exponent of the approximate solution, which will be determined using the matching method. The second step consists of integrating the fractional differential equation from 0 to κ, taking into account the approximate solution Eq. (8). The last step consists of solving the fractional differential equation obtained after the single integration from 0 to κ, which eventually depends on time. For the double integral method, according to the Goodman proposition, we keep the approximate analytical solution given in Eq. (8). The second step consists of applying the double integration to the fractional differential equation, the first integration from 0 to κ and the second integration from z to κ, taking into account the approximate solution Eq. (8). The last step consists of solving the fractional differential equation obtained after the double integration. Note that all these methods use the given Goodman boundary conditions. To experiment with these methods, we begin our solution procedure by applying the heat balance integral method. Consider the fractional diffusion equation defined by Eq. (6) and integrate it from 0 to κ. Using the approximate solution in Eq. (8), we obtain after calculations the relationship in Eq. (10). Multiplying by the parameter (n + 1)κ and applying the Laplace transform of the Caputo derivative to both sides of Eq. (10), we have s^α κ̄²(s) = 2n(n + 1)/(Pr s), i.e., κ̄²(s) = 2n(n + 1)/(Pr s^(1+α)), where κ̄²(s) denotes the Laplace transform of κ²(t). Thus, the penetration depth is obtained after applying the inverse Laplace transform to both sides of Eq. (11): κ²(t) = 2n(n + 1) t^α / Pr. According to the heat balance integral method described above, the approximate analytical solution of Eq. (6) is given by the relationship in Eq. (13).
The exponent n is important in our investigations; to determine its value, we continue our resolution by deriving the approximate solution using the double integral method. Applying the double integral, from 0 to κ and from z to κ, to the fractional diffusion equation Eq. (6), and using the approximation described in Eq. (8) to calculate the integration from z to κ, we obtain the relationship in Eq. (15). Applying the Laplace transform to both sides of Eq. (15), we get s^α κ̄²(s) = (n + 1)(n + 2)/(Pr s), i.e., κ̄²(s) = (n + 1)(n + 2)/(Pr s^(1+α)). Thus, the penetration depth is obtained after applying the inverse Laplace transform to both sides of Eq. (16): κ²(t) = (n + 1)(n + 2) t^α / Pr. According to the double integral method described above, the approximate analytical solution of Eq. (6) is given by Eq. (18). We can observe that the exponent n appears again in the approximate solution obtained with the double integral method. It is natural to stipulate that these approximations are the same; this is called the matching method. Therefore, considering Eq. (13) and Eq. (18), we obtain the exponent n = 2. Finally, the approximate solution of the fractional heat equation is obtained by considering the exponent n = 2. The second part of this resolution consists of applying the heat balance integral method and the double integral method to Eq. (5). In other words, we have to solve the fractional differential equation described by that equation. We repeat the same resolutions. First, we apply the heat balance integral method: we apply the single integral from 0 to δ. Replacing u by its supposed approximation, we get after integration a first relationship. Rearranging the last terms and multiplying by the parameter (n + 1)δ, we have to solve the fractional differential equation described by Eq. (23). Applying the Laplace transform of the Caputo derivative to both sides of Eq. (23), we have s^α δ̄²(s) = 2µn(n + 1)/s + [−2M + 2Gr] δ̄²(s), that is, δ̄²(s) = 2µn(n + 1)/(s(s^α + 2M − 2Gr)). Thus, the penetration depth is obtained after applying the inverse Laplace transform to both sides of Eq. (24); this gives Eq. (25). According to the heat balance integral method described above, the approximate analytical solution of Eq. (5) is given by Eq. (8) with this penetration depth. The second method consists of applying the double integral method. We apply the double integral, from 0 to δ and from z to δ, to the fractional diffusion equation Eq. (5). We assume the approximation described in Eq. (8) for u and, calculating the integrations, we obtain D^α_t [δ²/((n + 1)(n + 2))] = µ u(0, t) − M δ²/((n + 1)(n + 2)) + Gr δ²/((n + 1)(n + 2)), that is, (1/((n + 1)(n + 2))) D^α_t δ² = µ − M δ²/((n + 1)(n + 2)) + Gr δ²/((n + 1)(n + 2)). Applying the Laplace transform to both sides of Eq. (27), we get the relationship in Eq. (28). Thus, the penetration depth is obtained after applying the inverse Laplace transform to both sides of Eq. (28); this gives Eq. (29). According to the double integral method described above, the approximate analytical solution of Eq. (5) is given by Eq. (8) with the penetration depth from Eq. (29). We can observe that the exponent n appears again in the approximation of the penetration depth with the double integral method. It is natural to stipulate that these approximations are the same. Therefore, considering Eq. (25) and Eq. (29), we obtain the exponent n = 2.
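The spatial integrations and the matching condition used above are easy to verify symbolically, assuming the Goodman-type profile (1 − z/κ)^n for the approximate solution in Eq. (8); this profile is the standard choice for the integral balance method and is consistent with the coefficients 2n(n+1) and (n+1)(n+2) derived above, but the exact form of Eq. (8) is not reproduced in the text, so the sketch below should be read as an assumption.

```python
# Symbolic check of the two integral-balance steps for the assumed profile
# theta(z, t) = (1 - z/kappa)^n, and of the matching condition fixing n.
# Sketch only; it verifies the algebra quoted in the text.

import sympy as sp

z, zp, kappa, n = sp.symbols("z zp kappa n", positive=True)
profile = (1 - z / kappa) ** n

# Heat balance integral method: single integration from 0 to kappa.
single = sp.integrate(profile, (z, 0, kappa))
print(sp.simplify(single))                       # expected: kappa/(n + 1)

# Double integral method: integrate from z to kappa, then from 0 to kappa.
inner = sp.integrate((1 - zp / kappa) ** n, (zp, z, kappa))
double = sp.integrate(inner, (z, 0, kappa))
print(sp.simplify(double))                       # expected: kappa**2/((n + 1)*(n + 2))

# Matching the two penetration depths, 2*n*(n+1) = (n+1)*(n+2), gives n = 2.
print(sp.solve(sp.Eq(2 * n * (n + 1), (n + 1) * (n + 2)), n))   # [2]
```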
Finally, the approximate solution of the fractional heat equation with the reaction term is obtained by considering the exponent n = 2. Graphics and interpretations of the solutions This section is the main section of this work. We will analyze the impact of the fractional-order derivative on the diffusion process, particularly on the temperature, and the effect of the fractional order on the velocity of the fluid in the z-direction. We also analyze the impact of the Prandtl number Pr, the Hartmann number M, the Grashof number Gr, and the Casson diffusion coefficient µ on the velocity of the fluid and its temperature. Let us begin with the fractional order α and the temperature. We fix the Prandtl number to Pr = 0.25 and the time t = 0.3. The fractional-order derivative varies in (0, 1). In Figure 1, we notice that all the trajectories decrease as z increases. We note that when the order α increases, all the temperatures decrease as well, following the direction indicated by the arrow in Figure 1. We conclude that the fractional order α accelerates the decrease of the temperature; thus the fractional order α has an acceleration effect. We next analyze the impact of the fractional order α on the velocity. As in the previous analysis, here we fix the Prandtl number to Pr = 10, the time t = 0.3, the Hartmann number M = 15, the Grashof number Gr = 10, and the diffusion coefficient to β = 10. The fractional-order derivative varies in the interval (0, 1). In Figure 2, we notice that the velocity decreases as z increases. We note that when the order α increases, the velocity decreases as well, following the direction indicated by the arrow in Figure 2. We conclude that the fractional order α accelerates the decrease of the velocity; thus the fractional order α also has an acceleration effect on the diffusion process. We continue by analyzing the impact of the Prandtl number Pr on the temperature. We fix the order α = 0.95 and the time t = 0.3 for the temperature diffusion. In Figure 3, we notice that when the Prandtl number increases, all the temperatures decrease as well as z increases. To be more precise, we note that when the Prandtl number Pr increases, the temperature decreases more rapidly, following the direction of the arrow in Figure 3. Finally, we conclude that the increase of the Prandtl number Pr accelerates the decrease of the temperature to zero. We now consider the velocity of the fluid. The Hartmann number M is fixed to M = 15, the diffusion coefficient to µ = 5, and the order to α = 0.95, while the Grashof number takes different values. In Figure 4, we note that the general behavior of the velocities is to decrease, but when the Grashof number Gr increases, the velocities increase from one curve to the next. Here the Grashof number Gr also has an acceleration effect, but it accelerates the intensity of the velocity towards a value different from zero; see the direction of the arrow indicated in Figure 4. We again consider the velocity of the fluid. The Grashof number Gr is fixed to Gr = 10, the diffusion coefficient to µ = 6, and the order to α = 0.95, while the Hartmann number M takes different values. In Figure 5, we note that the general behavior of the velocities is to decrease, but when the Hartmann number M increases, the velocities decrease from one curve to the next. Here the Hartmann number M also has an acceleration effect, but it accelerates the intensity of the velocity towards zero; see the direction of the arrow indicated in Figure 5.
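The qualitative behaviour described for Figures 1 and 3 (temperature profiles decreasing in z, faster for larger α at t < 1 and for larger Pr) can be reproduced directly from the closed-form approximation θ(z, t) = (1 − z/κ(t))^n with n = 2 and the penetration depth stated above; the parameter values in the sketch are illustrative and are not taken from the original figures.

```python
# Evaluating the approximate temperature profile theta(z, t) = (1 - z/kappa)^2
# with the penetration depth kappa^2(t) = 2n(n+1) t^alpha / Pr and n = 2,
# as given in the text, to illustrate the effect of alpha and Pr.
# Parameter values are illustrative only.

import numpy as np

def temperature_profile(z, t, alpha, prandtl, n=2):
    kappa = np.sqrt(2 * n * (n + 1) * t**alpha / prandtl)
    # the profile is zero beyond the penetration front z >= kappa
    return np.where(z < kappa, (1 - z / kappa) ** n, 0.0)

if __name__ == "__main__":
    z = np.linspace(0, 5, 6)
    t, prandtl = 0.3, 0.25
    for alpha in (0.5, 0.7, 0.9):
        print(alpha, np.round(temperature_profile(z, t, alpha, prandtl), 3))
```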
Finally, the Hartmann number M and the Prandtl number Pr have the same effect on the diffusion process of the velocity. The Casson coefficient μ = 1 + 1/β also generates the same behavior when we fix the Grashof number Gr = 10, the order α = 0.5, and the Hartmann number M = 0.5; see the direction of the velocity in Figure 6. We also remark that the diffusion coefficient has the same impact as the Hartmann number M.

Concluding remarks

Some new methods, namely the double integral and the heat balance integral methods, have been tested in this paper. The conclusion is that these methods are suitable for obtaining the solutions of the fractional Casson fluid models. The main difference between these methods and the methods existing in the literature is that these methods are applicable when the Goodman boundary conditions are considered. In short, we have also examined in this paper the tolerability of the Goodman boundary conditions in the diffusion processes. The graphical representations support the main results of this paper. This work, and the methods tested in it, will open a new door in the methods for approximating the solutions of fluid models.
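As a small numerical check of the matching step described earlier (not part of the original paper), the following sketch assumes the standard HBIM and DIM penetration-depth prefactors, 2n(n+1) and (n+1)(n+2) respectively, each divided by Pr Γ(1+α); equating them recovers the exponent n = 2 used for the approximate profile.

```python
# Numerical check (assumptions stated above) of the matching method for the
# temperature equation: equate the HBIM and DIM penetration depths and solve
# for the exponent n of the profile theta(z, t) = (1 - z/kappa(t))^n.
import sympy as sp

n, t, alpha, Pr = sp.symbols("n t alpha Pr", positive=True)

kappa2_hbim = 2 * n * (n + 1) * t**alpha / (Pr * sp.gamma(1 + alpha))
kappa2_dim = (n + 1) * (n + 2) * t**alpha / (Pr * sp.gamma(1 + alpha))

print(sp.solve(sp.Eq(kappa2_hbim, kappa2_dim), n))   # [2]
print(sp.simplify(kappa2_dim.subs(n, 2)))            # 12*t**alpha/(Pr*gamma(alpha + 1))
```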
4,333
2020-11-08T00:00:00.000
[ "Engineering", "Mathematics", "Physics" ]
Rectal and Tracheal Carriage of Carbapenemase Genes and Class 1 and 2 Integrons in Patients in Neurosurgery Intensive Care Unit

The spread of multidrug-resistant Gram-negative bacteria, which is associated with the distribution of beta-lactamase genes and class 1 and 2 integrons, is a global problem. In this study, in the Moscow neurosurgery intensive care unit (neuro-ICU), a high prevalence of the above-stated genes was found to be associated with intestinal and tracheal carriage. Seven point-prevalence surveys, which included 60 patients in the neuro-ICU, were conducted weekly in the period from October to November 2019. A total of 293 clinical samples were analyzed, including 146 rectal and 147 tracheal swabs; 344 Gram-negative bacteria isolates were collected. Beta-lactamase genes (n = 837) were detected in the isolates, including the beta-lactamase genes blaTEM (n = 162) and blaSHV (n = 145), the cephalosporinase gene blaCTX-M (n = 228), the carbapenemase genes blaNDM (n = 44), blaKPC (n = 25), blaOXA-48 (n = 126), blaOXA-51-like (n = 54), blaOXA-40-like (n = 43), blaOXA-23-like (n = 8), and blaVIM (n = 2), as well as class 1 (n = 189) and class 2 (n = 12) integrons. One extensively drug-resistant Klebsiella pneumoniae strain (sequence type ST39 and capsular type K23) simultaneously carried the beta-lactamase genes blaSHV-40 and blaTEM-1B, three carbapenemase genes, blaNDM, blaKPC, and blaOXA-48, the cephalosporinase gene blaCTX-M, and two class 1 integrons. Before this study, such heavily armed strains had not been reported, suggesting the ongoing evolution of antibiotic resistance.

Introduction

Antimicrobial resistance is one of the most serious threats to global health care [1,2]. In the last two decades, an increased number of infections caused by multidrug-resistant Gram-negative bacteria (MDR-GNB) have been reported [3,4]. As a result, increases in morbidity and mortality rates have been observed, as well as a rise in health care costs [5]. Carbapenems were the most effective antibiotic therapy for infections caused by MDR-GNB until an increased prevalence of carbapenem resistance was described in clinical settings [6]. The widespread dissemination of carbapenem-resistant Gram-negative bacteria (CR-GNB)

2 patients in 5, 1 patient in 6, and 5 patients in 7 surveys. The median age of the patients was 53.8 years (ranging from 14 to 73 years old), and the sex ratio (male/female) of the patients was 1.07 (31/29). Most patients (n = 28) received carbapenem therapy before or during the surveys, fewer patients (n = 15) received cephalosporins with or without beta-lactamase inhibitors, and the remaining patients (n = 17) were not treated with beta-lactams. Half of the patients (n = 30) had symptoms of gastrointestinal dysfunction (n = 13) and/or infection of the respiratory system (n = 21), and both symptoms were identified in four patients. Six patients died during this period; the mortality rate was estimated to be 10%. A total of 34 patients were discharged from the neuro-ICU before the end of the study (Figure 1). Fourteen patients carried carbapenemase genes throughout the study period. Three patients did not have carbapenemase genes at the beginning of the study, but later they became positive for blaOXA-48+blaNDM+blaKPC, blaOXA-48+blaVIM, and blaVIM+blaNDM genes, respectively. In contrast, three patients lost carbapenemase genes during their time in the ICU, and these genes were not detected at the end of the study.
Interestingly, eleven clinical samples from seven patients simultaneously carried three carbapenemase genes, blaNDM, blaKPC, and blaOXA-48, the cephalosporinase gene blaCTX-M, and class 1 integrons. Combinations of CRGs were also detected: blaNDM+blaKPC was detected in two patients; blaKPC+blaOXA-48 in two patients; blaVIM+blaOXA-48 in one patient; and blaVIM+blaNDM in one patient. The CRG combination blaOXA-48+blaNDM+blaKPC was detected in seven patients. Thus, 33 (11%) of the 293 samples were positive for two or more CRGs (Table 2). In total, 65 (22%) of the 293 clinical samples were positive for CRGs. The blaOXA-48 gene was detected in 18 (53%) of the 34 CRG-positive rectal swabs (r.s.) and in 19 (61%) of the 31 positive tracheal swabs (t.s.); the blaNDM gene was detected in 22 (64%) rectal and 17 (55%) tracheal samples; the blaKPC gene was detected in 17 (50%) rectal and 15 (48%) tracheal samples; and the blaVIM gene was detected in 4 (12%) rectal and 6 (19%) tracheal samples. It should be noted that the number of CRGs in the r.s. was two times higher than the number in the t.s. The asymptomatic rectal carriage of CRGs was estimated to be present in 17% of the patients, and the asymptomatic tracheal carriage of CRGs was estimated to be present in 18% of the patients (Figure 2).

Table 2. Trends in the content of resistance genes in patients.

Bacterial Isolates and Asymptomatic Carriage

A total of 344 Gram-negative bacterial isolates were collected from the patients. Overall, 183 isolates were collected from the rectal swabs (r.s.), and 161 isolates were collected from the tracheal swabs (t.s.). Klebsiella pneumoniae was the most prevalent bacterium; other species (B. gladioli, Citrobacter spp., Enterobacter spp., H. alvei, K. aerogenes, K. oxytoca, K. variicola, M. morganii, P. mirabilis, P. stuartii, S. maltophilia, and S. marcescens) accounted for 12%. The isolates obtained from the patients with gastrointestinal dysfunction and/or respiratory infection were identified as K. pneumoniae (38%), A. baumannii (21%), E. coli (16%), and P. aeruginosa (8%), with other species (Citrobacter spp., Enterobacter spp., M. morganii, P. mirabilis, and S. maltophilia) accounting for 17%. Interestingly, among the isolates obtained from the patients without clinical manifestation, A. baumannii was found twice as often in the tracheal swabs compared with the rectal swabs. Moreover, among isolates obtained from the patients without clinical manifestation, E. coli was more prevalent in the rectal swabs compared with the tracheal swabs (Figure 4). All of the bacterial isolates were collected from 55 patients, while no GNB organisms were found in samples from 5 patients (namely 14, 16, 27, 40, and 60).
Bacterial isolates were only collected from the r.s. in six patients (namely 5, 17, 24, 25, 30, and 56), and they were only collected from the t.s. in three patients (namely 33, 51, and 54). The number of isolates collected from one patient varied significantly: 1-10 isolates were obtained from 46 patients, 11-20 isolates were obtained from 6 patients, and 21-30 isolates were obtained from 3 patients.

detected in two isolates from two patients (namely 1 and 46), and blaCTX-M+blaOXA-23-like+blaOXA-51-like was detected in one isolate from one patient (namely 1) (Figure 8).

Resistomes of P. aeruginosa Clinical Isolates

P. aeruginosa isolates were obtained from 14 (23%) patients, including four patients (namely 1, 2, 3, and 32) who simultaneously carried P. aeruginosa in both r.s. and t.s. MDR phenotypes were identified for 100% of the P. aeruginosa isolates. A total of five beta-lactamase genes, including three blaCTX-M genes and two blaVIM carbapenemase genes, and 32 class 1 integrons were detected in the P. aeruginosa genomes. The gene combination blaCTX-M+int1 was detected in three isolates collected from three patients (namely 2, 3, and 20); blaVIM+int1 was detected in two isolates collected from one patient (namely 34) (Figure 9).

In our study, the rate of carriage of class 1 and 2 integrons in MDR-GNB isolated from patients in the neuro-ICU was 76%. These data are somewhat different from reports from France and Uganda (53% and 81%, respectively) [16,48]. Class 1 integrons were more prevalent (n = 248) than class 2 integrons (n = 14) in the isolates in this study. Similar proportions were described in previously published studies from Spain, Iran, and Tunisia [48-50]. In this study, the rate of class 1 integrons among CR isolates was 73%, while in reports from Iran this index was higher (86-95%) [51,52]. In our study, class 1 integrons were detected in 76% of MDR P. aeruginosa, 70% of MDR K. pneumoniae, 68% of MDR A. baumannii, and 34% of MDR E. coli isolates. These data are consistent with the findings of reports from Iran and Australia [52-54]. In total, the asymptomatic rectal and tracheal carriage of CRGs was found in 17% and 18% of patients, respectively; the rectal and tracheal carriage of class 1 and 2 integrons was found in 23% and 15% of patients, respectively. Such patients may be considered a potential source of AMR gene transmission in the ICU.
Moreover, according to some reports, colonization with MDR potential pathogens and carriage of CRGs may be a prerequisite for the development of nosocomial infections [34]. Thus, this study highlighted the asymptomatic carriage of carbapenemase genes and the prevalence of potential nosocomial pathogens in the intestine and in the trachea of patients in the neuro-ICU. This is important for clinicians, because it will help them to improve the strategies used for hospital infection control and choose optimal antimicrobial therapies. The novelty of this study is the description of CR-GNB strains that simultaneously carry three carbapenemase genes, blaOXA-48+blaNDM+blaKPC. A limitation of our study is its single-center nature, which makes it impossible to generalize the results. Although the study provided information about the patients regarding prior antibiotic therapy and their length of hospitalization before the study, it did not reveal the importance of these aspects in explaining the reasons for the persistence of resistance genes.

Bioethical Requirements

In this study, we anonymized the data of patients in the ICU. According to the Requirements of the Russian Federation Bioethical Committee, each patient signed informed consent to treatment and laboratory examination. The study was a retrospective observational study. The Burdenko National Medical Research Center of Neurosurgery Review Board approved the study and granted a consent waiver status. Approval Code: #11/2018. Approval Date: 1 November 2018.

Study Design

Seven point-prevalence surveys were conducted weekly among all of the neuro-ICU patients in the period from 18 October 2019 to 29 November 2019. Two clinical samples (rectal and tracheal swabs) were collected from each patient on each survey date. The presence of antimicrobial resistance genes was detected in every clinical sample using real-time PCR (RT-PCR), and Gram-negative bacteria were isolated from the samples (Figure 10).

DNA Extraction and AMR Gene Detection in Clinical Samples

Total DNA from clinical samples was extracted using an AmpliPrime DNA-sorb-B reagent kit (InterLabService, Moscow, Russia) in accordance with the manual from the manufacturer. Real-time PCR with specific primers was performed with the obtained DNA preparations to detect the beta-lactamase genes blaTEM, blaSHV, blaCTX-M, blaOXA-48, blaNDM, blaVIM, and blaKPC, as well as class 1 and 2 integrons, as described previously [55].

Isolation and Identification of Gram-Negative Bacteria

Gram-negative bacteria were collected from clinical samples through growth in Luria-Bertani (LB) broth (Difco, Sparks, MD, USA) at 37 °C for 18 h and the isolation of single bacterial colonies on Lactose TTC agar with Tergitol-7 (SRCAMB, Obolensk, Russia) and 50 mg/L ampicillin (Thermo Fisher Scientific, Waltham, MA, USA). Bacterial identification was conducted using a MALDI-TOF Biotyper instrument (Bruker, Karlsruhe, Germany). The following ATCC reference strains were used as controls: Escherichia coli ATCC 25922, Pseudomonas aeruginosa ATCC 27853, and Klebsiella pneumoniae ATCC 700603. Bacterial isolates were stored in 15% glycerol at −80 °C.
Susceptibility to Antimicrobials

The minimal inhibitory concentrations (MICs) of antimicrobials belonging to six functional groups, namely beta-lactams (ampicillin, amoxicillin/clavulanic acid, cefotaxime, ceftazidime, and meropenem), tetracyclines (tigecycline), fluoroquinolones (ciprofloxacin), phenicols (chloramphenicol), aminoglycosides (gentamicin), and sulfonamides (trimethoprim-sulfamethoxazole), were determined via the agar dilution method using Mueller-Hinton agar (Merck, Darmstadt, Germany) according to the Clinical and Laboratory Standards Institute (CLSI) performance standards for antimicrobial susceptibility testing [56]. The results were interpreted according to the European Committee on Antimicrobial Susceptibility Testing breakpoint tables for interpretation of MICs and zone diameters, Version 12.0, 2022 (http://www.eucast.org, accessed on 8 June 2022). Escherichia coli strains ATCC 25922 and ATCC 35218 were used for quality control. The criterion for defining multidrug-resistant (MDR) isolates was non-susceptibility to ≥1 agent in ≥3 antimicrobial categories; extensively drug-resistant (XDR) isolates were non-susceptible to ≥1 agent in all but ≤2 categories [32].

The prevalence of bacterial species and antimicrobial resistance phenotypes was calculated using Microsoft Excel v. 1909.

Whole-Genome Sequencing

The whole-genome sequencing of the isolates was performed using a Nextera DNA Library Preparation Kit and MiSeq Reagent Kit v3 (300 cycles) on an Illumina MiSeq platform. Reads without quality filtering were de novo assembled using Unicycler v0.4.7 [65] with default parameters. The annotation was performed using the NCBI Prokaryotic Genome Annotation Pipeline [66]. Multilocus sequence typing (MLST) and the identification of antibiotic resistance genes, virulence genes, plasmids, and restriction-modification systems were conducted using the web resources of the Center for Genomic Epidemiology (http://www.genomicepidemiology.org/, accessed on 22 April 2022) and the BIGSDB database (https://bigsdb.pasteur.fr/klebsiella/klebsiella.html, accessed on 22 April 2022).

Conclusions

The current study focused on the carriage of MDR and ESKAPE Gram-negative bacteria and antimicrobial resistance genes in patients in the neurosurgery ICU in Moscow in October-November 2019. In total, 55 out of 60 patients harbored significant antimicrobial resistance mechanisms in the neuro-ICU, which implies that infection control measures (contact precautions, etc.) are of critical importance to avoid the spread of resistance.
The inappropriate use of antimicrobials should be reduced through antimicrobial stewardship interventions to reduce the effects of antimicrobial resistance gene selection in patients. It was shown that the asymptomatic rectal and tracheal carriage of carbapenem resistance genes was estimated to occur in 17 and 18% of patients, respectively. These patients were a potential source of the transmission of these genes in the ICU. The obtained data indicate the importance of monitoring the asymptomatic carriage of antimicrobial resistance genes, especially carbapenem resistance genes and integrons, and preventing the transmission of such genes within ICUs.
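The MDR/XDR criteria quoted in the Methods (non-susceptibility to ≥1 agent in ≥3 categories for MDR; non-susceptibility to ≥1 agent in all but ≤2 of the tested categories for XDR) can be expressed as a simple classification rule. The sketch below is illustrative only: the category list mirrors the six groups tested in this study, but the example isolates are hypothetical, not study data.

```python
# Hedged sketch of the MDR/XDR classification rule cited in the Methods [32].
CATEGORIES = ["beta-lactams", "tetracyclines", "fluoroquinolones",
              "phenicols", "aminoglycosides", "sulfonamides"]

def classify(non_susceptible, categories=CATEGORIES):
    """non_susceptible: set of category names with >=1 non-susceptible agent."""
    hits = len(set(non_susceptible) & set(categories))
    if hits >= len(categories) - 2:   # non-susceptible in all but <=2 categories
        return "XDR"
    if hits >= 3:                     # non-susceptible in >=3 categories
        return "MDR"
    return "non-MDR"

# Hypothetical isolates:
print(classify({"beta-lactams", "fluoroquinolones", "aminoglycosides"}))  # MDR
print(classify(set(CATEGORIES) - {"tetracyclines"}))                      # XDR
```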
3,819.2
2022-07-01T00:00:00.000
[ "Biology", "Medicine" ]
The Adoption of Bundled Sustainable Farm and Environmental Practices by Coffee Farmers in Southwest Ethiopia

Despite numerous efforts to introduce sustainable farm and environmental practices (SFEPs), such as pruning, soil erosion control, and water pollution abatement measures, their adoption by smallholder farmers is very low in Ethiopia. As a result, smallholder coffee farmers in the country remain in poverty traps, even though there is room to increase coffee returns by doubling yields through sustainable practices. Most previous coffee sustainability studies, however, focus on the economic, livelihood, and poverty alleviation impacts of private sustainability standard schemes. Despite the holistic advantages of adopting bundled SFEPs over adopting individual practices, bundled adoption has been overlooked by earlier scholars in the country. In southwest Ethiopia, few farmers apply sustainable coffee farm practices (particularly pruning, stumping, the use of fertilizer, and mulching), and the yields gained by the farmers are quite low. Therefore, this study examines the factors affecting the adoption of bundled SFEPs and their intensity at the farm household level in southwest Ethiopia, based on cross-sectional data obtained from 153 sampled coffee farm households for the 2019/2020 cropping season. The study results showed that the farmers' adoption of different SFEPs depended on farm and management characteristics (total size of coffee holdings, multiple plots, remoteness of the coffee farm, hired labor, and farming experience), socioeconomic variables (literacy, household size, and training), and Fairtrade coffee certification. Likewise, the intensity of SFEP implementation is influenced by literacy and hired labor. Providing training and supplementing coffee farmers with farm equipment used for SFEPs, promoting small-scale mechanization options to address seasonal labor constraints, and strengthening Fairtrade organizations would facilitate the adoption of multiple SFEPs by coffee farmers in the country.

Introduction

Ethiopia is the origin of Arabica coffee [1] and the biggest producer and exporter in Africa [2]. Coffee is one of the largest contributors to the Ethiopian government's revenue through taxation, social services, and trade [3], accounting for 34% of the total national export earnings in 2017/18 [4]. The sector provides an income basis for an estimated 15 million Ethiopians [1]. Ethiopia also has a distinctive coffee consumption culture, and domestic consumption is estimated at 50% of national production [3,5]. All coffee production in Ethiopia is rain-fed, and 95% of national production is contributed by smallholder farmers who grow their coffee in different environments, including forests, semi-forests, gardens, and plantations [4]. Coffee production in Ethiopia is largely organic [6], and very few farmers use chemicals: an estimated 6% of coffee producers use chemical fertilizer and 2% use other agrochemicals [6]. From a development viewpoint, although improvements (such as coffee farm management and soil conservation practices) have been made, coffee productivity is still low [4] compared with major coffee-producing countries such as Vietnam and Brazil. According to USAID [7] data, with a mean of 0.75 tons per hectare, Ethiopia has nearly 50% and 70% lower coffee yields than Vietnam and Brazil, respectively.
By the same token, approximately 5 million smallholder coffee farmers throughout East Africa have 50% lower coffee yields than those in Central America, largely because of the lack of adoption of good coffee farm management practices [8]. At the same time, most coffee-producing regions in Ethiopia suffer from a high incidence of poverty. A study on rural poverty and sustainability indicators in coffee-growing regions of south and southwestern Ethiopia shows that an estimated 65-70% of farmers in the targeted coffee regions are below the US$3.10 per day poverty line [9]. This is much higher than the national poverty rate, estimated at 23.5% in 2015-16 [10]. The high level of poverty might be associated with the low level of SFEP adoption in the area. Coffee production sustainability scores (environmental, social, and economic standards) are strongly correlated with poverty levels; in other words, wealthier farmers tend to have better sustainability scores than poorer farmers [9]. To change the existing conditions of the coffee sector, numerous encouraging reforms are under implementation. In particular, the Ethiopian government's second Growth and Transformation Plan envisages increasing coffee yield and production. However, the performance of the sector has failed to meet expectations, indicating that government policy fails to consider the real situation on the ground, namely the prevailing poor coffee tree management and the limited supply of modern inputs [4]. The key barriers to meeting these goals are poor farm management practices and the limited adoption of improved technology. Although there is room to more than double coffee yields and net income with good agricultural practices and rejuvenation, the adoption of sustainable farm practices and technology is very low in Ethiopia. For instance, approximately 80% of the total coffee land in Ethiopia needs rehabilitation and renovation to fully meet its potential yield improvement. Based on Global Coffee Platform and Technoserve estimates, the country has the potential to raise smallholder coffee yields by up to 114% with renovation, rehabilitation, and other sustainable good agricultural practices [7]. As a result, the sustainability of coffee growing has a valuable effect on poverty alleviation and the livelihoods of smallholder coffee farmers in Ethiopia. The concept of sustainable practices has been extensively promoted in agriculture: it is crucial to ensure that current resource use does not exhaust the resources needed to meet the demands of future generations. Sustainable agricultural improvement can be realized by the uptake of either individual or multiple sustainable agricultural practices. The adoption of an individual sustainable practice often serves a specific target; for example, cutoff drains and mulches are commonly used to control soil erosion. Bundled sustainable agricultural practices, on the other hand, are combinations of different individual sustainable practices. For instance, Good Agricultural Practice (GAP) and organic certification schemes include different sustainable agricultural practices for soil and water conservation, soil fertility management, pest management, and waste management.
Moreover, bundled sustainable agricultural practices constitute a more holistic approach, since they comprise a set of multifunctional measures that advance agricultural sustainability in general [11]. Despite the several benefits of bundled sustainable practices, their adoption rates are exceptionally low in sub-Saharan Africa [12-15]. Across the literature, the socioeconomic, institutional, environmental, and climatic features affecting the adoption of bundled sustainable agricultural practices throughout sub-Saharan Africa have been explored in different contexts, showing that the adoption variables differ in terms of the household characteristics considered, the technology, and the location [14]. Furthermore, the findings of different studies on the adoption of packages of sustainable agricultural practices are not yet conclusive [11]. Several studies have explored the approaches followed by different voluntary sustainability standards (VSS) and their effects on coffee sector sustainability in Ethiopia [16]. However, most studies have investigated the economic, livelihood, and poverty alleviation impacts of smallholder participation in private sustainability standard schemes [5,17]. In southwest Ethiopia, few farmers apply sustainable coffee farm practices (particularly pruning, stumping, the use of fertilizer, and mulching), and the yields gained by the farmers are quite low. For example, the average coffee yield was 350 kg of green beans per hectare, while there is the possibility of producing 1,000 kg of green beans on the same unit of land for well-managed semi-forest coffee farms with improved varieties in the area [18]. The present study aims at examining the factors affecting the bundled adoption of sustainable farm and environmental practices (SFEPs) among coffee farm households in southwest Ethiopia. The study adds value to the existing literature. Firstly, it offers the first attempt to explicitly evaluate the drivers of the adoption of bundled SFEPs for the coffee sector in Ethiopia. Secondly, it identifies the various SFEP choices adopted by coffee farm households while distinguishing the linkages between the different SFEPs considered. Hence, the findings of this study will help in the design of more effective policy tools for promoting coffee-related technology adoption. In particular, this study seeks to answer the following research questions: (1) What are the factors affecting the adoption of bundled SFEPs in the coffee sector at the farm household level? (2) What are the factors influencing the adoption intensity of SFEPs?

Review of Empirical Studies

Sustainable agricultural practices are measures that help improve farming productivity with small negative effects on the environment. However, the adoption of these measures by smallholder farmers has been exceedingly slow in most of sub-Saharan Africa [19]. The adoption of bundled sustainable agricultural practices is unavoidable, since the practices are frequently used in combination [15]. The adoption of bundled sustainable practices is not uniform, and several intrinsic and extrinsic factors determine individual adoption performance [15,20]. Furthermore, farmers do not necessarily implement the recommended SAPs blindly, as their adoption decisions rely on mixing and matching SAPs according to local conditions [11].
The interactions of the different features influencing farmers' decisions to adopt different practices are explained below based on the findings of past studies related to this study. Among the farm and management characteristics, farm size is an important factor driving or constraining the adoption of sustainable farm practices. A large farm size can facilitate the adoption of sustainable farm practices, since farmers who possess large farms have more capacity to allocate resources and distribute risk across larger areas [11,21]. In contrast, a farmer with a large farm might prefer extensive farming to intensive farming [19,22,23]. Operating on many plots can enhance the adoption of multiple sustainable farm and environmental management practices, since the situations on different plots may call for the adoption of different practices [24]. On the other hand, the study results by Sileshi et al. [25] showed that the probability of farmers adopting bench terracing decreases with an increase in the number of plots. Access to hired labor is also a crucial input for facilitating the uptake of multiple SAPs, as it helps overcome labor supply limitations for the adoption of practices [23]. Greater remoteness of coffee farms from the homestead makes farmers less likely to adopt multiple sustainable practices, because of the transportation costs of materials and the opportunity cost of the time involved in transporting different materials to remote plots [26]. A large household size, which is a proxy for household labor resources, may enhance the likelihood of farmers adopting bundled sustainable farm practices, since most of the practices are labor-demanding activities [19]. Studies have also identified the socioeconomic characteristics of the farm households (farmers' education, age, farming experience, training, and information variables) as reliable indicators of the predisposition to adopt sustainable farm practices [11]. Among such possible signals, the direction of the effect of age and education is not always clear [20]. In much of the adoption literature, age, which is a proxy for farming experience, is an important factor in new agricultural technology adoption. However, the effect of age is not uniform across the adoption of multiple sustainable farm practices [20,23]. Education is supposed to enlighten the household head on the adoption of bundled sustainable farm practices and make it easier to understand their benefits. However, the effects of education on household adoption decisions are mixed, with differing perspectives [23]. Similarly, the receipt of training on sustainable farm and environmental practices helps to enlighten farmers on the importance of sustainable practices. Likewise, the study conducted by Nigussie et al. [23] showed that participation in training on sustainable practices helps widen the understanding of the importance of manure for soil fertility and thereby increases its use. From a social capital perspective, cooperatives may encourage adoption by creating social networks and conducive environments for farmers to share their knowledge, know-how, and experience of sustainable practices [19,20]. Finally, membership of Fairtrade encourages the adoption of sustainable farm and environmental practices.
Voluntary sustainability standards (VSS), such as Fairtrade and organic certification schemes, have scaled up in Brazil's coffee sector [27].

Description of the Study Area and Sampling Procedure. The data for this study were collected from 153 coffee farm households in the Jimma Zone, Oromia National Regional State, in southwest Ethiopia. The survey covered the activities undertaken during the 2019/2020 coffee season and was conducted in October 2020. Face-to-face field interviews with the household heads were conducted using a structured questionnaire. During the initial sampling, the Jimma zone was selected because of its high potential for coffee production among coffee-growing areas within the country; 30-45% of the population of the zone relies on coffee production [17]. In this study, the sample size was determined based on the simplified formula developed by Yamane [28] at a 0.081 sampling error. The formula is specified as n = N / (1 + N e^2), where n is the sample size, N is the population size (the total number of smallholder coffee producer households in the Jimma zone), and e is the level of precision. Based on CSA [29] data, the Jimma zone had a total of 504,336 smallholder coffee producer households in the 2019 cropping season; with e = 0.081, the formula gives n ≈ 152.4, rounded up to the desired sample size of 153 households. In the second stage, two districts (Gera and Seka Chekorsa) within the zone were randomly selected as field sites, and the survey was pretested in each district. Proportionate random sampling was conducted using a database of coffee producers obtained from the respective agricultural and rural development offices of each district.

Sustainable Farm and Environmental Practice Categories. Data that include all dimensions of sustainability attributes based on an international, established sustainability framework hardly exist for the coffee sector [16]. Moreover, sustainability standards in the coffee sector vary from organization to organization. A set of sustainability standards introduced by Technoserve under its Coffee Initiative project is commonly used in East Africa. These standards follow universally accepted environmental, social, and safety-related best practices, categorized into five broad groups of sustainability practices: social, health and safety, environmental, economic, and farm management [30]. In this study, following Technoserve's sustainability classification, attention is given only to sustainable environmental and farm management practices, in line with the objective of the study. Furthermore, the classification set by Technoserve is too broad to be included under one category and requires further subdivision based on the practices' main contribution to sustainability in the coffee sector. Accordingly, the farm management practices are further categorized into coffee tree stock renovation practices, which allow the coffee trees to bear more, and fertilizer application practices, which improve plant nutrition. Similarly, the environmental practices are classified into soil conservation practices, which permit erosion control, and water environment management practices, which are used for water pollution control. Consequently, four main categories of sustainable farm and environmental practices (SFEPs), which are common among coffee farm households in Ethiopia, are considered in this study.
They are as follows. Water environment management measures: (i) dumping waste from coffee processing 30 meters or more away from water sources; (ii) using a latrine 30 meters or more away from water sources; (iii) using washing places 30 meters or more away from water sources. For each of the four categories of sustainable management practices and water pollution abatement measures, a farmer was considered an adopter if at least one of the practices had been applied. The first category lists three possible options the producers have for improving the coffee tree stock. These measures comprise replanting the plots by removing old or damaged trees, pruning the branches, or cutting off the trunks (stumping) to encourage new growth. Naturally, it takes three years after planting for coffee trees to begin bearing cherries, with peak production from five to six years. After the peak period, coffee yields begin to decline unless the trees are pruned or stumped, and the type of measure required depends on both the physical state and the age of the tree [20]. For the second category of sustainable management practices, organic and chemical fertilizer use was considered, although Ethiopian coffee farmers hardly use chemical fertilizers [6]. The types of organic fertilizers used in coffee production include manure, coffee pulp, and compost. The third category of sustainable management practices summarizes the ways farmers can reduce soil erosion caused by water runoff. Such practices involve constructing terraces and cutoff drains, mulching, and intercropping with other food crops. Conservation measures are vital since coffee in Ethiopia is mostly grown in mountainous areas, where coffee farms are often susceptible to soil erosion. The fourth category focuses on water pollution abatement measures. These measures include dumping waste from coffee processing, using a latrine, and using washing places 30 meters or more away from the nearest water bodies. Water pollution abatement measures are important because dumping waste from coffee processing into water sources is a serious problem in the coffee-growing regions of Ethiopia; water quality worsens in these areas because of the dumping of coffee waste into rivers [31].

Econometric Model Framework

The Multivariate Probit Model. The multivariate probit (MVP) model simultaneously estimates the effect of a set of explanatory variables on each of the various adoption practices, allowing for a possible relationship between a farmer's decision to adopt one practice and the other adoption choices, as well as for correlation between the unobserved error terms. Several strands of the literature employ univariate models to test the variables affecting the adoption of a single practice. However, such a method may be problematic, as it is unable to account for the intercorrelations among the practices [15,20]. Allowing for such correlations permits the error terms to be positively correlated (complementarity) or negatively correlated (substitutability) between multiple adoption measures [32]. Following Teklewold, Kassie, and Shiferaw [15], as well as Ubertino, Mundler, and Tamini [20], a random utility function of the i-th coffee farm household confronted with a decision to adopt or not adopt a bundle of interdependent sustainable management and water pollution control practices was considered.
The utility U_a represents the benefits derived by the households from adopting traditional agricultural practices, and U_n represents the benefits of adopting the sustainable management and water pollution control practices, which, in this context, include renovating coffee tree stocks, applying fertilizer, soil conservation, and water pollution control practices. The farm household chooses to adopt if Y*_in = U_n − U_a > 0. The net benefit (Y*_in) from adoption is a latent variable determined by a vector of farm and household characteristics (X_in), as well as non-observable disturbances captured by the error term ε_in. This is specified as follows:
Y*_in = X_in β_n + ε_in, n = 1, ..., 4,
where β_n represents the vector of parameters to be estimated. The observed binary outcome equation for each choice of practice adopted by the farm households is
Y_in = 1 if Y*_in > 0, and Y_in = 0 otherwise.
If the adoption of multiple practices is assumed to be interdependent or to occur simultaneously, then the error terms in equation (2) are expected to jointly follow a multivariate normal distribution with zero conditional mean and unitary variance [32,33]. Hence, equation (3) displays a multivariate probit model that represents the decision to adopt multiple sustainable management and water pollution control practices simultaneously.

Ordered Probit Model. The MVP model from the previous subsection conceptualizes that a farm household will adopt new sustainable practices if the expected net benefit of the practices exceeds that of non-adoption. However, the MVP model cannot capture the intensity of adoption of sustainable practices [14]. Adoption intensity is commonly estimated based on the proportion of total cultivated land under a given practice, but the exact area under each sustainable practice is rarely available [34]. Under such data limitations, the intensity of adoption can be estimated using an ordered probit model, taking the number of sustainable practices adopted by each farm household as the dependent variable [14]. In this study, an integer ranging from 0 to 4 is assigned to the number of sustainable practices adopted by a farm household. Following Aryal et al. [34] and Teklewold et al. [15], the ordered probit model is specified as
Y*_i = X_i β + μ_i,
where Y*_i is an underlying unobserved measure of the household's adoption of sustainable practices, the observed count is given by y_i = j if α_{j−1} < Y*_i ≤ α_j, and the α are unknown cut-point parameters to be estimated. For m categories, following a standard ordered probability model, the probability of observing outcome j corresponds to
P(y_i = j) = Φ(α_j − X_i β) − Φ(α_{j−1} − X_i β),
where μ_i is assumed to be normally distributed with a standard normal cumulative distribution function Φ. The coefficients β_1, ..., β_K are estimated jointly with the cut points α_1, α_2, ..., α_{K−1}, where K is the number of possible outcomes.

Explanatory Variables. The dependent and explanatory variables included in the model are discussed below. Table 1 presents the definitions and descriptive statistics of the variables.

Descriptive Statistics. The survey data indicate that more than half of the responses were non-adoption for all categories of sustainable management and water pollution abatement practices, while the adoption rate varies substantially from one category to another. For example, the number of producers who had adopted water pollution abatement practices was relatively high: 49% of the respondents had practiced at least one of the water pollution reduction practices.
In contrast, only 24% of the farmers had implemented at least one of the soil conservation measures. Concerning fertilizer application, 33% of the producers had applied fertilizer, and, with three exceptions, all adopters had used organic materials. 74% of the farmers had renovated their coffee tree stocks. Based on the recent findings of related studies, the independent variables assumed to affect the dependent variables are described in Table 1.

General Performance of the Multivariate Probit Model. The multivariate probit (MVP) model was used to estimate the effects of the explanatory variables on the adoption behavior of coffee farmers. The log of some continuous variables was used for a better model fit. Before computing the model results, the existence of multicollinearity was checked using the variance inflation factor (VIF). The mean result of the test (VIF = 1.69) indicates the absence of a serious multicollinearity problem in the model. The Wald test [chi-square(40) = 134.96***] also indicates that the model fits the data reasonably well. Furthermore, a likelihood ratio test was carried out to compare the four univariate probit models with the multivariate results (Table 2). The test result was statistically significant [chi-square(6) = 47.98***], indicating that the error correlations between the farm management practices are jointly significant. This suggests that the MVP model is preferable to individual probit regressions and supports the hypothesis that sustainable farm management adoption choices are interrelated. More specifically, the coffee farmers who practiced water management were more likely to renovate their coffee tree stock (rho = 0.485***), apply fertilizer (rho = 0.763***), adopt soil conservation practices (rho = 0.717***), practice water management (rho = 0.660***), and renovate coffee tree stock (rho = 0.326**). Similarly, the farmers who adopted soil conservation practices were more likely to apply organic fertilizers (rho = 0.422***). However, renovating the coffee tree stock was not correlated with applying organic fertilizers or with soil conservation practices.

Table 3 presents the MVP results obtained for the four categories of sustainable farm management practices. The results show that the larger coffee producers are less likely to adopt soil conservation practices. This is likely because soil conservation measures are tedious and labor-demanding activities that require much labor on larger coffee farms. Consequently, adopting soil conservation practices will be more attractive to smaller producers, since the practices require more labor than the larger producers can supply. The finding is in line with those of Mulinde et al. [21] and Sileshi et al. [25]. The results indicate that operating multiple plots is negatively correlated with the application of fertilizer. With three exceptions, the interviewed farmers who adopted fertilizer had chosen to use organic fertilizer. This result is not unusual, given that coffee producers in Ethiopia generally do not apply chemical fertilizers: very few farmers use chemicals (chemical fertilizers, pesticides, and herbicides) for growing coffee in Ethiopia, and most coffee production is organic [6]. For example, only 24% of coffee growers applied compost to their coffee farms in 2014 in Ethiopia [6]. Coffee producers who have multiple plots are less likely to use fertilizer.
This may be because farmers are reluctant to distribute organic fertilizer over several plots, since the opportunity cost of time and the transaction costs are higher for multiple plots. The finding is in line with that of [25].

Farm Characteristics and Management Practices. The MVP results indicate that coffee farming experience is a determining factor, although its effect is not uniform. Experienced producers are more likely to apply fertilizer to their stock of coffee trees. This preference is likely because experienced producers have, on average, accumulated a stock of information on the benefits of fertilizer application. Similar to a previous study, i.e., Jabbar et al. [35], the study results reveal a positive and significant relationship between farming experience and the adoption of organic manure. In contrast, coffee farming experience, a proxy for age, was found to have a negative and significant relationship with the adoption of water management practices, indicating that experienced (older) farmers are less likely to adopt water management practices. This can be justified because age is commonly related to reduced physical capacity, whereas many soil and water conservation measures are labor intensive [36]. This makes experienced farmers selective among multiple sustainable practices, i.e., the water management decisions of these farmers rely more on coffee farm production and profitability than on sustainable water management considerations. Consequently, they may not be interested in investing their physical resources and time in water management practices, focusing instead on practices that provide them with higher yields and profits. The results indicate that the remoteness of a coffee farm negatively influences the adoption of water management practices. A possible explanation is that farmers who live far away from their plots may limit the number of water management practices they choose to implement, since these measures include building latrines and preparing solid and liquid waste dumps. Furthermore, implementing these measures entails higher transportation costs for moving heavy materials and opportunity costs of time. The results also indicate that farmers who use hired labor are more likely to adopt soil conservation and water management practices. This is plausible because access to hired labor may facilitate the adoption of soil conservation and water management practices by enabling farmers to overcome labor-related production limitations.

Farmers' Socio-Economic Characteristics. The results indicate that literate farmers are less likely to renovate their stock of coffee trees and to adopt water management practices. Although education is commonly believed to help farmers adopt multiple sustainable farm and environmental practices (such as water pollution control measures), several of the water management and coffee tree renovation recommendations are labor and capital intensive. Moreover, the opportunity costs of applying these practices will be larger for literate farmers: literate farmers may have better opportunities for off-farm activities and pay less attention to farm practices. In contrast with a previous study by Arslan et al.
[37], this study's results reveal a negative and significant relationship between literacy and the adoption of coffee tree stock renovation and water management measures, partially in line with the findings of Nigussie et al. [23] and Ubertino et al. [20]. The results show that family size is an influential factor, although its effect is mixed. The results indicate that farmers with a large family size are more likely to renovate their coffee stock. In contrast, the findings reveal that households with a larger number of members are less likely to adopt water management practices. Consistent with the former result, the literature often suggests that family size has a positive influence on the adoption of labor-intensive practices, since it is considered a proxy for the family labor supply [32]. However, this may not be the case for environmental sustainability practices, since the effect of several environmental management measures (such as water pollution control) will not be visible overnight. Similarly, water quality management measures, such as stream fencing, are less preferred because their benefits for water quality take a long time to be observed [38]. Likewise, coffee farmers in the study area seemed to give greater emphasis to practices that maximize economic returns in the short term than to environmental sustainability. A similar result was found in Mexico, where coffee farm households with large families were less likely to adopt soil conservation practices [20]. The results also reveal that farmers who have participated in coffee farm management training are more likely to use fertilizers in their coffee production. This is because training may have a positive effect on the accumulation of coffee-related knowledge by the farmers, which encourages them to use improved inputs (such as compost). This finding is in agreement with the results of Nigussie et al. [23], which revealed a positive and significant relationship between the receipt of sustainable land management training by farm households and the adoption of organic manure in northwestern Ethiopia.

Fairtrade Coffee Certification. Finally, the estimates show that farmers affiliated with Fairtrade are more likely to renovate their coffee tree stock and adopt water management practices, because the Fairtrade scheme strongly encourages coffee-related technology adoption (such as coffee pruning and stumping) together with environmental sustainability (such as water pollution control) in the study area. Certification under voluntary sustainability standards such as Fairtrade significantly increases the adoption of improved coffee practices in Ethiopia [39]. Likewise, farmers with Fairtrade coffee certification are more likely to adopt environmental sustainability measures involving water management practices, which is in line with the findings of Giuliani et al. [40].

Ordered Probit Model Results. Table 4 presents the results of the ordered probit model of the factors influencing the number of SFEPs adopted among the sampled coffee farm households. The chi-squared statistic from the ordered probit model is statistically significant (chi2(10) = 46.69, Prob > chi2 < 0.001), indicating that the model fits the data well. The results further show that, among the variables used in the model, only literacy and hired labor were significant factors influencing the number of SFEPs adopted by the coffee farm households.
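For readers who want to reproduce this kind of intensity model, the following is a hedged sketch using simulated data (the survey data are not public) of an ordered probit of the 0-4 SFEP count on a few household covariates with statsmodels' OrderedModel; the covariate names and coefficients are illustrative only. Note that the multivariate probit itself has no standard Python estimator and is usually fitted in Stata (mvprobit) or R, so only the intensity model is sketched here.

```python
# Hedged sketch (simulated data, not the survey): ordered probit of the number
# of SFEPs adopted (0-4), mimicking the signs reported in the paper
# (literacy negative, hired labor positive).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 153
X = pd.DataFrame({
    "literacy": rng.integers(0, 2, n),
    "hired_labor": rng.integers(0, 2, n),
    "log_farm_size": rng.normal(0.0, 1.0, n),
})
# Latent adoption index with illustrative coefficients plus noise
latent = -0.6 * X["literacy"] + 0.8 * X["hired_labor"] + rng.normal(0.0, 1.0, n)
y = pd.cut(latent, bins=[-np.inf, -1.0, -0.3, 0.3, 1.0, np.inf], labels=False)

res = OrderedModel(np.asarray(y), X, distr="probit").fit(method="bfgs", disp=False)
print(res.summary())
```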
In particular, the results indicate that the literacy of the household head influences the number of SFEPs adopted. Although education is commonly considered helpful for farmers in adopting SFEPs, literate farmers often have better opportunities for off-farm activities and may limit the number of practices they choose to implement. The finding of this study is partially in agreement with the results of Nigussie et al. [23], which indicated that the literacy level of small-scale farmers has a negative influence on the adoption of many sustainable land management practices (stone-faced soil bunds, traditional stone bunds, and inorganic fertilizer technologies) in Ethiopia. In terms of hired labor, the results of the study suggest that farmers who employ hired labor are more likely to implement a larger number of SFEPs; farmers are more likely to adopt different sustainable land management practices with the aid of hired off-farm labor [23]. Similarly, the study results confirm that hired labor facilitates the intensity of adoption.

Conclusions and Policy Implications

In East Africa, coffee farming is characterized by poor productivity, largely because of the low adoption of best sustainable practices by smallholder coffee farmers. Furthermore, compared with other East African countries (Kenya and Rwanda), the adoption of best sustainable practices is very low in Ethiopia. As a result, smallholder coffee farmers in the country remain in poverty traps, although the country has a high potential to increase coffee production and returns by implementing multiple sustainable practices. On the other hand, bundled sustainable coffee farm and environmental practices have been overlooked by scholars, since most previous sustainability studies focus on the economic, livelihood, and poverty alleviation impacts of smallholder participation in private sustainability standard schemes. Therefore, the present study examined the factors affecting the adoption of bundled sustainable farm and environmental practices (SFEPs) and their intensity at the farm household level in southwest Ethiopia, based on cross-sectional data obtained from 153 sampled coffee farm households for the 2019/2020 cropping season. The study results showed that the farmers' adoption of different SFEPs and their intensity of implementation depend on farm and management characteristics (total size of coffee holdings, multiple plots, remoteness of the coffee farm, hired labor, and farming experience), socioeconomic variables (literacy, household size, and training), and Fairtrade coffee certification. In particular, the study results reveal that the adoption of coffee tree stock renovation was positively influenced by family size and Fairtrade certification, while it was influenced negatively by the literacy of the head of the household. Similarly, the adoption of organic fertilizer was positively influenced by coffee farming experience and the farmers' receipt of training, while it was influenced negatively by having multiple coffee plots. Likewise, the implementation of soil conservation measures was positively influenced by hired labor, while it was influenced negatively by coffee farm size. The implementation of water management practices was positively influenced by hired labor and Fairtrade certification, while it was influenced negatively by the remoteness of the coffee farms, family size, coffee farming experience, and the literacy of the head of the household.
Finally, farmers' decisions on the number of SFEPs to adopt were positively affected by hired labor, while they were affected negatively by the literacy of the head of the household. In general, the adoption of different SFEPs varies with farm and household characteristics. As a result, policy measures to promote SFEPs should be tailored to specific farm and household characteristics. For example, support measures for coffee technology-related practices such as coffee tree stock renovation should be prioritized for literate coffee farmers with small families, whereas the use of organic fertilizer treatments should be promoted to inexperienced (younger) farmers and holders of multiple coffee plots. Likewise, sustainable development measures that support soil conservation practices should be prioritized for farmers with larger coffee holdings. Similarly, sustainable environmental support for water quality management practices should be prioritized for experienced (older), literate coffee farmers with larger households. This might be promoted through training, awareness creation concerning sustainable farm management and environmental practices, and asset-building interventions. The adoption of SFEPs was also affected by the receipt of training, hired labor, and Fairtrade coffee certification. Additionally, hired labor and the literacy of the head of the farm household were significant predictors of the intensity of the bundled SFEPs adopted by coffee farmers. Farmers should be encouraged by providing training and supplementing them with farm equipment (used for coffee stock renovation, physical soil and water conservation measures, and toilet construction), by promoting small-scale mechanization options to address seasonal labor constraints, and by strengthening Fairtrade or related VSS organizations to induce learning-from-peers effects, so as to facilitate the adoption of multiple SFEPs by coffee farmers in the country.

Data Availability
The authors declare that the data and datasets used for this study can be submitted upon the publisher's request.

Additional Points
This paper was not able to use a larger sample size because of budget limitations for data collection. As a result, a relatively high sampling error (0.081) was accepted in order to reduce the number of sample respondents. Taking the model presented here as an example, future studies should use a smaller sampling error of 5% or less.

Ethical Approval
Ethical clearance letters were obtained from the Wollega University Research and Community Service Directorate and the Jimma Zone administrative office to safeguard the data collectors and researchers. Study participants were informed that they had the full right to discontinue the study if they lost interest. All processes of the survey were completed without any obstacles.

Consent
Not applicable.

Conflicts of Interest
The authors declare that they have no competing interests.
8,573.6
2021-12-08T00:00:00.000
[ "Economics", "Agricultural And Food Sciences" ]
Low energy doubly-virtual Compton scattering from di-lepton electroproduction on a nucleon
We propose a new way to experimentally determine the subleading low-energy structure constant of doubly-virtual Compton scattering on a proton. Such an empirical determination will reduce the theoretical model error in estimates of the hadronic correction to the muonic hydrogen Lamb shift. We demonstrate that the di-lepton forward-backward asymmetry in the $e^- p \to e^- p \, e^- e^+$ process, which can be accessed at electron scattering facilities, yields a large sensitivity to this so far unknown low-energy constant.

Extractions of the proton charge radius from muonic hydrogen (µH) Lamb shift measurements over the past decade [1,2] have reported a highly precise proton-radius value, with more than an order of magnitude improvement in precision. These results disagreed by around 5.6 standard deviations with the values obtained from measurements of energy level shifts in electronic hydrogen [3] or from electron-proton elastic scattering experiments [4]. This so-called "proton radius puzzle" has spurred a lot of activity; see e.g. [5,6] for reviews. A new round of experiments using electronic hydrogen spectroscopy [7,8], as well as a new electron scattering experiment [9], favor the lower value of the radius, consistent with the µH spectroscopy results, although a recent electronic hydrogen experiment [10] has also reported a large value of the proton charge radius. To fully clarify this situation, further experiments with electron beams [11], with muon beams [12,13], or via a direct comparison of cross sections for γp → e+e−p versus γp → µ+µ−p [14] are presently planned or underway. With the next generation of high-precision experiments, both in scattering and in spectroscopy, the focus is now shifting to improving the precision on this fundamental nucleon structure quantity, as spelled out recently in [15]. Indeed, to extract the proton charge radius from µH Lamb shift measurements, which is at present the most precise method, the proton form factors, structure functions, and polarizabilities are all required as input to a quantitative understanding of the hadronic correction [16-18]. At present the theoretical uncertainty due to this correction, which is evaluated in a dispersive framework, is of the same size as the experimental uncertainty of the 2P-2S µH Lamb shift, and it is the main limitation when converting a value of the Lamb shift into a value for the proton radius. The main part of this hadronic uncertainty results from the subtraction function entering the forward doubly-virtual Compton scattering process. It corresponds to the situation where the photons in the Compton process have zero energy and finite virtuality. At second order in the photon virtuality, this function is constrained by the magnetic polarizability, which is determined experimentally [19].
To fourth order in the photon virtuality, one low-energy constant in this subtraction function is at present empirically unconstrained [20], and one relies on chiral effective field theory calculations [17,21] or on phenomenological estimates. We demonstrate in this work that this low-energy constant can be accessed experimentally through the forward-backward asymmetry in the e−p → e−p e−e+ process. This observable is directly sensitive to the interference between the QED and doubly-virtual Compton amplitudes. A pioneering measurement of this forward-backward asymmetry in the γp → e−e+p process was performed at DESY quite some time ago [22] as a test of the Kramers-Kronig relation at high energies. In the present work, we demonstrate that the corresponding experiments with a spacelike initial virtual photon, which can be realized at electron scattering facilities, yield a large sensitivity to this so far unknown low-energy constant.

The helicity-averaged forward doubly-virtual Compton scattering process (VVCS), γ*(q) + N(p) → γ*(q) + N(p), is described by two invariant amplitudes, denoted by T_1 and T_2, which are functions of two kinematic invariants: Q² = −q² and ν = q·p/M, with M the nucleon mass. Its covariant tensor structure in the four-vector indices of the initial (µ) and final (ν) photons can be written following the notation of [23] (the explicit tensor decomposition is not reproduced here), with ĝ^{µν} ≡ g^{µν} − q^µ q^ν/q², p̃^µ ≡ p^µ − (p·q/q²) q^µ, and α_em = e²/(4π) ≈ 1/137. The optical theorem relates the imaginary parts of T_1 and T_2 to F_1 and F_2, the conventionally defined structure functions parametrizing inclusive electron-nucleon scattering, which depend on Q² and x ≡ Q²/(2Mν). The two-photon-exchange correction to the µH Lamb shift can be expressed as a weighted double integral over Q² and ν of the forward amplitudes T_1 and T_2 [16]. Using the empirical input of F_1 and F_2, the ν dependence of T_2 has been fully reconstructed in [16] using an unsubtracted dispersion relation, whereas the dispersion relation for T_1 requires one subtraction, which can be chosen at ν = 0 as T_1(0, Q²). The subtraction function is usually split into a Born part (corresponding to the nucleon intermediate state) and a remainder, the so-called non-Born part, which we denote in the following by T̄_1(0, Q²). Although the Born part can be expressed in terms of elastic form factors (see [23] for the corresponding expression), the non-Born part cannot be fixed empirically so far. In general, one can however write down a low-Q² expansion of T̄_1(0, Q²), in which the term proportional to Q² is empirically determined by the magnetic dipole polarizability β_M1 [19]. Theoretical estimates for the subtraction term were given at order Q⁴ in heavy-baryon chiral perturbation theory (HBChPT) [17] and in covariant baryon chiral perturbation theory (BChPT), both at leading order (LO), due to πN loops, and at next-to-leading order (NLO), including both ∆(1232) exchange and π∆ loops [20,21]. Furthermore, estimates of the subtraction function were also extracted from superconvergence sum rule (SR) relations, relying on a dispersion relation for the difference between the structure function F_1 and a Regge fit of its high-energy behavior [24]. The different values obtained for T̄_1(0) are compared in Table I (second column). Even for these theoretically well motivated approaches, the spread among the different estimates is quite large.
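Schematically, and up to an overall normalization convention that differs between references (the exact coefficients of the original Eqs. (3), (6), and (7) are not reproduced here), the structure discussed above can be summarized as:

```latex
% Low-Q^2 behaviour of the non-Born subtraction function (schematic form only):
\bar{T}_1(0,Q^2) \;\propto\; \beta_{M1}\,Q^{2} \;+\; c_{4}\,Q^{4} \;+\; \mathcal{O}(Q^{6})
% The Q^2 term is fixed by the magnetic dipole polarizability beta_{M1}.
% The Q^4 coefficient c_4 is the empirically unconstrained piece: it is a
% combination of the slope beta_{M1}'(0), the magnetic quadrupole
% polarizability beta_{M2}, and the low-energy constant b_{3,0} discussed below.
```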
The resulting uncertainty due to this subtraction term constitutes at present the main uncertainty in the theoretical Lamb shift estimate. We next discuss how to avoid such model dependence and propose an empirical way to determine T̄_1(0). To empirically access the Q⁴ term, and potentially also higher-order terms in T̄_1(0, Q²), we consider the full off-forward doubly-virtual Compton process, γ*(q) + N(p) → γ*(q′) + N(p′), where both photons are virtual. In the following, we study the case where the initial photon is spacelike (q² < 0) and the final photon is timelike (q′² > 0), which can be accessed experimentally. In general, the full off-forward doubly-virtual Compton scattering (dVCS) amplitude off a proton is described by 18 tensor structures in the initial (µ) and final (ν) photon four-vector indices [25]. In this work, we will only need the helicity-averaged amplitude, which is described by 5 independent tensors [26], where the T_i^{µν} are the spin-independent and gauge-invariant tensors, symmetric under exchange of the two virtual photons; see Eq. (8) in [20] for the corresponding expressions. Furthermore, in (4), the invariant amplitudes B_i are functions of four Lorentz invariants, with ν ≡ q·P/M, where P ≡ (p + p′)/2. As the forward VVCS process of Eq. (1) is a special case of Eq. (4), the subtraction function can be expressed in terms of the non-Born amplitudes B̄_1 and B̄_3 in the forward limit (q = q′), i.e. B̄_i(0, q², q², q²) for i = 1, 3 [20]. The Born contribution was worked out in Ref. [25]. For the helicity-averaged amplitude, it only contributes to the amplitudes B_1 and B_2; see Eq. (8) in [20]. The non-Born part of the dVCS amplitudes, denoted as B̄_i, can be expanded for small values of q², q′², q·q′, and ν, with coefficients given by polarizabilities. As the ν dependence can be fully reconstructed, up to at most one subtraction, through a dispersion relation, we only need to discuss the amplitudes entering the subtraction function. To determine T̄_1(0, Q²) up to the Q⁴ term, we use the low-energy expansion in k ∈ {q, q′} [20], in which β_M2 is the magnetic quadrupole polarizability determined from real Compton scattering [27] and β_M1′(0) is the slope at Q² = 0 of the generalized magnetic dipole polarizability accessed through virtual Compton scattering; see Ref. [28] for a recent review. The low-energy constant b_{3,0} is not determined empirically so far, because the tensor structure T_3^{µν} decouples when either the initial or the final photon is real. Using Eq. (6), the Q⁴ term in Eq. (3) can be expressed in terms of β_M1′(0), β_M2, and b_{3,0}. We compare in Table I (third column) several theoretically motivated estimates for b_{3,0}. We see that the BChPT calculation including the ∆-pole corresponds to a very small value of b_{3,0} in comparison with the value resulting from the superconvergence SR estimate for T̄_1(0). To empirically determine T̄_1(0) and b_{3,0}, we consider the process of electroproduction of a di-lepton (electron-positron) pair on the nucleon, e−(k) + N(p) → e−(k′) + N(p′) + e−(l−) + e+(l+), (8) where the four-momenta of the corresponding particles are shown in parentheses. We define the eight-fold phase space of the reaction (8) in terms of five invariants and three angles Φ, θ_l, and φ_l. The invariant s is obtained from the electron beam energy E_e as s = M² + 2M E_e; Φ is the angle of the initial electron plane relative to the production plane, spanned by the vectors q ≡ k − k′ and q′ ≡ l− + l+,
in the c.m. frame (where q + p = 0); θ_l and φ_l are the angles of the produced negative lepton in the di-lepton rest frame, with the polar angle defined relative to the c.m. direction of q′. The differential cross section of the reaction (8) involves the lepton mass m_e, the Källén triangle function λ, and the amplitude M of the reaction (8), which receives the contributions shown in Fig. 1. The first four diagrams correspond to the spacelike (SL) and timelike (TL) Bethe-Heitler (BH) processes, whereas the last one is the dVCS process. The BH-SL and BH-TL diagrams are fully determined by the nucleon's electromagnetic form factors (FFs), represented by the blobs in diagrams 1-4 of Fig. 1. We adopt the FF parameterization of [4] in the following, which we analytically continue to the small timelike virtualities considered here. The invariant amplitude for the dVCS process (M_C), where the initial (final) photon has a spacelike (timelike) virtuality, is expressed in terms of the dVCS tensor M^{µν} of Eq. (4). To access the real part of the dVCS amplitude, with the aim of empirically extracting the low-energy constant b_{3,0}, we consider in the following the forward-backward asymmetry A_FB, defined in the di-lepton rest frame in Eq. (11). The only non-zero contribution to this observable comes from the interference between processes with an even (BH-SL) and an odd (BH-TL and dVCS) number of photon couplings to the di-lepton pair, due to charge conjugation. Explicitly, the numerator in Eq. (11) is proportional to this interference term. We start our discussion by considering the case of an initial real photon, through the γp → e−e+p reaction. In Fig. 2 we show the dependence of the γp → e−e+p cross section on the c.m. energy W for two settings. One of the kinematic settings approaches the forward real Compton process (left panel), with q′² and t values near the ones considered in the experiment of [22]. To provide a model of the inelastic effects in the dVCS process, we consider an effective description of the non-Born part of the dVCS amplitude in the ∆(1232) region by the ∆-pole amplitude. For the electromagnetic N → ∆ transition we use the empirical parameterization; see [29]. To estimate the accuracy of this description, we also implemented, for the near-forward real Compton situation with −t small and q′² very close to zero, a full dispersive calculation based on empirical structure functions [30]. The latter calculation was found to be consistent with the so far only data point for A_FB [22]. We see from Fig. 2 that around a c.m. energy of W = 1.25 GeV, the Compton part of the cross section integrated over the di-lepton solid angle is reproduced by the Born + ∆-pole description within an accuracy of 5% or better. Furthermore, for larger values of q′² = −t, the ratio between the BH and Compton cross sections decreases. Therefore, one expects an increase of the asymmetry A_FB with increasing values of q′² and −t, as is demonstrated in the lepton angular dependence in Fig. 3 for W = 1.25 GeV. One sees that for q′² = −t = 0.075 GeV², A_FB reaches values between −40% and +30%. Having assessed the sensitivity of A_FB to the dVCS amplitude for real photons, we next extend the study to the initial virtual photon case through the e−p → e−p e−e+ reaction. As A_FB depends on the real part of the dVCS amplitude, it holds promise for studying the sensitivity to the low-energy constant b_{3,0}.
In Fig. 4, we show the cross section and A_FB for the e−p → e−p e−e+ process at W = 1.25 GeV, where the ∆-pole was found to yield a very good description of the total Compton result, in kinematics where Q² = q′² = −t = 0.075 GeV². One notices that in the forward and backward angular ranges the Bethe-Heitler process yields only a small asymmetry. In these ranges, the Compton process yields a large change in the asymmetry, of up to 50%. The red band shows the sensitivity to b_{3,0}, corresponding to the spread in Table I. For forward and backward angles, where the sensitivity is largest, the spread in the values for the subtraction constant T̄_1(0) in Table I corresponds to a change in A_FB from 20% to 50%. In summary, we have explored a direct empirical determination of the subtraction function in the doubly-virtual Compton process on a nucleon, which at present constitutes the leading uncertainty in the hadronic correction to muonic atom spectroscopy. To fourth order in the photon virtuality, one low-energy quantity in the subtraction function is so far empirically unconstrained. We have demonstrated that it can be accessed experimentally through the forward-backward asymmetry of the e−p → e−p e−e+ process. This observable is directly sensitive to the interference between the QED and dVCS amplitudes. Different theoretical estimates for this low-energy constant induce a change of between 20% and 50% in the corresponding asymmetry in the ∆(1232) region. This observable can be accessed by precision experiments at the electron facilities MAMI, MESA, and JLab.
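As a purely illustrative aside (not part of the original analysis), a counting-based estimate of such a forward-backward asymmetry from forward and backward event yields, together with its statistical uncertainty, could look as follows; the event numbers are made up:

```python
# Hypothetical sketch: estimating a forward-backward asymmetry from event counts.
import math

def forward_backward_asymmetry(n_forward: int, n_backward: int) -> tuple[float, float]:
    n = n_forward + n_backward
    a = (n_forward - n_backward) / n          # A_FB estimate
    sigma = math.sqrt((1.0 - a * a) / n)      # binomial (counting) uncertainty
    return a, sigma

a, sigma = forward_backward_asymmetry(6500, 3500)   # made-up yields
print(f"A_FB = {a:.3f} +/- {sigma:.3f}")
```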
3,829
2020-01-28T00:00:00.000
[ "Physics" ]
BERT-CLSTM Model for the Classification of Moroccan Commercial Courts Verdicts
The exponential growth of data generated by the Moroccan commercial court system, coupled with the manual archiving of legal documents, has made information access increasingly complex. As data classification becomes imperative, researchers are exploring automatic language processing techniques and refining text classification methods. In this study, we propose a BERT-CLSTM model for the classification of Moroccan commercial court verdicts. By adding a Convolutional Long Short-Term Memory network to the task-specific layers of BERT, our model can capture information about important fragments in the text. In addition, we feed this representation, along with the output of BERT, into a transformer encoder to take advantage of the self-attention mechanism, and we finally obtain the representation of the whole text through the transformer. The proposed model outperformed the compared baselines and achieved good results, with an F-measure of 93.61%.

I. INTRODUCTION
Text classification is a machine-learning task that assigns a document to one or more predetermined categories based on its content. It is a key problem in natural language processing, with diverse applications such as sentiment analysis, email routing, offensive language detection, spam filtering, news classification, and language identification. Text mining [1] is one of the most important approaches for analyzing massive volumes of textual data. It is also used to discover previously unknown relationships and to propose solutions that aid decision-making. Many technologies are utilized in the text mining process to attain these aims; text summarization, translation, categorization, and information extraction are a few examples. This paper's scope is limited to text classification. Despite the advances made in text categorization performance, there is still significant room for improvement, particularly for the Arabic language. According to Internet World Stats, Arabic is the fourth most common language online, with over 225 million users, representing 5.2% of all Internet users as of April 2019. Arabic NLP is still a challenging task because of the Arabic language's richness, complexity, and complicated morphology. Arabic features are abundant and varied compared with other languages: there are many grammatical forms, numerous synonyms, and words whose meaning changes depending on factors such as word order.
Traditional text classification methods use sparse vocabulary features to represent documents and treat words as the smallest unit. Documents represented by such approaches typically have high dimensionality and sparse data, so the classification accuracy is low. Later, with the rise of distributed representations, representing documents with dense word vectors such as word2vec or GloVe gradually became mainstream. The word vectors trained with this type of method capture the contextual semantic information of the text. Recently, with the emergence of deep learning, more researchers have used deep neural networks for text classification, such as convolutional neural networks (CNN) and recurrent neural networks (RNN). The deep learning approach to text classification feeds the text into a deep network to generate a representation of the text, and then feeds this representation into a softmax function to calculate the probability of each category. CNN-based models [2], [3], [4] generate text representations with local information, whereas RNN-based models [5], [6] generate text representations with long-term information. Nowadays, the verdicts of the Moroccan commercial courts are classified manually. Court staff read each verdict and classify it into a predefined category. This process has many drawbacks. First, the processing time is long. Second, court staff may make mistakes while filing a document. Third, the confidentiality of citizens' data is not respected. Based on the disadvantages of the current system, we propose a BERT-CLSTM model that can provide the same service in a short time. The proposed model combines the advantages of CNNs and RNNs. The rest of the paper is structured as follows: Section II presents the related literature. In Section III, we give details on the dataset created for this classification task, the data preprocessing, and the model. Section IV presents the experimental setup, evaluation measures, and results, and discusses them. Finally, conclusions are drawn in Section V.

II. RELATED WORK
Over the past few years, several researchers have addressed the automatic categorization of legal data and the exploitation of the huge amounts of court-generated data to assist decision-making. The study presented in [7] aims to classify Arabic news articles based on their vocabulary features. The authors employed multi-label classifiers such as Logistic Regression and XGBoost, with XGBoost achieving the highest accuracy at 84.7%, while Logistic Regression scored 81.3%. Additionally, ten neural networks were constructed, and CGRU proved to be the top-performing multi-label classifier with an accuracy of 94.85%. The study in [8] introduces AraLegal-BERT, a bidirectional encoder Transformer-based model that has been thoroughly tested and carefully optimized to amplify the impact of NLP-driven solutions for legal documents. The authors fine-tuned AraLegal-BERT and evaluated it against three BERT variants for the Arabic language. The results show that the base version of AraLegal-BERT achieves better accuracy than the general and original BERT over legal text.
El-Alami et al. [9] propose an Arabic text classification method based on Bag of Concepts and deep Autoencoder representations. It incorporates explicit semantics relying on Arabic WordNet and exploits the Chi-Square measure to select the most informative features. To produce a high-level representation, they applied successive stacks of Restricted Boltzmann Machines (RBMs). Experiments showed that using the Autoencoder as a text representation model, combined with Chi-Square and a classifier, outperformed state-of-the-art techniques. Hazm et al. [10] evaluated Arabic user comments on Twitter using a common form of Recurrent Neural Network (RNN), the Long Short-Term Memory (LSTM). Experiments revealed that the LSTM outperforms standard approaches in terms of accuracy while requiring less parameter computation and less working time, and is more efficient. Alhawarat and Aseeri [11] implemented a multi-kernel CNN architecture with word embeddings (n-grams) to classify Arabic news documents. Compared with current studies on Arabic text classification, their approach achieves very high precision on 15 publicly available datasets. Galal et al. [12] concentrated on classifying Arabic text using a convolutional neural network (CNN). They implemented GStem, a new algorithm focused on extra Arabic letters and word-embedding distances, to group related Arabic words. Their studies have shown that it improves the accuracy of the CNN model when used as a preprocessing stage. Boukil et al. [13] proposed a technique for categorizing Arabic datasets. They utilized an Arabic stemming algorithm to select and reduce features, and employed Term Frequency-Inverse Document Frequency (TF-IDF) for feature weighting. Their study compared the CNN model with standard machine learning methods on a benchmark dataset. The authors found that the CNN model outperformed traditional methods, especially for large and complex datasets. Al-khurayji and Sameh [14] proposed a novel method for Arabic text classification using a Kernel Naive Bayes (KNB) classifier. Their approach involved preprocessing documents through tokenization, stop word removal, and word stemming. They utilized Term Frequency-Inverse Document Frequency (TF-IDF) for feature extraction and represented terms as vectors. Experimental results on the collected dataset demonstrated the superiority of their methodology, showing excellent precision and efficiency compared with other baseline classifiers.

III. RESEARCH METHODOLOGY
In this section, we present the corpus created to train and evaluate our model, the data preprocessing process, and the model architecture.

A. Dataset
The dataset created to train and evaluate our proposed classifier is a collection of Arabic verdicts issued by the Moroccan commercial courts. It consists of 2821 documents and 66,900,015 words; the average document size is 23,715 words. Documents are categorized under four main classes: Unfair competition, Arbitration, Insurance, and Commercial lease. The dataset was split into a training dataset (75% of each class), used to build the model, and a testing dataset (25% of each class), used to evaluate the model's performance. The distribution of documents in each class is presented in Table I.

B. Data preprocessing
The first step in our preprocessing process was removing stop words, foreign characters, and punctuation by applying basic functions. Then we apply the stemming algorithm, which transforms all words into their stems.
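The paper does not name the specific tools used for these steps; as a purely illustrative sketch, the same pipeline could be expressed with NLTK's Arabic stop-word list and ISRI stemmer standing in for the actual implementation:

```python
# Illustrative preprocessing sketch (cleaning, stop-word removal, stemming);
# NLTK's Arabic resources are stand-ins, not the authors' actual tooling.
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem.isri import ISRIStemmer

nltk.download("stopwords", quiet=True)
ARABIC_STOPWORDS = set(stopwords.words("arabic"))
stemmer = ISRIStemmer()

def preprocess(text: str) -> list[str]:
    # Keep Arabic letters and whitespace only (drops punctuation and foreign characters).
    text = re.sub(r"[^\u0600-\u06FF\s]", " ", text)
    tokens = [t for t in text.split() if t not in ARABIC_STOPWORDS]
    return [stemmer.stem(t) for t in tokens]   # reduce every word to its stem
```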
C. Model
The proposed BERT-CLSTM model has two key characteristics. Firstly, it employs a CLSTM to transform the task-specific layer of BERT, allowing our model to capture local and long-term text representations. Secondly, the output of BERT, along with the CLSTM representation, is fed into a transformer encoder. This allows self-attention to focus the final text representation on the essential segments. The architecture of the BERT-CLSTM model is illustrated in Figure 1 (an illustrative code sketch of such a head is given at the end of this section).
1) CLSTM encoder: To make the text representation focus on the salient information in the text, we use a convolution filter to extract features from the sequence of BERT token representations T = (T_1, ..., T_m). Assuming that the size of the convolution window is 1 × k, the output of the CLSTM encoder is P = (P_1, P_2, P_3, ..., P_r) with r = m − k + 1. Through the convolution operation, we obtain representations of the sentence in windows of size k; for example, with a window of size 3, P_r is the representation of T_{r−1}, T_r, T_{r+1}.
2) Transformer encoder: Similar to a multi-layer transformer encoder, and in order to integrate the information in P, we adopt a transformer encoder to map the local representations P into a representation of the whole text. The input of the transformer encoder is C, P_1, P_2, P_3, ..., P_r, and we use C′, the output position corresponding to C, as the representation of the whole text.
3) Output layer: The output of the model is computed from the whole-text representation C′; in the corresponding expression, tanh denotes the hyperbolic tangent function.

IV. RESULTS AND ANALYSIS
This section presents the experimental setup, including the parameters used for training and testing the model. We detail the evaluation measures, highlight the results, and provide a discussion of the findings.
A. Experimental setup
To evaluate our classifier, we used the created dataset (see Section III.A). Table II shows the model hyper-parameters.
B. Evaluation measures
We used a variety of metrics to evaluate the performance of the model. We began with the Precision measure, which indicates the number of documents correctly predicted by the model out of all documents predicted for a class: Precision = True_Positives / (True_Positives + False_Positives). Second, the Recall measure indicates the number of documents correctly predicted by the model out of the total number of relevant documents in the dataset: Recall = True_Positives / (True_Positives + False_Negatives). Third, the F1-measure is computed from precision and recall as F1 = 2 × Precision × Recall / (Precision + Recall). The last metric that we use is Accuracy, which reports the percentage of accurate predictions; we calculate it by dividing the number of correct predictions by the total number of predictions.
C. Results
We applied the baseline models, which are: Logistic Regression (LR), Extreme Gradient Boosting (XGBoost), Naive Bayes (NB), and Random Forest (RF). For the deep learning models, we use a CNN with fastText embeddings and, of course, our proposed BERT-CLSTM model. Table III gives the details of the models' evaluation results; Precision, Recall, F1-measure, and Accuracy are denoted by the letters 'P', 'R', 'F', and 'Acc', respectively.
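For concreteness, the following is a minimal, hypothetical PyTorch sketch of a task-specific head of the kind described in Section III.C (convolution, LSTM, and a transformer encoder layer over BERT token outputs). The layer sizes, the bidirectional LSTM, and the random tensors standing in for BERT outputs are illustrative assumptions, not the authors' exact configuration:

```python
# Hypothetical sketch of a BERT-CLSTM-style task-specific head (illustrative sizes).
import torch
import torch.nn as nn

class CLSTMHead(nn.Module):
    """Convolution + LSTM over BERT token vectors, fused with the [CLS] vector
    through one Transformer encoder layer; an assumed layout, not the authors'
    exact configuration."""
    def __init__(self, hidden=768, conv_out=256, lstm_hidden=384,
                 window=3, num_classes=4):
        super().__init__()
        self.conv = nn.Conv1d(hidden, conv_out, kernel_size=window)   # windows of size k
        self.lstm = nn.LSTM(conv_out, lstm_hidden, batch_first=True,
                            bidirectional=True)
        d_model = 2 * lstm_hidden                                     # 768 here
        self.proj_cls = nn.Linear(hidden, d_model)                    # map C to d_model
        self.encoder = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                                  batch_first=True)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, bert_tokens, bert_cls):
        # bert_tokens: (batch, seq_len, hidden); bert_cls: (batch, hidden)
        x = self.conv(bert_tokens.transpose(1, 2)).transpose(1, 2)    # (B, r, conv_out), r = m - k + 1
        p, _ = self.lstm(torch.relu(x))                               # local + long-term features P
        c = self.proj_cls(bert_cls).unsqueeze(1)                      # (B, 1, d_model)
        fused = self.encoder(torch.cat([c, p], dim=1))                # self-attention over [C; P]
        return self.classifier(fused[:, 0])                           # C' as whole-text representation

# Smoke test with random tensors standing in for BERT outputs.
tokens, cls_vec = torch.randn(2, 128, 768), torch.randn(2, 768)
print(CLSTMHead()(tokens, cls_vec).shape)   # torch.Size([2, 4])
```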
D. Analysis and discussion
Table III shows that our model outperformed the baseline models and the CNN model in terms of Precision, Recall, F1-measure, and Accuracy. The proposed method has a better classification effect because the text representation matrix has strong feature representation ability and is more representative, so it can provide more category information for text classification. This also highlights the advantage of using the CLSTM, which gives text representations with both local and long-term information, unlike the traditional methods based on the bag-of-words model. In terms of F1-measure, our proposed model outperforms the CNN+FastText model by 2.86 points, the RF+TF-IDF model and the NB+N-Gram model by 5.64, the XGBoost+count vectors model by 6.27, and the LR+N-Gram model by 7.54. Figure 2 shows the accuracy scores of the models. In terms of Accuracy, our proposed model outperforms the CNN+FastText model by 2.87 points, the RF+TF-IDF model and the NB+N-Gram model by 5.58, the XGBoost+count vectors model by 6.21, and the LR+N-Gram model by 7.48.

V. CONCLUSION
Arabic is considered one of the most difficult languages to process, due to its high morphological ambiguity, writing style, and lack of capitalization. Therefore, every NLP task involving this language requires a lot of feature engineering and preprocessing. In this paper, we present a model that classifies Moroccan commercial court verdicts into four categories. The model outperformed the compared baselines and achieved good results, with an F-measure of 93.61%.

ACKNOWLEDGMENT
This work was supported by the Ministry of Higher Education, Scientific Research and Innovation, the Digital Development Agency (DDA), and the CNRST of Morocco [Alkhawarizmi/2020/36].

TABLE I: DISTRIBUTION OF DOCUMENTS IN EACH CLASS.
TABLE II: MAJOR HYPER-PARAMETERS OF THE MODEL.
TABLE III: PERFORMANCE OF OUR MODEL AGAINST THE BASELINE MODELS.
2,997.4
2023-09-17T00:00:00.000
[ "Computer Science", "Law" ]
Randomized and Optimal Algorithms for k-Lifetime Dominating Set in Wireless Sensor Networks
In wireless sensor networks, rotating dominating sets is an efficient method for balancing the energy consumption of nodes and thereby extending the network operational time. This method can be abstracted as k-Lifetime Dominating Set in bipartite graph, which partitions the set of graph vertices representing sensors into k disjoint dominating sets. However, the considered problem has been proven to be NP-hard, and there is no hope of solving it in polynomial time unless P = NP. Existing studies mainly focus on developing approximation or heuristic algorithms, which usually cannot guarantee a solution for a given problem yes instance. In this study, we first propose a randomized algorithm that can generate a solution with guaranteed probability 1 − ε (0 < ε < 1). Using the color coding method, we show that the randomized algorithm can be improved to guarantee the generation of a solution for a given problem yes instance in exponential time. Based on the idea of randomized partition, we further present a more practical centralized greedy algorithm, and then a distributed implementation. Simulation results indicate that the centralized algorithm can efficiently generate optimal solutions for almost all the given problem instances if the partition redundancy is above a certain limit. Compared with an existing algorithm, the centralized algorithm increases the number of dominating sets by factors between 0% and 21%.

I. INTRODUCTION
Wireless sensor networks enable new applications in environmental monitoring and military surveillance. In some cases, ground access to the monitored area is difficult or dangerous, and the only way to install the sensors is to deploy them from an aircraft. The number of sensors deployed is usually higher than required because of the lack of precise sensor placement. The sensors are powered by batteries whose energy is limited and which are installed only once. Hence, one of the main issues in sensor networks is prolonging the network lifetime by reducing the energy consumption of the sensor nodes [1]-[3]. Node activity scheduling is a common method for saving energy. This method schedules sensor activity so that, for each sensor, the active state, in which it performs its monitoring task, alternates with a low-energy sleep state. Obviously, in the sleep state the sensor consumes much less energy than in the active state. The set of sensors is divided into disjoint sets such that every set completely covers all targets. These disjoint sets are activated successively, so that at any moment in time only one set of sensors is active. The sensors of the active set are in active mode, and all other sensors are in sleep mode. The approach above can be abstracted into k-Lifetime Dominating Set in bipartite graph. Consider a bipartite graph G = (S ∪ T, E), where S represents the set of sensors and T represents the set of targets. A dominating set D ⊆ S is a subset of the sensors S such that every target t ∈ T has a neighbor in D.
A k-lifetime dominating set is a partition {S_1, S_2, ..., S_k} of S such that every S_i (1 ≤ i ≤ k) is a dominating set of T. Each S_i is called a dominator, and k is called the network lifetime. By rotating the dominating sets periodically, the energy consumption of the sensors can be greatly balanced and the network lifetime can be prolonged. This problem has been proven NP-hard. In accordance with state-of-the-art computational complexity theory, we therefore have to consider algorithms with exponential running time to solve it optimally. However, exponential growth quickly becomes prohibitive when algorithms are run in practice. Most previous studies investigated the considered problem from the perspective of centralized approximation or heuristics, which usually cannot generate the optimal solution (a partition {S_1, S_2, ..., S_k} of S such that every S_i is a dominating set and k is maximized) or guarantee a solution for a given problem yes instance. The study of efficiently generating an optimal solution for a given problem instance is much less developed. Moreover, in large-scale distributed systems, the use of centralized algorithms based on a global view of networks is infeasible. In this study, we mainly investigate the problem from an optimal-algorithm perspective and develop algorithms that can find the optimal (not only approximate) solution in practice. First, we present a randomized algorithm, R-LDSBG, where each sensor simply assigns itself to a dominator chosen uniformly at random from the set of all possible dominators. The performance of the algorithm is determined by the number of iterations of the random assignment process; it can generate a solution for any given yes instance with probability 1 − ε (0 < ε < 1) if exponentially many iterations are permitted. Moreover, the randomized algorithm can be de-randomized so as to deterministically generate a solution for any input yes instance. The de-randomization is obtained using the method of color coding and a deterministic construction of the color coding scheme. To the best of our knowledge, this is the first time that the considered problem has been solved optimally. The proposed de-randomized O-LDSBG algorithm guarantees an optimal solution in theory. However, it is impractical because of its exponential number of iterations. Based on the idea of randomized partition, we further propose a more practical centralized greedy algorithm, C-LDSBG, to obtain the optimal solution. The assignment process in the R-LDSBG algorithm is completely random, which increases the assignment conflict and the number of assignment iterations. To decrease assignment conflict, we propose two main methods. One method is to introduce an objective function to evaluate the attractiveness of dominators. Each sensor assigns itself to a dominator with a probability that is proportional to the weight of the intersection between the set of targets monitored by the sensor and the set of targets monitored by the dominator. The intersection is weighted based on how likely its targets are to be monitored by another sensor during the assignment process and on the size of the intersection. The other method is to conduct the assignment process in the order of the degree of the sensors, from high to low. Simulation results indicate that the centralized greedy algorithm C-LDSBG can achieve optimal solutions efficiently for almost all the given instances when the redundancy degree for the partition task is more than 10.
The average number of iterations required to obtain the optimal solutions is approximately 450. Compared with an existing algorithm, the C-LDSBG algorithm increases the number of dominators by factors between 0% and 21%. Further, we present a distributed implementation of the centralized greedy algorithm. Each sensor simply assigns itself, in turn, to a dominator. The algorithm is completely distributed, and communication is only required to allow each sensor to know its neighbors. The main contributions of this study and the novelty of the proposed methods are as follows.
• Using the method of color coding, we present an exact O-LDSBG algorithm that can find the optimal (not only approximate) solution for a given problem instance in theory. The problem can thus be solved optimally for the first time.
• Based on the idea of random partition, we develop a randomized algorithm, C-LDSBG, that can generate optimal solutions for almost all given problem instances in practice when the partition redundancy is above a certain limit.
• We further develop, for the first time, a distributed algorithm, D-LDSBG, for the considered problem, where each sensor can only communicate with its neighbors.
The remainder of this paper is organized as follows. In Section II, we provide an overview of the relevant literature. In Section III, we describe our computation model and formally define the k-Lifetime Dominating Set problem in bipartite graph. Randomized and optimal algorithms are presented in Section IV. In Section V, a centralized greedy algorithm is presented. Section VI presents a distributed greedy algorithm. In Section VII, a simulation experiment is presented. Finally, Section VIII concludes this paper.

II. RELATED WORKS
The importance of energy efficiency in wireless sensor networks has led to a plethora of studies on k-Lifetime Dominating Set. The optimal version of k-Lifetime Dominating Set is known as the domatic partition problem. Poon et al. showed that the 3-domatic partition problem is NP-complete on planar bipartite graphs, and that the domatic partition problem is NP-complete on co-bipartite graphs [4]. Based on the graph coloring method, Mahjoub and Matula solved the domatic partition problem in random geometric graphs and provided up to (δ + 1) disjoint (1 − ε)-dominating sets on a large range of experimental graphs, where δ is the minimum degree of the graph [5]. They carried out further research by proposing a more practical solution to the distributed (δ + 1)-domatic partition problem based on a localized graph coloring method [6]. Communication in wireless sensor networks is often modeled by the so-called unit disk graph (UDG). In a UDG, there is an edge between two nodes if their Euclidean distance is at most one. Pemmaraju and Pirwani proposed a method of uniform partition and presented deterministic, distributed algorithms for finding a k-domatic partition of size at least a constant fraction of the largest possible (k−1)-domatic partition for any k > 1 [7]. Pandit et al. first presented a constant-factor distributed algorithm that can be implemented in O(log n) rounds of communication on a UDG of order n [8]. Yu et al. proposed another constant-factor approximation algorithm using the skyline property of uniform-radius disks [9]. Misra and Mandal studied a distributed domatic-partition-based scheme for energy-efficient cluster-head rotation in UDGs [10]. Variations of domatic partition have also been studied comprehensively.
Liang studied the k-tolerant domatic partition from both algorithmic complexity and graph-theoretic points of view. They showed that it is NP-complete to decide whether the k-tolerant domatic number of a bipartite graph is at least three, and presented a polynomial time algorithm that approximates the k-tolerant domatic number of a graph of order n within a factor of (1/k + o(1)) ln n [11]. Lee et al. demonstrated that the total 3-domatic partition problem on planar graphs is NP-complete [12]. Misra and Mandal studied the connected domatic partition problem and developed a distributed algorithm to construct a connected domatic partition with a guaranteed size [13]. Based on the connectivity decomposition method, Censor-Hillel et al. presented a distributed algorithm for a generalization of the connected domatic partition problem [14]. Pino et al. studied domatic partition in wireless sensor networks and introduced three local-search-based algorithms to increase the network lifetime, where sensors can have different initial energies [15]. Maximizing the number of disjoint dominating sets to increase the lifetime of the network while guaranteeing complete coverage of the monitored targets was considered in [16] and [17]. The problem can be abstracted into the k-Lifetime Dominating Set problem in bipartite graph. Both studies focused on the design of heuristic algorithms and did not provide any worst-case analysis or stringent bounds. A √n-approximation algorithm for this problem was proposed in [18], where n denotes the number of targets to be monitored. Later, Pananjady et al. proposed a polynomial time ln n-approximation algorithm through suitably defined hypergraph coloring [19]. They further demonstrated a (1 + ε)-approximation algorithm for the 2-dimensional geometric case, where each sensor can monitor the circular area around itself within a given radius. However, all the above heuristic or approximation algorithms are centralized and hence inappropriate in practice. Pananjady et al. proposed an online algorithm with a competitive ratio of ln n [20], with prior knowledge of the minimum degree of the targets. Emek et al. developed an online algorithm for the problem that guarantees a polylogarithmic O(ln² n) competitive ratio [21] without prior knowledge of the minimum degree of the targets. Maximizing the number of non-disjoint dominating sets in bipartite graph is another method to increase the lifetime of the network. Berman et al. provided a (1 + ln n)-approximation algorithm based on a ln n approximation to the minimum weighted set cover problem [22]. Kasbekar et al. considered a problem with an additional constraint that each sensor has information only about its neighbors [23]. They provided a distributed algorithm with an O(ln n · ln(nB)) approximation ratio, where B is the maximum battery capacity of any sensor. Recently, Ashwin et al. proved that the problem cannot be approximated within a factor of less than ln n by any polynomial time algorithm, where n is the number of targets [24]. Jia et al. considered a variant of the problem in which sensors only sense directionally and targets have different coverage quality requirements [25]. They proposed an efficient heuristic algorithm and obtained an upper bound on the optimal solution. Table 1 summarizes the main results for k-Lifetime Dominating Set in bipartite graph. To make the running times easier to read, we use the O*-notation, which allows us to omit polynomial factors.
The symbol C denotes a centralized algorithm, while D denotes a distributed algorithm. The symbols ns and nt denote the number of sensors and the number of targets, respectively, and c denotes a constant.

III. SYSTEM MODEL
In this section, we describe the model and introduce the notation used throughout this paper. Table 2 lists the symbols most frequently used throughout the paper. We model the network as a bipartite graph G = (S ∪ T, E) with |S| = ns and |T| = nt. Herein, a bipartite graph is one whose vertices can be divided into two independent sets, S and T, such that every edge of the graph connects one vertex in S to one vertex in T. The nt targets t_1, t_2, ..., t_nt, with known locations, must be continuously observed, and the ns sensors s_1, s_2, ..., s_ns are randomly deployed close to the targets. If a target t ∈ T is within the sensing range of a sensor s ∈ S, then there is an edge (s, t) ∈ E between s and t, and we say that s can dominate (monitor) t. Let D ⊆ S be a subset of S. If every vertex of T is dominated by some vertex of D, then D dominates T. As we assume that the number of sensors deployed in the field is much greater than the optimum required to perform the monitoring task, an important method of prolonging the network lifetime consists in scheduling the sensor node activity to alternate between active mode and sleep mode [16]-[18]. Assume that each sensor can be active for a unit time of 1; that is, if all sensors are continuously active, then the network lifetime is 1. Assume also that all sensors initially have the same amount of power and that their energy depletes at the same rate. To achieve a lifetime of k, the sensors are organized into k disjoint groups (dominators). At any moment in time, only one such group is active for monitoring the targets and consumes energy, while the other groups are in sleep mode with no energy consumption. The formal definition of the considered problem is as follows. k-Lifetime Dominating Set in bipartite graph: given a bipartite graph G = (S ∪ T, E) and an integer k, decide whether a partition (S_1, S_2, ..., S_k) of S exists such that every dominator S_i (1 ≤ i ≤ k) dominates T. If such a partition exists, the given problem instance is called a yes instance, and (S_1, S_2, ..., S_k) is called a k-partition of S. Finding a partition {S_1, S_2, ..., S_k} of S such that every dominator dominates T and k is maximized is called the optimal version of the considered problem. If |E| − k·|T| > 0, then partition redundancy exists for the partition task, and the redundancy degree is |E| − k·|T|. Suppose that a sensor s is assigned to dominator S_i. If N(s) ∩ N(S_i − {s}) ≠ ∅, then we say that there exists an assignment conflict in the assignment of s, and the conflict degree is |N(s) ∩ N(S_i − {s})|. Consider the problem instance of k-Lifetime Dominating Set in bipartite graph shown in Fig. 2. The instance has eight sensors S = {s_1, s_2, s_3, s_4, s_5, s_6, s_7, s_8} and two targets T = {t_1, t_2}. Suppose that k is set to 3, and suppose that s_1, s_2, and s_3 are assigned to dominator S_1, s_4 and s_5 are assigned to dominator S_2, and s_6, s_7, and s_8 are assigned to dominator S_3. Because every S_i dominates T, (S_1, S_2, S_3) is a 3-partition of S and this problem instance is a yes instance. The number of edges |E| is nine, and the number of targets |T| is two. Because |E| − k·|T| = 9 − 3·2 = 3 > 0, partition redundancy exists for the partition task, and the redundancy degree is 3.
Because s_1 is assigned to dominator S_1 and N(s_1) ∩ N(S_1 − {s_1}) = {t_1} ≠ ∅, there exists an assignment conflict in the assignment of s_1, and the conflict degree is 1. Herein, we are concerned only with the sensor node scheduling mechanism and do not address the problem of selecting which protocol is used for data gathering or node synchronization.

IV. RANDOMIZED ALGORITHM
In this section, we propose a simple randomized algorithm for the k-Lifetime Dominating Set problem in bipartite graph, and then show how to de-randomize it. The randomized R-LDSBG algorithm shown in Fig. 1 simply assigns each sensor to a dominator chosen uniformly at random. It makes few assumptions about the network and is simple enough to be implemented. In addition, its expected performance can be guaranteed, because the performance of the algorithm is proportional to the number of iterations lt of the random partition process. Let lt be N·k^{k|T|} with N = ⌈ln(1/ε)⌉; then, for any yes instance, R-LDSBG returns a solution with probability at least 1 − ε. Proof: Let (S_1, ..., S_k) be a partition of the set of sensors S such that every dominator S_i dominates T. Note that for every i there is a subset D_i ⊆ S_i of at most |T| sensors that still dominates T; let D = D_1 ∪ ... ∪ D_k, so that |D| ≤ k|T|. We say that the sensors in D are properly partitioned by a k-partition (P_1, ..., P_k) of S if, for any two sensors v_1, v_2 ∈ D, the following conditions hold: if v_1 and v_2 belong to the same D_i, then they lie in the same part, and if they belong to different D_i, then they lie in different parts. If we partition the sensors in S into k disjoint subsets uniformly at random, then the probability that the sensors in D are properly partitioned is not less than k!/k^{k|T|}. The R-LDSBG algorithm, shown in Fig. 1, is based on these ideas. According to the discussion above, each random k-partition of S has probability at least k!/k^{k|T|} of yielding a solution. Because Step 2 loops N·k^{k|T|} times, the partition constructed in Step 2.1 is a solution with probability at least 1 − (1 − k!/k^{k|T|})^{N·k^{k|T|}}. Since 1 − x ≤ e^{−x} for all x ≥ 0, we have (1 − k!/k^{k|T|})^{N·k^{k|T|}} ≤ e^{−N·k!} ≤ e^{−N}. Note that N = ⌈ln(1/ε)⌉. It follows that R-LDSBG returns a solution with probability at least 1 − ε. In the following, we illustrate how the R-LDSBG algorithm works with the problem instance shown in Fig. 2. The instance has eight sensors S = {s_1, s_2, s_3, s_4, s_5, s_6, s_7, s_8} and two targets T = {t_1, t_2}. Suppose that k is set to three and the number of iterations lt is 1. In Step 2.1, each sensor is randomly assigned to some dominator. Suppose that s_1, s_2, and s_5 are assigned to dominator S_1, s_3 and s_4 are assigned to dominator S_2, and s_6, s_7, and s_8 are assigned to dominator S_3. Since S_1 cannot dominate all the targets in T, the algorithm returns ''no''. We further show that the random partition process can deterministically generate a solution for a given problem yes instance by a deterministic construction of the color coding scheme. We first briefly introduce this technique: a family F of k|T|-colorings of the sensors is constructed such that at least one coloring f_0 ∈ F assigns the (at most k|T|) sensors in D pairwise distinct colors. Consider the execution of Steps 2.1-2.4 on this particular k|T|-coloring f_0. Given a k-partition P = (P_1, ..., P_k) of the k|T| colors, we say that P is a proper partition of the k|T| colors if, for any two sensors v_1, v_2 ∈ D, the following conditions hold: if v_1 and v_2 belong to the same D_i, then their colors belong to the same group, and if they belong to different D_i, then their colors belong to different groups. Note that for any two sensors v_1, v_2 ∈ D, f_0(v_1) ≠ f_0(v_2). Moreover, Step 2.2 enumerates all k-partitions of the k|T| colors. Therefore, at least one partition P = (P_1, ..., P_k) of the k|T| colors produced in Step 2.2 is a proper partition. Suppose now that we have the proper partition P = (P_1, ..., P_k). Then the corresponding k-partition of S constructed in Step 2.3 is obviously a solution for the considered problem and is returned by Step 2.4 of the O-LDSBG algorithm in Fig. 3.
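To make the random-partition idea concrete, the following is a small, self-contained Python sketch of the core of R-LDSBG as described above. The toy instance and the fixed iteration count are illustrative assumptions (the adjacency is not the one in Fig. 2, and the paper instead derives lt from the target failure probability ε):

```python
# Illustrative sketch of the randomized partition step of R-LDSBG.
import random

def dominates(group, targets, monitors):
    """True if every target is monitored by at least one sensor in `group`;
    `monitors[s]` is the set of targets that sensor s can monitor."""
    covered = set()
    for s in group:
        covered |= monitors[s]
    return targets <= covered

def r_ldsbg(sensors, targets, monitors, k, lt):
    for _ in range(lt):                                # lt random partitions
        parts = [set() for _ in range(k)]
        for s in sensors:
            parts[random.randrange(k)].add(s)          # uniform random assignment
        if all(dominates(p, targets, monitors) for p in parts):
            return parts                               # every part dominates T
    return None                                        # "no" after lt iterations

# Toy yes instance (hypothetical adjacency): 8 sensors, 2 targets, k = 3.
monitors = {1: {"t1"}, 2: {"t1"}, 3: {"t1", "t2"}, 4: {"t1"}, 5: {"t2"},
            6: {"t1", "t2"}, 7: {"t2"}, 8: {"t2"}}
print(r_ldsbg(list(monitors), {"t1", "t2"}, monitors, k=3, lt=1000))
```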
In the following, we show how the O-LDSBG algorithm works using an example. In Step 1, a 6-color coding scheme F for D is constructed. According to Lemma 4.2, at least one 6-coloring f_0 in F colors the sensors in D properly: any two sensors in D are assigned different colors under f_0. Suppose that Step 2 is conducted under the 6-coloring f_0. We are then ready to find a proper partition of the 6 colors into k = 3 disjoint groups: for any two sensors s_i, s_j in D with i ≠ j and 1 ≤ i, j ≤ 8, if s_i and s_j belong to the same dominator in {D_1, D_2, D_3}, then the color of s_i and the color of s_j belong to the same group; otherwise (s_i and s_j belong to different dominators), the color of s_i and the color of s_j belong to different groups. Note that Step 2.2 enumerates all 3-partitions of the 6 colors. Hence, at least one 3-partition P = (P_1, P_2, P_3) of the 6 colors is a proper partition. Suppose that the execution of Step 2.2 is on this proper partition P. Subsequently, the corresponding 3-partition {S_1, S_2, S_3} of S constructed in Step 2.3 satisfies D_i ⊆ S_i for 1 ≤ i ≤ 3. Clearly, {S_1, S_2, S_3} is a solution for the given instance. Note that algorithm O-LDSBG takes exponential time and is infeasible in practice.

V. CENTRALIZED GREEDY ALGORITHM
In this section, based on the idea of the randomized algorithm, we provide a more practical centralized and randomized algorithm to obtain the optimal solution for a given instance of the k-Lifetime Dominating Set problem in bipartite graph. Assume that each sensor has location determination capabilities and can determine which targets of interest it will be capable of monitoring. The sensors send their ID numbers and the targets they monitor to the base station (BS). Then the BS executes the sensor scheduling algorithm and broadcasts the schedule information. Finally, every sensor schedules itself for active/sleep modes. The pseudocode of the greedy algorithm, C-LDSBG, is shown in Fig. 4. This algorithm is based on the idea of the random algorithm shown in Fig. 1. The random algorithm indicates that the probability of obtaining a solution is proportional to the number of iterations of the partition process. If exponentially many iterations of the partition process are permitted, a solution can be constructed deterministically. However, to generate a solution deterministically, the number of iterations must be huge, which is impractical. In the greedy algorithm, we therefore use the number of iterations lt as an input; the value of lt balances running time against solution effectiveness. Moreover, the assignment process in the random algorithm is completely random, which increases the number of iterations required to obtain a right partition. In the greedy algorithm, an objective function is introduced to evaluate the attractiveness of each dominator to unassigned sensors. The key point is that an unassigned sensor is favored to be assigned to a dominator for which it covers more sparsely covered targets. The algorithm takes as input parameters S, the set of sensors; T, the set of targets; k, the number of dominators; and lt, the number of iterations of the random partition process. The greedy algorithm returns the set of dominators (S_1, ..., S_k). At the beginning of each recursion of the algorithm, all the sensors in S are selected into C, the set of available sensors, and every dominator S_i is set to be empty. The greedy algorithm iteratively executes the process of random partition from Step 1.2.
Once a right partition is obtained, the algorithm exits the loop and returns the set of dominators in Step 1.3. In each iteration of Step 1.2, one sensor from S is assigned to some dominator; after Step 1.2 has iterated over all of S, every sensor has been assigned to some dominator Si. The assignment is carried out in decreasing order of sensor degree, which helps to reduce assignment conflicts. At each iteration of Step 1.2, a sensor s ∈ C whose degree is the highest in C is selected; this sensor is found at the beginning of the iteration. Every variable a[i], which is used to store a value representing the contribution of the unassigned sensor s to the dominator Si, is set to zero. In Step 1.2.3, for all dominators Si and the given unassigned sensor s, the values of the objective function f are calculated; the objective function f(s, Si) for dominator Si and sensor s (its exact expression appears in Fig. 4) measures how many of the targets not yet dominated by Si would be covered by s, giving more weight to sparsely covered targets. In Step 1.2.9, the sensor s is assigned to some dominator Sj with probability proportional to f(s, Sj), using the roulette-wheel method. In Step 1.3, if every dominator Si can dominate all targets, then the algorithm returns the right partition (S1, . . . , Sk). If Step 1 cannot generate a right partition, then we assume that the input instance is a no instance and return "no such partition exists". For given s and Si, f(s, Si) can be calculated in time O(|S||T|). Hence, Step 1.2.3 can be finished in time O(k|S||T|), and it follows that Step 1.2 can be finished in time O(k|S|²|T|). Therefore, the C-LDSBG algorithm requires O(lt · k · ns² · nt) time, where ns is the number of sensors and nt is the number of targets.

VI. DISTRIBUTED GREEDY ALGORITHM

In this section, we give a distributed implementation of the centralized greedy algorithm. The distributed algorithm makes some assumptions about the network: 1. Sensor synchronization has been realized by the MAC protocol; 2. Every sensor knows the parameter k; 3. Every sensor can determine the set of targets it is able to monitor and can identify them by their geographic coordinates; 4. Every sensor has a unique ID number taken from the set of integers {1, 2, . . . , ns}, where ns is the number of sensors; 5. Every sensor can communicate with its neighbors. For two sensors s1, s2 ∈ S, if s1 and s2 dominate a common target, then s1 is a neighbor of s2 and vice versa. Each sensor s stores two tables. Table 1 is a k × |N(s)| matrix and Table 2 is a 1 × |N(s)| matrix, where N(s) denotes the set of targets dominated by s. Table 1 for sensor s stores the domination information of every target t ∈ N(s): if the entry in row i ∈ {1, . . . , k} and column j ∈ {1, . . . , |N(s)|} is 1, then target j is dominated by dominator Si. All entries are initialized to 0 and are refreshed in the D-LDSBG algorithm (Fig. 6). Table 2 stores the degree d(t) of every target t monitored by s. All its entries are set to 0 and are then updated by the algorithm Initialization in Fig. 5. At Step 1.1.1 of the algorithm Initialization, sensor s receives a message from its neighbor sensor ss. The message includes the ID number of sensor ss and the targets i monitored by ss. According to the message, sensor s updates its Table 2: the entry in column i is increased by one. The D-LDSBG algorithm is initiated at time tm = 0. When tm is smaller than the ID of s, indicating that it is not yet the time slot of s for assignment, sensor s receives a message from its neighbor ss.
The message includes the ID number of sensor ss, the targets i monitored by ss, and the dominator j assigned to ss. According to the message, sensor s updates its Table 2 (the entry in column i is decreased by one) and its Table 1 (the entry in row j, column i is changed from 0 to 1). When it is the time slot of sensor s for assignment, the values of the objective function f are calculated for all dominators Si, and sensor s is assigned to the dominator Si for which f(s, Si) is the largest over all dominators (S1, . . . , Sk). Note that d(t), N(s), and N(Si) can be obtained directly from Table 2 or Table 1. Hence, it is easy to see that the algorithm requires O(|S|²|T|²) time. After the assignment phase, every sensor s is assigned to some dominator Si (1 ≤ i ≤ k) and knows the index i. Sensor s will schedule itself for the active mode from time i − 1 to time i and communicate with other sensors by broadcasting.

VII. SIMULATION

In this section, we evaluate the performance of our centralized greedy algorithm, C-LDSBG, for k-Lifetime Dominating Set in bipartite graphs. The evaluation is conducted on a PC with a 2.67 GHz Intel Core i5 CPU and 8 GB of main memory. The problem instances are created as follows. Given the number of sensors |S|, the number of targets |T|, and the number of edges |E|, a bipartite graph G(S ∪ T, E) is created, where the edges are chosen uniformly at random from all possible sensor-target pairs. A sensor s can dominate a target t if s has an edge connecting it with t. If the degree of some vertex s ∈ S is zero, then the topology G is discarded and a new topology is generated. We choose this approach for generating topologies, as opposed to placing nodes in Euclidean space and letting sensor nodes sense target nodes within a radius of their location, because the latter limits the variety of applications. In the simulation, we consider the following tunable parameters: ns, the number of sensor nodes; nt, the number of target nodes; md, the minimum degree of the sensor nodes; m, the number of edges; k, the preset number of dominators for a given problem instance; and lt, the number of iterations.

A. PERFORMANCE IN ACHIEVING OPTIMAL SOLUTION

The effectiveness of the proposed algorithm is proportional to the number of iterations. Thus, we focus on how the number of iterations needed to generate one solution is affected by different parameters, and on the average iteration counts for different network instances. For any given instance, we set the number of dominators k equal to the minimum degree of the target nodes. Because the optimum number of dominators for any problem instance cannot be larger than the minimum degree of the target nodes, the partition returned by our algorithm in the simulation is an optimal solution for the given instance. In the first experiment, we provide an intuition of the iteration counts needed to obtain an optimal solution in 50 network instances. The number of sensor nodes ns is 100, the number of target nodes nt is 10, and the number of edges m is set to 250. Fig. 7 plots the iteration counts when k does not exceed 22. As can be observed, the iteration counts vary between 1 and 7; in most cases, our algorithm returns the optimal partition after a single iteration. When k is 23, as shown in Fig. 8, the iteration counts increase: in most instances more than one iteration is needed, the average is approximately 8, and in the worst case 49 iterations are required.
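The following Python sketch reproduces the instance-generation procedure just described (uniform random edges and rejection of topologies with an isolated sensor); the parameter names ns, nt, and m follow the text, while everything else is an illustrative assumption.

import random

def random_bipartite_instance(ns, nt, m, seed=None):
    rng = random.Random(seed)
    sensors = [f"s{i}" for i in range(ns)]
    targets = [f"t{j}" for j in range(nt)]
    all_pairs = [(s, t) for s in sensors for t in targets]
    while True:
        edges = rng.sample(all_pairs, m)                  # m distinct sensor-target edges, uniform
        coverage = {s: set() for s in sensors}
        for s, t in edges:
            coverage[s].add(t)
        if all(coverage[s] for s in sensors):             # discard topologies with a degree-0 sensor
            return sensors, set(targets), coverage

# Setup matching the first experiment: ns = 100, nt = 10, m = 250;
# k is then set to the minimum degree of the target nodes.
sensors, targets, coverage = random_bipartite_instance(100, 10, 250, seed=0)
k = min(sum(t in coverage[s] for s in sensors) for t in targets)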
Fig. 9 shows the distribution of iteration counts when k is 24. It can be seen that the iteration counts increase dramatically: the average is approximately 450, and 3160 iterations are required to obtain the optimal partition for the hardest instance. Fig. 10 shows the impact of the iteration limit lt on the probability of successfully obtaining an optimal solution. For every value of lt, we repeat the experiment 20 times and count how often an optimal solution is returned. As can be observed, the probability of generating an optimal partition increases as the number of iterations of the partition process increases. When the iteration limit is 1, three instances return optimal solutions, and the average probability of obtaining a solution is 0.15. When the iteration limit is 50, only one instance fails to return a solution. In the following set of experiments, we study several factors that determine the minimum number of iterations needed to guarantee an optimal solution for a given instance. For each parameter setup, we generate ten network topologies and report the average results. Fig. 11 plots the iteration counts when the number of dominators k is varied. As we can see, the iteration counts increase slowly while k is no more than 22, but once k exceeds 22 the number of iterations needed to generate an optimal solution increases rapidly; the required number of iterations grows with k. This is expected: as k increases, the algorithm has less partition redundancy that can be exploited to obtain a right partition for the given instance. In Fig. 12, we present the average iteration counts when the number of sensors varies between 90 and 110 in increments of 5. As shown in Fig. 12, the iteration counts decrease as the number of sensor nodes increases. With an increase in the number of sensor nodes, the average degree of the sensor nodes decreases; thus, the probability of an assignment conflict decreases, which makes it easier for the random partition procedure to obtain a right partition. In Fig. 13, we measure the iteration counts when the number of targets varies between 8 and 12. The iteration counts increase as the number of targets increases. With an increase in the number of target nodes, more sensor nodes are required in each dominator to realize full domination; thus, the probability of an assignment conflict increases, which makes it more difficult for the random partition process to obtain a right partition. Fig. 14 shows how the iteration counts vary with the number of edges in the network instances. As we can see, the iteration counts decrease as the number of edges increases. The reason is that, with more edges, the algorithm has more partition redundancy that can be exploited to obtain a right partition. Fig. 15 plots the iteration counts when the minimum degree of the sensors is varied. As we can see, the iteration counts increase with the minimum degree of the sensors. This is to be expected: as the minimum degree of the sensors increases, the probability of an assignment conflict increases, and consequently it is more difficult for the random partition procedure to obtain a right partition.

B. PERFORMANCE COMPARED WITH R-LDSBG

The performance of the centralized greedy algorithm C-LDSBG in obtaining the problem solution is compared with that of the randomized algorithm R-LDSBG.
Because the iteration limit lt is decisive for the effectiveness of both C-LDSBG and R-LDSBG in obtaining a solution, we use the number of iterations to evaluate their performance. We provide an overview of the iterations required to obtain the problem solution in 200 network instances. The number of sensors is set to 100, the number of targets to 10, and the number of edges to 250. The minimum degree of the target nodes mt is set to 15, and the number of dominators k is set to mt − 5. The upper bound on the number of iterations is set to 5000. For every instance, algorithm C-LDSBG returns the problem solution successfully after a single iteration. The R-LDSBG algorithm successfully returns problem solutions for 53 instances but fails for the other 147 instances; for the 53 successful instances, the average number of iterations required to obtain the solution is 2774.

C. BENCHMARKING THE HEURISTIC ALGORITHM MCCH

Our algorithm, C-LDSBG, also serves to benchmark the heuristic algorithm MCCH by Slijepcevic and Potkonjak [16], indicating the quality of its solutions. The MCCH approach favors adding to the current dominator the sensor that dominates the largest number of sparsely dominated targets. The heuristic algorithm MCCH is effective and easy to implement. It has time complexity O(n²), where n is the number of sensors in the network; compared to other heuristic algorithms for k-Lifetime Dominating Set in bipartite graphs, MCCH has a lower execution time. For a given problem instance, we compare the number of dominators produced by the MCCH algorithm with the maximum number of dominators (the optimal solution) produced by our C-LDSBG algorithm. We provide an intuition of the quality of the solutions produced by the MCCH algorithm in 250 network instances. For a given instance, the parameter k is set to the minimum degree of the target nodes; hence, the proposed C-LDSBG algorithm returns the optimal solution for any given instance. The results are shown in Fig. 16. For almost 30 percent of the 250 problem instances, the heuristic algorithm MCCH returns optimal solutions. In the worst case, the number of dominators computed by the heuristic algorithm MCCH is four less than the maximum number of dominators.

VIII. CONCLUSION

We concentrate on the design of efficient algorithms that deterministically generate optimal solutions for the k-Lifetime Dominating Set problem in bipartite graphs, both in theory and in practice. We provide a simple algorithm that just partitions the sensors into groups randomly, and we prove, using color coding, that this random partition can be made to guarantee a solution. However, the resulting algorithm takes exponential time, which is impractical. Based on the idea of random partition, we then present a more practical centralized greedy algorithm, which keeps a balance between running time and solution effectiveness. We further present a distributed version of the centralized algorithm in which each sensor communicates only with its neighbors; to the best of our knowledge, this is the first distributed algorithm for the considered problem. Simulation findings demonstrate that the centralized algorithm can efficiently achieve optimal solutions for almost all given instances when the redundancy degree is no less than 10. Finally, our algorithms also serve to benchmark heuristic algorithms. However, the proposed centralized algorithm is impractical when the partition redundancy is below a certain limit.
One challenge is to efficiently achieve optimal solutions for problem instances when partition redundancy is below a certain limit. Another interesting direction for further work is to conduct research on the application of evolutionary computation algorithms [27], [28] to the considered problem.
9,508.8
2022-01-01T00:00:00.000
[ "Computer Science" ]
LED-based multi-wavelength phase imaging interference microscopy LED-based multi-wavelength phase imaging interference microscopy combines phase-shifting interferometry with multi-wavelength optical phase unwrapping. This technique consists of a Michelson-type interferometer illuminated with a LED. The reference mirror is dithered for obtaining interference images at four phase quadratures, which are then combined to calculate the phase of the object surface. The 2π ambiguities are removed by repeating the experiment using two or more LEDs at different wavelengths, which yields phase images of effective wavelength much longer than the original. The resulting image is a profile of the object surface with a height resolution of several nanometers and range of several microns. The interferographic images using broadband sources are significantly less affected by coherent noise. © 2007 Optical Society of America OCIS codes: (120.3180) Interferometry; (120.5050) Phase measurement; (180.3170) Interference microscopy References and links 1. E. D. Barone-Nugent, A. Barty, K. A. Nugent, “Quantitative phase amplitude microscopy I: optical microscopy,” J. Microscopy 3, 194-203 (2002). 2. C. J. Mann, L. Yu, C. Lo and M. K. Kim, “High-resolution quantitative phase-contrast microscopy by digital holography,” Opt. Express 13, 8693-8698 (2005). 3. P. Marquet, B. Rappaz, T. Colomb, F. Charriere, J. Kuhn, Y. Emery, E. Cuche, C. Depeursinge, P. Magistretti, “ Digital holographic microscopy, a new optical imaging technique to investigate cellular dynamics,” in Biophotonics and New Therapy Frontiers, R. Grzymala and O. Haeberle, eds.,Proc. SPIE 6191, 61910U-61910U5 (2006). 4. Y. Park, G. Popescu, K. Badizadegan, R. R. Dasari and M. S. Feld, “Diffraction phase and fluorescence microscopy,” Opt. Express 14, 8263-8268 (2006). 5. M. Servin, J.L. Marroquin, D. Malacara and F.J. Cuevas, “Phase unwrapping with a regularized phasetracking system,” Appl. Opt. 37, 1917-1923 (1998). 6. Yeou-Yen Cheng and James C. Wyant, “Multiple-wavelength phase-shifting interferometry,” Appl. Opt. 24, 804-807 (1985). 7. C. Wagner, W. Osten and S. Seebacher, “Direct shape measurement by digital wavefront reconstruction and multiwavelength contouring,” Opt. Eng. 39, 79-85, (2000). 8. J. Gass, A. Dakoff, M.K. Kim, “Phase imaging without 2π ambiguity by multiwavelength digital holography,” Opt. Lett., 28, 1141-1143, (2003). 9. D. Parshall and M. K. Kim, “Digital holographic microscopy with dual wavelength phase unwrapping,” App. Opt., 45, 451-459 (2006). 10. M. Tziraki, R. Jones, P. M. W. French, M. R. Melloch and D. D. Nolte, “Photorefractive holography for imaging through turbid media using low coherent light,” Appl. Phy. B, 70, 151-154 (2000). 11. S. Dilhaire, S. Grauby, S. Jorez, L. D. P. Lopez, J. Rampnoux and W. Claeys, “ Surface displacement imaging by interferometry with a light emitting diode,” Appl. Opt. 41, 4996-5001 (2002). 12. L. Repetto, E. Piano and C. Pontiggia, “Lensless digital holographic microscope with light-emitting diode illumination,” Opt. Lett. 29, 1132-1134 (2004). 13. A. Fercher, W. Drexler, C. K. Hitzenberger and T. Lasser, “Optical coherence tomography-principles and applications,” Rep. Prog. Phys. 66, 239-303 (2003). 14. Luxeon Emitter and Star sample information AB11, 2 (Feb 2002). #83569 $15.00 USD Received 30 May 2007; revised 10 Jul 2007; accepted 10 Jul 2007; published 12 Jul 2007 (C) 2007 OSA 23 July 2007 / Vol. 15, No. 
Introduction

Most biological specimens are only weakly absorbing, lacking sufficient contrast for standard bright-field microscopy. Several widely used phase imaging techniques are Zernike phase contrast (ZPC) microscopy, differential interference contrast (DIC) microscopy, and Hoffman modulation contrast (HMC) microscopy. The Zernike phase contrast technique converts changes in phase into corresponding changes in amplitude by using a phase plate. ZPC uses common-path interferometry, where a partially coherent light beam passes through the specimen. The light that is diffracted due to phase variations of the specimen and the light that passes without diffraction are then focused together to form a phase contrast image of the specimen. The DIC technique converts changes in the refractive index of the sample into amplitude changes by splitting a polarized beam into two perpendicularly polarized beams less than a micrometer apart. As the index of refraction changes, the phase of the two beams changes after passing through the sample. The two beams are then recombined using a Nomarski prism, and the recombined beam is sent through an analyzer to form an image of the specimen. HMC microscopy converts optical phase gradients into amplitude using a spatial filter (or "modulator"), which has three light-passing zones, a slit plate, and a circular polarizer. Different specimen thicknesses deflect the light into different modulator zones, thus controlling the contrast of the image. These techniques have well-known drawbacks such as the halo effect (ZPC), the shadow effect (DIC), and non-linear phase-to-amplitude conversion. Because of this non-linearity, phase imaging using these techniques is only qualitative. Quantitative phase imaging is important because it provides information on the morphology and refractive indices of specimens. Recently, a number of quantitative phase imaging techniques have been developed; Barone-Nugent and co-workers have demonstrated a quantitative phase imaging microscope that separates phase information from amplitude information and produces pure phase images [1], several digital holography techniques have been used for quantitative phase imaging [2,3], and diffraction phase and fluorescence (DPF) microscopy has been used for simultaneous quantitative phase imaging and epi-fluorescence imaging of living cells [4].

Phase unwrapping

When the optical depth of an object is greater than the wavelength, the phase image contains 2π ambiguities. Therefore, phase data have to be unwrapped before one can obtain an unambiguous optical thickness profile. Software algorithms that exist for detecting and removing 2π ambiguities are largely computation-intensive and prone to errors when the phase profile is noisy [5]. While the principle of multi-wavelength phase unwrapping has been known in interferometry [6,7], known applications have been confined to optical profilers with raster-scanned pointwise interferometry. Other than our recent digital holography experiments [8,9], multi-wavelength phase unwrapping has not yet been applied to full-frame phase images. The basic principle of two-wavelength optical phase unwrapping in the context of digital holography was presented by J. Gass, A. Dakoff, and M. K. Kim [8]. In this paper, we extend the technique of multi-wavelength optical phase unwrapping to phase-shifting interference microscopy using light emitting diodes at three different wavelengths.
Light emitting diodes (LEDs) have been used as interferometric light sources in order to reduce the speckle noise inherent to lasers [10-12]. Interference of coherent waves produces speckle noise, which limits the information in the phase map. Since an LED's coherence length is in the micron range, speckle noise is greatly reduced. LEDs also cost much less than lasers, are easy to use and replace, and can reduce the overall apparatus dimensions. The LEDs used in this experiment are Luxeon™ Emitter diodes from Lumileds Lighting LLC. All of the LEDs used herein have a Lambertian (high dome) radiation pattern, and their spectra are shown in Fig. 1. These spectra were taken with an Ocean Optics SD-1000 fiber-optic spectrometer. The peak wavelengths, luminous flux, and calculated and measured coherence lengths for the red, amber, and green LEDs used in this experiment are shown in Table 1. The calculated coherence length of a light source is given by l_c = (2 ln 2/π)(λ̄²/Δλ), where λ̄ is the mean wavelength and Δλ is the full width at half maximum (FWHM) of the Gaussian spectrum [13]. The coherence length was directly measured here by counting the number of fringes in the interference pattern of a tilted mirror object.

Methodology

The experimental setup is shown in Fig. 2. A set of LEDs illuminates a Michelson interferometer. In a fairly standard arrangement, a combination of a polarizing beam splitter and two quarter-wave plates in the object and reference arms allows all of the reflected light from the two arms to reach the camera. A polarizer-analyzer pair also allows continuous variation of the relative intensity between the two arms. In order to acquire images with high resolution, two 20X microscope objectives are placed in front of the object and the reference mirror. Images acquired by the CCD are sent to an image acquisition board (National Instruments IMAQ PCI-1407) installed in the computer.

Multi-wavelength optical phase unwrapping

When an object is imaged using a wavelength smaller than the object's height, the phase image contains 2π ambiguities, as shown in Fig. 3. From Fig. 3, it is clear that there are many distance values for a given phase value. In order to obtain an unambiguous optical thickness profile, we need to have only one z distance for a given phase. These 2π ambiguities are eliminated by using the multi-wavelength optical phase unwrapping method. For the m-th wavelength λ_m, the surface profile Z_m of an object is related to the phase difference φ_m by Z_m = λ_m φ_m/(2π). It is apparent that the unambiguous range of Z can be increased by using a longer λ. Consider two single-wavelength phase maps φ_1 and φ_2. The beat wavelength Λ_12 for λ_1 and λ_2 is given by Λ_12 = λ_1 λ_2/|λ_1 − λ_2|. Note, however, that the phase noise in each single-wavelength phase map is magnified by the same factor as the magnification of the wavelengths. In the two-wavelength optical phase unwrapping method introduced by J. Gass et al. [8], the phase noise is reduced by using the following steps. 1. The surface profile Z_12 is divided into integer multiples of a single wavelength, say λ_1. 2. The result is added to the single-wavelength surface profile Z_1. This significantly reduces the phase noise of the coarse map; however, at the boundaries of the wavelength intervals λ_1, the noise of the single-wavelength phase map appears as spikes. 3. Remove the spikes by comparing the result with the coarse-map surface profile Z_12: if the difference is more than half of λ_1, adding or subtracting one λ_1, depending on the sign of the difference, removes the spikes. The resultant 'fine map' has a noise level equal to that of the single-wavelength surface profile.
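A minimal NumPy sketch of the three steps above follows, assuming the height convention Z_m = λ_m φ_m/(2π) used here and phase maps supplied as arrays with values in [0, 2π); the function and variable names are illustrative.

import numpy as np

def two_wavelength_unwrap(phi1, phi2, lam1, lam2):
    beat = lam1 * lam2 / abs(lam1 - lam2)             # beat wavelength Lambda_12
    phi12 = np.mod(phi1 - phi2, 2 * np.pi)            # coarse phase: add 2*pi wherever the difference < 0
    z12 = beat * phi12 / (2 * np.pi)                  # coarse map: long range, amplified noise
    z1 = lam1 * phi1 / (2 * np.pi)                    # single-wavelength surface profile
    fine = np.floor(z12 / lam1) * lam1 + z1           # steps 1-2: paste fine detail onto the coarse map
    # step 3: remove +/- lam1 spikes at the wavelength-interval boundaries
    fine = np.where(fine - z12 > lam1 / 2, fine - lam1, fine)
    fine = np.where(z12 - fine > lam1 / 2, fine + lam1, fine)
    return z12, fine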
If a single-wavelength phase map φ_m contains a phase noise of 2πε_m, the two-wavelength phase unwrapping method works properly only when ε_m is below λ_1/(4Λ_12); for λ_1 = 653.83 nm and λ_2 = 550.18 nm, the maximum noise limit is ε_m ≈ 4.7%. Using a larger beat wavelength therefore reduces the maximum noise limit.

Three-wavelength optical phase unwrapping

The advantage of the three-wavelength phase unwrapping method is that the beat wavelength can be increased without reducing the maximum noise limit. Suppose the three chosen wavelengths are λ_1 = 653.83 nm, λ_2 = 603.48 nm, and λ_3 = 550.18 nm. The first two wavelengths give a beat wavelength of Λ_12 = 7.84 μm. Instead of using the surface profile Z_12 of Λ_12 = 7.84 μm, which has a high noise, an identical surface profile can be produced by using the surface profiles Z_13 and Z_23, with beat wavelengths of Λ_13 = 3.47 μm and Λ_23 = 6.2 μm, respectively. The resultant "coarse map of coarse maps" φ_13-23, with surface profile Z_13-23, has the same beat wavelength Λ_13-23 = Λ_13 Λ_23/|Λ_13 − Λ_23| = 7.84 μm. The following steps are followed to reduce the noise in Z_13-23. 1. The surface profile Z_13-23 is divided into integer multiples of one of the beat wavelengths, say Λ_13. 2. The result is added to the surface profile Z_13. 3. The resultant map is then compared with Z_13-23: if the difference is more than half of Λ_13, one Λ_13 is added or subtracted depending on the sign of the difference. This approach reduces the phase noise significantly; any remaining spikes are due to the noise in φ_13. We call this result the "intermediate fine map" Z'_13-23. To reduce the noise in the intermediate fine map, we repeat the procedure by dividing Z'_13-23 into integer multiples of λ_1 and adding the result to Z_1. The resultant map is then compared with Z'_13-23: if the difference is more than half of λ_1, one λ_1 is added or subtracted depending on the sign of the difference. The noise in the final map is equal to the noise in the single-wavelength phase map Z_1. The three-wavelength phase unwrapping method works when the maximum noise level ε_m in the single-wavelength phase map is below the smaller of Λ_13/(4Λ_12) ≈ 11.1% and λ_1/(4Λ_13) ≈ 4.7%. Therefore, the three-wavelength phase unwrapping method increases the beat wavelength without magnifying the noise in the final phase map.

Results for two-wavelength optical phase unwrapping

The object in Fig. 4 is a micro-electrode array biosensor. It consists of 16 gold electrodes on a Pyrex glass substrate; the center is a 125 μm diameter circle with an approximate thickness of 2 μm. The center of the device was imaged, and the experimental results for two-wavelength optical phase unwrapping are shown in Fig. 4. Red (λ_1 = 653.83 nm) and green (λ_2 = 550.18 nm) LEDs are used as the two wavelengths, giving a beat wavelength Λ_12 = 3.47 μm. Images are of a 184 μm × 184 μm area. Fig. 4(a) shows a single-wavelength phase map φ_1 with λ_1 = 653.83 nm. The coarse map φ_12 with Λ_12 = 3.47 μm is shown in Fig. 4(b), and the final phase map with reduced noise is shown in Fig. 4(c).
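As a quick consistency check of the beat wavelengths quoted above, the pairwise values can be recomputed directly from the three peak wavelengths; only the wavelengths themselves are taken from the text, everything else in this small Python calculation is illustrative.

lam_red, lam_amber, lam_green = 653.83e-3, 603.48e-3, 550.18e-3   # peak wavelengths in micrometers

def beat(a, b):
    return a * b / abs(a - b)

L13 = beat(lam_red, lam_green)      # ~3.47 um  (red-green coarse map)
L23 = beat(lam_amber, lam_green)    # ~6.2  um
L12 = beat(lam_red, lam_amber)      # ~7.84 um
L13_23 = beat(L13, L23)             # ~7.84 um, the three-wavelength unambiguous range
print(round(L13, 2), round(L23, 2), round(L12, 2), round(L13_23, 2))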
Figure 5 shows cross sections of the phase maps along the lines indicated in Fig. 4, together with the phase noise in the chosen regions. Figure 5(a) is a cross section of the single-wavelength phase map with λ_1 = 653.83 nm, and Fig. 5(b) is a cross section of the coarse map with Λ_12 = 3.47 μm. A cross section of the fine map with reduced noise is shown in Fig. 5(c). For maps (a), (b), and (c), the vertical axis spans 4 μm. The root-mean-square (rms) noise of the coarse map is 43.27 nm, as shown in Fig. 5(d). Figure 5(e) shows the reduced noise in the fine phase map. Since the center of the MEMS device has a curvature, a paraboloid is fitted to the data; the red dotted line is the best-fit parabolic curve. After subtracting the curvature from the data, Fig. 5(f) shows the corrected phase noise of 10.29 nm.

Results for three-wavelength optical phase unwrapping

The experimental results for three-wavelength optical phase unwrapping are shown in Fig. 6. Red (λ_1 = 653.83 nm), amber (λ_2 = 603.48 nm), and green (λ_3 = 550.18 nm) LEDs are used as the three wavelengths, and the beat wavelength is Λ = 7.84 μm. The object is the center of the same micro-electrode array biosensor used in Section 4.1. Images are of a 184 μm × 184 μm area. The comparison of the two-wavelength phase unwrapping method with the three-wavelength phase unwrapping method shows that the three-wavelength method increases the axial range of the object without increasing the phase noise. In our results, the two-wavelength phase unwrapping method produced a 3.47 μm unambiguous range with 10.29 nm phase noise, while the three-wavelength phase unwrapping method produced a much larger 7.84 μm unambiguous range with a smaller 4.78 nm phase noise. The multi-wavelength optical phase unwrapping method can also be used for biological cells. Figure 8 shows a single-wavelength phase map, a coarse map, and a fine map of onion cells using the red (653.83 nm), amber (603.48 nm), and green (550.18 nm) wavelengths; the beat wavelength is 7.84 μm and the image size is 184 μm × 184 μm. The final fine map clearly shows the cell walls by eliminating the 2π ambiguities that would exist in a single-wavelength phase image.

Conclusions

In conclusion, the effectiveness of multi-wavelength phase imaging for unwrapping phase images with 2π ambiguities has been demonstrated. The three-wavelength optical phase unwrapping method can be used to extend the axial range of the object being imaged without increasing the noise in the phase map. Broadband light sources provide considerably less coherent noise and have added advantages such as lower cost and reduced apparatus dimensions. The multi-wavelength optical phase unwrapping method presented in this paper can be used to check the quality of surface profiles of microscopic objects such as sensors, integrated circuits, and many MEMS devices. It can also be used to determine optical thickness profiles of various optical components and biological samples.

Fig. 3. Phase vs. distance: 2π ambiguities occur when the distance is a multiple of the wavelength.

Fig. 4. Results of two-wavelength optical phase unwrapping: (a) single-wavelength phase map, (b) two-wavelength coarse map, (c) two-wavelength fine map with reduced noise.
Fig. 5. Surface profiles for two-wavelength phase unwrapping: (a) single-wavelength surface profile, (b) surface profile of the coarse map, (c) surface profile of the final unwrapped phase map with reduced noise, (d) noise of the coarse map in the region between the two markers in plot (b); the rms noise is 43.27 nm, (e) noise of the final unwrapped phase map in the area shown in (c); the red dotted line is the best-fit parabolic curvature and the black solid line is the data, (f) corrected phase noise of the unwrapped phase map, after subtracting the curvature of the object; the rms noise is 10.29 nm.

Figure 6(a) is the single-wavelength phase map with λ_1 = 653.83 nm. The three-wavelength coarse map is shown in Fig. 6(b), with an equivalent wavelength Λ_12 = 7.84 μm. The final fine map with reduced noise is shown in Fig. 6(c).

Fig. 6. Results of three-wavelength optical phase unwrapping: (a) single-wavelength phase map, (b) three-wavelength coarse map, (c) three-wavelength fine map with reduced noise.

A cross section of each phase map is taken along the lines shown in Fig. 6. These cross sections and the phase noise of the coarse and fine maps are shown in Fig. 7. Figures 7(a)-7(c) show the surface profiles of the single-wavelength phase map, the coarse map, and the fine map, respectively; the vertical axis for each map spans 11 μm. Figure 7(d) shows the 105.79 nm rms noise of the coarse map. Because of the curvature of the object surface, a paraboloid is fitted to the final fine-map data, as shown in Fig. 7(e); the black line is the data and the red dotted line shows the best-fit parabolic curve. The corrected phase noise in the fine map is 4.78 nm, which is shown in Fig. 7(f).

Fig. 7. Surface profiles for three-wavelength phase unwrapping: (a) single-wavelength surface profile, (b) surface profile of the coarse map, (c) surface profile of the final unwrapped phase map with reduced noise, (d) noise of the coarse map in the area shown in (b); the rms noise is 105.79 nm, (e) noise of the final unwrapped phase map in the area shown in (c); the red dotted line is the best-fit parabolic curvature and the black solid line is the data, (f) final noise of the unwrapped phase map, after subtracting the curvature of the object; the rms noise is 4.78 nm.

Fig. 8. Results of three-wavelength optical phase unwrapping of onion cells: (a) single-wavelength phase map, (b) three-wavelength coarse map, (c) three-wavelength fine map with reduced noise.

Table 1. Characteristics of the LEDs [14]. Luminous flux values are at 350 mA and a junction temperature T_J = 25 °C.

Λ_12 can be increased by choosing closer values of λ_1 and λ_2. The phase map for Λ_12 is obtained by subtracting one single-wavelength phase map from the other and then adding 2π whenever the resultant value is less than zero. This phase map is called the "coarse map" φ_12. The surface profile for the coarse map φ_12 is given by Z_12 = Λ_12 φ_12/(2π).
4,473
2007-07-23T00:00:00.000
[ "Engineering", "Physics" ]
Stringy Effects and the Role of the Singularity in Holographic Complexity

There has been considerable recent interest in holographic complexity. The two leading conjectures on this subject hold that the quantum complexity of the boundary thermofield double state should be dual to either the volume of the Einstein-Rosen bridge connecting the two sides (CV conjecture) or to the action of the Wheeler-de Witt patch of the bulk spacetime (CA conjecture). Although these conjectures are frequently studied in the context of pure Einstein gravity, from the perspective of string theory it is also natural to consider models of gravity in which general relativity is perturbed by higher powers of the Riemann tensor, suppressed by powers of the string length; in a holographic context, these corrections are dual to corrections in inverse powers of the 't Hooft coupling. In this paper, we investigate the CV and CA conjectures in two stringy models of higher-curvature gravity. We find that the CV complexification rate remains well-behaved, but conversely that these corrections induce new divergences in the CA complexification rate that are absent in pure Einstein gravity. These divergences are intrinsically linked to the singularity, and appear to be generic in higher curvature theories. To the best of our knowledge, infinities originating at the singularity have not yet been observed elsewhere in the literature. We argue that these divergences imply that, in the CA picture, the complexification rate of the boundary theory is a nonanalytic function of the 't Hooft coupling.

Introduction

The study of quantum gravity has recently been reinvigorated by the introduction of a variety of ideas from quantum information theory, amongst the most exciting of which is the notion of quantum complexity [1]. Complexity is an extremely natural quantity in quantum computation, and intuitively describes how difficult it is to prepare some quantum state [2]. More precisely, fix some set {g_i} of "simple" operators, known as gates, on a Hilbert space H, and two states |ψ⟩ and |φ⟩. The "relative complexity" of |ψ⟩ and |φ⟩ is defined as the smallest number of gates by which we can go from |ψ⟩ to a state sufficiently close to |φ⟩, i.e. within a small distance ε of |φ⟩. We will frequently speak directly of the complexity of |ψ⟩, by which we simply mean its complexity relative to some fixed reference state |ψ_0⟩. For purposes of holography, it is most interesting to consider the complexity of a state |ψ(t)⟩ as it undergoes unitary time evolution. For a general quantum system with continuous time, this problem is too difficult to be tractable. However, for a particularly simple class of systems known as random quantum circuits, we can study the time evolution of complexity, also known as complexification, rather explicitly; see e.g. [3-5] for reviews. Consider a set of N qubits with state |ψ(t)⟩, with time evolution occurring in discrete steps. At each time step, we model time evolution by acting on the qubits with n < N randomly chosen gates, each of which acts on only two qubits at a time. We therefore have |ψ(t)⟩ = g_n g_{n−1} · · · g_1 |ψ(t − 1)⟩. (1.1) We are interested in computing the complexity C(t) of |ψ(t)⟩ relative to the initial state |ψ(0)⟩, which we take to be "generic" in the Hilbert space. For large N, it turns out that the details of C(t) are essentially independent of both |ψ(0)⟩ and the set {g_i} of gates. The complexity of any state relative to itself is zero, so we have C(t = 0) = 0.
For generic circuits, after a short period of transient initial behavior, the complexity C(t) grows linearly in time until, at times exponential in N, it saturates at a maximum C_max. It stays near C_max until times doubly exponential in N, at which point C(t) begins to decrease linearly with time. This continues until C(t) again reaches 0, and then the entire process begins anew. The behavior of C(t) up to the saturation time is sketched in Figure 1. Importantly, both the saturation of C(t) at C_max and its recurrences in doubly exponential time are consequences of having a finite-dimensional Hilbert space, and thus will not appear in the limit of strictly infinite N. Because in a holographic context we will always take N to be very large, we will primarily be interested in the period of linear growth. There is a simple heuristic to determine the rate of complexification during this regime: the rate is simply the number of qubits performing computation times the average energy per qubit. More simply, we have [1] Ċ ∼ ST, (1.2) where S is the entropy of the system and T is its temperature. It was argued in [10] that the complexification rate of a system of energy E is always bounded from above by Ċ ≤ 2E/(πℏ). (1.3) This "Lloyd bound" is conjectured to apply to any quantum system. However, more recent developments have cast the exact prefactor of this bound into question (see e.g. [11-13]), and we will treat it simply as the statement that quantum complexity should always develop at a finite rate. Complexity in the context of AdS/CFT was first discussed in [1]. As is well-known, AdS/CFT relates the dynamics of a (d − 1)-dimensional conformal field theory (CFT) to the dynamics of type IIB string theory in d-dimensional anti-de Sitter (AdS) space [14-16]. In this vein, the goal of holographic complexity is to relate the complexity C(t) of the boundary state to some aspect of the geometry of its bulk dual. For systems as complicated as CFTs, especially in the strong-coupling regime where holographic behavior occurs, there is not yet a suitable definition of complexity, so we will frequently take Eq. 1.2 as the definition of complexity. We will restrict ourselves to studying the thermofield double state of the boundary CFT, defined as |TFD⟩ = Z(β)^{−1/2} Σ_n e^{−βE_n/2} |n⟩_L |n⟩_R, (1.4) where n runs over the energy eigenstates of the boundary theory. For this state, the bulk geometry takes the form of a two-sided eternal AdS-Schwarzschild black hole [17]. In d dimensions, the AdS-Schwarzschild metric is given by ds² = −f(r) dt² + f(r)^{−1} dr² + r² dΩ²_{d−2}, (1.5) where the d-dimensional emblackening factor takes the standard form f(r) = 1 + r²/r²_AdS − ω_d M/r^{d−3}, (1.6) where M is the mass of the black hole, set by the temperature β of the boundary theory, ω_d is a dimension-dependent constant proportional to Newton's constant, and r_AdS is the AdS radius, given in terms of the cosmological constant Λ by r²_AdS = −(d − 1)(d − 2)/(2Λ). This geometry has an event horizon where f vanishes and a singularity at r = 0; its Penrose diagram is shown in Figure 2. There are two leading proposals for how to relate the complexity of a boundary state to the geometry of its bulk: the complexity-volume (CV) conjecture [1,18,19] and the complexity-action (CA) conjecture [20,21]. We will spend the remainder of the paper discussing these two conjectures, so before we move on we briefly introduce them; a more detailed review is given in Appendix A, where we explicitly work out the calculations associated to these two frameworks in the case of pure Einstein gravity. The first conjecture we will consider is the complexity-volume (CV) conjecture, first introduced in [1].
According to the CV conjecture, the complexity C of the boundary state is dual to the volume V of the Einstein-Rosen bridge (ERB) in the TFD geometry of the bulk dual, as shown in Figure 3. The volume V (t) of the ERB is defined as the volume of a maximal spatial hypersurface at time t. More precisely, we have (1.10) At late times, the ERB is approximately a constant-r hypersurface, whose position r(t) varies with time. In this way, the time evolution of the location and size of the ERB causes the time evolution of complexity. Once the late-time behavior takes over, we have thaṫ for any fixed time t. At very late times, r(t) saturates to a fixed position r * , which is defined to extremize the "volume functional" V(r) given by Figure 3: The basic setup of the CV conjecture. At very late times, the complexification rateĊ V is proportional to the volume of the late-time maximal hypersurface at r = r * , drawn in red. Thus, for late times the volume growth rate is given bẏ (1.14) The other framework for holographic complexity that we will discuss is the complexity = action (CA) conjecture, first introduced in [20]. The CA conjecture holds that the complexity C(t) of the boundary state is proportional to the action S WdW of the "Wheelerde Witt (WdW) patch," defined as the intersection of the forwards and backwards light cones of two boundary times t L and t R ; we have sketched WdW patches anchored at successive boundary times t L and t L + δt in Figure 4. From the figure, we can see how complexity evolves in time in this context: as we evolve the boundary time forwards, the WdW patch moves, and therefore its associated action changes. We therefore havė The factor of π is chosen such to match the prefactor in Eq. 1.3. In principle, evaluatingĊ A is quite involved. In general, in addition to the familiar bulk action of general relativity, Figure 4: Two WdW patches in an AdS-Schwarzschild spacetime. As we evolve the boundary time t L , the WdW patch moves upwards, which determines the time dependence of complexity. we have null boundaries, corner terms, and other complications. These facets add up to give a highly nontrivial relativity problem. A simpler calculation was suggested in [20,21], where it was argued that, at late times, only the unshaded region in Figure 5 contributes to the time derivative; for the remainder of the paper, we will refer to this region as the δWdW region, and correspondingly denote its action by S δWdW . In [22,23], the full calculation, including a careful treatment of the subtle issue of null boundaries, was performed, and was shown to match the much simpler calculation involving the δWdW patch. We will therefore focus on the δWdW region throughout the paper. From Figure 5, we can straightforwardly read off that 2 where we have instituted a UV cutoff at r = ε to avoid the awkward situation of evaluating a boundary term at the singularity 3 . We will use this UV cutoff throughout the paper. The CV and CA conjectures were originally introduced in the context of pure Einstein gravity. However, string theory naturally comes equipped with an infinite tower of corrections to GR, known as α corrections (see e.g. [30][31][32] for historical references, and [33] for a more modern overview). These corrections are suppressed by powers of the string length squared, and take the form of higher curvature theories. 
In particular, we supplement the 2 It is argued in [20,21] that all corner terms in the δWdW region cancel by time translation invariance, leaving us with only these contributions. 3 For more on the role of a cutoff in CA, see e.g. [24][25][26]. Such cutoffs are quite relevant to the study of holographic complexity in Jackiw-Teitelboim gravity [27][28][29]. Figure 5: The δWdW region, shown unshaded. At late times, it was argued in [20,21] and verified in [22,23] that this region controlsĊ V . Einstein-Hilbert action in Eq. 1.16 with a series of corrections of the form where L n is in general an n-th order polynomial in the Riemann tensor and possibly the gauge fields. For the TFD geometry, which corresponds to a neutral black hole, the gauge fields will not contribute, and we can think of the α corrections as being built simply out of the Riemann tensor. These corrections come from loop diagrams on the string worldsheet, and are omnipresent in modern string theory. For instance, α corrections are absolutely crucial to the modern understanding of the entropies of BPS black holes, which ranks amongst the foremost successes of string theory; see e.g. [34,35] for reviews. At this point it is useful to emphasize one crucial difference between the CV and CA conjectures. In the CV approach, the essential physics, although occurring inside the horizon, remains well separated from the singularity, as can be seen from Figure 3. For large black holes, the critical radius r * remains near the horizon, and so in a low-curvature region where the gravity approximation can still be trusted. For CA, on the other hand, the singularity is directly involved in the physics. The WdW patch, at least for a neutral black hole, reaches down to the singularity, as can be easily seen in Figures 4 and 5. This is somewhat concerning: as we approach the singularity, the curvature of spacetime becomes strong, and we expect gravitational effective field theory to break down. Nevertheless, it was observed in [20,21] that the contributions to the pure GR action from the singularity are well-behaved, despite naive expectations to the contrary; we review these findings in Appendix A. Once higher curvature effects are taken into consideration, it is unclear whether this well-behavedness at the singularity should still be expected to hold. Indeed, before this work, the question of singularity effects in CA for higher-curvature gravity was fairly unexplored, as most previous work centered on charged black holes, where the WdW patch ends at the inner horizon and so avoids the singularity. Our goal in this paper is to study the effect of α corrections on bulk complexification, in both the CV and CA pictures. Throughout the paper, we will carefully work perturbatively, treating α as a small parameter and only working to the first subleading order. This is the correct framework to use from the perspective of string theory, where we always take the string length to be small. On physical grounds, we expect α corrections to lower the complexification rate relative to the results in pure GR. We present two arguments justifying this expectation. The first originates in the related area of quantum chaos. It is intuitively clear that notions of quantum chaos in gravity, explored in [36][37][38], are related to questions of complexity; see e.g. [39][40][41][42][43] for progress towards making this intuition precise. 
Chaos can be measured by considering thermal out-of-time-order correlations of the form V W (t)V W (t) β , where β is the inverse temperature. These correlators have a characteristic behavior of the form where λ is the called the "Lyapunov exponent" of the theory. The Lyapunov exponent is bounded from above by [38] λ ≤ 2π β . (1.20) Classical gravity saturates this bound. The effects of α corrections to this correlator were worked out in [37]. It was shown that α corrections preserve the exponential decay in Eq. 1.19, but with the Lyapunov exponent picking up a negative α correction given by [37,38] where c is a constant Intuitively, the Lyapunov exponent should play a similar role to the rate of complexification, so we expect similar corrections to appear here. For an alternative heuristic, we appeal to a more intuitive picture 4 . The reasoning in [18][19][20][21] and related works that led to the origins of the CV and CA conjectures were based on a Rindler space picture, in which the black hole horizon is infinitely large. Because we are explicitly interested in corrections coming from a finite string length, we cannot take this limit, and instead must consider strictly finite size black holes. To obtain an intuition for what to expect away from this limit, it is instructive to consider the opposite limit, in which the strings are larger than the black hole. In this limit, the probe string wraps around the black hole completely, and so each differential length of the string is attracted to the black hole. Adding up all these attractive forces, we see that the string feels no force, and is essentially free. We expect that free theories should complexify less quickly than strongly interacting theories, so this suggests that stringy effects should lower the complexification rate. Higher-curvature corrections to bulk complexification have been studied in previous work; see e.g. [44][45][46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62] for a variety of bulk Lagrangians and geometries. We will discuss two higher-curvature theories of gravity in this paper, both of which are directly motivated by string theory. The first is the well-known theory of Gauss-Bonnet (GB) gravity, where we supplement the Einstein-Hilbert Lagrangian with a term appropriate for computing the Euler character of a four-manifold: This is in some sense the canonical higher-curvature theory, as any quadratic polynomial in the Riemann tensor can be repackaged into this form by an appropriate change of variables. The GB Lagrangian also appears naturally in many string compactifications, see e.g. [63]. Additionally, as an example of a Lovelock theory, the GB theory is an example of a highercurvature Lagrangian that is not a higher-derivative theory. This means in particular that there are no ghosts, and it is possible to render the variational problem for the metric well-posed by adding in an appropriate boundary term [64]. We must mention here that both the CV and CA proposals have already been studied in this theory; see e.g. [47][48][49][50]. We will nonetheless work this theory out explicitly, for two reasons. First, our perturbative framework will lead to us reaching qualitatively different conclusions than those references, which worked nonperturbatively α , and secondly, the GB theory is an excellent warmup for the rather more complicated theory we introduce below. The GB theory has one drawback for use in holographic contexts: it doesn't correspond in a precise way to a known boundary theory. 
Even if there were a satisfactory definition of complexity for boundary theories, we wouldn't have a theory with which to compare to bulk results. To remedy this, the second theory we consider will be the known higher-curvature theory which corresponds to the leading finite-'t Hooft correction to four-dimensional N = 4 supersymmetric Yang-Mills (SYM) theory. The 't Hooft coupling λ is defined as The standard holographic dictionary relates λ to α by For the specific case of N = 4 SYM, It is well-known that to compute the leading correction in λ, we must perturb the Einstein-Hilbert by a quartic polynomial in the Weyl tensor 5 : This theory, which first appeared in [67], has been used to compute corrections to the thermodynamics [68,69], entanglement entropies [70] 6 , and hydrodynamics [65,66,71,72] of N = 4 SYM, and has been a fertile ground for precision tests of AdS/CFT. Additionally, quantum chaos in this model [73] has been studied the context of pole skipping [74]. 5 More precisely, it was shown in [65,66] that, in addition to the expected λ −3/2 corrections, this term also is also sufficient to compute corrections on √ λ/N 2 ; we will neglect this subtlety in the remainder of this paper. 6 This reference computed the Euclidean action of an AdS black hole solution to this theory. This is conceptually similar to the CA analysis we will perform here, but is logically distinct and involves a different set of technical challenges. However, from a purely bulk perspective, there are issues with this theory. Unlike GB gravity, this theory is a genuine higher-derivative theory. This means in particular that the question of introducing a boundary term is quite subtle [75,76]. However, a very simple boundary term for f (Riemann) theories has recently been derived in [51,77]. We will use this term to facilitate a CA analysis for the Weyl 4 theory. To the best of our knowledge, this is the first time that holographic complexity has been considered for this theory, and therefore the first holographic attempt at treating non-universal contributions to complexification in N = 4 SYM. We consider this to be a "slam dunk" example of complexification in higher-curvature gravity, and it is the main focus of our paper. In the remainder of this paper, we will analyze bulk complexification for these two higher-curvature theories, using both the CV and CA proposals. The results of these analyses are summarized in Table 1. These calculations were performed with the Mathematica packages ccgrg [78,79] and xTensor [80]. Table 1: A summary of the results of this paper. We consider two higher-curvature gravity theories, the five-dimensional Gauss-Bonnet theory defined in Eq. 2.18 and the Weyl 4 theory defined in Eq. 2.33. We analyze each of these theories in both the complexityvolume [18,19] and complexity-action [20,21] frameworks. We find that, although for CV divergences do appear near the singularity in the Weyl 4 theory , it is possible to extract a finite correction toĊ V in both cases. For CA, conversely, all results are divergent, and these divergences appear to be unavoidable in any neutral black hole background. We argue below that these divergences are generic, and should appear for general higher-curvature theories. One of the major themes of this paper is that holographic complexity is extremely sensitive to singularity effects and the breakdown of gravitational EFT. 
In particular, we will see that higher-curvature effects cause divergences to appear in both CV and CA calculations, and moreover that these infinities render the final CA complexification rate divergent. To the best of our knowledge, this is the first appearance of divergences in the complexification rate directly related to the singularity in the holographic complexity literature 7 ; we anticipate, however, that similar divergences would be found in essentially any other higher-curvature theory, at least in the neutral black hole background. The divergences we will encounter are in the CA complexification rateĊ A (t). These stand in distinction to divergences in the total complexity C(t), which have been previ- 7 On the other hand, divergences in the complexification rate originating at the asymptotic boundary of AdS space have been observed before [81], in the context of both CV and CA. The divergences observed there are related to time-dependent perturbations, and do not seem to be related to the breakdown of bulk EFT. Nonetheless, they are extremely interesting for understanding the relationship between the Lloyd bound and the holographic complexity conjectures. ously observed in the literature (see e.g. [23,82]). In terms of the bulk geometry, these divergences in the total complexity are related to the divergent volume of the AdS spacetime. Moreover, it is fairly clear why, from the boundary side, we should expect the total complexity to be divergent: the Hilbert space of a realistic boundary theory is infinite dimensional, so the relative complexity between any two states should generically also be infinite. The interpretation of divergences in the complexification rate, on the other hand, is more subtle. The Lloyd bound suggests that any system, even if it has an infinitedimensional Hilbert space, should have a finite complexification rate. However, from our holographic calculations, it seems that the leading correction to the CA complexification rate is divergent, in stark contrast to this intuition. From this perspective, therefore, it is difficult to make sense of these CA results. The corrections we will compute here, although divergent, always enter with a negative sign, as anticipated above. This suggests that, in the context of the CA proposal, all stringy black holes decomplexify infinitely quickly! We will, however, propose an alternate interpretation. Our results imply that, when the viewed as a function of the 't Hooft coupling λ, the CA complexification rateĊ A (λ) has a well defined value at λ = ∞, but a poorly defined first derivative. This is redolent of the behavior of e.g. √ x at x = 0. We therefore believe that our results constitute strong evidence that, if the CA proposal should prove correct, then the complexification rate must be nonanalytic in the 't Hooft, at least around λ = ∞. In this way, even though dĊ A /dλ is infinite,Ċ A (λ) itself can be well-defined everywhere. The outline of this paper is as follows. First, in Section 2, we perform the CV analyses for our two higher curvature theories. As part of these calculations, we must necessarily compute corrections to the emblackening factor. We then proceed to use this corrected metric to perform CA analyses for the two theories. First in Section 3 we perform the CA analysis for Gauss-Bonnet gravity in four and five dimensions. Although the fourdimensional theory is topological, this short exercise will establish conventions and nomenclature before the five dimensional calculations that follow it. 
Next, in Section 4 we will perform the CA analysis for the Weyl⁴ theory that computes the leading-λ correction in N = 4 SYM. This calculation is our main result. Critical to this discussion will be the f(Riemann) boundary term derived in [51,77]. Finally, we conclude with discussion and interpretation in Section 5.

CV at Higher Curvature

We will begin by exploring stringy corrections to the CV conjecture. As mentioned above, the CV proposal identifies the complexity of the boundary state with the volume of the Einstein-Rosen bridge (ERB) in the TFD geometry. To evaluate this in the higher-curvature gravity theories, we will need to compute the perturbed emblackening factor.

In studying the corrections to the metric, we will follow the framework laid out in [70]. Following that paper, we will parameterize the perturbed metric as in Eq. 2.2. Here, ε is a dimensionless expansion parameter that depends on the theory in question, and f_0 is the unperturbed emblackening factor given in Eq. A.2. This parameterization is chosen so that the coordinate position of the horizon does not change: a(r) and b(r) vanish where f_0(r) does, i.e. at the event horizon r = r_H defined in Eq. A.4. In this section, we will first compute the corrected metric and then, from the metric, compute the corrected extremal radius r_* and complexification rate Ċ_V for our two higher-curvature theories.

For a metric of the form in Eq. 2.1, the "volume functional" defined in Eq. 1.12 takes the form given in Eq. 2.3. To recap briefly, we need to find the extremum of this functional, which we denote by r_*; to first order in ε we write r_* = r_0 + ε r_1 (Eq. 2.4), where r_0 is defined in Eq. A.5. To avoid dealing with the square root, it will actually be more convenient to extremize the square of the volume functional. Taking the derivative and collecting terms order by order, we demand that this derivative vanish at the extremum to order ε. The O(ε⁰) piece automatically vanishes by the definition of r_0, and we can make the O(ε) piece vanish by an appropriate choice of r_1. Plugging back into Eq. 1.12, we then obtain the corrected maximal volume, and therefore the corrected complexification rate Ċ_V in Eq. 2.10.

In this section, we will carry out this analysis first for the Gauss-Bonnet theory and then for the Weyl⁴ theory, with surprising results. Although for GB we will find that the physics is quite similar to the pure gravity case, for Weyl⁴ we will see that V(r) diverges as we approach the singularity, and therefore that radically different physics emerges. To understand how this happens, it is instructive to consider instead the case of pure general relativity, for which it doesn't happen. For pure GR, the volume functional is given in Eq. 2.11. For small r, V(r) scales as a positive power of r, so V(r) vanishes at the singularity. More geometrically, this means that the vanishing of the three-sphere volume at the singularity overwhelms the divergence in f_0, keeping the late-time slice shown in Figure 3 away from the singularity.

However, this need not be the case in general. For higher-curvature theories, the perturbed emblackening factors will diverge more strongly at the singularity. For sufficiently large powers of r, this divergence can instead overwhelm the vanishing of the three-sphere, thus leading V to be divergent at the singularity, and moving r_* to r = 0. This explains the difference between the two theories: in the GB case we will have f_1 ∼ r^(−4), whereas for the Weyl⁴ theory we will have f_1 ∼ r^(−12).
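To make the scaling argument above concrete, the following schematic power counting (written for the five-dimensional case, assuming that the late-time volume functional behaves as in pure GR, V(r) ∼ r³√(−f(r)), with overall normalizations suppressed) shows why the GB correction leaves V(r) finite at the singularity while the Weyl⁴ correction does not.

```latex
% Schematic 5d power counting near r -> 0, assuming V(r) ~ r^3 sqrt(-f(r)) as in pure GR:
\begin{align*}
  \mathcal{V}^{2}(r) &\;\propto\; r^{6}\bigl[-f_{0}(r) - \varepsilon\, f_{1}(r)\bigr],
  \qquad -f_{0}(r) \sim \frac{\mu}{r^{2}} \;\Rightarrow\; r^{6}\,(-f_{0}) \sim \mu\, r^{4} \to 0,\\[2pt]
  f_{1} &\sim r^{-4}\ (\text{GB}) \;\Rightarrow\; r^{6} f_{1} \sim r^{2} \to 0,
  \qquad f_{1} \sim r^{-12}\ (\text{Weyl}^{4}) \;\Rightarrow\; r^{6} f_{1} \sim r^{-6} \to \infty .
\end{align*}
```

In words: the three-sphere factor r⁶ tames any correction that diverges more slowly than r^(−6), but not the r^(−12) behavior of the Weyl⁴ correction.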
Based on simple dimensional analysis, we expect the behavior of the Weyl⁴ theory to be prevalent in generic higher-curvature theories: higher powers of α correspond in general to more negative powers of r in f_1, and therefore drag us towards the divergent regime.

In principle, there might be corrections to Ċ_V other than those that we have considered here. 9 To see why, it is convenient to make an analogy to black hole entropy. The Bekenstein-Hawking entropy

S_BH = A / (4G)    (2.13)

is completely universal, but to compute higher-curvature corrections to this entropy we must instead use the Wald entropy [83],

S_Wald = −2π ∮ d^(d−2)x √γ (∂L/∂R_{μνρσ}) ε_{μν} ε_{ρσ},

where γ_ij is the induced metric on the horizon, L is the Lagrangian of our higher-curvature theory, and ε_{μρ} is an antisymmetric two-tensor with components along the t and r axes. In terms of the area A of the black hole, this entropy admits an expansion in the higher-curvature coupling around the Bekenstein-Hawking result [34,35]; computing the entropy of a black hole amounts to finding values for the proportionality constants c_i. One might expect that, in analogy with black hole entropy, there should exist a "Wald complexity": a corrected volume functional whose extremization would capture the additional corrections to Ċ_V. However, in the absence of such a Wald complexity we do not know how to compute these corrections in the first place, so of course we cannot evaluate them, and Eq. 2.10 is the best that we can currently do. See, however, [84][85][86] for progress on finding these corrections.

9 We thank R. Myers for enlightening comments on this point.

CV for Gauss-Bonnet Gravity

We will now perform the above analysis for perturbative GB gravity. In order to compute the corrected complexification rate for GB gravity, we must first find the correction to the metric. In four dimensions, the GB term is topological, and thus the metric does not get corrected. From Eq. 1.14 we can therefore easily see that there is no correction to the CV rate. We are therefore naturally led to consider the GB theory in five dimensions.

For this theory, the expansion parameter ε used in Eq. 2.2 is defined in Eq. 2.17, where we have rescaled α to give a dimensionless expansion parameter. In terms of α, the five-dimensional GB bulk action is [47-49, 64, 87, 88]

S_{GB, bulk} = (1/16πG) ∫ d⁵x √−g [ R − 2Λ + α ( R² − 4 R_{μν}R^{μν} + R_{μνρσ}R^{μνρσ} ) ],    (2.18)

where Λ is the cosmological constant. To find the forms of f_1 and f_2, we will follow the strategy laid out in [70] and plug the metric ansatz, including the expansions of a and b and the known form of f_0, into Eq. 2.18. The entire action will only depend on r, so we will obtain an effective one-dimensional action for f_1 and f_2, from which we can obtain Euler-Lagrange equations that we can solve for f_1 and f_2. It is essential that we expand the action to second order in α, as the expansion to first order will vanish since f_0 extremizes the EH Lagrangian. Doing so, we find the EL equations in Eqs. 2.20. These EL equations have solutions involving two integration constants C_1 and C_2, which we now need to fix. We will follow the conventions for these constants set in [70]. In particular, C_2 is simply a time reparameterization, so we are free to set it to zero; we fix C_1 so that f_1 and f_2 are finite at the horizon r_H defined in Eq. A.4. Putting it all together, we arrive at the corrected metric functions f_1 and f_2 given in Eq. 2.23.

It is worth mentioning that the metric derived here can also be obtained by linearizing an exact solution of GB gravity. The action in Eq. 2.18 has equations of motion given in [87], in which an effective stress tensor T_{μν} collects the GB contributions (Eq. 2.25). For spacetime dimension d > 4, a general AdS-Schwarzschild solution to these equations has been found [87,88].
It is of the usual spherically symmetric and static form given in Eq. 2.1, with a and b given in Eqs. 2.26 and 2.27. This is the exact solution to the GB equations of motion, and it was studied in a CV context in e.g. [49]. However, this metric is nonlinear in α. Because we are imagining the GB term as the leading term in an infinite series of α corrections to Einstein gravity, this form of the metric is unsuitable for our purposes, as it consists of an entire power series in α. Instead, we should linearize this solution, which returns us to the form of the metric derived above. Therefore, the metric given in Eq. 2.23 is the appropriate form of the metric for studying stringy corrections.

Using the metric in Eq. 2.23, we can straightforwardly compute V(r), given in Eq. 2.28. The square V²(r) of the volume functional is plotted in Figure 6 for several values of α. We find similar physics to the familiar CV analysis of pure GR, as can be seen directly in Figure 7. We see a single maximum inside the horizon, where the maximum-volume hypersurface will saturate at late times. The new position r_* of the maximum is given as in Eq. 2.4, with r_1 given in Eq. 2.29. This is the local maximum of the volume functional promised in [18]. By inspection, it is clear that r_1 > 0, so the α correction moves the maximum closer to the horizon. Of course, this analysis is only valid for small α; for large α, the physics may be quite different, as indicated in [18].

From Figure 7, we can immediately see some important physics: as predicted on physical grounds in Section 1, the α correction reduces the complexification rate! More quantitatively, the O(α) portion of V(r_*) is given in Eq. 2.30.

It is worth remarking on one quirk of Figures 6 and 7. Taken literally, it appears from these plots that V² is negative for very small r, and therefore that V is imaginary. This is unphysical, and is merely an artifact of perturbation theory. Indeed, one can solve for the location of this seeming second zero of V² (Eq. 2.32); the result is nonanalytic in α, and should therefore not be taken seriously in a strictly perturbative framework. That higher powers in α will erase this seeming second horizon can be seen directly from Figure 8, where we have plotted the volume functional of the nonlinear metric in Eq. 2.26.

CV for the Weyl⁴ Action

We will now repeat the above analysis for the Weyl⁴ theory. Here the bulk action is given in Eq. 2.33 [66-72, 89, 90], 11 where we have defined a dimensionless expansion parameter γ in terms of α and r_AdS (Eq. 2.34). For convenience, we will occasionally refer to the quartic polynomial in Eq. 2.33 simply as W. As before, we want to insert the metric ansatz in Eq. 2.1, with a(r) and b(r) as in Eq. 2.2, 12 into this action to obtain an effective Lagrangian for f_1 and f_2, exactly as was done for topological black holes in [70]. Doing so yields the E-L equations for f_1 and f_2, which can be solved in terms of two integration constants C_1 and C_2. Following the same reasoning as before, we choose C_2 = 0 and pick C_1 so that f_1 and f_2 are well-behaved at the horizon.

11 One might wonder whether we must worry about the effect of a spatially varying dilaton [91]. However, it was argued in [65,66] that the dilaton only contributes at order γ², and thus that we can neglect it in this perturbative context. We thank R. Myers for discussion of this point.

12 Of course, here ε should be replaced with γ.
Doing so gives us the explicit forms of f_1 and f_2 in Eqs. 2.38. We therefore obtain the corrected volume functional V(r) given in Eq. 2.39.

We have plotted V²(r) for several values of γ in Figures 9 and 10. As anticipated, we see radically different physics than is traditionally observed in CV analyses; this can be seen very clearly in Figure 11, where we have graphed V²(r) for the Weyl⁴ theory alongside the pure GR curve. Instead of a single maximum inside the horizon, past which V(r) vanishes as we approach the singularity, we now have two extrema. As we progress inwards past the usual maximum, we hit a new minimum, beyond which V(r) diverges as we approach the singularity. These extrema are shown more clearly in Figure 10, where we have zoomed in on their locations. The usual caveat that our analysis is only valid for small γ continues to hold here. For our usual parameter values (r_AdS = 10,000, μ = 1,000,000,000), numerical tests indicate that radically different behavior begins to happen around γ ∼ 0.00015; we will restrict ourselves to working at γ smaller than this value.

The location of the maximum can again be found by means of Eq. 2.4. Plugging the result into Eq. 2.39 gives the correction in Eq. 2.40. It can be easily verified numerically that this correction is negative for a wide range of parameters, matching the intuition laid out in Section 1; this can also be seen in Figure 12, where we have zoomed in on the local maximum. This suggests that the correction to the complexification rate should be the finite expression in Eq. 2.41.

However, the interpretation of these results is somewhat subtle. Because V diverges as we approach the singularity, the volume in Eq. 2.40 does not maximize V(r) behind the horizon. In particular, a UV cutoff at r = ε can easily be made to satisfy V(ε) > V(r_*) simply by picking ε small enough. Thus, as ε approaches the singularity, the maximum of V(r) on the interval r ∈ [ε, r_H] will in general be V(ε), which would suggest that instead of Eq. 2.41 we should have the divergent rate in Eq. 2.43, where V(r) is given for general r in Eq. 2.39.

On the other hand, it is hard to understand how we might claim to have good perturbative control over such a UV cutoff. It is implicit in Eq. 2.4 that r_* is the only extremum of V(r) that can be reached perturbatively. Put another way, there is no obvious way to impose a UV cutoff at r = ε in such a way that we can smoothly obtain the pure GR result in Eqs. A.5 and A.7 by taking a γ → 0 limit. Indeed, in a perturbative framework, it does not appear to be possible to even find a closed-form expression for the position of the minimum of V(r). One could imagine solving the condition V′(r) = 0 for a general r, instead of for r_*, but this amounts to solving a sixteenth-order polynomial in r, which of course can't be done in closed form. 13 The upshot is that, working strictly in perturbation theory, it is difficult to claim analytical control of what happens at r < r_*. This is intimately related to the breakdown of gravitational EFT discussed in Section 1.

In light of the above discussion, it seems clear to us that the correct result for the CV complexification rate is the finite expression given in Eq. 2.41 rather than the divergent result in Eq. 2.43. We will, however, return to this discussion in Section 5, after encountering similar divergences in the CA complexification rate.

CA for Gauss-Bonnet Gravity

Having obtained above the corrected metrics for both of our perturbed theories, we can now perform the CA analysis.
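The qualitative structure described above (a local maximum, a new minimum, and a divergence as r → 0) is easy to reproduce numerically with a toy correction that has the advertised r^(−12) falloff. The sketch below is purely illustrative: the coefficient c and the form of f1_toy are hypothetical stand-ins, not the actual f_1 of Eq. 2.38, and V²(r) ∝ r⁶[−f_0 − γ f_1] is the same schematic form used above.

```python
import numpy as np

# Toy illustration of the Weyl^4-type CV behaviour: V^2(r) ~ r^6 [-f0(r) - gamma*f1(r)]
# with a correction that diverges like r^{-12} near the singularity.
# f1_toy and the coefficient c are hypothetical stand-ins, not Eq. 2.38.
r_ads = 10_000.0
mu = 1.0e9
gamma = 1.0e-4          # below the ~1.5e-4 value quoted in the text

def f0(r):
    return 1.0 + r**2 / r_ads**2 - mu / r**2      # unperturbed emblackening factor

def f1_toy(r, c=1.0e-3):
    return -c * mu**3 * r_ads**6 / r**12          # assumed r^{-12} divergence

def V2(r, gamma):
    return r**6 * (-f0(r) - gamma * f1_toy(r))

# Horizon radius of the unperturbed metric, from f0(r_h) = 0.
r_h = np.sqrt(r_ads**2 * (np.sqrt(1 + 4 * mu / r_ads**2) - 1) / 2)

r = np.linspace(1e-2 * r_h, 0.999 * r_h, 200_000)
v2 = V2(r, gamma)

# Locate interior extrema by sign changes of the numerical derivative.
dv = np.diff(v2)
extrema = r[1:-1][np.sign(dv[1:]) != np.sign(dv[:-1])]
print("extrema found at r/r_h =", extrema / r_h)            # one maximum and one minimum
print("V^2 near the singularity:", V2(r[0], gamma), "(grows without bound as r -> 0)")
```

Shrinking the lower end of the scan makes the divergence near r = 0 explicit, while sending gamma to zero removes the inner minimum and recovers the single-maximum GR picture.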
We will begin with the comparatively simple case of GB gravity, for which the action is S_GB = S_{GB, bulk} + S_{GB, bdy} (Eq. 3.1), where the bulk action is given in Eq. 2.18 and the complete boundary action for a constant-r hypersurface is given in Eq. 3.2 [64]. In this expression, h_ab is the induced metric on the boundary, G_ab is its Einstein tensor, and K_ab is the extrinsic curvature tensor.

In principle, the path forward is quite simple. We simply plug the metric in Eq. 2.1, with the form of f_1 and f_2 given in Eq. 2.23, into Eq. 3.1, expand all of the terms to linear order in α, and evaluate. There are two types of contributions: one type originates from evaluating the ordinary Einstein-Hilbert action, plus the GHY term, on the perturbed metric, and the other comes from plugging the unperturbed metric itself into the GB action (and its boundary term). These two contributions are separately divergent, and the divergences do not in general cancel to give a finite answer.

Because of the detailed nature of the calculations, we will not be especially explicit, especially with the boundary terms. To provide a more explicit example, we will first work out in Section 3.1 the very simple case of four-dimensional GB gravity. In four dimensions the GB term is topological, so neither the equations of motion nor their solution are corrected. Thus, of the two classes of contributions to the CA rate we discussed above, we will only need to consider the second type in this example. In Section 3.2 we will return to the five-dimensional example.

Before we proceed, we must mention that a similar analysis of the CA conjecture for GB gravity was performed in [48], and in [47] CA was studied for a generalization of GB gravity known as Lovelock gravity. Our analysis differs from those in two crucial aspects. First, these works used the nonlinear form of the metric given in Eq. 2.26. As mentioned above, this form of the metric is unsuited for use in perturbation theory, as it corresponds to an infinite tower of α-suppressed terms. Thus, the metric used here is more appropriate for use in a stringy context. More importantly, in both works, but especially in [47], the focus was on charged black holes. For a charged black hole geometry, the WDW patch does not extend to the singularity. Because the singularity is the origin of the divergences we encountered above, and will encounter again below, for a charged black hole these divergences cannot appear in the first place. However, given the privileged role of the TFD geometry, which corresponds to a neutral black hole, the effect of the singularity is crucial for the holographic interpretation of complexity.

13 Numerical tests indicate that, of the 16 roots, only four of them are real: the maximum and minimum seen in Figure 10 and their unphysical counterparts at negative r. The other twelve roots are complex, and therefore also unphysical.

A Topological Example: GB Gravity in Four Dimensions

We begin by considering the GB action in four dimensions, for which the bulk and boundary actions are given in Eqs. 3.3a and 3.3b. The four-dimensional Gauss-Bonnet(-Chern) theorem says that the Euler character χ_4 of a four-manifold M is given by

χ_4 = (1/32π²) ∫_M d⁴x √g ( R_{μνρσ}R^{μνρσ} − 4 R_{μν}R^{μν} + R² ),

so in four dimensions GB gravity is topological, and this theory is essentially just general relativity. In particular, the Einstein field equations do not change, so the metric does not pick up a correction, and the four-dimensional AdS-Schwarzschild solution is the usual static, spherically symmetric metric (Eq. 3.5), where the emblackening factor f is simply

f(r) = 1 + r²/r_AdS² − μ/r,

with no order-α correction.
The thermodynamics of Schwarzschild-type black holes is well understood in this theory. The temperature of the black hole horizon is proportional to f′ evaluated at the horizon, so the temperature receives no O(α) contribution. Conversely, the Wald entropy does get corrected; the corrected entropy S_α is given in Eq. 3.7 (see e.g. [92][93][94][95][96]) as the sum of the Bekenstein-Hawking entropy and a topological piece proportional to αχ, where χ is the two-dimensional Euler character of the horizon. We are interested in spherical horizons, so we have χ = 2. We will use this formula to repackage the CA complexification rate for this theory in terms of the corrected entropy, but first we must compute the rate.

This analysis is much simpler than the CA calculations that will follow, so we will be extremely explicit. There is no metric correction, so we simply need to evaluate the order-α action in Eqs. 3.3a and 3.3b on the metric in Eq. 3.5. It is straightforward to compute the curvature invariants in terms of f and its derivatives; here and below we drop the explicit r dependence of f, and we will follow this convention for the remainder of the paper. Combining the invariants, the Gauss-Bonnet density for this metric is

R² − 4 R_{μν}R^{μν} + R_{μνρσ}R^{μνρσ} = (4/r²) [ (f − 1) f″ + f′² ].

Thus the O(α) bulk action is proportional to α δt ∫ dr [ (f − 1) f″ + f′² ]. Near the singularity, the emblackening factor scales as f ∼ 1/r, so the integrand scales as 1/r⁴ and we have a cubic UV divergence from the singularity. We will now move on to the boundary term, which will be seen to cancel this divergence.

For a constant-r hypersurface, we choose the unit normal n^μ ∂_μ = √f ∂_r. On such a surface, we have the induced metric h_ab given in Eq. 3.14, from which its Einstein tensor follows. Similarly, from the form of n^μ, we can straightforwardly compute the extrinsic curvature tensor K_ab. We can therefore work out the combination of curvature scalars appearing in Eq. 3.3b, and it is exactly the negative of the bulk action, so we have a matching cubic divergence from the r = ε boundary term that will render the total action finite.

Putting it all together, we see that, for the topological Gauss-Bonnet theory, the CA complexification rate picks up no O(α) correction. We therefore find that the complexification rate is exactly what we found in Appendix A. However, in the corrected theory, it is more natural to write this expression in terms of the corrected quantities, so we invert Eq. 3.7 and express the rate in terms of the corrected entropy S_α.

The point of this exercise is twofold. First, we have shown an explicit example of a CA analysis in higher-curvature gravity. This will enable us to be rather less explicit in the following sections, where the calculations are somewhat more involved. Much more importantly, we have seen that divergences are bound to appear in the action when higher-curvature terms are allowed in the gravitational action. These divergences originate at the singularity, as did the divergence seen in the CV analysis of the Weyl⁴ theory, and appear to be an essential feature of the WDW patch of a neutral black hole. In this example, all the divergences cancelled. However, this is a somewhat fine-tuned condition, and we will see below that such cancellations are not generic.

A Dynamical Example: GB Gravity in Five Dimensions

We now turn to the five-dimensional GB theory. As mentioned above, we will be rather less explicit here than we were in the previous section. To recap briefly, the action is given in Eqs. 2.18 and 3.2, and the metric is of the static, spherically symmetric form in Eq. 2.1, where a(r) is defined in terms of f_0(r) and f_1(r) as in Eq. 2.2 and f_1(r) = f_2(r) is given in Eq. 2.23. These are all of the ingredients for this calculation, and now all that remains is to assemble them.
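As a cross-check of the claim that these curvature invariants are straightforward to compute, the following SymPy sketch evaluates the Ricci scalar and the Gauss-Bonnet density for the four-dimensional ansatz used above. This is an independent symbolic computation, not the paper's ccgrg/xTensor setup; the form f = 1 + r²/L² − μ/r is the standard 4d AdS-Schwarzschild emblackening factor assumed here.

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
L, mu = sp.symbols('L mu', positive=True)
f = 1 + r**2 / L**2 - mu / r                 # assumed 4d AdS-Schwarzschild factor

x = [t, r, th, ph]
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # ds^2 = -f dt^2 + dr^2/f + r^2 dOmega_2^2
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
          - sp.diff(g[b, c], x[d])) for d in range(n)) / 2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riem(a, b, c, d):
    expr = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d])
    expr += sum(Gam[a][c][e] * Gam[e][b][d] - Gam[a][d][e] * Gam[e][b][c] for e in range(n))
    return sp.simplify(expr)

Rup = [[[[riem(a, b, c, d) for d in range(n)] for c in range(n)] for b in range(n)] for a in range(n)]
Rdn = [[[[sp.simplify(sum(g[a, e] * Rup[e][b][c][d] for e in range(n)))
          for d in range(n)] for c in range(n)] for b in range(n)] for a in range(n)]
Ric = [[sp.simplify(sum(Rup[a][b][a][d] for a in range(n))) for d in range(n)] for b in range(n)]

# The metric is diagonal, so indices are raised entry-by-entry with ginv.
Rs = sp.simplify(sum(ginv[b, d] * Ric[b][d] for b in range(n) for d in range(n)))
RicSq = sp.simplify(sum(ginv[a, a] * ginv[b, b] * Ric[a][b]**2 for a in range(n) for b in range(n)))
RiemSq = sp.simplify(sum(ginv[a, a] * ginv[b, b] * ginv[c, c] * ginv[d, d] * Rdn[a][b][c][d]**2
                         for a in range(n) for b in range(n) for c in range(n) for d in range(n)))

GB = sp.simplify(Rs**2 - 4 * RicSq + RiemSq)
print("R  =", Rs)                                           # constant -12/L**2, as for an Einstein space
print("GB check =", sp.simplify(GB - 4 * ((f - 1) * sp.diff(f, r, 2) + sp.diff(f, r)**2) / r**2))  # 0
```

The final line confirms the compact form of the Gauss-Bonnet density quoted above, and the constant Ricci scalar reflects the fact that the solution is unchanged from pure general relativity.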
We will begin with the bulk terms. For a metric of the form given in Eq. 2.1, it is straightforward to verify that

R = − [ a″ r² + 6 ( r a′ + a − 1 ) ] / r²,    (3.24a)

R_{μν} R^{μν} = [ a″² r⁴ + 3 ( 5 r² a′² + 8 r (a − 1) a′ + 8 (a − 1)² ) + 6 a″ a′ r³ ] / ( 2 r⁴ ),    (3.24c)

together with the corresponding expressions for the remaining invariants in Eq. 3.24, such as R², which is simply the square of Eq. 3.24a. These curvature invariants depend only on r, so the time and angular integrals in Eq. 2.18 are trivial, and we can evaluate the bulk action straightforwardly. Although only the second term has an explicit α factor, both terms have an O(α) contribution, as can readily be seen from Eq. 2.2. Inserting the coordinate form of a(r) in Eq. 2.23 and discarding the O(α⁰) term from the unperturbed Ricci scalar, we obtain the complete bulk contribution to the CA complexification rate for the 5d GB theory.

We will now proceed to the boundary terms. The only boundaries we are interested in are constant-r hypersurfaces. For such boundary surfaces, we can define a purely radial unit normal vector and from there compute the extrinsic curvature exactly as was done before. This gives us, in particular,

K K_{ab} K^{ab} = ( r a′ + 6 a ) ( r² a′² + 12 a² ) / ( 8 r³ a^(3/2) ).    (3.29d)

As before, these invariants depend only on r, so the time and angular integrals are trivial, leaving a boundary contribution proportional to Ω₃ δt / (8πG) times a radial integrand built from a and its derivatives. As before, we now plug in the coordinate expression for a and drop the O(α⁰) term. We can then insert the result into Eq. 1.15 to find the correction to the CA complexification rate for 5d Gauss-Bonnet gravity, Eq. 3.34.

Eq. 3.34 is the main result of this section. In particular, it contains a divergent part. This divergence is troubling, and requires careful interpretation. Before we interpret this result, however, we will see in the next section that a similar result holds in the Weyl⁴ theory.

CA for the Weyl⁴ Action

We will now move on to the CA analysis for the Weyl⁴ theory. In the context of the CA conjecture, this calculation gives the leading finite-λ correction to the rate of complexification for N = 4 SYM in the 't Hooft limit. As before, we have S_Weyl = S_{Weyl, bulk} + S_{Weyl, bdy} (Eq. 4.1). We will begin with the bulk part, shown in Eq. 2.33, and proceed exactly as we did above: dropping the O(γ⁰) term and integrating gives the bulk contribution.

We are now free to move on to the boundary term. Although a boundary action appropriate for some propagating metric modes has been introduced in the context of N = 4 hydrodynamics [71], to the best of our knowledge no explicit boundary term for the Weyl⁴ theory has been given in the literature. This stands in stark contrast with the GB theory considered above, for which the appropriate boundary term is well known [64]. The crucial difference is that, whereas Gauss-Bonnet gravity is not a higher-derivative theory, the same is not true of the Weyl⁴ theory. Indeed, general f(Riemann) theories have fourth-order equations of motion, and the Weyl⁴ theory is no exception. For general higher-derivative theories, the question of well-posedness is subtle (see e.g. [75,76]), and it is not a priori clear that there should exist a simple boundary term. However, a boundary term for f(Riemann) theories was recently derived in [51,77]. We will briefly describe the origin of this boundary term before evaluating it.

We begin with a generic f(Riemann) action, and introduce auxiliary fields φ_{μνρσ} and ψ^{μνρσ}, which we take to have the same symmetries as the Riemann tensor, so that ψ_{μνρσ} = ψ_{ρσμν}, etc.
We repackage the action in terms of these fields as

S = (1/16πG) ∫ d⁵x √−g [ f( g^{μν}, φ_{μνρσ} ) + ψ^{μνρσ} ( R_{μνρσ} − φ_{μνρσ} ) ].

We see immediately that ψ^{μνρσ} is simply a Lagrange multiplier setting φ equal to the Riemann tensor on shell. This action has the equations of motion given in Eqs. 4.7a-4.7c [51,77]: the metric variation yields a generalized Einstein equation sourced by 8πG T_{μν}, the φ variation sets ψ^{μνρσ} = ∂f/∂φ_{μνρσ}, and the ψ variation enforces φ_{μνρσ} = R_{μνρσ}. If we insert Eqs. 4.7b and 4.7c into Eq. 4.7a, we recover the well-known equations of motion for f(Riemann) gravity. The crucial point, however, is that, at the level of the equations of motion, this is not a higher-derivative theory: nowhere do we find more than two derivatives acting on any single field. Thus, we expect to be able to find a boundary term. Such a term was indeed found, and for spacelike hypersurfaces it takes the form given in Eq. 4.8 [51,77]: a generalized GHY term, with an overall factor of 1/4πG, in which the extrinsic curvature is contracted with ψ^{μνρσ} projected along the outward-pointing unit normal n_μ. Inserting Eq. 4.7b replaces ψ^{μνρσ} by the derivative of f with respect to the Riemann tensor. In the case of interest, f is the Weyl⁴ Lagrangian of Eq. 2.33, so we are led to consider the boundary term in Eq. 4.11.

In principle, all we need to do is insert the metric in Eq. 2.1 into this action, but that requires us to actually evaluate the derivative in Eq. 4.11. The most convenient way to do so is to use the definition of the Weyl tensor in terms of the Riemann tensor (Eq. 4.12) to express everything in terms of the Riemann tensor itself, and then differentiate term by term. The result is shown in Figure 13.

Figure 13: The closed form of dW/dR_{μνρσ}. Internal indices are given Latin labels.

We can now directly plug in the metric to find the surprisingly simple result in Eq. 4.14. Thus we have the correction to the CA complexification rate for the Weyl⁴ theory, Eq. 4.15, which is the main result of this section. As before, the complexification rate has a divergent part, and this divergence is manifestly negative, continuing the trend we have seen in all of our results. We will spend the remainder of the paper interpreting this divergence, as well as the other divergences that we have encountered.

Conclusion

We have studied holographic complexity in the Gauss-Bonnet and Weyl⁴ theories and found, through a careful perturbative analysis, a series of unexpected divergences in the complexification rate, especially in the CA framework. These divergences ultimately originate from the singularity in the bulk geometry, and to the best of our knowledge they have not previously been observed in the literature.

In light of the discussion in Section 1 of gravitational EFT and its breakdown at the singularity, it is perhaps somewhat unsurprising that CA calculations at higher curvature are divergent. After all, we are trusting semiclassical gravity in a regime where it shouldn't work! On the other hand, CV physics has traditionally been observed to avoid the singularity, even in the presence of higher-curvature corrections. We see here that this pattern continues, albeit with reservations. Although the volume functional V(r) does diverge at the singularity, it is nevertheless possible to obtain a sensible, perturbatively finite answer by simply neglecting the geometry near the singularity. This of course is not possible in the CA framework, where we have no choice but to go right down to the singularity. In this context, therefore, CV seems like the safer proposal, since we should always be able to avoid the high-curvature region in a way that is impossible in CA. However, even in CV there may be divergences near the singularity, and so we have learned an important lesson: whenever we are behind the horizon, as we must be to study complexity [19], we must be wary of the singularity, even when naively we might think ourselves safe.
In the CA context, our results would seem to suggest that, once stringy effects are taken into account, all neutral black holes perform infinitely fast decomplexification! This immediately seems unphysical. From the Lloyd bound [10] and the chaos bound [17], we expected stringy effects to lower the complexification rate relative to the case of pure general relativity. Even so, the geometries we have studied here correspond to finite-temperature boundary states, and on general grounds we expect such states to complexify, rather than decomplexify. Even a zero-temperature system, such as a BPS black hole, has a vanishing complexification rate, rather than a negative rate, much less a divergent negative rate, so it is a priori unclear what this result would even mean from a strictly computational point of view. On the other hand, without a definition of the complexity of a state in the strongly coupled boundary field theory, it is hard to make this intuition precise, and a principled interpretation of these results seems beyond our reach.

Nevertheless, it seems clear to us, even without a precise boundary picture, that these results should not be taken to imply that CA predicts a divergent complexification rate. Instead, we have a situation where, when viewed as a function of the 't Hooft parameter, 14 Ċ_A(λ = ∞) is well-defined, but its derivatives are infinite. This suggests to us that Ċ_A(λ) is simply not analytic around λ = ∞. 15 In this case, the first derivative would naturally be ill-defined at λ = ∞, but could be well-defined elsewhere. Consider as an example the function √x. For small x, we could naively try expanding √x about x = 0, but every coefficient of the would-be Taylor series involves a derivative of √x that blows up at x = 0. Thus the Taylor series does not exist, even though √x is perfectly well defined for all positive x. This seems extremely reminiscent of what we have observed here, and so we are led to conjecture that Ċ_A(λ) is nonanalytic. With this in mind, we have sketched a possible graph of Ċ_A(λ) in Figure 14. We certainly do not claim that this is an accurate depiction, but it clearly illustrates the nonanalyticity at λ = ∞.

This suggests a natural question: can the α series of corrections to the complexification rate be resummed? Even when a series diverges, we can occasionally extract a meaningful, finite answer from it. It is tempting to conjecture, in light of the discussion above, that this should be possible, and that by doing so we could indeed see that Ċ_A(λ) is well-defined for large but finite λ. However, doing so would be extremely difficult: in addition to an infinite number of terms in the bulk Lagrangian, we would need to solve for an infinite number of metric corrections, as well as an infinite number of boundary terms (although in the absence of gauge field contributions the results of [51,77] should suffice to compute many of these boundary terms). Thus, actually resumming the series seems implausible, but it might be possible to argue on general grounds that the series is resummable, and therefore that Ċ_A(λ) is well-defined. Even in the absence of an explicit infinite series to be resummed, it would be interesting to obtain an estimate for the value of Ċ_A(λ) at large but finite λ.

14 In light of the discussion in [65,66], the parameter with respect to which we have expanded is not simply λ but instead a linear combination of λ^(−3/2) and √λ/N². For brevity, we will simply refer to this as the 't Hooft coupling.

15 This interpretation was suggested by L. Susskind.
Figure 14: A possible graph of Ċ(λ), plotted against 1/λ. We have chosen this function to demonstrate visually the possible effects of Ċ(λ) not being analytic at λ = ∞, but we do not claim that it should be interpreted as a precise prediction for the form of Ċ(λ).

To do so, we need a quantitative estimate of where the cutoff ε should be placed. We can choose the cutoff to be no less than a string length away from the singularity, where the curvature should be large on the string scale and our perturbation theory breaks down. Using the minimum value of the cutoff, from Eqs. 2.17 and 2.34, we have that the cutoffs for our two theories should be located at the values given in Eq. 5.2. We have reproduced the plots of V²(r) for our two theories with ε marked in Figures 15 and 16.

This simple prescription allows us to provide an estimate for the CA complexification rate for both theories. Eqs. 3.34 and 4.15 both contain an explicit ε dependence. We can obtain an estimate by simply plugging the appropriate value of ε from Eq. 5.2 into these expressions. For large but finite λ, this should give a numerically reasonable estimate for Ċ_A(λ), as the portion behind the cutoff corresponds to a breakdown of perturbation theory. However, these estimates, although always finite, should not be thought of as genuine perturbative corrections. In general, they will contain fractional powers of λ, and therefore do not form a genuine Taylor (or Laurent) series. This of course simply reflects the nonanalyticity we observed above.

On the other hand, the interpretation of the CV results is significantly more straightforward. In Eqs. 2.31 and 2.41, we have finite expressions for the leading perturbative correction to the strict GR results of [1,18,19]. These results suggest that the CV complexification rate Ċ_V(λ) should admit a well-defined Taylor series at λ = ∞. This of course means that the CV framework is compatible with the complexification rate being an analytic function of the 't Hooft coupling, in stark contrast with what was seen for CA. This simpler behavior reflects the fact that the CV physics remains in the low-curvature region, and therefore remains amenable to a perturbative treatment. From the boundary side, this lines up very nicely with the proposed boundary dual of volume in [85,86], from which we would expect that, if CV is ultimately the correct proposal, then the complexity should be an analytic function of the coupling.

In the presence of a well-defined proposal for boundary complexification rates for genuine CFTs, these results would allow us to distinguish between the CV and CA proposals. If, hypothetically, a boundary result existed and admitted a Taylor series in the 't Hooft coupling, we could hope to discard CA in favor of CV based on these results alone. Conversely, if corrections to this boundary calculation also diverged, we could hope to attain precision matching of the coefficients and orders of the divergences in the boundary and CA calculations. Our results would therefore allow us to differentiate the CA and CV conjectures from each other in a way that is impossible in the strict GR limit. Although the search for a precise definition of complexity for boundary states is a hot topic of current research (see e.g. [43,85,86,99-111]), it remains an open question, and it is not clear that we should expect a boundary comparison in the near future.
where the mass parameter μ is related to the mass M of the black hole through Eq. A.3, and the event horizon is located at the radius r_H given in Eq. A.4.

A.1 CV for Einstein Gravity

We will begin with the CV conjecture [1,18,19]. Our starting point is the metric in Eqs. A.1 and A.2, from which we can straightforwardly compute V(r), which is plotted in Figure 17, and extremize to find the extremal radius r_*. The result is exact, but fairly messy; to compare to the simple form of the answer in [18], it is convenient to take the high-temperature limit μ ≫ 1, which corresponds to dropping the one in Eq. A.2. In this case one finds

r_* = ( μ r_AdS² / 2 )^(1/4),    (A.8)

and therefore

V(r_*) = μ r_AdS / 2.    (A.9)

Putting it all together, we have that

V̇ = 8πG M r_AdS / 3,    (A.10)

which exactly matches [18]. However, in this paper we will not work in the high-temperature limit, and will instead work with the full metric in Eq. A.2.

A.2 CA for Einstein Gravity

We will now move on to the CA conjecture [20,21]. Our goal here is to use Eq. 1.17 to compute the action of the δWDW patch shown in Figure 5. For general relativity, the bulk action in Eq. 1.17 is the usual Einstein-Hilbert action (Eq. A.11), and the boundary action S_bdy is the Gibbons-Hawking-York (GHY) boundary term, which for spacelike hypersurfaces is given by the integral over the hypersurface of the trace K of the extrinsic curvature tensor, weighted by √h, where h_ab is the induced metric on the boundary.
14,697.2
2019-02-25T00:00:00.000
[ "Physics" ]
A Robust Dynamic Classifier Selection Approach for Hyperspectral Images with Imprecise Label Information Supervised hyperspectral image (HSI) classification relies on accurate label information. However, it is not always possible to collect perfectly accurate labels for training samples. This motivates the development of classifiers that are sufficiently robust to some reasonable amounts of errors in data labels. Despite the growing importance of this aspect, it has not been sufficiently studied in the literature yet. In this paper, we analyze the effect of erroneous sample labels on probability distributions of the principal components of HSIs, and provide in this way a statistical analysis of the resulting uncertainty in classifiers. Building on the theory of imprecise probabilities, we develop a novel robust dynamic classifier selection (R-DCS) model for data classification with erroneous labels. Particularly, spectral and spatial features are extracted from HSIs to construct two individual classifiers for the dynamic selection, respectively. The proposed R-DCS model is based on the robustness of the classifiers’ predictions: the extent to which a classifier can be altered without changing its prediction. We provide three possible selection strategies for the proposed model with different computational complexities and apply them on three benchmark data sets. Experimental results demonstrate that the proposed model outperforms the individual classifiers it selects from and is more robust to errors in labels compared to widely adopted approaches. Supervised classification plays a vitally important role for analyzing HSIs by assigning image pixels into distinct categories or classes of interest available in a scene, based on a relatively small amount of annotated examples. Recent comprehensive surveys on HSI classification in remote sensing include [11][12][13][14][15]. It is generally agreed upon that incorporating spatial context together with spectral information leads to better classification results than using spectral information alone [16][17][18][19][20]. Further improvements in the classification accuracy can be obtained by combining multiple data sources, e.g., by augmenting HSI data with Light Detection and Ranging (LiDAR) data [21][22][23], Synthetic Aperture Radar (SAR) data [24,25] and/or high-resolution colour images [26][27][28]. Fusion of these multiple data sources is typically accomplished at feature level [29][30][31], or at decision level [26,32,33]. The concept of multiple classifier systems has been widely studied as a method for designing high performance classification systems at the decision level [34][35][36]. Among these, the Dynamic Classifier Selection (DCS) [37,38] approach selects the classifier that yields the highest probability of being classified correctly. By design, the combined classifier outperforms all the classifiers it is based on [33,39,40]. The key idea of DCS is to identify the best classifier dynamically for each sample from a set of classifiers. This classifier is usually selected based on a local region of the feature space where the query sample is located in. Most works use the K-Nearest Neighbors technique (grouping samples with similar features) to define this local region [41][42][43]. Then, for a given unseen sample the best classifier is estimated based on some selection criteria [44,45]. In this work, we group samples differently, by incorporating the robustness concept to the model specification. 
While current machine learning systems have shown excellent performance in various applications [46][47][48], they are not yet sufficiently robust to various perturbations in the data and to model errors to make them reliably support high-stakes applications [49][50][51]. Therefore, increasing attention is being devoted to various robustness aspects of the models and inference procedures [52][53][54]. The work in [55,56] proposed robust methods by adopting empirical Bayesian learning strategies to parameterize the prior and used this Bayesian perspective for learning autoregressive graphical models and Kronecker graphical models. Earlier work by one of us [57] analyzed the global sensitivity of a maximum a posteriori (MAP) configuration of a discrete probabilistic graphical model (PGM) with respect to perturbations of its parameters, and provided an algorithm for evaluating the robustness of the MAP configuration with respect to those perturbations. For a family of PGMs, obtained by perturbation, the critical perturbation threshold was defined as the maximum perturbation level that does not alter the MAP solution. In classification problems, these thresholds determine the level to which the classifier parameters can be altered without changing its prediction. The experiments in [57] empirically showed that instances with higher perturbation thresholds tend to have a higher chance of being classified correctly (when evaluated on instances with similar perturbation thresholds). We combined this property with DCS and applied it to classification in our earlier work [58], but only as a proof of concept for toy cases with binary classes and two classifiers. In a follow up work [59] we presented an abstract concept of how the robustness measures can be employed to improve the classification performance of DCS in HSI classification. Here we develop a novel robust DCS (R-DCS) model in a general setting with multiple classes and multiple classifiers, and use it to take into account the imprecision of the model that is caused by errors in the sample labels. The main novelty lies in interpreting erroneous labels as model imprecision and addressing this problem from the point of view of the robustness of PGMs to model perturbations; this also sets this work apart from our previous-more theoretical-work on robustness of PGMs [57][58][59], which did not consider the problem of erroneous label. The main issue with erroneous labels, also referred to as noisy labels [60,61], is that they mislead the model training and severely decrease the classification performance [62][63][64]. Recent works that address this problem usually focus on noisy label detection and cleansing [65][66][67]. However, detection of erroneous labels is never entirely reliable, and their correction even less so, especially when the sample labels are relatively scarce or spatially scattered across the image. Thus, it is imperative to study the robustness of classifier models under different levels of label noise and to understand how the performance of different classifiers deteriorates with label noise. We hypothesize that the framework of imprecise probabilities can offer a viable approach to improve a classifier's robustness to low-to-moderate amounts of label noise. Therefore, we build our robust DCS (R-DCS) model based on an imprecise probabilistic extension of a classical PGM. Particularly, we build on Naive Bayes Classifiers (NBCs), but it is possible to extend the proposed framework to other classification models. 
We use an adapted version of the Imprecise Dirichlet Model (IDM) [68] to perturb the local probability mass functions in the model to corresponding probability sets. This imprecise probabilistic extension of an NBC is called a Naive Credal Classifier (NCC) [69]. The amount of perturbation of such an NCC is determined by a hyperparameter that specifies the degree of imprecision of the IDM. The maximum value of the hyperparameter under which the NCC still remains determinate-yields a single prediction-is the perturbation threshold of the NCC. Such perturbation thresholds essentially show how much we can vary the local models of the NBC without changing the prediction result, thereby providing us with a framework for dealing with model uncertainty. The influence of label noise on the classification performance was studied earlier from a different perspective in [70,71]. The work in [70] showed experimentally that NBCs yield favourable performance in the presence of label noise compared to classifiers based on KNN, support vector machine (SVM) and decision trees [72]. These conclusions were based on empirical classification results on thirteen synthetic datasets, designed for several selected problems from the UCI repository [73]. The work in [71] empirically analyzed the effect of class noise on supervised learning on eight medical datasets. Only the end classification results were analysed in both works. Here we take a different approach and we characterise statistically the effect of erroneous labels on statistical distributions of features and on the estimated spectral signatures of landcover classes. The empirical results explain from this perspective clearly the reasons for the considerable robustness of NBCs to label noise and at the same time they also show how erroneous labels affect the actual conditional distributions of features given the class labels, introducing inaccuracies in the classification process. This motivates us to employ the framework of imprecise probabilities to develop a classifier that is more robust to model uncertainties. As a first step, we use this framework to analyze the effect of noisy labels on the robustness of the predictions of NBCs. Next, we put forward our robust DCS (R-DCS) model and first apply it to HSIs classification in the presence of noisy labels. We perform dynamic selection among two classifiers: one based on spectral and the other based on spatial features. Both of these are formulated as an NBC. Specifically, we apply a principal component analysis (PCA) to extract spectral features from HSIs, where the decorrelating property of PCA justifies the conditional independence assumption that NBC relies on. The spatial features are generated by applying morphological operators on the first five principal components (PCs). We also apply our R-DCS model to a multi-source data set that includes HSI and LiDAR data. For this data set, we perform dynamic selection among three classifiers: the two classifiers corresponding to HSI and a third classifier based on the elevation information in the LiDAR data. For the selection criteria for our R-DCS, we define three selection strategies-R-T, R-LA and R-EU-that differ in computational complexity. R-T simply selects the classifier with the highest perturbation threshold. While computationally efficient, this approach does not always perform well because the exact relation between perturbation thresholds and performance differs from one classifier to another. 
Two other strategies are proposed to improve upon this by determining empirical relations between the perturbation thresholds of different classifiers and their probabilities of correctly classifying the considered instance. Particularly, the empirical probabilities of correctly classifying the test sample are estimated based on the training samples that are closest to the test sample in terms of a given perturbation distance. In R-LA, the perturbation distance between two data samples is defined by the absolute value of the difference in their perturbation thresholds for a given classifier. R-EU defines this distance as the Euclidean distance in a space spanned by the perturbation thresholds of all the considered classifiers. R-EU is computationally more complex but outperforms the others in most cases of practical importance. Experimental results on three real data sets demonstrate the efficacy of the proposed model for HSIs classification in the presence of noisy labels. In the two HSI data sets, the R-EU strategy performs best among the three selection strategies when the label noise is relatively low, while R-T and R-LA offer better performance when the label noise is rather high. In the multi-source data set, the R-EU strategy always outperforms the other methods in all cases. The main contributions of the paper can be summarized as follows: (1) We characterise statistically the effect of label noise on the estimated spectral signatures of different classes in HSIs and on the resulting probability distributions (both prior probabilities and conditional probabilities given the class label). This analysis provides insights into the robustness of NBC models to erroneous labels, and at the same time it shows in which way errors in data labels introduce uncertainty in the involved statistical distributions of the classifier features. (2) We further analyze the effect of noisy labels on the robustness of NBCs and, in particular, on perturbation thresholds of the corresponding NCCs. The results show that this robustness decreases as the amount of label noise that is applied increases. (3) In order to cope with this decrease in robustness due to label noise, we propose a robust DCS model, dubbed R-DCS, which selects classifiers that are more robust, using different selection strategies that are based on the critical perturbation thresholds of the involved classifiers. In particular, we provide three possible selection strategies: R-T, R-LA and R-EU. Two of these (R-T and R-LA) already appeared in a preliminary version of this work [59], but in a more abstract set-up, without exploring their performance in the presence of label noise. The third selection strategy, R-EU, proposed here, is computationally more complex, but performs better than the other two in cases with low to moderate levels of label noise (up to 30%), which are of most interest in practice. (4) The proposed R-DCS models are validated on three real data sets. Compared to our conference paper [59], we take into account the label noise into HSI classification and conduct more experiments to evaluate the proposed model. The results reveal that the proposed model outperforms each of the individual classifiers it selects from and is more robust to errors in labels compared to some common methods using SVM and graph-based feature fusion. The rest of this paper is organized as follows. 
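A compact way to see how the three strategies differ is to write them down directly. In the sketch below (an illustrative reconstruction, not the authors' implementation; the array names and the neighbourhood size K are assumptions), thr_train[j, k] is the perturbation threshold of training sample j under classifier k, correct_train[j, k] indicates whether classifier k classified training sample j correctly, and thr_test[k] is the threshold of the query sample.

```python
import numpy as np

def select_RT(thr_test):
    """R-T: pick the classifier with the highest perturbation threshold on the query."""
    return int(np.argmax(thr_test))

def select_RLA(thr_train, correct_train, thr_test, K=50):
    """R-LA: for each classifier, estimate accuracy over the K training samples whose
    threshold (for that classifier) is closest to the query's, then pick the best."""
    acc = []
    for k in range(thr_train.shape[1]):
        d = np.abs(thr_train[:, k] - thr_test[k])        # 1-D perturbation distance
        nn = np.argsort(d)[:K]
        acc.append(correct_train[nn, k].mean())
    return int(np.argmax(acc))

def select_REU(thr_train, correct_train, thr_test, K=50):
    """R-EU: same idea, but neighbours are defined by the Euclidean distance in the
    joint space spanned by the thresholds of all classifiers."""
    d = np.linalg.norm(thr_train - thr_test[None, :], axis=1)
    nn = np.argsort(d)[:K]
    acc = correct_train[nn].mean(axis=0)                 # accuracy of each classifier on the neighbourhood
    return int(np.argmax(acc))
```

The three functions also make the computational ordering explicit: R-T needs only the query's thresholds, while R-LA and R-EU additionally require a nearest-neighbour search over the training thresholds.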
Section 2 contains preliminaries, reviewing briefly the basic concept behind the NBC, its imprecise-probabilistic extension the NCC, and the notion of perturbation thresholds that we build upon. Section 3 describes three representative (hyperspectral and hybrid) real data sets that we use in our experiments. The following two sections contain the main contributions of this work: Section 4 is devoted to the statistical characterisation of the influence of noisy labels on spectral signatures, on statistical distributions of the classifier features, and on perturbation thresholds. In Section 5, the proposed R-DCS model is described and three possible selection strategies based on imprecise-probabilistic measures are defined; the overall framework for robust dynamic classifier selection for hyperspectral images is presented and discussed. Experimental results on the three real data sets are reported in Section 6. The results demonstrate that the proposed model outperforms each classifier it selects from. Compared to competing approaches such as SVM and graph-based feature fusion, the proposed model proves to be more robust to label errors, inheriting this robustness from the NBCs that are at its core. A detailed discussion of the main results and findings is presented in Section 7, and Section 8 concludes the work.

Preliminaries

We first introduce the basic concept behind Naive Bayes Classifiers (NBCs) in this section. Next, imprecise extensions of NBCs, called Naive Credal Classifiers (NCCs), are introduced and their perturbation thresholds are defined.

Naive Bayes Classifiers

Let C denote the class variable taking values c in a finite set C. We denote by F_i the i-th feature variable taking values f_i in a finite set F_i, i ∈ {1, . . . , m}, where m is the number of features. For notational convenience, we gather all feature variables in a single vector F = (F_1, . . . , F_m) that takes values f = (f_1, . . . , f_m) in F_1 × · · · × F_m. For any given feature vector f, an NBC returns the Maximum a Posteriori (MAP) estimate of the class variable C, assuming the conditional independence P(f|c) = ∏_{i=1}^m P(f_i|c). The estimated class is thus

ĉ = argmax_{c ∈ C} P(c) ∏_{i=1}^m P(f_i|c).    (1)

The involved (conditional) probabilities are typically estimated from data. To avoid falsely estimated zero probabilities due to empirical estimation, we adopt the common method of Laplace smoothing [74][75][76], meaning that for all i ∈ {1, ..., m}, c ∈ C and f_i ∈ F_i:

P(c) = (n(c) + 1) / (n + |C|)   and   P(f_i|c) = (n(c, f_i) + 1) / (n(c) + |F_i|),    (2)

where n is the total number of data points, n(c) is the number of data points with class c, and n(c, f_i) is the number of data points with class c and i-th feature value f_i.

Naive Credal Classifiers and Perturbation Thresholds

The Naive Credal Classifier (NCC) [69] is an extension of the Naive Bayes Classifier to the framework of imprecise probabilities that can be used to robustify the inferences of an NBC. Basically, the idea is to consider an NBC whose local probabilities are only partially specified. In particular, instead of considering a probability mass function P(C) that contains the probabilities P(c) of each of the classes c ∈ C, an NCC considers a set of such probability mass functions, which we denote by P(C). Similarly, for every class c ∈ C and every i ∈ {1, . . . , m}, it considers a set P(F_i|c) of conditional probability mass functions. In general, these local sets can be learned from data, elicited from experts, or obtained by considering neighbourhoods around the local models of an NBC. We here consider the first option.
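For readers who want to experiment, a minimal NumPy implementation of an NBC with the Laplace smoothing of Eqs. (1)-(2) could look as follows; integer-coded discrete features are assumed, and this is an illustrative sketch rather than the authors' code.

```python
import numpy as np

class DiscreteNBC:
    """Naive Bayes classifier for integer-coded discrete features, following Eqs. (1)-(2)."""

    def fit(self, X, y, n_classes, n_vals):
        # X: (N, m) features with values in {0,...,n_vals-1}; y: (N,) labels in {0,...,n_classes-1}
        N, m = X.shape
        n_c = np.array([(y == c).sum() for c in range(n_classes)])
        # Laplace-smoothed priors: P(c) = (n(c)+1)/(n+|C|)
        self.log_prior = np.log((n_c + 1) / (N + n_classes))
        # Laplace-smoothed conditionals: P(f_i|c) = (n(c,f_i)+1)/(n(c)+|F_i|)
        self.log_cond = np.zeros((m, n_classes, n_vals))
        for i in range(m):
            for c in range(n_classes):
                counts = np.bincount(X[y == c, i], minlength=n_vals)
                self.log_cond[i, c] = np.log((counts + 1) / (n_c[c] + n_vals))
        return self

    def predict(self, X):
        # MAP estimate of Eq. (1), evaluated in log-space for numerical stability
        scores = np.tile(self.log_prior, (X.shape[0], 1))
        for i in range(X.shape[1]):
            scores += self.log_cond[i][:, X[:, i]].T
        return scores.argmax(axis=1)

# Example usage (hypothetical data):
# model = DiscreteNBC().fit(X_train, y_train, n_classes=6, n_vals=20)
# y_hat = model.predict(X_test)
```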
In particular, we use a version of the Imprecise Dirichlet Model (IDM) [68], suitably adapted such that it is guaranteed to contain the result of Laplace smoothing. In particular, P(C) is taken to belong to P(C) if and only if there is a probability mass function t on C such that P(C) is the convex combination of the Laplace-smoothed estimate in Eq. (2) and t, with mixing weights determined by a fixed hyperparameter s that sets the degree of imprecision. For every i ∈ {1, . . . , m} and c ∈ C, the local set P(F_i|c) is defined similarly.

If we now choose a single probability mass function P(C) in P(C) and, for every c ∈ C and i ∈ {1, . . . , m}, a single conditional probability mass function P(F_i|c) in P(F_i|c), we obtain a single NBC. By doing this in every possible way, we obtain a set of NBCs. This set is a Naive Credal Classifier (NCC) [69]. Classification for such an NCC is done by performing classification with each of the NBCs it consists of separately. If all these NBCs agree on which class to return, then the output of the NCC will be that class. If they do not agree, the result of the NCC is indeterminate and consists of a set of possible classes, amongst which it is unable to choose.

The maximum value of s for which the result of the NCC is still determinate is a particular case of the critical perturbation threshold defined in [57]. It provides a numerical indication of the robustness of the NBC's prediction with respect to changes to the probabilities that make up the model. Furthermore, it has also been observed that for any given instance, the corresponding critical perturbation threshold serves as a good indicator for the performance of the original NBC: instances with higher thresholds are classified correctly more often [57]. In the following, we denote this perturbation threshold by s^(per) and we compute it from the data at hand using the algorithm in [57].

Datasets

We conduct our experiments on two real HSI datasets, Salinas Scene and HYDICE Urban, and a multi-source data set, GRSS2013.

The Salinas Scene dataset was gathered by the AVIRIS sensor with 224 bands in 1998 over Salinas Valley, California. The original data set consists of 512 × 217 pixels with a spatial resolution of 3.7 m per pixel. It includes 16 classes in total. For our experiments, we select a typical region of size 100 × 80 shown with a false color image in Figure 1a. There are six classes in this region, as listed in Table 1, which also shows the number of labeled samples per class. Figure 1b shows the ground truth spatial distribution of these classes.

Table 1. Classes and numbers of labelled samples in the selected Salinas Scene region.
No.  Class Name              Labelled Samples
1    Brocoli-green-weeds-1   1014
2    Brocoli-green-weeds-2   652
3    Grapes-untrained        1965
4    Lettuce-romaine-4wk     711
5    Lettuce-romaine-5wk     930
6    Lettuce-romaine-6wk     229
     Total                   5501

The HYDICE Urban dataset was captured by the HYDICE sensor. The original data contains 307 × 307 pixels, each of which corresponds to a 2 × 2 m² area. There are 210 wavelengths ranging from 400 nm to 2500 nm, resulting in a spectral resolution of 10 nm. In our experiments, we use a part of this image with size 200 × 200, shown in Figure 1c. The number of bands was reduced from 210 to 188 by removing the bands 104-108, 139-151 and 207-210, which were seriously polluted by the atmosphere and water absorption. Detailed information on the available classes and the number of samples per class is given in Table 2. The ground truth classification is shown in Figure 1d.

Our third data set, GRSS2013, was a benchmark data set for the 2013 IEEE GRSS data fusion contest [77]. This data set, consisting of HSI and LiDAR data, was acquired over the University of Houston campus and the neighboring urban area in June 2012.
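To make the notion of a perturbation threshold concrete, the following self-contained sketch computes a conservative approximation of s^(per) for a single test instance by bisection. For a candidate value of s, each class score is bracketed by optimistic and pessimistic IDM-style bounds; the bounds are taken independently for each local model, which ignores some of the coupling between them and therefore only lower-bounds the true threshold, and the plain IDM form is used rather than the Laplace-adapted version. The NCC is called determinate when one class's pessimistic score beats every other class's optimistic score. This is a simplified stand-in for the exact algorithm of [57].

```python
import numpy as np

def class_feature_counts(X, y, n_classes, n_vals):
    """Raw counts n, n(c), n(c, f_i) used by the IDM-style bounds below."""
    N, m = X.shape
    n_c = np.array([(y == c).sum() for c in range(n_classes)], dtype=float)
    n_cf = np.zeros((m, n_classes, n_vals))
    for i in range(m):
        for c in range(n_classes):
            n_cf[i, c] = np.bincount(X[y == c, i], minlength=n_vals)
    return N, n_c, n_cf

def score_bounds(N, n_c, n_cf, x, c, s, eps=1e-12):
    """Conservative lower/upper log-scores of class c for instance x at imprecision s."""
    lo = np.log(n_c[c] / (N + s) + eps)
    hi = np.log((n_c[c] + s) / (N + s))
    for i, f in enumerate(x):
        lo += np.log(n_cf[i, c, f] / (n_c[c] + s) + eps)
        hi += np.log((n_cf[i, c, f] + s) / (n_c[c] + s))
    return lo, hi

def determinate(N, n_c, n_cf, x, s):
    """True if some class dominates all others even under the imprecision s."""
    b = [score_bounds(N, n_c, n_cf, x, c, s) for c in range(len(n_c))]
    return any(all(lo > hi2 for k, (_, hi2) in enumerate(b) if k != c)
               for c, (lo, _) in enumerate(b))

def perturbation_threshold(N, n_c, n_cf, x, s_max=100.0, iters=40):
    """Largest s (capped at s_max) at which the prediction stays determinate, via bisection."""
    if determinate(N, n_c, n_cf, x, s_max):
        return s_max
    lo, hi = 0.0, s_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if determinate(N, n_c, n_cf, x, mid) else (lo, mid)
    return lo
```

Because the dominance test only becomes harder to satisfy as s grows, determinacy is monotone in s, which is what makes the bisection valid.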
The HSI has 144 spectral bands and 349 × 1905 pixels, containing in total 15 classes as shown in Table 3. The false color image is shown in Figure 2a and the ground truth classification is shown in Figure 2b. (For the HYDICE Urban data set, the classes and labelled sample counts of Table 2 are: 1 Trees, 3123; 2 Concrete, 1410; 3 Soil, 637; 4 Grass, 4044; 5 Asphalt, 912; 10,126 labelled samples in total.) Robustness Analysis with Noisy Labels We now move on to analyze the effect of errors in labels on the estimation of spectral signatures of landcover classes and on the estimated statistical distributions of the classes and of features given the class labels. This analysis gives an insight into how errors in labels introduce uncertainty about the models that various classifiers rely upon. In order to study (and further on improve) the robustness to these model uncertainties, we adopt the framework of imprecise probabilities. Model Uncertainties Due to Noisy Labels Here we analyze the influence of label noise on the estimated spectral signatures of different classes in an image, as well as the effect on the estimated prior probabilities of the given classes and the conditional probability distributions of the features given the class label. The experiments here are conducted with NBCs. The NCC framework, which was introduced in Section 2.2, will be used further on to define the perturbation thresholds that will be employed in our new model. We conduct experiments on the two real HSIs described in Section 3 and shown in Figure 1. To reduce the data dimensionality, PCA is commonly applied to the original HSI data. We use discretized principal component values as the features for NBCs. Due to the decorrelating property of PCA, these features are conditionally independent given the class label, and thus conform to the assumption of NBC. In all the experiments, the PC values are uniformly discretized into twenty intervals. We define the level of label noise ρ as the proportion of training samples that have wrong labels. These erroneous labels are chosen at random in C \ {c} as in [63], with C the set of class values and c the true class. Figure 3 shows an illustration of introducing label noise in the Salinas Scene data set. The first PC is shown in Figure 3a. All the labelled samples in Class 1 are highlighted in Figure 3b. Next, we randomly select 50% of the highlighted samples as the training samples for Class 1 (Figure 3c). We also select at random a given portion ρ of the total training samples (from various classes) and flip each of them to one of the remaining classes at random. Figure 3d illustrates an instance of the resulting Class 1 labels for ρ = 0.5. Different colours denote different original classes of the training samples that were flipped to Class 1. Note that this choice of ρ is merely for clearer illustration purposes; a situation with 50% of wrong labels is unlikely to be relevant in practice. Figure 4 shows the average spectral signatures of each class on the two real HSI data sets, Salinas Scene and HYDICE Urban. These average spectral signatures are obtained by taking the mean value of the spectral intensities for each class. Without label noise, the spectral signatures of different classes show different trends. In the presence of label noise, the spectral signatures falsely appear to be more similar to each other, which inevitably affects the classification accuracy. Observe that when ρ = 0 the spectral responses of Class 1 and Class 2 in Salinas Scene are quite similar.
This is because the materials corresponding to theses two classes are similar (two types of brocoli). For HYDICE Urban, the spectral signatures without label noise are rather different from each other and the effect of erroneous labels is evident. In both cases, label noise obviously tends to uniformise all the spectral signatures as expected, because now each of them is computed from a mixture of different classes. Figure 5 shows average spectral signatures together with their standard deviation regions, for two different classes from HYDICE Urban, without label noise and with label noise ρ = 0.5. The solid line in the middle shows for a given class the mean value of the pixel intensities in each spectral band, whereas the shaded region denotes the standard deviation over all the labeled pixels in that class. With erroneous labels, the spread gets larger and the means of the intensities change as well. Figure 6 shows overlays of spectral signatures for ρ = 0 and ρ = 0.5 for the six classes from another test image (Salinas Scene). We observe again that errors in labels result in wider shaded regions (larger standard deviations) and in decreased differences between the average spectral signatures of different classes due to the mixing effect, echoing the results in Figure 4. Next, we analyze the effect of label noise on the estimated prior probabilities of different classes and the conditional probability distributions of the features given the class label. The prior probabilities and the conditional probabilities are computed by Equation (2). Each of the results is obtained as an average over 10 runs on different training samples. In each run, 50% of the labelled samples from each class are selected at random as training samples and their labels are perturbed at random according to the given ρ. Figure 7 shows the effect of label noise on prior probabilities of classes with ρ ∈ {0, 0.1, 0.3, 0.5} in the two data sets. While the actual prior probabilities of different classes are significantly different from each other, e.g., for the Salinas Scene data set, p(C = 3) = 0.36 and p(C = 6) = 0.04 for ρ = 0, these differences become smaller when label noise increases. Figures 8 and 9 show the probability distributions of the first two PCs in the two data sets, respectively, given the class label and with different levels of label noise ρ ∈ {0, 0.1, 0.3, 0.5}. For the Salinas Scene data set, when ρ = 0.5, the distribution conditioned on class 6 changed a lot in both PCs, since there are relatively few labelled samples from class 6 in this data set. The distributions conditioned on other classes keep a similar shape when increasing ρ from 0 to 0.5, but the peak values decrease and the distribution shape gets more and more flat compared to the distributions without label noise. The distributions of the second PC show similar behaviour, becoming more flat when the level of label noise increases. The conditional distributions for the HYDICE Urban data set show similar behavior to those for the Salinas Scene data set. The presented results provide insights into how erroneous labels affect the estimation of the spectral signatures per class as well as the prior probabilities of the classes and the distributions of the classifier features (i.e., the underlying statistical model that governs the operation of NBCs and other related statistical models) conditioned on the class label. 
The analysis of these results demonstrates clearly that erroneous labels lead to model uncertainties, which will in their turn affect the classification performance. In order to mitigate this, models that are robust to model uncertainty are needed. We build such a robust classifier using the framework of imprecise probabilities. In particular, we adopt NCCs, an extension of NBCs, which allows us to assess how robust an NBC is with respect to model uncertainty. A clever dynamic selection among multiple NBCs will then lead to a robust dynamic classifier selection approach that we advocate in this work. Impact of Noisy Labels on Perturbation Thresholds An NCC provides an elegant way to account for model uncertainties by extending the probability mass functions in an NBC to corresponding sets of probabilities. Recall that the perturbation threshold of an NCC s (per) is defined as the maximum value of s under which the NCC remains determinate. Here we analyze the effect of noisy labels on these perturbation thresholds. We conduct experiments on the Salinas Scene data set. Figure 10 shows the correlation between the level of label noise and the perturbation thresholds. Let Por(s (per) ; ρ) denote the portion of samples whose perturbation thresholds are larger than s (per) under the label noise level ρ. The family of curves Por(s (per) ; ρ)in Figure 10 shows clearly that the portion of samples whose perturbation thresholds are above a given level drops when the level of label noise increases. In other words, the perturbation thresholds for some samples get smaller when the level of label noise increases. This means that, as expected, the robustness of the classification will decrease when ρ is larger. Given the correlation between robustness and accuracy [57], this will result in a lower accuracy as well. In order to mitigate this drop in robustness (and hence accuracy), we will now develop a method that uses perturbation thresholds to minimize this unwanted effect. Figure 10. The effect of erroneous labels on perturbation thresholds of NCCs estimated empirically on the Salinas Scene data set. The portion of samples whose perturbation threshold is above a given value s (per) is plotted for different values of ρ and denoted by Por(s (per) ; ρ). Observe that this portion Por(s (per) ; ρ) drops when ρ is larger indicating that with increasing the label noise the performance becomes less reliable on more and more samples. Robust Dynamic Classifier Selection (R-DCS) In this section, we develop a robust dynamic classifier selection (R-DCS) model to improve the classification performance under noisy labels. We first introduce some notation and then propose three possible selection strategies to estimate the best classifier among a set of available ones. Finally, an application of these R-DCS models in hyperspectral image classification is presented. Notation Let Ψ = {ψ 1 , ψ 2 , ..., ψ L } be a pool of base classifiers that are used for DCS. In particular, each ψ l ∈ Ψ is an NBC. Let X = {x i } be a set of training samples and Y = {y i } a set of testing samples. The methods described below can be applied in a general context, but in our hyperspectral image application, the samples x i ∈ R m and y i ∈ R m are vectors composed of pixel values at a particular spatial location in m spectral bands. We denote by s (per) l,i the perturbation threshold of the l-th classifier (ψ l ) in sample i. 
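Given per-sample thresholds such as the s^(per)_{l,i} just introduced, the curves Por(s^(per); ρ) of Figure 10 are simply empirical survival curves over the samples. The sketch below shows this computation; the threshold values used here are placeholder random numbers, since real thresholds require the algorithm of [57].

```python
import numpy as np

def por_curve(thresholds, s_grid):
    """Portion of samples whose perturbation threshold exceeds each value in s_grid."""
    thresholds = np.asarray(thresholds, dtype=float)
    return np.array([np.mean(thresholds > s) for s in s_grid])

# Placeholder thresholds for two noise levels (in practice they come from the NCC algorithm of [57]):
rng = np.random.default_rng(0)
thr_clean = rng.gamma(shape=3.0, scale=1.0, size=1000)   # hypothetical thresholds at rho = 0
thr_noisy = rng.gamma(shape=1.5, scale=1.0, size=1000)   # hypothetical, smaller on average, at rho = 0.5

s_grid = np.linspace(0.0, 10.0, 50)
print(por_curve(thr_clean, s_grid)[:5])
print(por_curve(thr_noisy, s_grid)[:5])  # drops faster, mirroring the behaviour described above
```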
Selection Strategies for R-DCS The key idea of a DCS is to try and find the classifier with the highest probability of being correct for a given unseen sample. We propose a classifier selection approach making use of the observation that, for a given fixed classifier, instances with higher perturbation thresholds tend to have a higher chance of being classified correctly. Based on this general concept, we provide three concrete selection strategies employing perturbation thresholds, as follows. The R-T Strategy In order to select the most competent classifier among a set of available ones, a first idea is simply to choose the classifier with the highest perturbation threshold for each sample. We refer to this strategy as R-T. Let λ_j ∈ {1, . . . , L} denote the index of the base classifier that will be assigned to sample j. The R-T strategy selects for each test sample y_j the classifier ψ_{λ_j} ∈ Ψ that exhibits the highest perturbation threshold: λ_j = arg max_{l ∈ {1, . . . , L}} s^(per)_{l,j}. The R-LA Strategy Instead of analysing each sample individually, we now take into account a local surrounding region of the image sample. This local surrounding includes the nearest neighbors of the test sample in terms of a given distance. First we define this distance metric for each classifier separately, and we refer to this strategy as R-LA. In particular, for each classifier we choose N training samples whose perturbation thresholds are closest to that of the test sample. To that end, we define the perturbation distance d_l(x_i, x_j) between two data samples x_i and x_j as the absolute value of the difference between their perturbation thresholds for the classifier l: d_l(x_i, x_j) = |s^(per)_{l,i} − s^(per)_{l,j}|. Furthermore, we let N_{l,j} be the set of N training samples that are the nearest neighbors of y_j in terms of d_l(x_i, y_j). For each sample y_j to be classified, we then determine the most competent classifier ψ_{λ_j} as λ_j = arg max_{l ∈ {1, . . . , L}} |Ñ_{l,j}|/N, where Ñ_{l,j} is the subset of N_{l,j} composed of those training samples that are correctly classified by ψ_l. Figure 11a illustrates this strategy with an artificial example with twenty training instances and two classifiers. The R-EU Strategy The R-EU strategy also aims to choose a classifier based on a local surrounding of the image sample in terms of the perturbation thresholds, but now a common set of nearest neighbors is defined based on a single perturbation distance. We define this common perturbation distance as the Euclidean distance in the space spanned by the perturbation thresholds of the different classifiers: d_eu(x_i, x_j) = √( Σ_{k=1}^{L} (s^(per)_{k,i} − s^(per)_{k,j})² ), where s^(per)_{k,i} and s^(per)_{k,j} are the threshold values of the k-th classifier for the samples x_i and x_j, respectively, and L is the total number of classifiers. Now let N_{eu,j} be the set of N training samples that are the nearest neighbors of y_j in terms of d_eu(x_i, y_j). For each sample y_j to be classified, we then estimate the most competent classifier ψ_{λ_j} as λ_j = arg max_{l ∈ {1, . . . , L}} |Ñ_{eu,l,j}|/N, where Ñ_{eu,l,j} is the subset of N_{eu,j} composed of those training samples that are correctly classified by ψ_l. Figure 11b illustrates this strategy with a fictitious example involving twenty training samples and two classifiers. Discussion of the Proposed Strategies Among the three proposed selection strategies for the R-DCS model, the R-T strategy is the simplest one. It operates on each sample separately, selecting the best classifier according to the classifiers' perturbation thresholds for that particular sample. However, it ignores the fact that the exact relation between perturbation thresholds and performance may differ from one classifier to another. A compact sketch of all three strategies, assuming the thresholds have already been computed, is given below.
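The following sketch assumes the perturbation thresholds of all L classifiers have already been computed for the training and test samples. The matrix layout (classifiers in rows, samples in columns), the helper names and the tie-breaking by the first maximiser are assumptions, not the authors' code.

```python
import numpy as np

def r_t(s_test):
    """R-T: per test sample, pick the classifier with the largest threshold.
    s_test: (L, n_test) thresholds of the L classifiers on the test samples."""
    return np.argmax(s_test, axis=0)

def r_la(s_train, s_test, correct, n_neighbors):
    """R-LA: per classifier, find the training samples whose thresholds are closest to
    that of the test sample, and pick the classifier with the most correct neighbours.
    s_train: (L, n_train), s_test: (L, n_test), correct: (L, n_train) booleans."""
    L, n_test = s_test.shape
    scores = np.zeros((L, n_test))
    for l in range(L):
        for j in range(n_test):
            d = np.abs(s_train[l] - s_test[l, j])             # perturbation distance d_l
            nn = np.argsort(d)[:n_neighbors]
            scores[l, j] = correct[l, nn].mean()              # local accuracy of classifier l
    return np.argmax(scores, axis=0)

def r_eu(s_train, s_test, correct, n_neighbors):
    """R-EU: one common neighbourhood per test sample, defined by the Euclidean
    distance in the space of the L threshold values."""
    L, n_test = s_test.shape
    scores = np.zeros((L, n_test))
    for j in range(n_test):
        d = np.linalg.norm(s_train - s_test[:, [j]], axis=0)  # common distance d_eu
        nn = np.argsort(d)[:n_neighbors]
        for l in range(L):
            scores[l, j] = correct[l, nn].mean()
    return np.argmax(scores, axis=0)
```

In the full model, `correct` would record whether each base NBC classifies each training sample correctly, and the thresholds would come from the algorithm of [57].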
The other two strategies, R-LA and R-EU, improve upon this by incorporating information from a local surrounding of the test sample in the perturbation threshold space. R-LA, which is based on the absolute distance between the samples, is computationally simpler than R-EU, which is based on Euclidean distances. Notably, R-T is a parameter-free method, and for R-LA and R-EU only a single parameter N needs to be chosen, which is particularly convenient from a practical point of view. R-DCS in Hyperspectral Image Classification We apply the proposed R-DCS model in hyperspectral image classification. The proposed model can be seen as an ensemble learning method, which consists of classifiers with different input features. We conduct experiments in the subsequent sections on three real remote sensing data sets, including two HSIs and one multimodal data set (HSI+LiDAR). In the experiments with HSIs alone, we employ the spectral and spatial features of HSIs as the inputs of our method, as illustrated in Figure 12. In the experiment with HSI and LiDAR, apart from the two HSI feature types, an additional feature carrying altitude information of objects, derived from LiDAR, is utilized. In Figure 12, we construct two classifiers, one operating on spectral and the other on spatial features. PCA is employed for spectral feature extraction and morphological profiles [78] for spatial feature extraction. A morphological profile is constructed by the repeated use of morphological openings and closings with a structuring element of increasing size. In this work, we extract spatial features by morphological profiles composed of morphological openings and closings with partial reconstruction on PCs, similarly to [78,79]. Note that in the general case, the R-DCS framework allows us to assign an arbitrary number of feature vectors to every pixel and feed each of those feature vectors to its own classifier. In the particular scheme from Figure 12, however, we assign to every pixel in an HSI two feature vectors: one composed of spectral features and the other composed of spatial features. For each of these feature vectors, the corresponding perturbation threshold is calculated, as explained in Section 2.2, using the algorithm of [57]. A dynamic classifier selection is then conducted for each test sample according to one of the selection strategies from Section 5.2. In the case of more than two types of features from one or more data sources, we need to calculate the perturbation thresholds for L ≥ 2 feature vectors following the same procedure as above and apply the same selection strategies, which are already formulated in general for an arbitrary number L of classifiers. Algorithm 1 shows the entire process of our R-DCS model for the case with L classifiers and selection according to Equation (7). Experimental Results in HSI Classification We evaluate the performance of our methods on three real data sets: two HSI data sets (Salinas Scene and HYDICE Urban) and a multi-source data set (GRSS2013), which contains HSI and LiDAR data. The details of the three data sets were described in Section 3. In the following experiments, we extract the first 50 PCs for the spectral features as a compromise between performance and complexity for all the methods; a simplified sketch of the two-branch (spectral and spatial) feature extraction is given below.
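The sketch below illustrates the two-branch feature extraction in a simplified form: PCA for the spectral branch and a plain morphological profile (openings and closings with disk-shaped structuring elements of growing radius) on the first principal components for the spatial branch. Unlike the profiles with partial reconstruction used in the paper, plain openings and closings are used here; the library choices (scikit-learn, scikit-image) and the random cube standing in for an HSI are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.morphology import opening, closing, disk

def spectral_features(cube, n_components=50):
    """PCA on the spectral dimension of an (H, W, B) hyperspectral cube."""
    h, w, b = cube.shape
    pcs = PCA(n_components=n_components).fit_transform(cube.reshape(-1, b))
    return pcs.reshape(h, w, n_components)

def spatial_features(cube, n_pcs=5, radii=(2, 4, 6, 8, 10)):
    """Simplified morphological profile: openings/closings of the first PCs
    with disk-shaped structuring elements of increasing radius."""
    pcs = spectral_features(cube, n_components=n_pcs)
    profile = []
    for i in range(n_pcs):
        band = pcs[..., i]
        for r in radii:
            profile.append(opening(band, disk(r)))
            profile.append(closing(band, disk(r)))
    return np.stack(profile, axis=-1)   # (H, W, n_pcs * 2 * len(radii))

# Example with a random cube standing in for an HSI:
cube = np.random.rand(60, 40, 120)
print(spectral_features(cube).shape, spatial_features(cube).shape)
```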
The morphological profiles for spatial features are generated from the first 5 principal components (representing more than 99% of the cumulative variance) of the HSI data, with 5 openings and closings using a disk-shaped structuring element (SE) with radius ranging from 2 to 10 in steps of 2. The morphological profiles for elevation features are generated from the LiDAR data, with 25 openings and closings using a disk-shaped SE with radius ranging from 2 to 50 in steps of 2. The values of each PC and morphological profile are uniformly discretized into 10 intervals. We compare the proposed R-DCS model under different selection strategies with the following schemes: (1) NBC implemented with spectral features alone (NBC-Spe), with spatial features alone (NBC-Spa) and with elevation features alone (NBC-LiDAR). (2) K-nearest neighbors (KNN) classifier with spectral features of HSIs; the number of neighbors is obtained by five-fold cross validation over the training samples. (3) Support vector machine (SVM) classifier with a polynomial kernel, applied to spectral features. (4) The generalized graph-based fusion (GGF) method of [29], which makes use of all types of features. Observe that GGF and the proposed R-DCS combine different types of features, while the other methods use a single type of features. The only parameter of the proposed model is the number of neighbors N in the R-LA and R-EU methods. We estimate this parameter by five-fold cross validation over the training samples, as we do for the KNN method. Three widely used performance measures are used for quantitative assessment: overall accuracy (OA), average accuracy (AA) and the Kappa coefficient (κ). Overall accuracy is the ratio between correctly classified testing samples and the total number of testing samples. Average accuracy is obtained by first computing the accuracy for each class and then considering the average of these accuracies. The Kappa coefficient, finally, measures the level of agreement between the ground truth and the classification result of the classifier [80]: κ = 1 corresponds to complete agreement and hence a perfect classifier, whereas κ = 0 corresponds to a classifier that ignores the feature vector and classifies purely at random. Let n_{i,j} be the number of testing samples in class i that are labeled as class j by the classifier. Then OA, AA and κ are computed as OA = (1/n_t) Σ_i n_{i,i}, AA = (1/n_c) Σ_i n_{i,i}/n_{i,+} and κ = (OA − EA)/(1 − EA), where n_t := Σ_i Σ_j n_{i,j} is the total number of testing samples, n_c is the number of classes, n_{i,+} := Σ_j n_{i,j} is the number of testing samples in class i, EA = Σ_i (n_{i,+}/n_t)(n_{+,i}/n_t) is the expected accuracy of a classifier that ignores the feature vector, and n_{+,i} := Σ_j n_{j,i} is the number of testing samples that are classified in class i (a small computational sketch of these measures is given below). In the following experiments, 10 percent of the labeled samples are used for training and the rest are used for testing. The reported experimental results are averages over 10 runs with different training samples. Experiments on the HYDICE Urban Data Set The first experiment is conducted on the HYDICE Urban data set. The reference classes and their corresponding numbers of labeled training and testing samples are shown in Table 4. The false color image and the ground truth are shown in Figure 1c,d. The classification results for the HYDICE Urban data set with different levels of label noise are listed in Table 5, where the best result is marked in bold. All three proposed strategies outperform the basic classifiers they select from at all considered levels of label noise.
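Returning briefly to the evaluation measures defined above, a minimal sketch of computing OA, AA and κ from a confusion matrix is given next; the function name and the toy confusion matrix are illustrative.

```python
import numpy as np

def oa_aa_kappa(conf):
    """conf[i, j]: number of testing samples of class i labelled as class j."""
    conf = np.asarray(conf, dtype=float)
    n_t = conf.sum()
    per_class_acc = np.diag(conf) / conf.sum(axis=1)            # n_{i,i} / n_{i,+}
    oa = np.diag(conf).sum() / n_t                               # overall accuracy
    aa = per_class_acc.mean()                                    # average accuracy
    ea = np.sum(conf.sum(axis=1) * conf.sum(axis=0)) / n_t**2    # expected accuracy EA
    kappa = (oa - ea) / (1.0 - ea)
    return oa, aa, kappa

print(oa_aa_kappa([[50, 10], [5, 35]]))  # e.g. OA = 0.85
```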
The proposed R-DCS model with the R-EU strategy outperforms the others in most cases on this data set. When the level of label noise is 0, the GGF method performs the best, and the two NBC models, NBC-Spe and NBC-Spa, are inferior to all other methods. However, all three proposed strategies improve upon the performance of NBC, with about 3% improvement over NBC-Spe and 5% improvement over NBC-Spa for the R-EU strategy. With increasing levels of label noise, the accuracy of the GGF method decreases heavily, from 94% to 64%, and similarly for SVM, from 91% to 58%. Remarkably, all of our methods show only a slight decrease in OA, within 5%. The more complex strategy R-EU outperforms the simpler ones R-T and R-LA, especially in the cases with fewer errors in labels. Table 5 also demonstrates that the proposed methods mostly outperform (and the R-EU strategy always outperforms) all the reference methods in terms of all the performance measures (OA, AA and κ) in all cases where label noise exists. Table 6 shows the classification accuracy per class for the HYDICE Urban data set with different classifiers. We compare ρ = 0 and ρ = 0.5 to study the change in accuracy for each class in the presence of label noise. The results show that the class-specific accuracies mostly drop significantly due to the effect of label noise. Class 2 is an exception: label noise there increases the accuracies for NBC-Spe, NBC-Spa, KNN, SVM and R-T. We can also see from Table 6 that when ρ = 0, the GGF method yields the highest accuracy in each class. Our proposed methods R-LA and R-EU outperform NBC-Spe and NBC-Spa in Class 2 with about 20% improvement. When ρ = 0.5, all the methods perform badly in Class 3 due to the limited number of training samples. Our proposed methods show competitive accuracies in the first three classes, and the R-LA method performs the best in Class 4 and Class 5. Table 4. Reference classes for the HYDICE Urban data set (No., class name, training/testing/total labelled samples): 1 Trees, 312/2811/3123; 2 Concrete, 141/1269/1410; 3 Soil, 64/573/637; 4 Grass, 404/3640/4044; 5 Asphalt, 91/821/912; Total, 1013/9113/10,126. In Figure 13a we compare the performance of our best strategy for the HYDICE Urban data set, R-EU, with the reference methods. With increasing levels of label noise, the performance of GGF and SVM deteriorates significantly, while the NBCs, KNN and the proposed methods are more stable in terms of OA. The classification accuracy of all three analysed strategies is depicted in Figure 13b for different levels of label noise. All the strategies outperform the individual NBCs (NBC-Spe and NBC-Spa) that they select from. When the level of label noise is less than 0.5, the R-EU method performs better, while the R-T and R-LA methods achieve higher accuracy at ρ = 0.5. Compared to NBC with spatial features, the accuracy improvement is above 3% at the different levels of label noise, which is significant in the task of HSI classification. Compared to the most competitive reference methods, SVM and GGF, a substantial improvement is obtained in the presence of label noise. Experiments on the Salinas Scene Data Set The second experiment is conducted on the Salinas Scene data set. Information about the classes and the numbers of samples used for training and testing is listed in Table 7. The false color image and ground truth are shown in Figure 1. The classification results for the Salinas Scene data set with different levels of label noise are depicted in Table 8 and Figure 14.
The three strategies within the proposed method perform similarly due to the better performance of NBCs on this data set. The proposed R-DCS model under any of the three presented strategies outperforms all the other methods at every level of label noise, even when the level of label noise is 0. When the level of label noise increases, the accuracy of SVM drops heavily from 98% to 69%, and similarly for GGF the accuracy drops from 99% to 87%. The KNN's accuracy drops only a little in this data set from 94% to 92%. For the proposed model, with any of the three selection strategies, the decrease in the accuracy is also only about 2%. It is interesting to notice that with lower levels of label noise, the R-EU strategy performs better than R-T and R-LA, while the opposite is true at larger levels of label noise. Experiments on GRSS2013 Data Set The third experiment is conducted on the GRSS2013 data set. Information about the classes and number of samples used for training and testing are listed in Table 9. The false color image and ground truth are shown in Figure 2. The classification results for the GRSS2013 data set with different levels of label noise are depicted in Table 10 and Figure 15. Our proposed methods and the representative GGF method combine the spectral features, spatial features and elevation features in these experiments. The results show that our R-EU method yields the best performance in terms of OA, AA and κ at each level of label noise. R-LA outperforms the three NBCs, i.e., NBC-Spe, NBC-Spa and NBC-LiDAR, at low levels of label noise (ρ ≤ 0.2), but performs worse than NBC-Spa when ρ > 0.2. When label noise rises from 0 to 0.5, the OA of SVM drops heavily from 85% (ρ = 0) to 54% (ρ = 0.5) and similarly GGF has an OA decrease of 39%. KNN proves to be much more robust to the label noise-its OA decreases only by 3% in the same range of ρ values. Among our methods, R-T does not yield good results on this data set. Compared with R-LA, R-EU performs consistently better, demonstrating the effectiveness of the R-EU strategy. Performance at Extremely Large Levels of Label Noise The previous analysis showed that the performance of NBC-based classifiers and our approach that is build upon NBC remains remarkably stable even at large levels of label noise (ρ = 0.5). To explain this behaviour and to explore at which levels of label noise this performance starts to drop, we also perform experiments with extremely large levels of label noise (ρ > 0.5). For these experiments, we choose the selection strategy that yields the best performance the most times (R-T for the two HSI data sets and R-EU for the GRSS2013 data set), and compare it with other methods. Figure 16 shows the performance of the resulting classifiers under different levels of label noise on the three analysed data sets. In the three data sets, the overall accuracy of NBC-Spe decreases gradually with increasing label noise in the range ρ < 0.5, and it drops abruptly afterwards reaching a value near zero when ρ = 0.9. The reason for this sharp decrease can be attributed to the flattening of the conditional densities for larger ρ as shown in Figures 17 and 18 for the first PC in the two HSI data sets. (Note that the statistical distributions in Figures 17 and 18 as well as the statistical distributions in Section 4 were obtained with 50% of the labeled samples per class in order to allow for a more reliable empirical estimation of the corresponding distributions.) 
NBC-Spa and our method R-T in the two HSI data sets show a similar evolution in both data sets and have a sudden drop in OA around ρ = 0.7. KNN and GGF suffer from a significant performance drop at ρ = 0.6, while SVM shows approximately linear decrease in both datasets. The trend exhibited by NBCs and our proposed method is much better than a linear decrease in the accuracy, because when the accuracy drops below a certain level the exact values do not matter anymore as all methods are useless in that range. In the GRSS2013 data set, NBCs with different types of features and our method R-EU show a similar evolution and have a sudden drop in OA around ρ = 0.7. KNN shows behaviour that is similar to NBC-Spe, while SVM and GGF almost decrease linearly in this data set. Discussion The experimental results presented in the previous section as well as the statistical characterization in Section 4, provide new insights into the effects of label noise on supervised classification of hyperspectral remote sensing images. Empirical conditional probability distributions of HSI features conditioned on class labels, as well as prior distributions of class labels, exhibited graceful flattening with increasing amounts of label noise. Their evolution clarifies why Bayesian classifiers such as the relatively simple NBCs are much more robust to label noise than some other, more complex methods. These results are consistent with previous findings from [70,71], where it was experimentally established that NBCs yield better classification accuracy in the presence of label noise compared to classifiers based on KNN, SVM, and decision trees. Our experimental analysis shows also clearly that incorporating spatial features into the classification process not only improves the classification accuracy but increases robustness to label noise as well. It is well established that using spatial information typically improves the classification accuracy, and some recent works that addressed HSI classification in the presence of noisy labels [66,81] also incorporate spatial features. We addressed the effect of label noise from the point of view of the robustness of probabilistic graphical models to model perturbations. We built a robust dynamic classifier selection method upon this reasoning. The proposed approach enjoys remarkable robustness to label noise, inherited from the naive Bayesian classifiers that lie at its core. We instantiated our general approach with three particular selection strategies that have different levels of complexity. The proposed approach improves upon the NBCs that it combines and lends itself to incorporating easily multiple data sources and multiple types of features. Both NBC and the proposed robust dynamic classifier exhibit a characteristic trend in the presence of label noise: the classification accuracy decreases very slowly until the level of label noise becomes excessively high (60% or more erroneous labels) and then it drops abruptly. The evolution of the probability distributions of HSI features and estimated priors for class labels provides a nice explanation for this behaviour as was pointed out in the previous section. Interestingly, classifiers based on SVM and on an advanced graph fusion method show a faster decrease of the classification accuracy with increasing levels of label noise. 
While these more sophisticated classifiers outperform the other analysed ones in the case of ideally correct labels, they appear to be rather more vulnerable to label noise and become inferior to NBCs, KNN and the proposed approach already when a small percentage of labels are erroneous. Based on these results and findings, we believe that the following research directions are interesting to explore: (1) Analyzing the performance of more advanced Bayesian classifiers, e.g., based on Markov Random Fields, in the presence of label noise. (2) Exploring which levels of label noise are acceptable for a given tolerance in the classification accuracy and how robust are different learning models in this respect. This can significantly help in practice for optimizing the resources and ensuring the prescribed tolerance levels. (3) Deep learning methods are becoming the dominant technology for supervised classification. It is well known already that these models tend to be extremely susceptible to various degradations in the data such as noise and to different adversarial attacks. It would be of interest to study thoroughly their behaviour in the presence of label noise. Motivated by the excellent performance of Bayesian models to erroneous labels, a natural idea to explore is how Bayesian approaches can be incorporated to improve the robustness of deep learning methods to label noise. Conclusions In this work, we started by analysing the effect of errors in data labels on the estimation of spectral signatures of landcover classes, on the estimated statistical distributions of features given the class labels and on the prior probabilities of the given classes. The analysis reveals that NBCs are remarkably robust to label noise but also that erroneous labels introduce uncertainties to models, which inevitably deteriorate the performance of all classifiers, including NBCs. To deal with the imprecision of the model that is caused by errors in the sample labels, we proposed a novel, robust dynamic classifier selection model, that we refer to as R-DCS. The R-DCS model is based on imprecise-probabilistic robustness measures and was applied to HSIs classification in the presence of errors in data labels. Three possible selection strategies are presented for the R-DCS based on the robustness measures. All the provided strategies outperform the classifiers they select from, but their performance differs for different levels of label noise. The R-EU strategy performs better than the other two in most cases, while R-T and R-LA enjoy the benefit of lower computational complexity than R-EU. The experimental results also demonstrate that the proposed model is more robust to label noise compared to some common classification approaches such as KNN, SVM and graph-based feature fusion.
12,570.6
2020-09-01T00:00:00.000
[ "Environmental Science", "Computer Science", "Engineering" ]
Decision Rules Construction: Algorithm Based on EAV Model In the paper, an approach for decision rule construction is proposed. It is studied from the point of view of the supervised machine learning task, i.e., classification, and from the point of view of knowledge representation. The generated rules provide classification results comparable to the dynamic programming approach for optimization of decision rules relative to length or support. However, the proposed algorithm is based on a transformation of the decision table into the entity–attribute–value (EAV) format. Additionally, a standard deviation function computed over the average values of attributes in particular decision classes was introduced. It allows selecting from the whole set of attributes only those which provide the highest degree of information about the decision. Construction of decision rules is performed based on the idea of partitioning a decision table into corresponding subtables. In contrast to the dynamic programming approach, not all attributes need to be taken into account, but only those with the highest values of standard deviation per decision class. Consequently, the proposed solution is more time efficient because of its lower computational complexity. In the framework of the experimental results, the support and length of decision rules were computed and compared with the values for optimal rules. The classification error for data sets from the UCI Machine Learning Repository was also obtained and compared with the one for the dynamic programming approach. The performed experiments show that the constructed rules are not far from the optimal ones and that the classification results are comparable to those obtained in the framework of the dynamic programming extension. Introduction Currently, data amounts grow constantly and uncontrollably, practically in every domain of life. This results in a demand for efficient and fast methods of their analysis. Generally speaking, raw data occupy more and more disk space, but without converting them to any kind of useful knowledge, they are just redundant. The question is then how to derive knowledge from raw data. Data mining as a scientific discipline tries to answer it. This domain has been developed for many years already; as a result, there are plenty of existing methods for multiple applications [1][2][3][4]. Nevertheless, with the growth of data amounts, the existing methods also need to be further developed and new approaches need to be proposed. The process of knowledge extraction from data sets is called learning. Learning can basically be divided into two subcategories: supervised and unsupervised learning. The main difference is that with supervised learning, there is supervision of the learning process (which in practice means that data sets are labeled and a label is assigned to each of the items from the set under consideration). As for unsupervised learning, the labeling does not exist, and the task is basically to find any relations between data items. There are also so-called semisupervised learning methods that consider both labeled and unlabeled records [5]. In this article, we study one of the most popular supervised learning methods, i.e., decision rule construction. Such rules are a well-known and popular form of knowledge representation, often used in the framework of supervised machine learning. In the general case, the decision rules studied in this paper are expressed as follows [6]: IF conditions THEN conclusion.
The "IF" part of a rule is known as the rule antecedent (premise part) and it contains one or more conditions (pairs attribute = value) connected by conjunction. The "THEN" part is the rule consequent, which presents a class label. Based on the conditions included in the premise part, the rule assigns class labels to objects. An example would be a rule which predicts when a student enjoys skating: IF (Temperature = low) AND (Ice_Skating_Rink = yes) THEN (Enjoy_Sport = yes). Decision rules are popular because of their form, which is simple and easily accessible from the point of view of understanding and interpreting the knowledge represented by them. One of the most popular evaluation measures is length, which corresponds to the number of descriptors (pairs attribute = value) on the left-hand side of the rule. Another popular measure is support, which represents the number of objects from the learning set that match the rule. There are also many other indicators for the evaluation of decision rules [7][8][9]; however, in this paper, these two are considered. The construction of short rules with good support is an important task considered in this work. In particular, the choice of short rules is connected with the minimum description length principle [10]: "the best hypothesis for a given set of data is the one that leads to the largest compression of data". Support allows discovering major patterns in the data. These two measures are interesting from the point of view of knowledge representation and classification. Unfortunately, the problems of minimization of length and maximization of support of decision rules are NP-hard [11][12][13]. Most approaches for decision rule construction, with the exception of brute force, Boolean reasoning, and dynamic programming, cannot guarantee the construction of optimal rules, i.e., rules with minimum length or maximum support. The main drawback of such approaches is the limitation on the size of data that can be handled if the user would like to obtain a solution in some acceptable time. For this reason, the authors propose a heuristic algorithm for decision rule construction. The generated rules should be good enough from the point of view of knowledge representation and classification. The paper consists of five main sections. The Introduction, which contains the authors' contribution, is followed by the Related Works section. Then, in the Materials and Methods section, the proposed approach and main notions, including the entity-attribute-value model, the standard deviation function, the algorithm for construction of rules and the computational complexity analysis, are described. In Section 4, the results of experiments connected mainly with the evaluation of the constructed decision rules using length, support and classification error are presented. Finally, the conclusions are discussed in Section 5. Contribution In this paper, we propose an algorithm for decision rule construction that belongs to the group of heuristics, as it constructs approximate rules, but which is rooted in the dynamic programming extension-based approaches. There exists some similarity between these methods, connected with the partitioning of the decision table into subtables; however, the main difference lies in the selection of attributes and the construction of rules which are close to optimal ones. For this reason, the experimental results obtained for the proposed approach were compared with the results known for the dynamic programming extensions from the point of view of knowledge discovery and knowledge representation.
It was shown that the proposed approach allows us to reduce the number of attributes under consideration even to 60% of the whole set of attributes and to obtain classification results comparable to the ones obtained by the dynamic programming extension. The idea of the dynamic programming approach for decision rule optimization is based on partitioning a decision table into subtables which are created for each value of each conditional attribute. In this way, a directed acyclic graph is obtained whose nodes correspond to subtables and whose edges are labeled by values of attributes. Based on the graph, the so-called irredundant decision rules are described, i.e., rules with minimal length or maximal support. However, if the number of attributes and their values is large, the size of the graph (the number of nodes and edges) is huge. Therefore, obtaining an exact solution within a reasonable time is not always achievable. In the proposed approach, the stage of constructing a graph based on which decision rules are described is omitted. The decision table is partitioned into subtables, but only for the values of the selected attributes. Moreover, to accelerate calculations, the decision table is transformed into the so-called entity-attribute-value model [14]. The selection of attributes and their evaluation is based on the analysis of the spread of the standard deviation of their values across decision classes. If the value of the standard deviation is high, there is a high possibility that the attribute can distinguish objects with different decisions. Based on the values of the selected attributes, corresponding subtables of the input table are created. The process of partitioning the corresponding subtables is finished when all rows in a given subtable have the same class label or all values of the selected attributes have been considered. Then, decision rules are created based on the corresponding values of the selected attributes. In some of the authors' previous works [15][16][17], a modification of the dynamic programming approach for decision rule optimization was proposed; however, that idea was connected with decreasing the size of the graph. Subtables of an input decision table were constructed for one attribute with the minimum number of values, and for the rest of the attributes, the most frequent value of each attribute (the value of an attribute attached to the maximum number of rows) was selected. In the presented approach, the idea of selecting attributes is different and the construction of the graph is omitted. Related Works There exists a variety of approaches for the construction of decision rules. The approach used depends on the aim for which the rules are constructed. The two main perspectives are knowledge representation and knowledge discovery [18]. Since the aims are different, the algorithms for construction of rules, with their many modifications, are different, and the quality measures for evaluating such rules are also different. Basically, approaches for the construction of decision rules can be divided into two categories: • approaches allowing one to obtain exact rules on the data set under consideration, • approaches generating approximate rules, not perfectly fitting the learning set, but aiming to create rules applicable for general use (these methods will be called heuristics further in this work).
The first group contains algorithms which allow one to obtain all decision rules based on an exhaustive strategy: the brute-force approach, which is applicable to decision tables with a relatively small number of attributes, Boolean reasoning [19], and the dynamic programming approach [20,21] proposed in the framework of rough set theory [22]. Among the second group of methods, there are popular algorithms based on a sequential covering procedure [23][24][25]. In this case, decision rules are created and added to the set of rules iteratively until all examples from the training set are covered. Among them, we can distinguish the general-to-specific search methods, e.g., the CN2 [26] and PRISM [27] algorithms. In the framework of the directional general-to-specific search approach, the AQ family of algorithms can be indicated [28]. The RIPPER algorithm [29] is an example of a search method with pruning. LEM2 belongs to methods based on reducts from rough set theory [30]. There are also plenty of heuristics which are useful in situations where it is difficult to find an exact solution in acceptable time. However, such algorithms often produce a suboptimal solution since they cannot avoid local optima. Popular approximate approaches include greedy algorithms [31][32][33] and the whole group of biologically inspired methods, among which the following are worth mentioning: genetic algorithms [34][35][36], ant colony optimization algorithms [37,38], swarm-based approaches [39] and many others, as described in [40,41]. The task of constructing a decision rule is similar to the task of finding the features that define entities of some category. Attributes can be considered as functions which map a set of objects into a set of attribute values. The premise part of a decision rule contains one or more conditions in the form attribute = value. The consequent part of a rule represents a category. For both tasks, there may be a problem of how to choose the features that best describe a given category (concept). The constructed rules can be considered as patterns that cover many situations and match as many examples as possible, so they should be short in terms of the number of conditions, to allow some generalization. However, such rules should also take into account unusual situations, ensuring the correct classification of such objects. Often, the same category is described by more than one rule. Therefore, the set of rules learned from data should not contain contradictory or redundant rules. It should be small, such that all rules together cover all examples from the learning set and describe a given category in a fairly comprehensive way. As mentioned above, there exist plenty of algorithms which use different methods and measures during the process of rule construction and for choosing the attributes that constitute the premise part of rules. Materials and Methods In this section, the notions corresponding to the proposed approach, the entity-attribute-value model, standard deviation as a distinguishability measure, the algorithm for construction of rules and the analysis of its computational complexity are presented. Decision Rules Construction Approach The three main stages of the proposed decision rule construction approach are the following: 1. Transformation of the decision table T into the EAVD model, 2. Calculation of the standard deviation based on the average values of attributes per decision class, 3. Construction of decision rules taking into account the selected attributes. They are discussed in the next sections.
Main Notions One of the main structures for data representation is the decision table [22]. It is defined as T = (U, A ∪ {d}), where U is a nonempty, finite set of objects (rows), A = {f_1, . . . , f_n} is a nonempty, finite set of attributes, and, for any f ∈ A, f : U → V_f is a function, where V_f is the set of values of the attribute f. Elements of the set A are called conditional attributes, and d ∉ A is a distinguished attribute, called the decision attribute. The decision d determines a partition {Class_1, . . . , Class_k} of the set U, where k is the number of values of the decision attribute. The minimum decision value that is attached to the maximum number of rows in T is called the most common decision for T. The table T is called degenerate if T is empty or all rows of T are labeled with the same decision value. A table obtained from T by the removal of some rows is called a subtable of the table T. Let T be nonempty, f_{i_1}, . . . , f_{i_m} ∈ {f_1, . . . , f_n} and a_1, . . . , a_m be values of attributes. The subtable of the table T that contains only the rows that have values a_1, . . . , a_m at the intersection with columns f_{i_1}, . . . , f_{i_m} is denoted by T(f_{i_1}, a_1) . . . (f_{i_m}, a_m). Such a nonempty subtable (including the table T) is called a separable subtable of T. In the paper, a decision rule is presented in the following form: f_{i_1} = a_1 ∧ . . . ∧ f_{i_m} = a_m → d = v, where v is the most common decision for the corresponding separable subtable T' = T(f_{i_1}, a_1) . . . (f_{i_m}, a_m). The length of the decision rule is the number of conditions (pairs attribute = value) on the left-hand side of the rule. The support of the decision rule is the number of objects from T matching the conditional part of the rule and its decision. Entity-Attribute-Value Model The proposed approach for the construction of decision rules is based on representing the decision table in the entity-attribute-value form (abbreviated as EAV). The decision table, formed in such a way that each object contains a set of conditional attribute values and a decision (each object occupies one row in the decision table), is converted so that each value of an attribute constitutes a separate row in the derived EAV table. The EAV form is very convenient for processing large amounts of data. This is mainly due to the fact that such a form allows one to utilize RDBMS (Relational Database Management System) mechanisms directly. The idea of utilizing SQL and an RDBMS for data mining tasks has known advantages. As SQL is designed to facilitate dealing with large data sets efficiently, it is a natural choice for machine learning related tasks. Additionally, current RDBMSes are well designed to store and retrieve data efficiently and fast. Combining this technological achievement with efficient algorithms can lead to a satisfactory rule-generating system. The idea of an SQL-based approach for association rule generation was introduced in [42] and extended to decision rule construction in [14]. An exemplary EAV table can be created in an RDBMS as shown in Listing 1 (PostgreSQL example). In order to have better control over the association of the decision with the attribute value, the EAV form can be extended to contain the decision too [43]. Such a representation format is denoted as EAVD (entity-attribute-value-with-decision form). Standard Deviation as a Distinguishability Measure Standard deviation (abbreviated as STD) is used in this paper as a distinguishability measure. It is based on Bayesian data analysis, where attributes and their values are evaluated subject to the possible decision classes [44]. We calculated the standard deviation of the average values of each decision table attribute, grouped per decision value; a functional sketch of this transformation and of these per-class statistics is given below.
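The EAVD transformation and the per-class statistics referenced above can be illustrated with pandas instead of SQL. The sketch below is only a functional analogue of the PostgreSQL listings mentioned in the text, with a made-up decision table (not the one from Figure 1) and made-up column names.

```python
import pandas as pd

# A made-up decision table: conditional attributes f1..f3 and decision d.
table = pd.DataFrame({
    "f1": [1, 1, 0, 0, 1],
    "f2": [1, 1, 1, 0, 0],
    "f3": [0, 1, 1, 0, 1],
    "d":  [1, 2, 2, 3, 3],
})

# EAVD form: one row per (entity, attribute, value) triple, with the decision kept alongside.
eavd = (table.rename_axis("entity").reset_index()
             .melt(id_vars=["entity", "d"], var_name="attribute", value_name="value"))
print(eavd.head())

# Average attribute value per decision class, and the standard deviation of these averages:
# the distinguishability measure used to rank attributes.
class_means = eavd.groupby(["attribute", "d"])["value"].mean().unstack()
std_per_attribute = class_means.std(axis=1, ddof=0)
print(std_per_attribute.sort_values(ascending=False))
```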
As attributes in real-life applications are often non-numerical, the authors take into consideration numerical equivalents of such attributes. These are simply ordinal numbers assigned according to the order of appearance of the attributes' values in the data set under consideration. The higher the standard deviation, the greater the diversity among the average values of an attribute in particular classes. Such an observation leads to the conclusion that the higher the STD, the better the distinguishability between decision classes. As a result, attributes with a high standard deviation should be prioritized when forming decision rules. This forms a ranking of attributes and allows a feature selection step to be added to the rule generation algorithm. The exemplary SQL code to calculate the mentioned standard deviation is shown in Listing 2 (PostgreSQL example). Due to the fact that the proposed algorithm needs to work on any alphanumerical values, there was a need to introduce a dictionary table. Each attribute value needs to be converted into its numerical representative, whilst the real values need to be transferred to the proposed dictionary table. The mentioned numerical representatives are subsequent RDBMS table identifiers (in our approach). The exemplary dictionary table is shown in Listing 3; in the PostgreSQL example it reduces to a table with a serial primary key and a value column of type character varying. In order to better illustrate the distinguishability of attributes based on STD, an exemplary graph is shown (Figure 2) for the attributes from the decision table presented in Figure 1. It presents normal distributions of the attributes, each of which has a different STD value. The amplitude of the plots is not important, whilst the width of each curve is a direct indicator of the distinguishability level. Based on such a distribution graph, the conclusion can be made that the ranking of attributes is as follows: f2, f3, f1. Construction of Decision Rules The proposed algorithm can be expressed by means of the following pseudo-code (see Algorithm 1). Algorithm 1 has been designed to deal with discrete attributes. Nevertheless, there is no assumption on the numerical nature of the attributes. As a result, symbolic attributes need to be converted to numerical representatives. An additional dictionary table has been introduced; it gathers symbolic attributes with their numerical representatives. Having obtained numerical values of the attributes, the standard deviation of their average values per decision class can be calculated. These are the standard deviations of the average values of each attribute's numerical representation for each decision value. The higher the calculated standard deviation, the higher the distinguishability of decision classes based on the considered attribute. This allows generating rules based only on the best attributes from the point of view of distinguishability. The percentage of best attributes that are used for generation needs to be chosen empirically and strongly depends on the structure of the data set under consideration. Due to the fact that the attributes are ordered taking into account the distinguishability they offer with respect to the decision values, the generated rules will consist of a small number of attributes (making them close to optimal with respect to their length). Creating a ranking of attributes based on STD values introduces a feature selection step; a sketch of this encoding and ranking step is given below.
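A sketch of the dictionary-style encoding of symbolic attributes and of the selection of the top percentage of attributes by STD follows; pandas factorize stands in for the RDBMS dictionary table, and the toy table and column names are assumptions.

```python
import math
import pandas as pd

def encode_symbolic(table, decision="d"):
    """Replace symbolic attribute values by ordinal codes (a stand-in for the dictionary table)."""
    encoded = table.copy()
    dictionary = {}
    for col in encoded.columns.drop(decision):
        codes, uniques = pd.factorize(encoded[col])
        encoded[col] = codes
        dictionary[col] = dict(enumerate(uniques))   # code -> original symbolic value
    return encoded, dictionary

def top_attributes(encoded, percentage, decision="d"):
    """Rank attributes by the STD of their per-class average values and keep the top percentage."""
    attrs = encoded.columns.drop(decision)
    std = encoded.groupby(decision)[list(attrs)].mean().std(axis=0, ddof=0)
    k = math.ceil(len(attrs) * percentage)
    return list(std.sort_values(ascending=False).index[:k])

table = pd.DataFrame({"Temperature": ["low", "low", "high", "high"],
                      "Rink": ["yes", "no", "yes", "yes"],
                      "d": ["yes", "no", "no", "no"]})
encoded, dictionary = encode_symbolic(table)
print(top_attributes(encoded, 0.5))   # -> ['Temperature'] in this toy example
```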
From the point of view of the graph construction in the dynamic programming algorithm, the proposed algorithm performs graph pre-pruning, as it omits the creation of unnecessary paths that would never be utilized for rule generation. Having chosen the attributes to be considered, Algorithm 1 generates rules as described below. Firstly, the set of unique combinations of attributes with values per row in the input decision table needs to be determined. The combinations contain only the best attributes chosen in the previous step. This is where the row information is also needed. The attributes need to appear in the subsequent combinations in descending order of the previously calculated standard deviations. Nevertheless, the information on the decision is not taken into consideration at this stage. Then, for each attribute combination, the subsequent separable subtables of the input decision table are determined. This makes the algorithm similar to the dynamic programming approach, which utilizes partitioning of the decision table into separable subtables. For each of the attribute combinations, the procedure starts by taking the first attribute with its value (it is now visible why the attributes in the combinations need to be ordered by decreasing values of the previously calculated standard deviations). For this attribute and its value, the separable subtable of the input decision table is determined. After that, it is verified whether the subtable is degenerate. If it is in fact degenerate, a decision rule is constructed. It consists of the attribute with its value and the decision in the degenerate table. If the separable subtable is not degenerate, the subsequent attribute from the attribute combination is chosen and then the subsequent separable subtable is generated. If it is degenerate, the procedure stops and a decision rule is generated. The procedure continues until either the subtable is degenerate or all the attributes of a given combination are processed. If all the attributes have been used and the subtable is still not degenerate, a decision rule is generated based on all these attributes with their values and the most common decision from the corresponding separable subtable. Algorithm 1: Pseudo-code of the algorithm generating decision rules for a decision table T. Input: input decision table T, number p of best attributes to be taken into consideration. Output: set of decision rules R. Represent T in entity-attribute-value (EAV) form with a separate decision; represent each attribute's value in a discrete numerical form; obtain the attributes' standard deviations per decision class; take the p attributes with the largest STD, in descending order; from T in EAV form, select the sets v of unique values (including the decision) of the attributes, grouped per decision table row; while there exist sets v_i in v not marked as processed do: generate a one-item set v'_i with the initial value from v_i, which corresponds to the creation of the separable subtable T' = T(f_i, a_i); mark v_i as not processed; while the number of iterations < size(v_i) and the separable subtable is not degenerate do: extend v'_i by supplying it with the subsequent element from v_i, which corresponds to the next partition of T'; end while; generate a decision rule based on the values of the attributes from v'_i (the consequent is the most common decision for the subtable T' corresponding to v'_i); supply the set R with the newly created rule; mark v_i as processed.
end while

where: T is a decision table; p is the ceiling of the number of attributes corresponding to the selected percentage of best attributes in the formed ranking; R is the set of generated rules; v is the set of unique valuesets taken from T in EAV form, grouped per row of the input table T; v_i and v_i' are temporary subsets of v used during the rule generating iterations; based on the values included in the set v and its subsets, the separable subtables are created.

Example 1. This example presents the work of Algorithm 1 for the decision table shown in Figure 1. The percentage of best attributes to be taken into consideration is 40%, so the ceiling of the number of attributes that need to be taken is 2. The attributes' standard deviations per decision class are calculated using the average values of the attributes, for each attribute and decision value from the EAVD representation, and they are the following: for f1, STD = 0.29; for f2, STD = 0.58; for f3, STD = 0.5. This allows a ranking of attributes to be created: f2, f3, f1. So the chosen best attributes are f2 and f3: their standard deviations are the highest and their normal distributions are the widest (see Figure 2). The valuesets v of the chosen p attributes are then determined for each row of the decision table. Separable subtable T(f2, 1) is not degenerate (its rows have different decisions), so subtable T(f2, 1) needs to be partitioned and subtable T(f2, 1)(f3, 1) is obtained. It is a degenerate table, so the decision rule f2 = 1 ∧ f3 = 0 → d = 2 is derived; this rule is associated with the second and third rows of the exemplary decision table. Subtable T(f2, 0) is degenerate, and the derived decision rule f2 = 0 → d = 3 is associated with the fourth and fifth rows of the exemplary decision table. The resulting set R of decision rules derived for the rows of the exemplary decision table is then obtained, and duplicated rules are removed.

The state transition diagram (Figure 3) visually presents Algorithm 1 for decision rule construction. Taking the state transition diagram into account, it can be seen that the algorithm consists of a set of preprocessing steps before the two main rule generating loops start to work. The two nested loops are a potential bottleneck from the point of view of computational complexity analysis but, as explained in the next section, in the average case this is negligible. Moreover, thanks to the attribute ranking, in the most likely scenario the algorithm can finish its operation quickly and generate good quality rules (which is described in detail in Section 4).

Algorithm Computational Complexity Analysis

In order to calculate the computational complexity of the proposed algorithm, let us analyze each of the algorithm's steps. The steps have been rewritten from Figure 3 and gathered with their execution times in Table 1. The information is based on the assumption that N is the number of objects in the decision table T, whilst P is the number of input parameters, V is the number of valuesets v_i, and V_i is the number of attribute values for a given attribute i.
Taking into account the properties mentioned earlier, 1 ≤ V ≤ N and 1 ≤ V_i ≤ N, the computational complexity of the algorithm can be determined:
• optimistic complexity,
• average complexity: due to the fact that V_i is typically a small constant, sometimes even equal to 2 (for binary attributes), whilst V is typically close to N, the complexity in this case is linear,
• pessimistic (worst) complexity.
Taking into account the computational complexity of the dynamic programming approach introduced in [21], it can be seen that the proposed solution is much more time efficient. Whilst for the described algorithm the computational complexity is linear in the average scenario, for the dynamic programming approach it can be exponential in many cases.

Table 1. Steps of the algorithm with their execution times (fragment):
6. Check whether all valuesets v_i are marked as processed (execution time V · t_6, where 1 ≤ V ≤ N).
7. Create a one-element valueset consisting of the first value from the item v_i taken from the set v.
8. Mark v_i as not processed.
9. Check whether the corresponding subtable is degenerate.
10. Update v_i' by adding the next value from the Cartesian product set.
11. Formulate a decision rule from v_i' and the most common decision d (execution time V · t_11, where 1 ≤ V ≤ N).
12. Associate the row with the formulated rule.
13. Mark the valueset v_i as processed.

Results

Experiments were performed on decision tables from the UCI Machine Learning Repository [45]. When some of the decision tables contained attributes taking a unique value for each row, such attributes were removed. When some of the decision tables contained missing values, each of these values was replaced with the most common value of the corresponding attribute. When, in some of the decision tables, there were rows with equal values of the conditional attributes but different decisions, each group of identical rows was replaced with a single row from the group with the most common decision for this group. The aim of the performed experiments was to measure the quality of the decision rules constructed by the proposed algorithm. The decision rules were evaluated from the point of view of:
• knowledge representation, i.e., the length and support of the obtained rules were calculated and compared with the optimal decision rules obtained by the dynamic programming approach,
• knowledge discovery, i.e., the classification error was calculated and compared with classifiers obtained by the dynamic programming approach.
Tables 2-4 gather information on the minimum, average and maximum rule length and the minimum, average and maximum support of the decision rules generated by the proposed algorithm, respectively. The results were obtained for 100%, 80% and 60% of the best attributes chosen during rule generation. Regarding rule length, it can be seen that for some data sets the maximum and average values are much smaller than the number of conditional attributes in the decision table, for example for lymphography and zoo-data. In the case of support, the maximum values are notable when compared with the number of rows in the decision table, for example for cars, hayes-roth-data, house-votes and zoo-data.

Table 3. Rule-quality measures for 80% of best attributes.

The mentioned quality measures needed to be compared with the ones obtained for the optimal rules (with respect to length and support, respectively) generated by the DP (dynamic programming) approach shown in [46]. In order to make the comparison more informative, the relative difference of the respective results has been calculated. The results are presented in Tables 5-7.
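For concreteness, one convention for such a relative difference, which matches the sign interpretation discussed next but is stated here only as an assumed form rather than the authors' exact definition, is

    relative difference = (value_proposed - value_DP) / value_DP,

where value_proposed denotes the measure obtained for the proposed algorithm and value_DP the corresponding measure for the DP approach.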
Following the formula for the relative difference, positive values mean that the results for the proposed algorithm are larger than the ones obtained for the DP approach, whilst negative values mean that the results for the DP approach are larger than the ones obtained for the proposed algorithm; zero means that both values are equal. The presented results show that the rules generated by the proposed algorithm are not far from the optimal ones. Moreover, reducing the number of attributes resulted in an improvement of rule quality (for 60% of attributes the quality is closest to the optimal one). Values in Tables 5-7 marked in bold denote results close to the ones obtained for the DP approach for the rules optimized with respect to their length and support, respectively. It should be noticed that in Table 7, for the balance-scale decision table, the negative relative difference value is due to the fact that the number of attributes in the data set was reduced to 60% of the whole number of attributes, so the average length of the constructed rules was shorter than the optimal value.

Experiments connected with classification have also been performed. To make the comparison with the results obtained for the DP approach possible, the classification procedure was the same as the one described in [20]. Table 8 presents the average classification error, for decision tables with 100%, 80% and 60% of the best attributes, using the two-fold cross validation method. For each decision table the experiments were repeated 50 times. Each dataset was randomly divided into three parts: train (30%), validation (20%) and test (50%). A rule-based classifier was constructed on the train part, then pruned by minimum error on the validation set, and finally used on the test part of the decision table. The presented classification error is the number of objects from the test part of the decision table that are incorrectly classified, divided by the number of all objects in the test part of the decision table. The last row of Table 8 presents the average classification error for all considered decision tables. The column Std denotes the standard deviation of the obtained results. The smallest mean errors have been marked in bold, i.e., the ones for which the mean error is smaller than twenty percent. The classification results obtained for the DP approach (introduced in [20]) are presented in Table 9. A statistical analysis of the classification results using the Wilcoxon two-tailed test has also been performed, as per [47]. The classification results have been compared with the ones for the DP approach for decision rules optimized relative to length and support, respectively. For Table 8 it turns out that min(W+, W-) > W_crit (for 100%, 80% and 60% of the best attributes), so the conclusion can be made that there is no significant difference between the classification results obtained by the proposed algorithm and the DP approach; the null hypothesis could not be rejected. The goal of the proposed algorithm was to construct short rules of sufficiently good support while keeping a high level of distinguishability of the decision values.

All experiments were performed on a portable computer with the following technical specifications:
• Intel i5-8365U CPU,
• 16 GB of RAM,
• Windows 10 Enterprise x64 operating system.
The algorithm has been implemented in Java 8 accompanied by the Spring Boot framework. All data-related calculations have been performed on the PostgreSQL 13.0 RDBMS. The software communicates with the database through a JDBC connector.
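To illustrate the kind of data-related calculation such a setup can delegate to the RDBMS, the two checks at the heart of the rule generating loop can each be phrased as a single query. This is only a sketch under assumed names (a table decision_table with attribute columns f2, f3 and decision column d); it is not taken from the authors' implementation.

    -- Is the separable subtable T(f2, 1) degenerate, i.e., do all of its rows
    -- share a single decision?  (table and column names are assumptions)
    SELECT count(DISTINCT d) = 1 AS is_degenerate
    FROM decision_table
    WHERE f2 = 1;

    -- If no degenerate subtable is reached after all attributes of a combination
    -- have been used, the rule takes the most common decision of the subtable:
    SELECT mode() WITHIN GROUP (ORDER BY d) AS most_common_decision
    FROM decision_table
    WHERE f2 = 1 AND f3 = 0;

Keeping such checks on the database side would match the division of work described above: the Java application drives the loops over attribute combinations, while the database answers the degeneracy and most-common-decision questions for the corresponding separable subtables.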
Conclusions

A new algorithm for decision rule generation has been introduced in this paper. It has been shown that in the average case its computational complexity is linear, which makes the algorithm applicable to a vast majority of data sets. Moreover, it has been experimentally shown that the rules generated by the proposed algorithm are of comparable classification quality to the ones generated by the dynamic programming approach, which, unlike the proposed algorithm, is unsatisfactory from the time and memory complexity point of view. Additionally, due to its specificity, the presented approach allows the number of attributes to be reduced by choosing the most significant ones, and thus minimizes the computational effort to an even larger extent. Having compared the results by means of the Wilcoxon test, it is possible to state that for 100%, 80% and 60% of the attributes, the classification results for the rules generated by the proposed algorithm are comparable to the ones obtained by the dynamic programming approach. This means that the number of attributes taken into consideration can be reduced by forty percent and the classification results are still satisfactory. Furthermore, reducing the number of attributes often increases the support of the generated decision rules and helps to avoid overfitting. In our future work, we would like to compare the proposed solution with other approaches to decision rule construction, e.g., heuristic-based approaches. We also plan to look for further improvements and algorithm tuning. Additionally, we plan to look for its potential real-world applications.

Author Contributions: Conceptualization, K.Ż. and B.Z.; methodology, K.Ż. and B.Z.; investigation, B.Z. and K.Ż.; software, K.Ż.; validation, K.Ż. and B.Z.; writing-original draft preparation, K.Ż. and B.Z.; writing-review and editing, K.Ż. and B.Z. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Publicly available datasets were analyzed in this study. The data can be found at [45].