Study of Pressure Drops and Heat Transfer of Nonequilibrial Two-Phase Flows
Currently, there are no universal methods for calculating the heat transfer and pressure drop over a wide range of two-phase flow parameters in mini-channels, owing to changes in the void fraction and flow regime. Many experimental studies have been carried out, and narrow-range calculation methods have been developed. With increasing pressure, the range of parameters over which reliable calculation methods apply can be extended as a result of changes in the flow regime. This paper provides an overview of methods for calculating the pressure drops and heat transfer of two-phase flows in small-diameter channels and presents a comparison of calculation methods. For conditions of high reduced pressures p r = p/p cr ≈ 0.4–0.6, the results of the authors' own experimental studies of pressure drops and flow boiling heat transfer of freons at low and high mass flow rates (G = 200–2000 kg/(m² s)) are presented. A description of the experimental stand is given, and the authors' experimental data are compared with those obtained using the most reliable calculation relations.
Introduction
An important trend in the development of new energy conservation technologies is creating more miniature technical objects, an effort that requires extensive background knowledge of hydrodynamics and heat transfer in single-phase convection and flow boiling in mini-channels.
The ability to accurately predict pressure drops and heat transfer, together with the selection of mini-channel geometry and operating conditions, is an important factor in the design of optimal heat exchangers. In various fields of technology, one of the most effective methods of removing heat from heated surfaces is liquid boiling. Methods for calculating pressure drops and heat transfer therefore need to be confirmed experimentally.
Pressure Drops
The two-phase pressure drop in micro-channels is relatively high compared to conventional channels due to their very small size and the relatively high mass fluxes required to achieve acceptable heat transfer coefficients. Because of the high pressure gradient, the saturation temperature and, consequently, the thermophysical properties change along the mini-channel; when the pressure at a certain axial position drops below the saturation pressure of the liquid, the liquid temporarily becomes superheated. The gravitational pressure drop is usually neglected, and if the flow is adiabatic, the pressure drop due to flow acceleration is neglected as well. Most techniques for calculating pressure drop are based on either a homogeneous or a separated flow model.
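For orientation, the total pressure gradient referred to above can be written as the usual three-term bookkeeping (a textbook decomposition, not a relation specific to this study):

(dp/dz)_total = (dp/dz)_friction + (dp/dz)_acceleration + (dp/dz)_gravity.

Under the assumptions just stated, the acceleration and gravity terms are dropped and only the frictional term is modeled.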
Homogeneous Equilibrium Model
In the homogeneous equilibrium model, the liquid and gas are assumed to be well mixed, so the pressure drop of the two-phase flow can be calculated using correlations for single-phase flow. The thermophysical properties are taken as values averaged over the entire cross-section, and heat transfer between the phases is assumed to keep them in equilibrium.
where ξ_TP is obtained by the Filonenko formula [1]:

ξ_TP = (1.82 log10(Re_TP) − 1.64)^−2, (3)

and μ_TP is calculated according to the method of Cicchitti et al. [2], which is the most widely used method and has been examined over a wide range of Reynolds numbers. As noted by Zubov et al. [3], in the limit of high mixture velocities at high reduced pressures there is reason to draw an analogy between the homogeneous model and the continuum model of gas flow. The traditional homogeneous flow model expresses the wall shear stress through the friction factor and the mixture density

ρ_β = ρ″β + (1 − β)ρ′,

where β is the volumetric vapor content and ρ″ and ρ′ are the vapor and liquid densities; the pressure drop is then calculated from this wall shear stress. In two-phase flow, according to the studies of Venkatesan et al. [4], Cioncolini et al. [5], and Choi and Kim [6], the homogeneous flow model is applicable only to bubbly flow. Homogeneous flow conditions are fulfilled at high flow rates and mass vapor qualities below 0.1. At larger values of the mass vapor quality, x > 0.1, the homogeneous model is, as a rule, not applied. Under subcooled flow conditions, a calculation based on the homogeneous model demonstrates acceptable accuracy [7].
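A minimal numerical sketch of the homogeneous-model frictional pressure gradient described above is given below. It assumes the Filonenko friction factor quoted above and the Cicchitti mixture viscosity μ_TP = xμ_g + (1 − x)μ_l; the property values and channel size in the example call are illustrative placeholders, not data from this study.

```python
import math

def homogeneous_dpdz(G, x, d, rho_l, rho_g, mu_l, mu_g):
    """Frictional pressure gradient (Pa/m) from the homogeneous equilibrium model.

    G      mass flux, kg/(m^2 s)
    x      mass vapor quality
    d      channel inner diameter, m
    rho_*  phase densities, kg/m^3
    mu_*   phase viscosities, Pa s
    """
    # Cicchitti et al. mixture viscosity
    mu_tp = x * mu_g + (1.0 - x) * mu_l
    # Homogeneous (no-slip) mixture density; equivalent to the beta-weighted
    # form quoted above when the phases move with the same velocity
    rho_tp = 1.0 / (x / rho_g + (1.0 - x) / rho_l)
    # Mixture Reynolds number and Filonenko friction factor
    re_tp = G * d / mu_tp
    xi_tp = (1.82 * math.log10(re_tp) - 1.64) ** -2
    # Darcy-type frictional gradient
    return xi_tp / d * G ** 2 / (2.0 * rho_tp)

# Illustrative call with placeholder (not experimental) property values
print(homogeneous_dpdz(G=600, x=0.1, d=1e-3,
                       rho_l=1000.0, rho_g=50.0, mu_l=1.2e-4, mu_g=1.3e-5))
```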
Separated Flow Model
In the separated flow model, liquid and gas move together but are treated as divided by a distinct phase boundary through which evaporation occurs. In the last decade, a number of studies of pressure drop in micro-channels have been published that are based on the methodology of Lockhart and Martinelli [8], who proposed relating the pressure drop of a two-phase flow to the pressure drop of the liquid phase through the two-phase multiplier (9):

Φ_l² = (dp/dz)_TP / (dp/dz)_l. (9)

An example of such research is the work of Hwang et al. [9], where the working fluid was R134a and the hydraulic diameter varied from 0.244 to 0.792 mm. The study concluded that the pressure drop increased with an increase in the Reynolds number and was similar to the pressure drop for single-phase flow in a channel with a larger equivalent diameter. In addition, the pressure drop in a two-phase flow increases with a decrease in the inner diameter. In that work the two-phase multiplier is calculated from the Martinelli parameter (Equations (10) and (11)). To calculate the two-phase multiplier Φ_l in conventional channels, the Friedel method [10] is often used, which was developed for pipes larger than 4 mm; its two-phase multiplier has the form

Φ_fr² = E + 3.24 F H / (Fr^0.045 We^0.035), (12)

where E, F, and H are functions of the phase densities, viscosities, and the vapor quality. Most formulas and methods, including the ones above, are suitable for a limited number of fluids and a limited range of flow parameters and geometries. Thus, it is necessary to check the accuracy of the prediction models and select the best formula for predicting pressure drops in mini-channels.
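The sketch below illustrates the Lockhart–Martinelli approach described above using the Chisholm form of the multiplier, Φ_l² = 1 + C/X + 1/X². The constant C = 20 (turbulent–turbulent flow) is an assumption for illustration only; the exact form used by Hwang et al. [9] is not reproduced in this excerpt.

```python
import math

def lockhart_martinelli_phi_l(dpdz_l, dpdz_g, C=20.0):
    """Two-phase multiplier Phi_l from the Lockhart-Martinelli approach.

    dpdz_l, dpdz_g : single-phase pressure gradients of the liquid and gas
                     phases flowing alone (Pa/m)
    C              : Chisholm constant; 20 assumed for turbulent-turbulent flow
    """
    X = math.sqrt(dpdz_l / dpdz_g)          # Martinelli parameter
    phi_l_sq = 1.0 + C / X + 1.0 / X ** 2   # Chisholm form of the multiplier
    return math.sqrt(phi_l_sq)

# The two-phase gradient is then recovered as dpdz_tp = phi_l**2 * dpdz_l
print(lockhart_martinelli_phi_l(dpdz_l=2.0e3, dpdz_g=8.0e3))
```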
Investigations of Heat Transfer in Mini-Channels
In the literature, there are many methods for determining the heat transfer coefficient when boiling a liquid flow in channels.
One of the best-known methods is that of J. Chen [11], derived in a paper investigating the boiling of a saturated water flow in a vertical circular channel.
Lazarek and Black [12], when calculating the heat transfer coefficient, came to the conclusion that nucleate boiling was the dominant mechanism in their tests, since the heat transfer coefficient depended on the heat flux density and not on the mass flow rate.
The experimental data in a study by Tran et al. [13] showed that at vapor qualities x > 0.2 the heat transfer coefficient does not depend on the quality; heat transfer depended mainly on the heat flux density and not on the mass velocity. It was found that the border between the regions of dominance of nucleate boiling and of evaporation is rather abrupt and occurs at significantly smaller changes in saturation temperature than predicted.
Kenning and Cooper [14] noted that in the annular flow regime the heat transfer coefficient is well described by the J. Chen correlation [11], but for slug flow Chen's method gives deviations. In addition, nucleate boiling is sensitive to surface conditions, in contrast to evaporation. Gungor and Winterton [15] developed a correlation that is versatile in its application and generally gives a more accurate fit to the data than the correlations proposed by the authors of the studies reviewed. The average deviation between the calculated and measured heat transfer coefficients was 21.4% for saturated boiling and 25.0% for subcooled boiling.
Shah [16] presented, in graphical form, a general chart correlation for estimating the saturated boiling heat transfer coefficients of flow in pipes at subcritical pressures.
Liu and Winterton [17] presented a comparative analysis of the data previously obtained by J. Chen [11], Gungor and Winterton [15], and Shah [16]. In their correlation, the authors introduced the Prandtl number as a parameter that affects the weighting of the convective component in the heat transfer coefficient.
Kandlikar [18] conducted a comparative analysis of the earlier studies, concentrating on the data obtained by different researchers. The correlation was based on data from 24 experimental studies. For comparison, the correlations of J. Chen [11], Gungor and Winterton [15], and Shah [16] were considered.
Sun and Mishima [19] conducted a comparative analysis of 13 previously published correlations, forming a new database. The results showed that J. Chen's correlation [11] and its modifications were not well suited to mini-channels and that Lazarek and Black's correlation [12] was the most suitable. Table 1 summarizes the best-known works. Evidently, the data in most of the experimental works now available in the literature were obtained for low and moderate reduced pressures [20]. In addition, the proposed calculation methods are empirical in nature and are best suited to describing experiments close to those from which they were derived. In the field of high reduced pressures, the analysis of the literature shows a lack of research. The size of the channel significantly affects the character of vaporization during flow boiling. In the region of high reduced pressures, based on the analysis performed in [21], it can be assumed that, in mini-channels, the flow regimes become identical to those seen in conventional channels. In this case, the relationships for normal channels may be used to calculate the pressure drop and heat transfer. Based on this assumption, a method for calculating heat transfer for subcooled flow boiling in mini-channels was tested in [7].
The heat flux density was calculated as the sum of the convective and nucleate boiling components:

q = q_con + q_boil. (16)
It is assumed that convective heat transfer acts in the same way as in a single-phase turbulent flow:

q_con = α_con (T_wall − T_fluid), (17)

where α_con is calculated using the formula of Petukhov and co-workers [22], adjusted for the difference between the wall and liquid temperatures by the factor (Pr_l/Pr_wall)^0.25 (18). To calculate q_boil under saturated flow boiling conditions in relation (16), it is advisable to use the equation proposed by V.V. Yagov [23] (Equation (19)), with ΔT_s = T_wall − T_s and all properties determined at the saturation temperature T_s. A modified version of Equation (19) for q_boil under subcooled flow boiling is presented in [24].
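The following sketch shows the structure of the superposition in Equation (16). The convective part uses the Petukhov-type correlation with the (Pr_l/Pr_wall)^0.25 correction quoted above; the nucleate-boiling part is left as an input because the full Yagov relation (19) is not reproduced in this excerpt. The property values in the example call are illustrative, not experimental data.

```python
import math

def alpha_convective(G, d, lam_l, mu_l, pr_l, pr_wall):
    """Single-phase convective coefficient: Petukhov-type correlation with a
    (Pr_l / Pr_wall)**0.25 correction for the wall-to-liquid property change."""
    re = G * d / mu_l
    xi = (1.82 * math.log10(re) - 1.64) ** -2
    nu = (xi / 8.0) * re * pr_l / (
        1.07 + 12.7 * math.sqrt(xi / 8.0) * (pr_l ** (2.0 / 3.0) - 1.0))
    return nu * lam_l / d * (pr_l / pr_wall) ** 0.25

def q_total(alpha_con, t_wall, t_fluid, q_boil):
    """Superposition of Eq. (16): convective part plus nucleate-boiling part.
    q_boil must be supplied from the Yagov relation (Eq. 19), not shown here."""
    return alpha_con * (t_wall - t_fluid) + q_boil

# Illustrative call with placeholder property values
a = alpha_convective(G=600, d=1e-3, lam_l=0.06, mu_l=1.2e-4, pr_l=1.5, pr_wall=1.2)
print(q_total(alpha_con=a, t_wall=68.0, t_fluid=65.0, q_boil=5.0e4))
```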
Experimental Setup Description
The scheme of the experimental setup is shown in Figure 1. The hydraulic circuit allows stable flow parameters to be maintained at pressures up to 2.7 MPa and temperatures up to 150 °C. A multistage centrifugal pump was used to circulate the working fluid (location 6 in Figure 1), and the mass flow rate was measured with a high-precision Coriolis flowmeter (location 7). The working fluid in this study was R125, which has a critical temperature of 66.023 °C and a critical pressure of 3.6177 MPa. The heat capacity, heat of vaporization, and critical pressure of R125 are much lower than those of water, which is convenient for achieving the desired parameters. The working fluid was cooled by water in a recuperative heat exchanger (location 12). The high reduced pressure in the circuit was created using a thermocompressor (location 1). A pressure sensor with a measurement accuracy of 0.2% was used to measure the pressure and the pressure drop between the inlet and outlet of the test section. Chromel-Copel cable thermocouples with a cable diameter of 0.7 mm measured the inlet and outlet temperatures.
The test section was heated with alternating current. The electrical current strength was measured using an LA 55-P current transducer. The measurement error of the electric power was 1%.
The test section is shown in Figure 2. Vertical stainless-steel tubes with heated lengths of 51 mm and internal diameters of 1 mm and 1.1 mm were used as mini-channels. The tube was electrically insulated and hydraulically sealed with PTFE (polytetrafluoroethylene) seals, and the electrodes were soldered to the tube with tin. The design of the test section included temperature compensation. The platform with the inlet collector was mounted on two vertical metal rods on which it could slide, so that the inlet collector had a vertical degree of freedom. The platform of the inlet collector was held by a spring along the rods towards the tube to avoid vibration and ensure the stability of the test tube.
Five Chromel-Copel thermocouples were used to measure the wall temperatures. In five cross-sections (T1-T5, see Table 2) of the working section of the tube, thermocouple wires (0.2 mm in diameter) were laser-welded on opposite sides of the tube diameter. This mounting method gave the sensors low thermal inertia and allowed measurement of the average wall temperature around the perimeter. The inner wall temperatures were calculated using a correction for the wall conductivity.
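The wall-conductivity correction mentioned above is typically the one-dimensional radial-conduction solution for a directly heated tube with an adiabatic outer surface. The sketch below gives that standard correction for illustration; the specific formulation used by the authors is not reproduced in the text, and the tube wall thickness and thermal conductivity in the example call are assumed values.

```python
import math

def inner_wall_temperature(t_outer, q_v, r_i, r_o, lam_wall):
    """Inner-wall temperature of a Joule-heated tube with an insulated outer surface.

    Standard 1D radial-conduction result with uniform volumetric heating q_v (W/m^3):
    T_outer - T_inner = q_v/(2*lam) * [r_o^2 * ln(r_o/r_i) - (r_o^2 - r_i^2)/2]
    """
    dt = q_v / (2.0 * lam_wall) * (
        r_o ** 2 * math.log(r_o / r_i) - (r_o ** 2 - r_i ** 2) / 2.0)
    return t_outer - dt

# Illustrative values only: 1.0 mm bore, assumed 1.5 mm OD stainless tube
print(inner_wall_temperature(t_outer=80.0, q_v=5.0e8,
                             r_i=0.5e-3, r_o=0.75e-3, lam_wall=15.0))
```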
Pressure Drop
In this study, experimental data on pressure drop were obtained for a range of mass flow rates G = 200–2000 kg/(m² s) in two channels with diameters of 1.0 and 1.1 mm. The data were obtained over a wide range of heat flux density, which made it possible to observe both bubble and film flow regimes. Figures 3 and 4 show the primary pressure drop data. For most of the obtained ∆p(q) characteristics, regions corresponding to different flow regimes were observed: convective heat transfer, where the pressure drop remained almost unchanged; nucleate boiling, with an intense increase in pressure drop; and the film boiling regime, where the growth of the pressure drop stopped.
With an increase in the diameter, the pressure drop decreased for the same values of mass flow rate (see Figure 3), which is quite natural. Analysis of the effect of reduced pressure on the pressure drop at the same value of G = 600 kg/(m² s) (see Figure 4) leads to the following conclusion: the flow regime changed with an increase in the reduced pressure, and the region in which the pressure drop does not increase with the heat load extends to higher heat fluxes, from 50 to 125 kW/m².
To obtain pressure drops by calculation, the methods described earlier were used. In most of the experiments, the calculation method of [10] showed too large a deviation with increasing heat flux, as can be seen in Figure 5, probably due to the quadratic dependence of the two-phase multiplier Φ_l on the vapor quality x. In addition, this method was developed for channels with diameters greater than 4 mm. As a result, it was concluded that it is not suitable for generalizing the obtained data on mini-channels, even under conditions of high reduced pressure.
The analysis of the experimental data showed that the reduced pressure mostly affected the correspondence of the calculated values to the experimental data. For the investigated range of mass flow rates G = 200-2000 kg/(m 2 s) and values of vapor quality (up to x ≈ 0.4) for reduced pressure p r = 0.43, the best agreement with the experimental data was observed for the method of [9], which was based on a split flow model. An example of calculations for pressure p r = 0.43 is shown in Figure 6.
For the data obtained at reduced pressure p r = 0.57, calculations using the homogeneous models of [2] and [3] were in better agreement with the experiment than calculations using the split flow model. Figure 7 shows an example of a calculation for p r = 0.57 and an average mass flow rate G = 750 kg/(m² s). Table 3 presents the generalization of all experimental pressure drop data by the three considered calculation methods; the data are divided into two groups according to the values of the reduced pressure. From this generalization of the experimental pressure drop data, it can be concluded that the reduced pressure had a significant effect on the agreement between the experimental data and the values calculated using the methods of [2,3,9]. As can be seen from Table 3, the homogeneous model was better suited to high reduced pressures, while the split flow model showed a good result at the lower reduced pressure. This is probably due to a change in the structure of flow boiling with an increase in pressure as a result of a decrease in the diameter of the vapor bubbles.
Flow Boiling Heat Transfer
Primary data on heat flux based on wall overheating relative to the saturation temperature for different mass flow rates are shown in Figure 8. At G ≤ 1750 kg/(m² s), the contribution of convective heat transfer to total heat transfer was insignificant, and the boiling curves lay close to each other with a temperature deviation of about 1 °C. With increasing G, the contribution of convective heat transfer to total heat transfer became significant, which is quite natural, and the boiling curve for G = 2000 kg/(m² s) was significantly higher than for the other points. Thus, nucleate boiling was obviously the main mechanism of heat transfer at the given mass flow rates.
The dependence of the heat transfer coefficient on the heat flux density for one regime, compared with the results calculated by the Petukhov formula (18) for convective heat transfer, is shown in Figure 9. Only a small region of convective heat transfer data points could be obtained, because the temperature at the entrance of the test section, and consequently the subcooling, could not be brought below room temperature. Nevertheless, Figure 9 shows that the calculated values coincided with the experimental data in the region of convective heat transfer, which makes it possible to verify the experimental data.
A comparison of the values calculated using Formulas (16)–(19) with the primary experimental data for a low mass flow rate and saturated liquid is shown in Figure 10. A comparison of the calculation with the experimental data from [7] and [25], corresponding to moderate subcooling and a high mass flow rate, is shown in Figure 11.
The graphs show that the calculated values were in good agreement with the experimental data. One feature of subcooled flow boiling, as can be seen from the primary data obtained, was a higher wall overheating (see Figure 11) compared with saturated boiling (see Figure 10). Figure 12 shows the data obtained in the current study for the most requested range of low and moderate mass flow rates G = 200–1000 kg/(m² s). Generalization was performed using Formulas (16)–(19). The calculation results were in good agreement with the experimental data for x > 0, with a mean absolute error of 16%.
Conclusions
This paper has presented an experimental setup along with the results of an investigation of the heat transfer and pressure drop during flow boiling of R125 in two vertical channels with diameters 1.0 and 1.1 mm and lengths 51 mm each under different combinations of high reduced pressure, mass flow rate, and heat flux. These parameters were varied within the following ranges: reduced pressure pr ≈ 0.4-0.6, mass flow rate G = 200-2000 kg/(m 2 s), and heat flux q from boiling onset to crisis.
The most popular methods in the literature for calculating pressure drop and heat transfer during flow boiling in mini-channels have been analyzed. The analysis shows a practical absence of experimental research at high reduced pressures.
The authors' own data on pressure drop and heat transfer have been generalized. Based on the literature review, the calculation methods of [2,3,9] were chosen for the generalization of the pressure drop data. From the analysis of the generalization, it was concluded that the homogeneous model was better suited to high reduced pressures, while the split flow model showed a good result at lower reduced pressures. Thus, at a higher reduced pressure, the flow regime was closer to the homogeneous model, whereas at a lower pressure, the flow structure was closer to the split flow model. No comparable effect of the mass flow rate on flow structure was observed.
The methods for calculating pressure drop during flow boiling require further elaboration. It is necessary to establish the limits of applicability of various types of models for calculating pressure drop, depending on the reduced pressure and the degree of saturation of fluid flow.
To generalize the data on heat transfer, the previously approved method [7] was used, with the heat flux divided into convective and nucleate boiling components. The presented calculation method, based on Formulas (16)–(19), reproduced the obtained experimental heat transfer results with a mean absolute error of 16%. This method can be applied in the most requested range of mass flow rates G = 200–1000 kg/(m² s) and x > 0.
High-fidelity replication of thermoplastic microneedles with open microfluidic channels
Development of microneedles for unskilled and painless collection of blood or drug delivery addresses the quality of healthcare through early intervention at point-of-care. Microneedles with submicron to millimeter features have been fabricated from materials such as metals, silicon, and polymers by subtractive machining or etching. However, to date, large-scale manufacture of hollow microneedles has been limited by the cost and complexity of microfabrication techniques. This paper reports a novel manufacturing method that may overcome the complexity of hollow microneedle fabrication. Prototype microneedles with open microfluidic channels are fabricated by laser stereolithography. Thermoplastic replicas are manufactured from these templates by soft-embossing with high fidelity at submicron resolution. The manufacturing advantages are (a) direct printing from computer-aided design (CAD) drawing without the constraints imposed by subtractive machining or etching processes, (b) high-fidelity replication of prototype geometries with multiple reuses of elastomeric molds, (c) shorter manufacturing time compared to three-dimensional stereolithography, and (d) integration of microneedles with open-channel microfluidics. Future work will address development of open-channel microfluidics for drug delivery, fluid sampling and analysis.
INTRODUCTION
Microneedle devices offer an alternative to the hypodermic needle for blood extraction and injection of drugs. Microneedles are designed to penetrate skin and capillaries without causing pain or the need for medical expertise, so that diagnosis and treatment can be administered at point-of-care 1 . Undoubtedly, the most challenging problem for this field is the development of low-cost manufacturing methods that will lead to clinical translation of microneedle technology. The manufacturing processes commonly utilized for microneedle fabrication are injection molding 2 , reactive ion etching 3 , chemical wet etching 4 , micromolding 5 , and two-photon polymerization 6 . The choice of fabrication method depends on the manufacturing material, access to manufacturing technology, and the intended application (drug delivery or fluid sampling). Polymeric materials are receiving some interest from the medical industry because of their ease of manufacture, low cost and favorable biological and mechanical properties 2,7 . Hollow bore microneedles are designed to deliver or collect fluid across the skin. The fluid may be a drug, vaccine, blood, or interstitial fluid (ISF). Solid microneedles may only be used for drug or vaccine delivery. Solid microneedles are simpler to manufacture than hollow microneedles 8 , with elution of drug into the tissues after skin penetration. Open-channel microneedles draw fluid by capillary tension and are simpler to manufacture compared to internal bore designs, which are difficult to replicate by molding. Open-channel microneedles provide two-dimensional flows of fluids, which can be used for both extracting biological fluid and delivering drugs. Single in-plane open-channel microneedles with 2D features have been manufactured by numerically controlled milling and lithography 9,10 .
Interstitial fluid can be extracted from 50 μm beneath the skin surface 11 , while blood collection requires penetration to a depth of at least 400 μm to gain access to subcutaneous capillaries 12,13 . Microneedle lengths for painless blood collection are usually 400-900 μm 12,13 , though microneedles longer than 1 mm have also been reported 9,14 . Skin indentation, dermatoglyphics (small wrinkles) and hair on the surface of skin limit the depth of penetration 15 . Compaction of the skin layer when pressure is applied may limit the hydraulic conductivity of hollow microneedles 15 . An understanding of the biomechanics of skin insertion is important for specification of microneedle tip sharpness, mechanical stability and actuators for microneedle insertion.
The manufacture of long microneedles by subtractive manufacturing methods is technically challenging 3,4,9,10 , so fabrication methods have dictated microneedle geometry rather than biomechanical and physical design considerations. However, we present a simple and versatile fabrication process directly linking three-dimensional (3D) modeling and simulation with microscale printing and replication. The process we have initiated involves microstructures fabricated by 3D stereolithography directly from CAD drawings, which were then replicated by soft embossing. Micro-computed tomography (Micro-CT) demonstrated that feature sizes were within 4% of the CAD drawing specification. The mechanical stability of microneedles was determined by finite element analysis (FEA) and physical compression tests. These data were used to predict if microneedles are likely to fail during skin penetration. Surface energy and channel dimensions determine the rate of filling of microneedle open channels and reservoirs with aqueous solutions. A 4 × 4 array of open-channel microneedles connected to individual two nanolitre reservoirs was tested in vivo. Multiphoton fluorescence microscopy was used to demonstrate delivery of fluorescein tracer into the skin of excised rabbit ears. These initial proof-of-concept studies demonstrate the potential application of 3D microneedle prototyping and embossing for drug delivery and point-of-care diagnostics.
MATERIALS AND METHODS
Open-channel microneedle design
Microneedles presented in this paper comprise a cylindrical body, an ultra-sharp pointed tip to penetrate tissue, an open channel extending along a side of the body from the tip to the base of the microneedle, and a flange-shaped base connecting the needle to the back plane, with an open channel connecting the shaft open channel to a through hole and/or reservoir. The design was based on solid and fluid mechanics considerations. Fluid flow channels had to be partially open so that the microneedle array can demold from an elastomeric negative mold.
The geometric details of these designs are based upon an understanding of the material properties and the fluid dynamic properties of open-channel flow. The microneedles are 700 μm in height and the internal diameter of the microneedle open channels is 30 μm. This diameter needs to be large enough to ensure the flow of cells, especially larger cells such as leukocytes (white blood cells). Other parameters that determine flow in a microchannel include blood viscosity, hydrodynamic diameter, contact angle, and driving forces such as surface tension.
Fabrication of master microneedles
Polymeric master microneedles with the geometric designs presented in this paper were fabricated by 3D laser lithography using the Photonic Professional GT system (Nanoscribe GmbH, Karlsruhe, Germany). The direct laser writing (DLW) technique, also known as two-photon polymerization (TPP) or 3D laser lithography, is a nonlinear optical process based on two-photon absorption (TPA) theory. The Nanoscribe system is equipped with a pulsed erbium-doped femtosecond (frequency-doubled) fiber laser source with a center wavelength of 780 nm for exposure of the photoresist. At pulse lengths of 100-200 femtoseconds the laser power ranges between 50 and 150 mW (Ref. 16). For fabrication of several types of microneedles, CAD models were generated with SolidWorks software (Dassault Systems SolidWorks Corporation, Concord, NH, USA) in stereolithography (STL) file format and imported into the software package Describe (Nanoscribe GmbH, Germany) for scripting of the writing parameters. The laser beam was focused into the negative-tone photoresist, IP-S (Nanoscribe GmbH, Karlsruhe, Germany), using a dip-in laser lithography (DiLL) objective with 25× magnification and NA = 0.8.
In this process, the objective lens is dipped directly into the liquid, and the uncured photoresist acts as both the photosensitive and the immersion medium in an inverted fabrication arrangement. The refractive index of the photoresist defines the focal intensity distribution. In the DiLL process the objective working distance does not limit the height of the sample; therefore, structures with millimeter heights can be fabricated. A drop of resist, which exhibited good adhesion to the silicon substrate, was cast onto the substrate and loaded onto the system. Microneedle arrays were written in galvo scan mode (XY) with piezo Z offsetting. The arrays were split into blocks of 317 μm × 312 μm × 20 μm (XYZ) within the working range of the galvo scan mode and stitched together. A laser power of 100 mW, a scan speed of 6 cm s−1, and minimum and maximum slicing distances of 0.3 and 0.5 μm, respectively, were chosen after process optimization. After exposure, the structures were developed in a propylene glycol monomethyl ether acetate (PGMEA) bath for 30 min plus a 3 min isopropyl alcohol (IPA) rinse, followed by a 20 min flood exposure under a UV light source with 16 mW cm−2 intensity to further crosslink the photosensitive material (see Supplementary Discussion for process optimization).
Casting of negative elastomeric mold
A 'soft' negative impression of the masters was cast using the silicone elastomer polydimethylsiloxane (PDMS) (SYLGARD 184 Silicone Elastomer Kit, Dow Corning, Midland, MI, USA) with a base/curing agent ratio of 10:1 in a Petri dish. The mixture was degassed in a vacuum chamber for 60 min to suppress the formation of air bubbles during the subsequent curing stage in a standard laboratory oven at 60°C overnight. The cured PDMS molds were peeled off the master prototypes to be used as negative molds for microneedle replication.
Embossing thermoplastic materials using negative elastomeric molds
Thermoplastic microneedle replicas were created by a soft-embossing process, which was performed on a rheometer (Kinexus Rheometer, Malvern Instruments Ltd., Worcestershire, UK) using the PDMS negative molds. One or two thermoplastic pellets (cyclo-olefin polymer, Zeonor 1060R) were loaded onto each cavity of the PDMS negative molds and placed between two 20 mm diameter stainless steel plates. The upper plate was lowered until the plates were in contact and heated to 160°C, 60°C above the glass transition temperature of the thermoplastic (Tg = 100°C). This molding temperature decreases the viscosity of the molten thermoplastic so that it easily penetrates the negative mold cavities. The upper plate was then lowered further as the thermoplastic melted, until a specified target force was reached. On average, a maximum force of 19.52 ± 0.64 N (mean ± standard deviation) was applied during this embossing process. In order to achieve consistent and uniform embossing, the molding temperature was held at 160°C for around 15 min throughout the embossing process 17 , while the desired gap between the plates was achieved by applying a calibrated force.
The mold and molten polymer were then cooled to 10-15°C, which was maintained for 10-15 min under a constant force (1.6 N) before demolding. The solidified thermoplastic microneedle arrays were separated from the PDMS elastomeric mold without fracture or defect. The molds were reused many times (>20 cycles).
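For convenience, the embossing schedule described above can be collected into a single parameter set. The sketch below simply restates the numbers from the text; the field names are arbitrary labels, not part of any instrument API.

```python
# Soft-embossing parameters as reported in the text (labels are illustrative)
embossing_recipe = {
    "mold_material": "PDMS (SYLGARD 184, 10:1 base/curing agent)",
    "replica_material": "Zeonor 1060R cyclo-olefin polymer (Tg = 100 C)",
    "molding_temperature_C": 160,        # 60 C above Tg to lower melt viscosity
    "hold_time_min": 15,                 # time held at molding temperature
    "max_embossing_force_N": 19.52,      # mean maximum applied force
    "cooling_temperature_C": (10, 15),   # cooled range before demolding
    "cooling_hold_min": (10, 15),
    "demolding_hold_force_N": 1.6,       # constant force maintained while cooling
}
print(embossing_recipe["molding_temperature_C"])
```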
Micro-CT imaging
The 3D dimensions of the master microneedle array, the microneedle array replica, and the PDMS mold were measured on the custom-built high-resolution micro-CT facility at UNSW with voxel sizes of 1.36, 1.9, and 1.95 μm, respectively. The X-ray beam energy was set to 30-45 keV and acquisitions took between 6 and 10 h, with 65.7 GB of projection image data collected by a high-quality 3072 × 3072 pixel flatbed detector (Varian 4343CB). Image reconstruction was carried out on the NCI supercomputing facility in Canberra utilizing 192 CPUs and 768 GB RAM via a 3D back-projection-based reconstruction algorithm 18 , producing about 16 GB of data per tomogram. The reconstructed 3D images were then filtered with an anisotropic diffusion filter for edge-preserving image de-noising 19 . Segmentation of the gray-scale data was performed with a combination of watershed and active contour methods using the software Mango, since simple thresholding often does not produce correct surfaces or volumes 20 . All visualizations were carried out using the open source software Drishti (version 2.6) 21 . Metrology was performed on the master microneedle array, the negative PDMS mold, and the Zeonor 1060R replica.
Mechanical compression testing of thermoplastic microneedles
The mechanical strength of thermoplastic microneedles fabricated by soft embossing was measured and analyzed through a series of axial compression tests. Compression tests were performed on (1) single microneedles with a connecting reservoir, and (2) a microneedle patch consisting of 16 microneedles with connected reservoirs, using a rheometer (Kinexus Rheometer, Malvern Instruments Ltd., Worcestershire, UK), the same instrument used to perform the soft embossing. Throughout the experiment the lower platen of the rheometer was fixed and the upper platen approached the microneedle tips at a range of velocities and displacements along the longitudinal axis of the microneedles. Force and displacement during the compression tests were continuously recorded so that force-displacement curves could be plotted to determine the yield strength of the microneedles. These were destructive tests, so one sample was used per test.
For test (1) three single side-opened thermoplastic microneedles connected to a reservoir were tested. All the samples were fabricated by soft embossing using the same elastomeric mold. The microneedles had a 700 μm height, 150 μm tip length, and a reservoir depth of 180 μm. Single microneedles were attached by double sided tape to the lower platen of the rheometer. The upper platen of the rheometer was initially located 460 μm above the microneedle tip and programmed to approach the specimen with speeds of either 25, 35, or 45 μm s − 1 . The maximum displacement was fixed so that the upper platen stopped 300 μm above the lower platen. In test (2) the upper platen was initially 1.3 mm above the lower platen and traveled with a constant speed of 35 μm s − 1 for 500 μm along the axial direction of a microneedle patch consisting of 16 microneedles with connected reservoirs. During compression, force and displacement of the upper platen were recorded for each sample.
FEA of microneedle shaft design
The overall strength of an individual microneedle and its tendency to fail through buckling during skin penetration will depend on materials and overall shape in a way that can be modeled using FEA.
Microneedle geometric design will affect the distribution of stress when the microneedle is loaded. The microneedle is more likely to fail at locations where stress is concentrated, specifically at sharp corners where the microneedle is joined to the back plane of the microneedle patch. Cracks that initiate microneedle failure will also propagate from regions of maximum tensile stress. Another feature of the microneedle design presented in this paper is a curved flange at the base of microneedle shafts to diffuse stress and strengthen the connection of the shaft to the microneedle back plane. The distribution of stress on loading was simulated by finite element analysis (COMSOL Multiphysics, COMSOL AB, Sweden, v4.3a) using the material properties of the thermoplastic. The material (Zeonor 1060R) was considered as a linear elastic material with Young's modulus of 2100 MPa and Poisson ratio of 0.49. In this analysis the base connected to the microneedle was completely fixed where all other degrees of freedom of the microneedle were enabled. Maximum and minimum mesh element sizes were defined as 129 and 16.1 μm, respectively. Two microneedle designs were examined, a single open-channel microneedle with straight side walls and a single open channel microneedle with a flange at the base of its shaft, Figure 1.
Oxygen plasma treatment
In order to facilitate filling of the microneedle open channels and reservoirs by capillary pressure, the hydrophobic thermoplastic must be surface treated to reduce its contact angle to below 90°. Oxygen plasma treatment increases the free energy of the surface by creating hydrophilic, oxygen-containing groups such as carbonyl, carboxyl, and ester groups on the surface 22,23 . Oxygen plasma treatment was performed on the thermoplastic microneedle arrays before the biological experiments, using an oxygen plasma etcher (PE-250 Plasma etcher, Denton Vacuum, USA) with 50 W RF power and 340 mTorr pressure for 20 min.
Contact angle measurements were performed on flat films of Zeonor 1060R using the sessile-drop method with a CAM 200 compact contact angle meter (KSV Instruments Ltd., Helsinki, Finland) equipped with a C200-30 camera. Curve-fitting software, KSV CAM Optical Contact Angle and Pendant Drop Surface Tension Software Version 3.95, was used to analyze the collected images. All measurements were performed at room temperature with air as the light phase and water as the heavy phase. To ensure consistency and symmetry of the values, both the right and left side angles of the droplet were analyzed. Drops of 15 μL of deionized (DI) water were placed on each sample using a micropipette, and the image of each drop deposited on the surface was captured immediately.
Skin penetration and drug delivery
The potential for microneedle skin penetration and drug delivery was tested "in vivo" using the ear of a euthanized rabbit. Studies were conducted using experimental procedures approved by the University of New South Wales Animal Care and Ethics Committee (ACEC) (Ethics Number 15/22A). Microneedle arrays consisting of 16 microneedles were introduced into the rabbit ear with an insertion velocity of 0.5 m s−1 using a commercial spring-loaded applicator (Medtronic MiniMed Quick-Serter) 24 . The microneedle array was fixed to the middle of the applicator via a microscope coverslip using double-sided tape. The microneedle array was dip-coated in a concentrated aqueous solution of fluorescein (sodium salt, F6377, Sigma-Aldrich Corp., St. Louis, MO, USA) at room temperature. The excess solution was carefully wicked away from the microneedle array using a tissue. Moderate pressure was applied for a few seconds following microneedle patch insertion. The site of insertion was imaged following removal of the microneedle array using a Leica TCS SP5 STED confocal microscope (Leica Microsystems, Wetzlar, Germany). The penetration of fluorescein into the skin was visualized by two-photon optical sectioning at various depths below the surface of the skin. A control experiment was required on the same tissue to quantify the rate of diffusion of fluorescein across skin without the microneedle array. A small drop of fluorescein solution was therefore placed on the rabbit ear skin and imaged using the confocal microscope.
RESULTS
Design and manufacture of single microneedles and microneedle arrays with reservoirs
Open-channel master microneedles were fabricated by two-photon polymerization (TPP) using the Photonic Professional GT system (Nanoscribe GmbH, Germany) (Figure 2). The write area of the system under galvanometric mirror control of the laser focal point was limited to a 250 μm × 250 μm block, so X-Y microscope stage motion with stitching, controlled by Nanoscribe's proprietary software Describe, was required for printing the 4 × 4 microneedle arrays (Figures 2b-d).
Stitching leaves behind a small linear artifact which is placed so that it does not interfere with critical geometric features such as the needle tip (Figure 2e). Pairs (Figures 2b and c) or rows (Figure 2d) of microneedles were interconnected by channels and reservoirs. Each microneedle had a height of 700 μm with a 150 μm flange segment at its base. The taper angle of the microneedle tip was also varied (63.4°: Figures 2b and d; 77.9°: Figure 2c). Single microneedles and microneedle arrays were accurately replicated from TPP prototypes by soft embossing the medical grade thermoplastic Zeonor 1060R (Figure 3).
The microneedle replica, master, and mold were scanned using 3D micro-CT to measure feature sizes (Figure 4). Imaging of the master microneedle array verified that feature sizes were within 0.47% of the CAD drawing specification, whereas the microneedle array replica and the PDMS mold were 3.48% and 3.44% smaller than the CAD drawing specification, respectively (Table 1).
Bending force FEA of microneedles
Microneedles are most likely to fail where stress is concentrated 25 .
Mechanical compression testing of microneedles
Dynamic loading tests were conducted to determine the yield strength of the microneedles when force was applied along their long axis. Three geometrically identical microneedles (Figure 3a) were tested at different velocities (35, 45 and 55 μm s−1). As can be seen from Figure 6b, the force increases upon first contact of the rheometer's upper platen, reaching a maximum, followed by a minimum and a secondary maximum. The first maximum on each curve corresponds to the failure load that resulted in permanent deformation of the structure. The subsequent force increase corresponds to compression of the microneedle base. The lag time between the start of recording and first contact varies because different distances are traversed before the upper platen reaches the microneedle tip. The slope of the initial force-displacement curves was identical; however, the velocity was directly related to the failure load of the specimen. Figure 6c shows the force versus displacement curve for a microneedle patch consisting of a 4 × 4 array of 700 μm microneedles with a tip taper angle of 63.4°. The displacement was linearly related to the applied force up to failure at 10 N (Figure 6c). During compressive failure the force was approximately constant over a 100 μm displacement range. An SEM image of the microneedle patch following the axial compression test (Figure 6d) showed that the microneedles were permanently deformed by bending and compression, without obvious fragmentation.
Delivery of fluorescein into skin
Multiphoton confocal microscopy was used to measure the depth of penetration of fluorescein solution delivered by a 4 × 4 microneedle array into a cadaveric rabbit ear. The Z image stack is shown in Figures 7a and 7b. Figure 7a shows a control experiment in which a drop of solution was applied to the rabbit skin without application of a microneedle patch; the fluorescein signal disappeared 66 μm below the skin surface. Figure 7b shows microneedle patch insertion points with tracking of fluorescein to a depth of at least 160 μm. Figure 7c shows the penetration of fluorescein into the skin over 3 h. Figure 7d shows an SEM image of the microneedle patch taken after insertion into and removal from the rabbit ear. As can be seen from Figure 7d, no fracture or bending of the microneedles was observed; the microneedle array was washed and sputter-coated with gold prior to SEM imaging.
DISCUSSION
The methods in this study would allow production of complex microstructures. The microneedle geometries generated by TPP would be extremely difficult to achieve through subtractive manufacturing methods such as deep reactive ion etching (DRIE), laser machining or chemical wet etching. Thus, microneedle designs are not restricted by the physics of machining and etching, but based upon functional and structural criteria. Doraiswamy et al. 26 fabricated microneedles by TPP from Ormocer hybrid materials with 750 μm height and 200 μm base diameters. Ovsianikov et al. 27 also fabricated 800 μm tall microneedles by TPP with base diameters ranging from 150-300 μm. Organically modified ceramic-Ormocer US-S4 has been used for manufacturing arrays of out-of-plane and in-plane hollow microneedles.
Manufacture times were reduced to less than 20 min per patch by thermoplastic (Zeonor 1060R) replication of TPP prototypes using soft elastomeric molds. The embossing process was automated with Peltier heating above the glass transition temperature and cooling below ambient. Zeonor's low melt viscosity at 60°C above its glass transition temperature facilitates mold penetration without the need to apply high pressure. The elastomeric molds remained undamaged after at least 22 replication cycles, due to the use of very low embossing forces (~19 N). During demolding the silicone rubber deforms with low force, and does not stress or fracture the fragile microneedles. For the microneedles presented in this paper, bending forces can also arise during the fabrication process. There is a high chance of microneedle failure from the bending force applied when separating the microneedle master prototype from the PDMS elastomeric mold after curing and when demolding soft-embossed microneedles from the PDMS negative mold. The maximum stress due to bending occurs at the microneedle base at the time of skin penetration. The microneedle bending force can be determined by Euler-Bernoulli beam theory. The maximum lateral (bending) force that can be applied without breakage of the microneedle is

F_bending = σ_y I / (L c),

where σ_y (N m−2) is the yield strength of the material, I (m4) is the area moment of inertia of the shaft cross-section, L (m) is the shaft length, and c (m) is the distance from the neutral axis to the outer surface. In addition, microneedles will fail by buckling if the shaft diameter is too small for the applied force. The critical buckling load F_Buckling is calculated by Euler's formula:

F_Buckling = π² E I / (K L)². (2)

Here, E (Pa) is the modulus of elasticity of the material, I (m4) is the area moment of inertia, L (m) is the shaft length, and K is a constant called the effective length factor, which depends on the mode of column support. In this paper, all the fabricated microneedle shafts are fixed at one end to the patch base, whilst the tip is free to move laterally (K = 2). Therefore, according to Equation (2), for a Zeonor 1060R microneedle with a Young's modulus of 2100 MPa and a 700 μm total height that includes a 150 μm flange and a 150 μm tip, the failure buckling force would be ~0.4 N.
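A worked check of the Euler buckling estimate quoted above (~0.4 N) is sketched below. The shaft outer diameter is not stated in this excerpt, so a value of about 125 μm is assumed purely for illustration, and the open channel is neglected when computing the area moment of inertia.

```python
import math

E = 2.1e9    # Young's modulus of Zeonor 1060R, Pa
L = 400e-6   # free shaft length: 700 um total - 150 um flange - 150 um tip, m
K = 2.0      # effective length factor for a fixed-free column
d = 125e-6   # assumed shaft diameter (not stated in the text), m

I = math.pi * d ** 4 / 64.0                      # solid-cylinder approximation
F_buckling = math.pi ** 2 * E * I / (K * L) ** 2 # Euler critical load
print(F_buckling)  # ~0.4 N, consistent with the estimate quoted in the text
```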
A needle tip will penetrate the human epidermis if it exerts a tensile stress at the point of contact beyond the ultimate strength of skin, which is about 27.2 ± 9.3 MPa. The ultimate strength of skin varies with age and body location 29 . The sharper the needle tip, the more concentrated the tensile force at the point of contact. Microneedle tips smaller than the stratum corneum cells (corneocytes, ~30 μm) can penetrate the skin more easily than larger tips, because for small tip sizes the penetration force is concentrated on a small contact area rather than being applied over a large one. Khanna et al. 30 studied the effect of the tip sharpness of hollow microneedles on human cadaver skin penetration force. Insertion forces were inversely related to tip area, and decreased markedly from 4.75 N for a blunt tip (tip area: 14,400 μm 2 ) to 0.1 N for the sharpest microneedle (tip area: 186 μm 2 ). Davis et al. 31 experimentally measured the forces required for insertion of microneedles into human skin as a function of microneedle tip diameter, and concluded that for microneedles with interfacial tip areas of less than 5000 μm 2 the insertion force required to penetrate human skin is less than 0.4 N (Ref. 31). Wang et al. 32 reported the required insertion force of a sharp beveled-tip microneedle to be 0.275 N for insertion into excised porcine skin, confirming that tip geometry and sharpness are significant factors in reducing the required microneedle insertion force 32 . The microneedles fabricated in our study through the soft embossing process had tip diameters much smaller than 1 μm, and thus tip areas of less than 0.75 μm 2 . Therefore, the insertion force for these microneedles should be considerably smaller than 0.4 N. The minimum failure load (1.04 N) observed in Figure 6b for the 35 μm s −1 speed was much higher than the required insertion forces reported in the human skin studies 30,31 . In addition, the rabbit ear insertion experiment demonstrated that the microneedle patch was mechanically robust enough to remain intact after penetration of the rabbit ear (Figure 7d).
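A back-of-the-envelope comparison, ignoring friction and skin deformation and using the quoted values only as illustrative inputs, shows how large the margin is between the force needed to exceed the ultimate strength of skin at such a small tip and the measured failure load:

```python
# Hedged arithmetic check (illustrative only): force needed at the tip to reach the
# ultimate tensile strength of skin, compared with the minimum measured failure load.
sigma_skin = 27.2e6      # ultimate strength of skin, Pa (value cited in the text)
tip_area   = 0.75e-12    # tip contact area, m^2 (< 0.75 um^2 quoted above)
F_failure  = 1.04        # minimum microneedle failure load observed, N

F_penetrate = sigma_skin * tip_area       # force at which tip stress exceeds skin strength
print(f"Force to exceed skin strength at the tip: {F_penetrate*1e6:.1f} uN")
print(f"Margin vs. measured failure load: {F_failure / F_penetrate:.0f}x")
```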
Due to the hydrophobic nature of the cyclic olefin polymer Zeonor 1060R, oxygen plasma treatment was required to make the surface hydrophilic so that the microneedle could generate the surface tension needed for passive filling of the open channel and reservoir. We observed passive filling of microneedles and reservoirs that had been treated by oxygen plasma in less than 200 ms, while untreated hydrophobic devices did not fill with water. The measured contact angle of untreated Zeonor 1060R was 88.25° (Figure 8a), while the contact angle of DI water on the surface of a sample on day 1 after surface modification was 21.57° (Figure 8b). To further explore the restoration of Zeonor 1060R hydrophobicity, contact angle measurements were performed on a sample for 9 consecutive days and again 10 days after surface treatment, showing almost total restoration of the initial hydrophobicity of Zeonor 1060R within about 2 weeks (Figure 8c).
At the microscale, the forces that determine filling of a capillary are surface tension and viscous drag. Gravity can be neglected because the weight of the fluid is negligible compared to surface tension (Bond number << 1). Flow along a microfluidic open channel is viscous because Re << 1. The flow rate is approximated by equating the viscous pressure drop along a cylinder (Hagen-Poiseuille equation: 8 μLv/r 2 ) with the surface tension drawing fluid into the hydrophilic cylinder (Young-Laplace equation: 2γ cos θ/r). The velocity of the meniscus during filling therefore depends on L, the length of capillary already filled with fluid: v = dL/dt = γ r cos θ / (4 μL), where r is the capillary internal radius, θ the contact angle, and γ the surface tension. The capillary filling time, T_fill, is found by integrating with respect to channel length: T_fill = 2 μL 2 / (γ r cos θ). A Young-Laplace analysis of the channel opening leads to a formula that relates the leakage pressure directly to the contact angle and surface tension and inversely to the channel width w. Microneedle side channels were designed with a 20 μm wide opening so that the calculated leakage pressure was 5.2 kPa (γ = 55 × 10 −3 N m −1 , θ = 70°), which is higher than blood capillary pressure (2.0-4.7 kPa) 33 . Thus, open channels are more likely to leak if their contact angle is larger or the channel opening is greater than 20 μm.
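The fill-time relation obtained by this integration can be checked numerically. In the sketch below, the capillary radius, fill length, and water viscosity are assumed values chosen only to illustrate that the predicted fill time is well below the observed 200 ms.

```python
import math

# Hedged sketch of the Washburn-type fill time: T_fill = 2*mu*L^2 / (gamma*r*cos(theta)),
# obtained by balancing the Hagen-Poiseuille pressure drop against the Young-Laplace pressure.
gamma = 55e-3             # surface tension, N/m (value used in the design calculation)
theta = math.radians(70)  # contact angle used in the design calculation
mu    = 1.0e-3            # dynamic viscosity of water, Pa*s (assumed)
r     = 10e-6             # effective capillary radius, m (assumed from the 20 um opening)
L     = 700e-6            # fill length, m (assumed ~ total needle height)

t_fill = 2 * mu * L**2 / (gamma * r * math.cos(theta))
print(f"Estimated capillary fill time: {t_fill*1e3:.1f} ms")   # a few ms, << 200 ms observed
```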
CONCLUSION
A reliable and relatively uncomplicated method for manufacturing novel microneedle arrays integrated with open-channel microfluidics for subcutaneous fluid sampling and drug delivery was developed. Negative elastomeric molds were used to replicate TPP prototypes by hot embossing the cyclic olefin polymer Zeonor 1060R. It was possible to produce multiple polymeric replicas with almost identical geometry to the master and the CAD design using the same negative elastomeric mold. Microneedle designs with open microchannels connecting microneedle flow to a reservoir were easily replicated by soft embossing. Hydrophilic open-channel microneedles fill rapidly by capillary action, and can deliver a fluorescein tracer into the skin of the rabbit ear. Moreover, the deflection of microneedles and the stress distribution under a lateral (bending) force were investigated by FEA using COMSOL Multiphysics software. Another novel design feature, the flanged base, was added to the design to reduce stress concentration and fracture at the base of the microneedles.
In conclusion, the use of 3D laser stereolithography to create master prototypes, with replication by soft embossing, is a significant advance in the field of microneedle manufacture. Microneedle design can be based primarily upon structural and functional modeling. Novel geometric features, such as microneedle open channels connected to microfluidic reservoirs that are rendered directly from CAD drawings, go well beyond what is possible by subtractive fabrication methods. Replica molding of thermoplastic microneedles has the advantage of low material input and capital equipment costs (versus DRIE for silicon), with potential for large-scale manufacture of novel microneedle designs. Future advances will focus on the application of lab-on-a-chip devices to point-of-care healthcare delivery.
| 6,956.6 | 2017-10-09T00:00:00.000 | ["Materials Science"] |
Obesity Resistance Promotes Mild Contractile Dysfunction Associated with Intracellular Ca2+ Handling
Background: Diet-induced obesity is frequently used to demonstrate cardiac dysfunction. However, some rats, like humans, are susceptible to developing an obesity phenotype, whereas others are resistant to it. Objective: To evaluate the association between obesity resistance and cardiac function, and the impact of obesity resistance on calcium handling. Methods: Thirty-day-old male Wistar rats were distributed into two groups, each with 54 animals: control (C; standard diet) and obese (four palatable high-fat diets), for 15 weeks. After the experimental protocol, rats consuming the high-fat diets were classified according to the adiposity index and subdivided into obesity-prone (OP) and obesity-resistant (OR). Nutritional profile, comorbidities, and cardiac remodeling were evaluated. Cardiac function was assessed by papillary muscle evaluation at baseline and after inotropic maneuvers. Results: The high-fat diets promoted an increase in body fat and adiposity index in OP rats compared with C and OR rats. Glucose, lipid, and blood pressure profiles remained unchanged in OR rats. In addition, the total heart weight and the weights of the left and right ventricles in OR rats were lower than those in OP rats, but similar to those in C rats. Baseline cardiac muscle data were similar in all rats, but myocardial responsiveness to a post-rest contraction stimulus was compromised in OP and OR rats compared with C rats. Conclusion: Obesity resistance promoted specific changes in the contraction phase without changes in the relaxation phase. This mild abnormality may be related to intracellular Ca2+ handling.
Introduction
Obesity is characterized by an excess of fat mass influenced by genetic and environmental factors [1][2][3] . This multifactorial disease is an independent risk factor for cardiovascular disorders such as hypertension, arteriosclerosis, and coronary heart disease 1,4 .
Elucidation of the mechanisms involved in obesity-related cardiac dysfunction requires the use of appropriate diet-induced models 4,[5][6][7][8][9] . However, it is well known that rats, like humans, show different susceptibilities to the development of diet-induced obesity, so it is possible to identify subgroups that develop obesity and others that maintain a lean phenotype. Considering the lack of information regarding cardiac function and the mechanisms underlying the involvement of Ca 2+ handling in obesity resistance, this study was designed to test the hypothesis that obesity resistance does not promote myocardial dysfunction or impair Ca 2+ handling in obesity models.
Methods Animal Models and Experimental Protocol
Thirty-day-old male Wistar rats were randomly distributed into two groups: control (C, n = 54) and obese (Ob, n = 54). The C group was fed a standard diet (RC Focus 1765) and the Ob group was alternately exposed to four palatable high-fat diets (RC Focus 2413, 2414, 2415, and 2416; Agroceres, Rio Claro, Brazil) as previously described 5 .
The sample size was based on previous studies performed in our laboratory 6,16,23,29 .
Body weight (BW) was recorded weekly after the start of the experimental protocol. Obesity, determined according to BW gain, became established at week 3, as previously demonstrated 16 . At this time point, C and Ob rats were maintained on their respective diets for 15 additional consecutive weeks.
Animal Care
The animals were maintained in a controlled environment with clean air, 12 hours of light/dark cycles starting at 6 a.m., room temperature maintained at 23 ± 3°C, and relative humidity maintained at 60 ± 5%. All experiments and procedures were performed in accordance with the Guide for the Care and Use of Laboratory Animals published by the National Research Council (1996), and were approved by the Ethics Committee for the Use of Animals (UNESP, Botucatu, SP, Brazil), under number 1036.
Nutritional Profile
Food consumption, calorie intake (CI), feed efficiency (FE), and BW were recorded weekly as previously described 5 . Fifteen weeks after obesity had developed, the animals were anesthetized with an injection of ketamine (50 mg/kg) and xylazine (0.5 mg/kg). They were then decapitated and thoracotomized, and the epididymal, retroperitoneal and visceral fat depots were dissected and weighed. The adiposity index was calculated with the following formula: (total body fat/final BW) x 100. Body fat was determined from the sum of the individual weight of each fat pad according to the formula: Body fat = epididymal fat + retroperitoneal fat + visceral fat 16 .
Determination of Obesity and Obesity Resistance
A criterion based on the adiposity index was used to determine the occurrence of obesity and obesity resistance, following several authors 4,11,19 . After 15 weeks, rats consuming high-fat diets were ranked based on their adiposity indexes. Thus, in the current study, rats on the high-fat diet exhibiting the greatest adiposity indexes were referred to as OP (n = 35), whereas those exhibiting the lowest adiposity indexes were referred to as OR (n = 19). Rats fed the standard diet that failed to present the normal characteristics of the C group were excluded from further analysis (n = 15).
Systolic Blood Pressure (SBP)
One week before the rats were euthanized, tail SBP was measured with a tail plethysmograph. The animals were warmed in a wooden box at 40°C for 4 minutes to induce tail arterial vasodilation. A sensor coupled to an electro-sphygmomanometer attached to a computer was placed in the tail and the SBP was then measured with a specific software (Biopac Systems Inc., CA, USA).
Glucose Tolerance and Homeostatic Model Assessment of Insulin Resistance (HOMA-IR)
The experiments were performed in the C (n = 34), OP (n = 31), and OR (n = 13) rats after 15 weeks of treatment. After 4-6 hours of fasting, a blood sample was collected from the tip of their tails. The blood glucose level (baseline condition) of each animal was immediately determined using a handheld glucometer (Accu-Chek Advantage; Roche Diagnostics Co., Indianapolis, IN). Subsequently, an injection of glucose solution dissolved in water was administered intraperitoneally (Sigma-Aldrich®, St Louis, MO, USA), and blood glucose levels were measured after 15, 30, 60, 90, and 120 minutes 20 . The HOMA-IR reflects the degree of insulin resistance and was calculated with the following formula: HOMA-IR = [fasting glucose (mmol/l) X fasting insulin (mU/ml)]/22.5.
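For reference, the two index formulas used in the Methods can be expressed as small helper functions. The example inputs below are illustrative, not measured values, and insulin is assumed to be expressed in μU/mL, as in the standard HOMA-IR definition.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL, assumed units) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def adiposity_index(epididymal_g: float, retroperitoneal_g: float,
                    visceral_g: float, final_bw_g: float) -> float:
    """Adiposity index (%) = (total body fat / final body weight) x 100, as in the Methods."""
    body_fat = epididymal_g + retroperitoneal_g + visceral_g
    return body_fat / final_bw_g * 100.0

# Illustrative (not measured) values:
print(homa_ir(5.5, 12.0))                       # ~2.9
print(adiposity_index(8.0, 10.0, 6.0, 480.0))   # ~5.0 %
```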
Metabolic Profile
At the end of the experimental period, the animals were fasted for 12-15 hours, then anesthetized with an intramuscular injection of ketamine (50 mg/kg) and xylazine (0.5 mg/kg), and euthanized by decapitation. Blood samples were collected and the serum was separated by centrifugation at 3,000 X g for 15 minutes at 4°C, and stored at -80°C until further analysis. The serum was analyzed for levels of glucose, triglycerides (TG), total cholesterol (T-Chol), high-density lipoprotein cholesterol (HDL), low-density lipoprotein cholesterol (LDL), insulin, and leptin. Glucose, TG, T-Chol, HDL, and LDL were measured with an automatic enzymatic analyzer system (Biochemical analyzer BS-200, Mindray, China). Leptin and insulin levels were determined by enzyme-linked immunosorbent assay (ELISA) method using commercial kits (Linco Research Inc., St. Louis, MO, USA).
Post-Death Morphological Analysis
The rats were euthanized by thoracotomy, and the hearts, ventricles, and tibia were separated, dissected, weighed, and measured. Cardiac remodeling was determined by analyzing the weight of the heart and the left (LV) and right (RV) ventricles, and their correlation with the tibial length.
Isolated Papillary Muscle
To assess the intrinsic contractile and mechanical properties of the heart, the isolated papillary procedure was employed as previously described 5 . This experiment was performed in C (n = 36), OP (n = 35), and OR (n = 18) rats. The papillary muscles were also evaluated under the baseline condition of 2.5 mM Ca 2+ and after the inotropic maneuvers of increase in extracellular Ca 2+ concentration and post-rest contraction (PRC) as previously described 5 .
Statistical Analysis
All analyses were performed using the SigmaStat 3.5 software (SYSTAT Software Inc., San Jose, CA, USA). The distribution of the variables was assessed with the Shapiro-Wilk test, and the results were reported as means ± standard deviations. Comparisons between groups were performed using one-way ANOVA for independent samples, and Tukey's post hoc test. Repeated-measures two-way ANOVA was used to evaluate glucose tolerance and myocardial Ca 2+ handling. The level of significance was determined at 5 % (α = 0.05).
General Characteristics of the Experimental Groups
There was no difference in baseline BW among the groups (Table 1). The high-fat diet promoted a substantial increase in body fat and adiposity index in OP rats compared with C and OR rats. Specifically, OP rats had 86.4% and 78.8% higher body fat content and 66.9% and 60.5% higher adiposity indexes than C and OR rats, respectively. In addition, epididymal, retroperitoneal, and visceral fat pads, as well as final BW, were greater in OP rats compared with C and OR rats. Despite the greater amount of energy in the high-fat diet, the calorie intake was similar in both groups due to reduced food consumption by OP and OR rats in relation to C rats. In addition, FE was higher in the OP group compared with that in the C group. Although FE values were similar in OP and OR rats, this parameter showed a trend towards a lower result in OR compared with OP rats (p = 0.077).
Glucose, Insulin, HOMA-IR, and Metabolic Profile
There were no statistical differences in glucose and insulin levels between the groups (Figure 1 -A and B). However, the glucose profile and HOMA-IR index were significantly affected by exposure to obesity (Figure 1 -C and D). The OP rats presented higher levels of glucose at time points 60, 90, and 120 minutes compared with C rats (Figure 1 -C). In addition, there was no statistical difference in the glucose profile between C and OR rats (Figure 1 -C). The area under the curve (AUC) for glucose was higher in OP rats than C rats (Table 2). Moreover, the HOMA-IR index was higher in OP rats than C and OR rats (Figure 1 -D). HDL, LDL, T-Chol and SBP were not significantly different between the groups (Table 2). However, TG levels were higher in OP than C rats. Furthermore, OP rats exhibited higher levels of leptin when compared with C and OR rats ( Table 2).
Morphological Characteristics
The morphological characteristics of the rats are displayed in Figure 2 (A-F). The absolute heart and LV weights, as well as their correlation with tibial length, were significantly increased in OP rats compared with C and OR rats (Figure 2 -A, B, D and E). Furthermore, OP rats showed greater RV weight and correlation with tibial length than C rats (Figure 2 -C and F). Of note, tibial lengths were similar among the groups.
Analysis of Myocardial Function and Calcium (Ca 2+ ) Handling
The contraction performance of papillary muscles at baseline conditions (Ca 2+ of 2.5 mM) was similar for all parameters in the C, OP, and OR groups (Table 3). After 60 seconds of PRC, the maximum developed tension (DT) was lower in OP rats compared with C rats (C: 7.2 ± 1.7 g/mm 2 versus OP: 6.4 ± 1.4 g/mm 2 ; Figure 3 -A). Although the significance of this effect was only observed at 60 s, the myocardium from OP rats also exhibited lower values of DT in response to PRC at 30 s (C: 6.8 ± 1.5 versus OP: 6.1 ± 1.3, p = 0.056). Although at baseline conditions +dT/dt values were similar between C, OP and OR rats, when subjected to PRC at 60 s this parameter was reduced in OP and OR rats compared to C rats (Figure 3 -B). In addition, there was a trend towards lower +dT/dt in OP and OR rats during the PRC at 30 seconds, when compared with C rats (p = 0.075 and p = 0.076, respectively), but these values were not significantly different between the groups (Figure 3 -B). Figure 3 (D and E) shows that obesity resistance did not impair DT and +dT/dt after the increase in extracellular Ca 2+ concentration. Furthermore, obesity and obesity resistance failed to elicit any significant effect on the peak of the negative tension derivatives (-dT/dt) at baseline and after maneuvers in the groups (Figure 3 -C and F).
Discussion
Although obesity and overweight are increasingly widespread, some individuals remain resistant to becoming obese 2 . Previous studies have shown that this resistance to obesity may be attributed to changes in nutrition and adiposity patterns 21,22 . Most humans and animals consuming high-fat diets show an increase in BW, with corresponding increase in adiposity levels 23,24 . In comparison, some animals that are fed high-fat diets present less weight gain and adiposity than others that are prone to obesity.
Few studies have evaluated and identified the cardiac characteristics of OR rats 4,8,17 . Still, the occurrence of cardiac dysfunction and its mechanisms remain unknown in this animal model. Interestingly, little information is available on the relationship between obesity resistance, cardiac function, and Ca 2+ handling. The major finding in the current study was that obesity resistance promotes mild myocardial dysfunction, and this result was related to damage in the contraction phase. We believe that this is the first study to report the role of Ca 2+ handling in the myocardium of OR rats.
Fat-enriched diets have been used for decades to model obesity and obesity resistance in rodents 17,22,24,25 . Using male rats fed a high-saturated-fat diet for 20 weeks, these studies reported that 42.5% and 40% of rats were classified as OP and OR, respectively. In addition, Carroll et al 4 found that 12 weeks of a moderate-fat diet identified 37.5% and 31.25% of rats as OP and OR, respectively. The high-fat diet used in the present study was sufficiently intense and long to promote obesity in 64.8% of the rats (OP), whereas 35.2% of the rats did not develop obesity (OR). The literature reveals that obesity resistance is characterized by gains in BW and adiposity at a rate similar to, or lower than, that of standard chow-fed rats 12,13,26,27 .
The findings of the current study show that OR animals had significantly reduced final BW and fat deposits compared with the OP group, but similar characteristics as those of C rats. In addition, the adiposity index was 60.5% lower in the OR group compared with the OP group. These results are in line with findings of several other studies 17,28 .
Previous studies have shown that obesity resistance can occur due to increased total energy expenditure as well as reduced food intake 11,13,21 . Joo et al 21 observed increased expression of some thermogenic enzymes and decreased expression of lipogenic enzymes in adipose tissues of OR rats fed a high-fat diet. Obesity resistance also showed suppression of lipogenesis and acceleration of fatty-acid oxidation in visceral fat 13 . The authors suggested that these characteristics are likely to contribute to the anti-obesity phenotype in rats. Moreover, Jackman et al 14 demonstrated that to maintain body homeostasis, OR animals tend to decrease their food intake and/or increase their energy expenditure. Many experiments have demonstrated that disorders induced in rats fed a high-fat diet resemble the human comorbidities caused by obesity, such as glucose intolerance, insulin resistance, hypertension, and dyslipidemia 4,18,[28][29][30] . In OR models, there have been controversies regarding the presence of comorbidities 10,31 . In the current study, there were no changes typically associated with obesity in OR rats, since the high-fat diet was not able to promote changes in glucose, lipid, insulin, leptin, or blood pressure profiles. Our data corroborate those of other studies in which elevation of these variables and/or presence of comorbidities were also not identified 4,10,31 . Of note, Carroll et al 4 found an increase in the HOMA-IR in OR rats compared with C rats.
Morphologic analysis indicated that obesity resistance did not induce cardiac remodeling as seen in human obesity 4 . Instead, OR rats presented lower total heart, LV, and RV weights compared with OP rats. While obesity promoted changes in cardiac structures, such as increases in LV weight (9.0%) and RV weight (21.0%) compared with C rats, OR rats only displayed a slight increase of 8.1% in RV weight, with no significant change in LV weight. Several factors have been implicated in the development of ventricular hypertrophy in obese models, including insulin and leptin 18,32,33 . Our results suggest that leptin and insulin did not increase sufficiently to promote cardiac remodeling in OR rats. The purpose of the present investigation was to study the changes in LV myocardial performance using the isolated papillary muscle preparation method. Several investigations currently use these maneuvers to identify changes in the contraction and relaxation phases which may not be observed under baseline conditions 9,19,34,35 . Along with the lack of an increase in BW or fat in the OR rats, the cardiac function in these animals did not change significantly after exposure to a high-fat diet at baseline conditions. Nevertheless, the myocardial responsiveness to PRC was compromised, with specific changes in the contraction phase but without changes in the relaxation phase. Our data are in disagreement with those of Louis et al 8 who have shown that OR rats fed a high-fat diet for 17 weeks presented cardiac dysfunction during the relaxation phase. Despite the absence of cardiac dysfunction at baseline conditions, the PRC stimulation provided evidence that the impairment of myocardial contraction seen in OR rats was related to changes in intracellular Ca 2+ handling. However, only a few studies have reported impaired intracellular Ca 2+ handling leading to myocardial dysfunction in OR rodents. In cardiac myocytes, Ca 2+ plays an important role in cardiac performance and physiological processes 15,36 . According to Bögeholz et al 36 , there are three main ways to modulate the contractile function of myofilaments, namely (1) alteration of cytosolic Ca 2+ concentration, (2) mechanical change in pretension, and (3) catecholaminergic stimulation.
A possible explanation for the contraction impairment mediated by +dT/dt in OR rats may be related to β-adrenergic system downregulation 37 , which was not observed in this study. Positive inotropy in response to β-stimulation involves several pathways such as a) phosphorylation of plasma membrane Ca 2+ channels by protein kinase A increasing Ca 2+ entry into the cell, b) phosphorylation of phospholamban and ryanodine receptor (RyR), increasing Ca 2+ stores and Ca 2+ release from the sarcoplasmic reticulum, respectively, and c) increase in actomyosin shortening velocity, which increases crossbridge cycling 37,38 . It has been reported that changes in the β-adrenergic system can reduce L-type Ca 2+ channels and RyR activity by regulating their phosphorylation status in obesity models 5,23,39,40 .
Conclusion
In summary, the results of this investigation demonstrate that the mild myocardial function changes caused by obesity resistance are related to a specific contraction impairment without changes in the relaxation phase. Future studies are necessary to evaluate the damage to intracellular Ca 2+ handling, as well as the β-adrenergic system, in OR rodent models.
| 4,366.6 | 2015-10-27T00:00:00.000 | ["Biology", "Medicine"] |
Optically simulated universal quantum computation
Recently, classical optics-based systems to emulate quantum information processing have been proposed. The analogy is based on the possibility of encoding a quantum state of a system with a 2^N-dimensional Hilbert space as an image in the input of an optical system. The probability amplitude of each state of a certain basis is associated with the complex amplitude of the electromagnetic field in a given slice of the laser wavefront. Temporal evolution is represented as the change of the complex amplitude of the field when the wavefront passes through a certain optical arrangement. Different modules that represent universal gates for quantum computation have been implemented. For instance, unitary operations acting on the qbit space (or U(2) gates) are represented by means of two phase plates, two spherical lenses and a phase grating in a typical image-processing setup. In this work, we present CNOT gates that are emulated by means of a cube prism that splits a pair of adjacent rays incoming from the input image. As an example of application, we present an optical module that can be used to simulate the quantum teleportation process. We also show experimental results that illustrate the validity of the analogy. Although the experimental results obtained are promising and show the capability of the system to simulate the real quantum process, we must take into account that any classical simulation of quantum phenomena has as a fundamental limitation the impossibility of representing nonlocal entanglement. In this classical context, quantum teleportation has only an illustrative interpretation.
INTRODUCTION
Quantum information processing has received special attention not only for the variety of problems of practical interest that it raises (quantum computation, quantum key distribution, etc.) but also for many other problems that show the most counterintuitive aspects of quantum mechanics. On the other hand, the analogy between quantum mechanics and classical optics has been recently explored [1][2][3][4][5][6]. The key idea is to exploit the wave nature of the electromagnetic field in order to represent the quantum state of one or more particles. In this representation, the probability amplitude of each state of a basis is associated with the amplitude of the electromagnetic field, and temporal evolutions are simulated by means of the propagation of the electromagnetic field through an optical system. The possibility of performing simulations of quantum phenomena by means of classical analogies is interesting from many points of view. Quantum algorithms can be understood as a consequence of the wave nature of the evolution of quantum states. In this sense, the wave character of the electromagnetic field allows us to emulate in an elegant way how quantum algorithms work. In this work, we present results that show how universal quantum computation can be optically simulated. We show how quantum states can be represented as images and how universal quantum gates can be represented as coherent optical processors. As an example, we show how all these elements can be combined in order to obtain an optical setup for simulating the quantum teleportation process. We present experimental results where the classical representation of the operations on one qbit is performed. Finally, the illustrative character of this classical simulation is discussed.
BACKGROUND
Quantum computation and quantum information is the study of the information processing that can be accomplished using quantum mechanical systems. The bit is the fundamental concept of classical computation. Quantum computation and quantum information are built upon an analogous concept, the quantum bit, or qbit for short. A qbit is simply a state of the two-dimensional Hilbert space H2, and it can be denoted as a complex linear combination of the two states of the computational basis {|0⟩, |1⟩}. A state of the computational basis of the 2^N-dimensional space of a system composed of N qbits can be denoted as the product |x1⟩|x2⟩...|xN⟩, where |xj⟩ is the basis state associated with the jth qbit (we have omitted the tensor product symbol "⊗" between each factor for simplicity).
Usually, the quantum computation process can be thought of as a circuit whose input and output states are generally multiple-qbit states. According to this circuit model of quantum computation, the input state is mapped onto the output state by a unitary operator. The following universality result can be demonstrated: any multiple-qbit logic gate may be composed from CNOT and single-qbit gates [7]. Single-qbit gates are the unitary operators acting on one-qbit states.
The CNOT gate has two input qbits, known as the control qbit and the target qbit, respectively. The action of the gate can be described as follows: if the first qbit (the control qbit) is set to 0, then the second qbit (the target qbit) is unchanged; if the control qbit is set to 1, then the target qbit is flipped. So, the action of the CNOT gate on each element of the two-qbit computational basis is |00⟩ → |00⟩, |01⟩ → |01⟩, |10⟩ → |11⟩, |11⟩ → |10⟩.
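A minimal numerical illustration of this action (not part of the optical setup) applies the CNOT matrix to each computational basis state:

```python
import numpy as np

# CNOT acting on the two-qbit computational basis, with ordering |00>, |01>, |10>, |11>
# (first index = control qbit, second index = target qbit).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

basis = {"|00>": 0, "|01>": 1, "|10>": 2, "|11>": 3}
for label, idx in basis.items():
    state = np.zeros(4, dtype=complex)
    state[idx] = 1.0
    out = CNOT @ state
    print(label, "->", list(basis)[int(np.argmax(np.abs(out)))])
# |00> -> |00>, |01> -> |01>, |10> -> |11>, |11> -> |10>
```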
OPTICAL SIMULATION
Let us briefly introduce how the quantum states can be emulated as spatial distributions of light [4]. If we divide the wavefront of a laser beam into 2^N portions, the amplitude of the electromagnetic field in each portion can be associated with the probability amplitude of a certain computational state. For example, let us consider that the input plane is limited to a square region, which is split into two halves. We use the convention that the up and down regions correspond to the two computational states of a single qbit: up is associated with the state |0⟩ and down is associated with the state |1⟩. Once the up and down regions (in the entire plane) are defined, within each one of these regions we can define another up and down region with the same convention, in order to represent the computational states of a second qbit. Following the same idea we can represent a state of N qbits. In fact, at the end of the previous process we will have a one-to-one mapping between each region of the entire plane and the set of N-length binary strings. It should be noted that the mapping is exponentially inefficient, since in order to represent the state of a quantum computer with N qbits it is necessary to divide the plane into 2^N regions [8]. In Fig. (1a) we show a scheme of the spatial organization of the input scene. A single slice in each region of the plane will be associated with a certain N-length binary string. The state corresponding to a well-defined value of the binary string is one where the electromagnetic field amplitude is zero everywhere in the plane except for a single slice. More general states (complex linear combinations of the previous states) can be generated using a mask made from a medium in which the complex amplitude of the electromagnetic field can be controlled. This can be done, for example, by means of a programmable LCD display, as we will discuss later. As we have mentioned above, unitary temporal evolutions can be simulated by means of optical processors. In general, any unitary evolution of N qbits may be implemented exactly by composing unitary single-qbit gates and CNOT gates [7]. We show in this section how to implement these two kinds of operators by means of classical optics.
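A minimal sketch of this encoding (our own illustration, not the authors' code) maps a 2^N-component state vector onto a stack of 2^N horizontal slices, making the exponential cost of the representation explicit:

```python
import numpy as np

# Encode an N-qbit state vector as a 1-D stack of 2**N horizontal slices, following the
# up=|0>/down=|1> convention applied recursively, so slice k (from the top) holds amplitude k.
def encode_state_as_image(amplitudes: np.ndarray, pixels_per_slice: int = 8) -> np.ndarray:
    n_states = amplitudes.size                      # 2**N slices -> exponential in N
    image = np.zeros((n_states * pixels_per_slice,), dtype=complex)
    for k, amp in enumerate(amplitudes):
        image[k * pixels_per_slice:(k + 1) * pixels_per_slice] = amp
    return image

# Example: a single qbit (|0> + |1>)/sqrt(2): upper half and lower half equally bright.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(np.abs(encode_state_as_image(psi, 4))**2)
```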
Let us begin with the single-qbit U(2) operators. We have recently proposed a combination of optical elements to obtain the Hadamard operator acting on a single qbit [5]. The Hadamard operator is a particular case of the more general U(2) operators that we describe here. We will exploit the fact that the complete set of U(2) operators can be decomposed as a product of rotations generated by the Pauli Z operator (phase shifts) and rotations generated by the Pauli Y operator (ordinary SO(2) rotations) [7]. The arbitrary 2x2 unitary matrix can be decomposed as U = exp(iα) Rz(β) Ry(γ) Rz(δ), where α, β, γ and δ are real-valued. The idea is very simple: once the qbit is spatially distributed in the input, a phase plate PP(δ) that introduces a phase retardation of a certain amount δ over the down half of the wavefront is used. Then, a convergent lens is used in order to obtain the optical Fourier transform of the image. In the Fourier plane a phase grating is placed. The frequency of the grating is selected to produce diffracted orders whose separation in the final plane is equal to the distance between the two qbit images. The phase grating is constructed in such a way that the three principal orders (-1, 0 and +1) have relative intensities cos(γ/2) for the zero order and sin(γ/2) for the +1 and -1 orders (the real parameter γ is controlled by modifying the level of the phase modulation of the grating).
We denote this grating G(γ), as shown in Fig. (1b). Following the grating, a second phase plate PP(β) is placed. A second lens produces the inverse Fourier transform and the transformed qbit is obtained in the output plane. In Fig. (1b) the setup for the optically simulated U(2) operator is shown. In the case of the CNOT gate acting on a two-qbit state, the classical optics simulation can be made by using a cube prism (CP) that splits the pair of rays coming from the down half of the entire input plane, as shown in Fig. (1c). Since each one of these four incoming beams represents the complex amplitude of a two-qbit basis state, in the same order and with the same convention as in Fig. (1a), the cube prism exchanges the parity label of the complex amplitude (the second or target qbit) only if the pair of rays comes from the down half of the plane (i.e., the first or control qbit is in the logical |1⟩).
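The decomposition behind this arrangement can be checked numerically. The sketch below builds U(2) as exp(iα)·Rz(β)·Ry(γ)·Rz(δ) and recovers the Hadamard gate for the parameter values used later in the experiment (δ = π, γ = π/2), under the assumption that this standard Z-Y decomposition is the one intended above.

```python
import numpy as np

def Rz(a):
    # Rotation generated by the Pauli Z operator (phase plate in the optical setup)
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def Ry(a):
    # Rotation generated by the Pauli Y operator (phase grating in the optical setup)
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def U2(alpha, beta, gamma, delta):
    return np.exp(1j * alpha) * Rz(beta) @ Ry(gamma) @ Rz(delta)

# Hadamard gate (up to a global phase) with beta = 0, gamma = pi/2, delta = pi:
H = U2(np.pi / 2, 0.0, np.pi / 2, np.pi)
print(np.round(H, 3))   # ~ (1/sqrt(2)) * [[1, 1], [1, -1]]
```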
AN EXAMPLE: QUANTUM TELEPORTATION
Quantum teleportation is usually explained as follows: two persons (Alice and Bob) were together for some time but now they live far apart. While together, they generated an entangled quantum state of two qbits; then each one takes one qbit of the entangled EPR pair [9] and finally they separate. It has been shown that Alice can send a certain qbit |Ψ⟩ to Bob, and she can do this by sending him only classical information [10][11][12][13][14]. Let us suppose that the state to be teleported is |Ψ⟩ = α|0⟩ + β|1⟩, where α and β are unknown complex amplitudes. The process begins with the three-qbit state |Ψ⟩1|0⟩2|0⟩3. For simplicity, we have divided the full process into four stages or modules. Alice and Bob (who are still together at this time) begin the process by interacting qbits 2 and 3 in order to create an entangled pair. Two operations are necessary for doing this: the Hadamard gate (a single-qbit unitary operation), which maps the basis states |0⟩ and |1⟩ into the superpositions (|0⟩ + |1⟩)/√2 and (|0⟩ - |1⟩)/√2, respectively; and the CNOT gate. After a Hadamard operator on qbit 2 and the CNOT operator on qbits 2 and 3 (with qbit 2 as control), the prepared state at t = 0 results in |Φ(t=0)⟩ = |Ψ⟩1(|0⟩2|0⟩3 + |1⟩2|1⟩3)/√2. This is the end of the first stage. At this time, Bob moves apart from Alice. The first two qbits (1 and 2) belong to Alice, while the third qbit goes far away with Bob. Once Alice has the qbit to be teleported together with her part of the entangled pair, she begins the second stage of the process, which is usually called Bell analysis on qbits 1 and 2 [9,15]. First, she sends her qbits (1 and 2) through a CNOT gate using the unknown qbit as control and her part of the entangled pair as target.
Then, she applies the Hadamard operator to her first qbit. These transformations put all three qbits into the state |Φ(t=1)⟩ = H1 CNOT12 |Φ(t=0)⟩, whose expression naturally breaks down into the following four terms: |Φ(t=1)⟩ = (1/2)[|0⟩1|0⟩2(α|0⟩ + β|1⟩)3 + |0⟩1|1⟩2(α|1⟩ + β|0⟩)3 + |1⟩1|0⟩2(α|0⟩ - β|1⟩)3 + |1⟩1|1⟩2(α|1⟩ - β|0⟩)3]. The third stage begins with the measurement of qbits 1 and 2 in the computational basis. The first term of Eq. (3) has Alice's qbits in the state |0⟩1|0⟩2, and Bob's qbit in the state (α|0⟩ + β|1⟩)3, which is the original state. Therefore, if Alice performs a measurement and obtains the result 00, then Bob's qbit will be in the state |Ψ⟩. Similarly, from Eq. (3), if Alice's measurement is 01, 10 or 11, then the state of Bob's qbit becomes (α|1⟩ + β|0⟩)3, (α|0⟩ - β|1⟩)3 or (α|1⟩ - β|0⟩)3, respectively. Note that in each of these three cases there is a unitary transformation that restores the original state. In the experimental setup shown in Fig. 2, we can see the classical optics representation of each one of the four stages mentioned above. A laser is expanded, filtered and then collimated with lens L0. The collimated beam impinges onto the mask P0 that modulates the complex amplitude of the electromagnetic field in order to encode the information contained in the first qbit. Then, a binary mask P1 is placed to generate the three-qbit state |Φ(t=0)⟩. The second stage consists in the Bell analysis of qbits 1 and 2. The CNOT gate on the first and second qbits is simulated by a cube prism CP that splits the rays coming from the down half of the entire input plane. The Hadamard gate on the first qbit is simulated by using a phase plate PP(δ = π) and a phase grating G(γ = π/2) between a pair of spherical lenses (L1 and L2) [5]. In this case the plate was constructed by deposition of a transparent film over a plane glass plate. The input plane lies in the same plane as the binary mask P1. Lens L1 (focal length 70 cm) allows us to obtain the Fourier transform of the input plane over the phase grating G, which is used to perform the Hadamard transform. The grating is constructed in such a way that the three principal orders (-1, 0 and +1) have identical intensities. In our case we synthesized a holographic bleached grating with a frequency of 50 lines/mm. A third lens, L2 (focal length 70 cm), is placed in order to perform the inverse Fourier transformation.
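The four-term expression and the conditional states of qbit 3 can be reproduced with a short state-vector simulation. This is a numerical sketch of the circuit described above, not the optical implementation, and the amplitudes are chosen only so that |β|²/|α|² ≈ 1/4, as in the experiment.

```python
import numpy as np

# Qbit ordering |q1 q2 q3>: q1 is the state to teleport, q2 and q3 form the EPR pair.
def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

alpha, beta = 0.8944, 0.4472              # |beta|^2 / |alpha|^2 ~ 1/4 (example values)
psi = np.array([alpha, beta])
state = kron(psi, [1, 0], [1, 0]).ravel()  # |Psi>|0>|0>

state = kron(I2, H, I2) @ state            # Hadamard on qbit 2
state = kron(I2, CNOT) @ state             # CNOT 2->3: EPR pair, |Phi(t=0)>
state = kron(CNOT, I2) @ state             # CNOT 1->2 (Bell analysis)
state = kron(H, I2, I2) @ state            # Hadamard on qbit 1, |Phi(t=1)>

# Conditional amplitudes of qbit 3 for each measurement result of qbits 1 and 2:
for m in range(4):
    block = state[2 * m: 2 * m + 2]
    print(f"result {m:02b}: qbit 3 amplitudes {np.round(block, 3)}")
```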
The next stage consists in the measurement of qbits 1 and 2 in the computational basis. As a result of this measurement process we have the collapse of the quantum state into one of the four possibilities, labeled with the eigenvalues of the Z1 and Z2 measurement operators. The classical analogy of this process is to select randomly, with the same probability, one of the four states of qbits 1 and 2. This random selection is represented by the two rays left unobstructed to the right of the plane P2, as shown in Fig. 2. Each of these two rays represents the amplitudes associated with the two logical states of the third qbit.
The fourth stage consists in a conditional unitary operator U(Z1, Z2) acting on qbit 3 (in Bob's lab) in order to recover the original information. This last operation is controlled by the result of the measurement performed in the previous stage (in Alice's lab). Depending on the result of this measurement (the pair of rays selected in the previous stage), we have to perform a certain unitary operation on qbit 3. In Fig. 3 we show how each unitary correcting operation U(Z1, Z2) is associated with a certain reduced optical setup. In practice we have used a pair of spherical lenses with 50 cm focal length. The phase grating G(γ = π) was programmed in a spatial light modulator (SLM) working in a mostly-phase mode. This device consists of a Sony liquid crystal display TV (LCTV) that, combined with two polarizers and two wave plates, acts as a pure phase modulator [16]. The final image is captured by a video camera (CCD). In the final image, after application of the correcting U(Z1, Z2) operator, we must take into account the local inversion of the coordinate system, whose senses are indicated with arrows on lenses P3 and P4 in Fig. 3.
Experimental results
In Fig. 4 we show the experimental results obtained with the setup described in the previous section. On the left we show the obtained images and on the right we show the corresponding intensity profiles (obtained by averaging rows) in arbitrary units. In Fig. 4(a) we show the input scene that represents the three-qbit state at t = 0. Figs. 4(b) and 4(c) show the images obtained after the CNOT operation on qbits 1 and 2 and after the Hadamard operation on the first qbit of the Bell analysis, respectively.
FIGURE 3. Reduced optical set-up for performing the optically simulated correcting operations for the four possible results of Alice's measurement.
Finally, in Fig. 4(d) we show the images after application of the correcting U(Z1, Z2) operations. Each one of the correcting gates has been simulated separately but, in Fig. 4(d), we have presented the final images and profiles obtained from these corrections jointly, for simplicity. A more realistic situation is one where all pairs of adjacent outcoming beams from the second stage are obstructed except one of them, as we have shown in Fig. 4. In our experiment we selected the complex amplitudes α and β in such a way that |β|²/|α|² ≈ 1/4. As can be observed in Fig. 4(a) to (d), this ratio is preserved (within the experimental error) in the output image of each operation, in good agreement with the predicted temporal evolution. It must be considered that the high coherence of the light source introduces speckle noise and that the aberrations of the optical elements can introduce undesired phases. These effects can be observed, for instance, in Fig. 4(a) to (d), where the intensity profiles are partially corrupted by noise and irregularities. However, the proposed setup can transfer with high fidelity the information encoded in the input state to the output state. This can be observed in the final profile, where the initial ratio between intensities is transferred from the initial to the final state with good accuracy by the system.
CONCLUSIONS
We have shown how to perform optical simulations of quantum information processing by using imaging architectures. Quantum states are emulated by means of images, and spherical lenses, phase plates, phase gratings and cube prisms can be used to optically simulate any U(2) gate and the CNOT gate in an optical processing architecture. As an example, we have implemented an optical setup to classically simulate the quantum teleportation algorithm. The process begins with the representation of the state to be processed as an image organized in two halves. Then, we process the input image with an optical setup composed of several modules. Each one of these modules emulates one step of the real quantum process. The images shown in Fig. 4 demonstrate the capability of the system for "teleporting" the information from the full input plane to a portion of the output plane with very high fidelity. It should be emphasized that the classical interpretation of the quantum process is very simple. First the entire image is divided in two halves. After a sequence of optical operations, the information of amplitude and phase encoded in these two halves is transferred to another portion of the final image. In our case, this is shown in the squared region in the image of Fig. 4(d). From a conceptual point of view we can say that, while the realistic nonlocal quantum process has the amazing consequence of making it possible to transfer one qbit of quantum information to an arbitrarily far place by sending only two bits of classical information, this (or any other) classical simulation will necessarily have a more innocent interpretation, since the nonlocal nature of quantum mechanics is absent. For this particular simulation, the classical counterpart of quantum teleportation is a consequence of the spatial representation of the states and can be thought of as a simple change of scale. The fact that the information encoded in the first qbit is locally transferred to the third qbit means that the complex amplitudes encoded in the up or down half of the entire input plane appear in the up or down half of a little piece of the entire plane once the teleportation process ends. However, in spite of this limitation, which is common to all classical systems, we have obtained experimental results that simulate the teleportation process as an imaging process by means of inexpensive equipment. In fact, our optical setup reproduces with good agreement the expected probability amplitudes in each stage of the whole teleportation process.
FIGURE 1.
FIGURE 1. (a) Spatial organization of the input plane in order to perform the optical representation of the N-qbit state. (b) Coherent optical processor with two phase plates and a phase grating in the Fourier plane as the optical single-qbit gate. (c) A cube prism as the optically simulated CNOT gate.
FIGURE 4.
FIGURE 4. Experimental results: (a) Image representation of the three-qbit state |Φ(t = 0)⟩. (b) Image representation of the resulting state after the CNOT operation on qbits 1 (control) and 2 (target). (c) Image representation after the Hadamard operation on the first qbit of the state represented in (b). (d) Image representation of the four possible results after the correcting operations. The rectangles in (c) and (d) suggest the random selection performed by Alice's measurement.
If Alice's result is 01, Bob can apply the Pauli X operator; in the second case (result 10), he can apply the Pauli Z operator (which leaves |0⟩ alone but changes the sign of |1⟩); and in the third case (result 11), he can apply the ZX operator. After performing the measurement, Alice must send to Bob the result of her measurement outcome (two bits of classical information) through some classical channel. The fourth stage can be described as a conditional correction of the third qbit state (in Bob's lab) after Alice's measurement. Once Bob has learnt the measurement outcome, in order to restore the state |Ψ⟩ he has to apply the correcting unitary operation depending on the result obtained by Alice.
| 4,935 | 2008-04-24T00:00:00.000 | ["Physics"] |
Propensity to Pay Dividends: Evidence from US Banking Sector
Using Fama and French's (2001) methodology, this paper attempts to shed light on the dividend policy and the propensity to pay cash dividends of U.S. commercial banks, a possible alternative choice for dividend-seeking investors. The results show that most banks pay dividends at increasing rates, that more banks have started paying dividends, and that few have stopped paying dividends. The findings also indicate that the main explanatory variables in predicting cash dividends are total assets, return on equity, and the equity-to-liability ratio.
Introduction
Since the publication of the seminal paper on the irrelevance of dividend policy by Modigliani and Miller (1961), the dividend policy of firms has been one of the most important classical research topics in the finance literature. Fama and French (2001) provided empirical evidence that the relative number of dividend-paying firms has been decreasing over the last few decades. This is in part due to the changing characteristics of publicly traded firms. Start-up firms with low profitability and strong growth opportunities have developed a tendency to avoid initiating dividend payments. Regardless of this changing characteristic, a tendency has also been found for firms to be less likely to pay dividends. DeAngelo, DeAngelo, and Skinner (2004) stated that the evidence of decline in the number of dividend payers is confined to industrial firms and is not applicable to financial/utility firms. They also show that the number of dividend payers from financial/utility (industrial) firms has increased (declined) by 9.5% (58.9%), that the banking industry accounts for 11.20% of the total market capitalization of all dividend-paying firms, and that the dividends paid by them account for 14.64% of the total dividends paid by all public firms. Acharya, Gujral, and Shin (2011) have pointed out that banks continued to pay large dividends to their stockholders even after the 2008 economic crisis, "despite expecting large credit losses, breaching the principle of priority of debt over equity. This type of behavior can lead to default, and should therefore be avoided by banks".
Empirical evidence indicates that the dividend policy for banks is quite crucial. It signals quality in a banking environment that is best characterized by significant information asymmetry (Miller and Rock, 1985; Bessler and Nohel, 1996; 2000; Boldin and Leggett, 1995; Slovin, Sushka and Polonchek, 1999; Cornett, Fayman, Marcus, and Tehranian, 2011). Onali (2012) discusses the multidimensional aspect of the asymmetric information problems faced by banks and bank customers, shareholders, and examiners. This problem is an important aspect in hypothesizing that banks are different. Banks' shareholders usually expect regular dividends from these financial institutions, as these institutions are perceived to be highly liquid. Frequent announcements of stable or growing dividends may therefore be utilized by banks as a means of providing positive information about the bank's solvency to all stakeholders. Hence, dividends provide positive information about the bank's current success and about the future viability of the bank, and vice versa.
Despite the extensive literature on the overall issue of dividend policy, most studies exclude regulated firms from their analyses. High financial leverage and tight financial sector regulation imply that financial institutions are regulated and hence excluded from the sample in most dividend policy studies (Lintner, 1956; Rozeff, 1982; Brennan and Thakor, 1990; Alli, Khan, and Ramirez, 1993; Heineberg and Procianoy, 2000; Fama and French, 2001; Fenn and Liang, 2001; Grullon and Michaely, 2002).
We believe that the dividend policy of the banking sector deserves to be studied in depth for several reasons. First, the large size and the growing dividends paid by banks (DeAngelo et al. (2004) and Acharya et al. (2011)). Second, according to Baker et al. (2008), the critical role financial firms habitually play in capital markets, given their large market capitalization ratios in all financial markets across the globe. Third, because of what Slovin et al. (1999) refer to as the "contagion and comparative effect" of dividend-paying banks, whereby the dividend event at one bank generates externalities for the banking industry. We argue that the contagion effect of large dividend-paying banks may have an effect on the dividend payment behavior of non-financial institutions. Finally, given that banks have a larger number of stakeholders than any other institution, banks may have stronger incentives to send reliable signals about future profitability through dividend payouts. Failing to do so could lead to losing depositors' confidence in banks, which could generate widespread bank runs (DeAngelo, DeAngelo, and Skinner (2000), and Baker et al. (2008)).
The main objective of this paper is to determine the underlying variables used by U.S. banks to formulate dividend policy. To achieve this objective, we employ the methodology used by Fama and French (2001). To make the results of this paper comparable with Fama and French's results, the time period used for analysis is 1993-2000, which is the same as in the latest study of Fama and French (2001).
Previous Research
Given the real scarcity of literature directly related to this topic, we draw on literature relevant to the research question from other related research areas of corporate finance and other regulated institutions. Smith (1986) and Moyer et al. (1992) examined the regulatory effect on dividend policy. Their results show that regulated firms use dividends as a means of subjecting the utility and the regulatory rate commission to market discipline. Dividend policies adopted by these firms are determined as a response to changes in policies adopted by regulatory commissions. Akhigbe, Borde, and Madura (1993) measure the effect on common stock prices in response to dividend increases for both insurance firms and financial institutions and compare the effects to unregulated firms. They find that the stock prices of insurance firms react positively to increases in dividends over a four-day interval surrounding the announcement. Their results show that the market reaction for each subsample of insurance corporations is greater than the market reaction for financial institutions, and support the view that the market reaction is mostly determined by industry-related, rather than firm-specific, variables. Boldin et al. (1995) found a positive relationship between banks' dividend per share and bank quality rating, and a negative relationship between the dividend pay-out ratio and bank quality rating, concluding that a bank's dividend policy yields information about the bank's quality. Collins, Saxena, and Wansley (1996) compared the dividend pay-out patterns of regulated firms with those of unregulated firms. Their findings do not support the view that the financial regulators' role is one of agency cost reduction for equity holders. Utilities, on the other hand, are different: they alter their dividend pay-out in response to changes in insider holdings. Moreover, for a given change in insider holdings, this policy change is more significant than the change for unregulated firms. Slovin et al.
(1999) examined excess returns to both announcing and rival banks. The results indicate that dividend reductions generate negative common stock returns for all announcing banks, and significant reductions in the preferred stock prices of announcing banks, even though there are no concomitant changes in preferred stock dividends. Fama and French (2001) argue that firms with high profitability, good investment opportunities, and larger size tend to pay more than other firms. The three characteristics considered by Fama and French match the characteristics of banks, which are mostly large in size, highly profitable, have better investment opportunity sets, and are highly liquid. Baker et al. (2008) examine the perceptions of managers of Canadian firms listed on the Toronto Stock Exchange to determine whether views differ when partitioned into financial and non-financial firms. Their results suggest the existence of industry effects, whereby the perceptions of managers from financial versus non-financial firms differ on the importance of various factors influencing the dividend policy of their firms. Cornett et al. (2011) document a positive relationship between a bank's performance (in terms of profitability, capital adequacy, asset quality, operating efficiency, liquidity and growth) and dividend initiation, and between dividend initiation and both takeover likelihood and merger premium, a conclusion that supports the signaling role of dividends. In their attempt to analyze the contagion effects of dividend reductions in U.S. banks, Onali (2012) found that banks that are close to depleting their capital (with a low capital-to-total-assets ratio) pay more dividends to their shareholders, arguing that dividends are used to shift risk from banks' owners to taxpayers. These findings are fully consistent with those of Acharya et al. (2011). Akhigbe and Whyte (2012) investigate the link between payouts and stock incentives among financial institutions with varying degrees of regulation across depositories, insurers, and securities firms. Their findings show that managerial stock ownership is inversely related to dividend payouts across the institutions, with no evidence that the relationship occurs because of regulation, since all institutions, regardless of the degree of regulation, exhibit the same inverse relationship between dividend payouts and management stock ownership. Their results support Collins et al. (1996).
Data and Methodology
The data of this study consist of 759 commercial banks, drawn from Bankscope for the period 1993-2000. We began with 1993, which is the earliest data available from Bankscope, and we ended with 2000 to avoid the influence of any regulation/deregulation acts. Another reason to use data from 1993-2000 is to make the results comparable with the Fama and French (2001) results. It is worth mentioning that the initial sample consisted of 1425 banks, from which several observations were dropped due to incomplete information on the variables chosen in this study. The banks studied in this paper did not go into merger activities or perform any act that might cause structural changes. Table 1 shows the descriptive statistics of the sample. The table shows an increasing trend in the pay-out ratio and an increasing trend in the percentage of payers. In 1993, the pay-out ratio was 38% and became 55% by 2000. The percentage of payers also increased from 66% in 1993 to about 80% in 2000.
Model and Variables Definition
Using Fama and French's (2001) logit model framework, and by adopting the same procedures to quantify the propensity for paying dividends, we built the following multivariate logit model, as shown in Equation (1).
As mentioned above, we tried to keep our variables as close as possible to the variables used by Fama and French (2001). To account for the size effect, we chose the log of total assets (LOGTA); we expected that the larger the bank size, the more likely it would be to pay dividends. Profitability is represented by two variables: return on assets (ROA) and return on equity (ROE). Again, we expected that the higher the profitability, the higher the pay-out ratio. The investment opportunity is represented by the loan to total assets ratio (LOANTA); we hypothesized that the higher the assets' utilization through loans, the higher the pay-out will be. We further added safety variables to account for the effect of safety considerations in the banking sector; two ratios were chosen: equity to total assets (EQUITYTA) and equity to total liabilities (EQUITYLI). Our expectations about these variables were mixed. On one hand, we believed that managers would want to compensate shareholders if they participated more in financing the banks' activities; on the other hand, if managers wanted to keep the coverage ratio high, we expected them to retain a high portion of net income and be very reluctant to pay cash dividends. To examine the characteristics of dividend payers, we used logit regression, which was performed first for the whole period (1993-2000) and then on an annual basis to test how the coefficients changed over time; a minimal sketch of the estimation is given below. Furthermore, we split the overall sample according to cash dividend-paying trends into four groups: banks which kept paying dividends for the whole period, banks which stopped paying, banks which paid more than the net income of the spot year, and banks which had already started paying.
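As a hedged illustration of the model, the following minimal Python sketch estimates the payer logit with statsmodels; the DataFrame `banks`, its PAYER dummy column, and the input file name are assumptions for illustration and not the authors' actual code.

```python
# Minimal sketch of the payer logit, assuming a pandas DataFrame `banks` with
# one row per bank-year, the explanatory variables named above, and a
# hypothetical 0/1 column PAYER; the file name is illustrative.
import pandas as pd
import statsmodels.api as sm

banks = pd.read_csv("bankscope_1993_2000.csv")  # hypothetical input file

X = sm.add_constant(banks[["LOGTA", "ROA", "ROE", "LOANTA", "EQUITYTA", "EQUITYLI"]])
y = banks["PAYER"]  # 1 if the bank pays dividends in year t, 0 otherwise

model = sm.Logit(y, X).fit(disp=False)
print(model.summary())  # intercept (C), slopes, and z-statistics
```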
Empirical Results
Table 1 shows the descriptive statistics of the data set analyzed in this paper. Table 2 presents the logit regression results for the whole sample of banks for the period 1993-2000. The results show a significant intercept when using either the whole sample or any time subsample, which indicates the stickiness of the dividend policy adopted by U.S. banks. The findings also show a positive relationship with total assets, return on equity, and equity to total liabilities, and a negative relationship with equity to total assets. Accordingly, it can be concluded that large banks with a high coverage ratio tend to pay more dividends. For the safety ratios, we have two results. For EQUITYTA, we have a significant negative slope, which means the higher the equity, the lower the pay-out. This is an interesting result because it suggests that managers do not care who finances their operations. The other result is related to the EQUITYLI ratio, which indicates a significant positive relationship between the dividends paid and the coverage ratio: the more the bank is covered, the more it will pay dividends. The other results reported in Table 2 show how the coefficients change over time. In general, size (LOGTA) looks like the most persistent variable in determining the pay-out policy, while ROE is still significant for some periods. To make sure that we did not miss the effect of our selected variables on the future pay-out policy, we used logit again, but this time with the lagged dummy variable (namely, year t-1). The results are shown in Table 2, which confirms our findings: LOGTA is still the most significant and persistent explanatory variable, followed by ROA, which still has some explanatory power. In Table 3, we utilized the logit estimates to form expectations about the next period's pay-out ratio, as sketched below. Starting with 1994, we compared the actual pay-out ratio with the expectation from the logit equation estimated on 1993 data. The results are presented in the last three columns of Table 3, which show that the actual dividends paid exceeded the expected dividends, reflecting the increasing propensity of banks to pay. This result is very interesting because Fama and French never reported such a result for any stage of their analysis. In general, when firms have a growing propensity to pay dividends (including banks), estimations tend to underestimate the future dividends, and vice versa.
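The year-by-year expectation check can be sketched as follows, reusing the assumptions of the previous snippet and additionally assuming a YEAR column; the comparison of expected versus actual payer fractions mirrors the last columns of Table 3.

```python
# Fit the logit on year t-1 and compare the expected share of payers with the
# actual share in year t (column names are the same assumptions as above).
import statsmodels.api as sm

cols = ["LOGTA", "ROA", "ROE", "LOANTA", "EQUITYTA", "EQUITYLI"]
for year in range(1994, 2001):
    train = banks[banks["YEAR"] == year - 1]
    test = banks[banks["YEAR"] == year]
    fit = sm.Logit(train["PAYER"], sm.add_constant(train[cols])).fit(disp=False)
    expected = fit.predict(sm.add_constant(test[cols])).mean()  # expected fraction of payers
    actual = test["PAYER"].mean()                               # actual fraction of payers
    print(year, round(expected, 3), round(actual, 3))
```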
Our next step was to repeat the analysis using several subsets. The reason for doing this was that we had already noticed four distinctive trends in dividend policy. Table 4 and Table 5 report the results for the banks that never cut their dividend payments, a subsample that accounts for about 70% of the total sample. The results are not much different from those we reported in Table 2, which is expected given that this subsample makes up 70% or more of our entire sample. We can still see the significant positive effect of LOGTA.
Table 6 and Table 7 report the results for the banks that stopped paying dividends for at least two years. Astonishingly, the size effect disappeared totally when using logit for the entire period. The only significant variable was LOANTA, which was still negative. Nothing changed in Table 7, except that LOANTA lost its significance. The interesting part, if we assume that somebody would use these insignificant estimates, is that the expectations start exceeding the actual dividend ratios. The explanation is that the estimates still imply that the bank is paying when it has in fact stopped doing so.
Table 8 and Table 9 report the results for those banks which made at least two payments during the study period that exceeded their net income for those years. The results also fail to report any significant variables. We believe that the reason behind these results is the extraordinary nature of such payments: most of the banks did this once or twice and no more. What is unique in this case is the loss of the trend in the expectations. The last subsample of our analysis is represented in Table 10 and Table 11 and consists of the banks that had recently started paying dividends; this sample is the smallest in size relative to the others. The result is a significant coefficient on LOGTA for the whole period, and from Table 10 we can see that most variables are significant except LOANTA; however, significant parameters do not exist for any single period. The last column of Table 11 shows the growing propensity to pay throughout the whole period, during which the expectations never exceeded the actual dividends paid.
Conclusions
This paper employed Fama and French's (2001) logit model to explore the dividend-paying propensity of U.S. banks and to explain dividend pay-out ratios in the U.S. banking sector. Our findings show that banks, in general, kept paying dividends at increasing rates. The findings also show that the main factors affecting dividend paying in banks are total assets, return on equity, and the equity-to-liability ratio. The first two variables affect the dividend pay-out positively, and the third affects it negatively. For comparative reasons, we broke down our sample into subsets according to their recent dividend-paying characteristics to account for the changes in the regression parameters for each group (kept paying, stopped paying, paid more than net income, and started paying). The only group that shows really significant coefficients for all variables is the "started paying" group.
Table 1 .
Descriptive statistics of total assets, net income, dividends paid, and the percentage dividends paid for each year (Millions US $) This table provides descriptive statistics of total assets, net income, dividends paid, and the percentage dividends paid for each year (Millions US $)
Table 2 .
Logit regression for all banks, to discover the trend of dividend-paying propensity for each year t, for the period 1993-2000. The dependent variable is a dummy variable equal to 1 in year t if the bank pays dividends and 0 otherwise. The explanatory variables are Log of Total Assets (LOGTA), Return on Assets (ROA), Return on Equity (ROE), Loan to Total Assets (LOAN_TA), Equity to Total Assets (EQUITYTA), and Equity to Total Liabilities (EQUITYLI). This table shows the means (across years) of the regression intercept (C) and the coefficients of the independent variables; the significant t-statistics for the means are given in parentheses.
Table 4 .
Logit regression for banks that kept paying dividends every year, to follow the trend of dividend-paying propensity for each year t, for the period 1993-2000. The dependent variable is 1 in year t if the bank pays dividends and 0 otherwise. The explanatory variables are Log of Total Assets (LOGTA), Return on Assets (ROA), Return on Equity (ROE), Loan to Total Assets (LOAN_TA), Equity to Total Assets (EQUITYTA), and Equity to Total Liabilities (EQUITYLI). This table shows the means (across years) of the regression intercept (C) and slopes; the significant t-statistics for the means are given in parentheses. Using the same explanatory variables, we also used 1993 as a base period to estimate logit regressions that explain whether a bank pays dividends. Starting with expectations from the 1993 logit (Expected), we compare them with the actual numbers for 1994, and so on.
Table 5 .
Logit regression for the banks that kept paying every year, to follow the trend of dividend-paying propensity for each lagged year t-1, for the period 1993-2000.
Table 6 .
Logit regression for banks that stopped paying dividends, to follow the trend of dividend-paying propensity for each year t, for the period 1993-2000. The dependent variable is 1 in year t if the bank pays dividends and 0 otherwise. The explanatory variables are Log of Total Assets (LOGTA), Return on Assets (ROA), Return on Equity (ROE), Loan to Total Assets (LOAN_TA), Equity to Total Assets (EQUITYTA), and Equity to Total Liabilities (EQUITYLI). This table shows the means (across years) of the regression intercept (C) and slopes; the significant t-statistics for the means are given in parentheses. Using the same explanatory variables, we also used 1993 as a base period to estimate logit regressions that explain whether a bank pays dividends. Starting with expectations from the 1993 logit (Expected), we compare them with the actual numbers for 1994, and so on.
Table 7 .
Logit regression for the banks that stopped paying dividends, to follow the trend of dividend-paying propensity for each lagged year t-1, for the period 1993-2000. The dependent variable is 1 in year t if the bank pays dividends and 0 otherwise. The explanatory variables are Log of Total Assets (LOGTA), Return on Assets (ROA), Return on Equity (ROE), Loan to Total Assets (LOAN_TA), Equity to Total Assets (EQUITYTA), and Equity to Total Liabilities (EQUITYLI). This table shows the means (across years) of the regression intercept (C) and slopes; the significant t-statistics for the means are given in parentheses. Using the same explanatory variables, we also used 1993 as a base period to estimate logit regressions that explain whether a bank pays dividends. Starting with expectations from the 1993 logit (Expected), we compare them with the actual numbers for 1994, and so on.
Table 8 .
Logit regression for banks that paid more than 100% of their net income, to follow the trend of dividend-paying propensity for each year t, for the period 1993-2000.
Table 9 .
Logit regression for banks that paid more than 100% of their net income, to follow the trend of dividend-paying propensity for each lagged year t-1, for the period 1993-2000.
Table 10 .
Logit regression for banks that started paying dividends, to follow the trend of dividend-paying propensity for each year t, for the period 1993-2000. The dependent variable is 1 in year t if the bank pays dividends and 0 otherwise. The explanatory variables are Log of Total Assets (LOGTA), Return on Assets (ROA), Return on Equity (ROE), Loan to Total Assets (LOAN_TA), Equity to Total Assets (EQUITYTA), and Equity to Total Liabilities (EQUITYLI). This table shows the means (across years) of the regression intercept (C) and slopes; the significant t-statistics for the means are given in parentheses. Using the same explanatory variables, we also used 1993 as a base period to estimate logit regressions that explain whether a bank pays dividends. Starting with expectations from the 1993 logit (Expected), we compare them with the actual numbers for 1994, and so on. (***, **, * denote significance at the 1%, 5%, and 10% levels, respectively.)
Table 11 .
Logit regression for banks that started paying dividends, to follow the trend of dividend-paying propensity for each lagged year t-1, for the period 1993-2000. The dependent variable is 1 in year t if the bank pays dividends and 0 otherwise. The explanatory variables are Log of Total Assets (LOGTA), Return on Assets (ROA), Return on Equity (ROE), Loan to Total Assets (LOAN_TA), Equity to Total Assets (EQUITYTA), and Equity to | 5,181.4 | 2012-07-31T00:00:00.000 | [
"Economics",
"Business"
] |
Drug Repositioning Ketamine as a New Treatment for Bipolar Disorder Using Text Mining
: Bipolar Disorder (BD), a chronic mental illness, does not have an ideal treatment, and patients with BD have a higher chance of being diagnosed with alcohol abuse, liver disease, and diabetes. The goal of treatment is to prevent a relapse in BD episodes and to find a new treatment. The research here looks at the genetics of BD and ignores environmental factors, as they are subjective. Therapy treats known environmental triggers and stressors and explores methods to reduce them; however, therapy alone cannot fully alleviate the symptoms of BD. My research employs text mining as a primary strategy to obtain relevant genes and drugs pertaining to BD. The main gene involved is Brain-Derived Neurotrophic Factor (BDNF). Popular drugs currently used for the treatment of BD are Lithium and Carbamazepine. Using CMapPy to examine gene expression data, one sees a relationship between the two drug therapies and BDNF. Lithium fails to treat mania and Carbamazepine fails to treat depression, relatively speaking. When the gene expression data of Lithium and Carbamazepine are compared with those of Ketamine, a newer therapy for BD, Ketamine raises the BDNF level, keeps it elevated, and effectively controls BD episodes. Ketamine does not have the shortcomings that Lithium and Carbamazepine have. Next steps would include conducting a clinical trial with the hopeful application of Ketamine as a new treatment for BD. The genes and drugs found were validated through manual research in PubMed.
Introduction
Bipolar disorder, which affects 1 in 100 people, does not currently have any therapies that prevent relapses in patients' symptoms [1][2][3]. BD swings a person's mood from a depressive episode, a state in which one becomes lethargic, may be suicidal, and is unable to get out of bed, to a manic episode, in which one acts impulsively and moves constantly and unwillingly, with time in between episodes. This inconsistent, uncontrollable lifestyle needs proper treatment to alleviate the stress and pain and to prevent relapses. Current treatments do not prevent new episodes from occurring, and existing therapies cannot identify new triggers. Additionally, BD patients tend to abuse alcohol, which can lead to an increased incidence of liver disease, often present in people with BD [4][5][6]. These diseases can worsen the severity of the episodes and could lead to further complications.
The Wingless-related Integration Site (Wnt) pathway is the pathway involved in causing episodes [7,8]. Dysregulation of the Wnt signaling pathway is caused by a mutation in the Brain-Derived Neurotrophic Factor (BDNF) gene [9]. Due to the mutation, as the pathway moves forward, the b-catenin protein becomes downregulated, leading to downregulated gene transcription in the DNA and causing the antidepressant effect (when someone goes into a manic or depressive state, i.e., one of their episodes) (Figure 1). There are some major flaws with current drug treatments (Table 1). Although they treat some aspects of BD, drugs do not treat all aspects. The drug predominantly used, Lithium, does not target manic episodes nor rapid cycling (switching from manic to depressive episodes without a gap). Carbamazepine, despite targeting the manic episodes, also does not alleviate rapid cycling. In addition, drugs do not always work with depressive states despite these being the therapy's target. Drugs also have side effects (e.g., seizures, memory loss, ambulatory and balance issues, etc.) which can lead to worsened BD episodes or symptoms. Therapy treatments, despite their efforts to give patients coping mechanisms and ways to deal with stress, fail to identify any new triggers; they only focus on known triggers and ways to calm down patients. Clearly, the commonly used drug treatments are not effective, despite BD being a common disorder.
Text Mining for Genes and Drugs
Current drug therapies only control the symptoms of Bipolar Disorder and do not affect the actual genes that bring about BD in the first place. Both the genes that cause BD and the drug therapies that treat it need to be catalogued. To extract the information needed from PubMed, text-mining code was written in Python and divided into two routines: one generates a list of genes involved with BD and the other generates a list of drugs that have been researched as treatment options for BD. The first routine generated a list of abstract IDs (e.g., 34249777, 34207921, 34207460). For genes, the search term 'Bipolar Disorder Genes' was used, and for drugs, the search term 'Bipolar Disorder Drugs' was used.
The second routine extracted the information. A given list (in text file format) was used as a reference to all genes and drugs worth looking at; this was taken from the HGNB and DGIdb. In this way, the two databases became references, and the program searched for all of their terms within the abstracts. In each abstract on which text mining was conducted, the punctuation was removed so that each word became its own separate entity, and the words were then searched for any of the terms in the given database (the genes for the gene program and the drugs for the drug program). The output was written to a separate text file. After organizing the data received, the most significant genes and drugs were obtained. These were validated by reading various literature regarding the genes. Since only the top 10 genes needed to be retained, the literature was read to ensure that each specific gene was directly related to Bipolar Disorder in a way that affects the disease.
Libraries used to put together the programs were the pandas, NumPy, metapub, and Eutils libraries. Using the MetaPub library, the PubMedFetcher was imported and a variable was set to hold both the search query and the number of search results to be analyzed; in this case, 1,000,000 results were to be analyzed for the queries "bipolar disorder drugs" and "bipolar disorder genes." This allowed the code to analyze the material from the abstracts (which was separated earlier by obtaining the abstract IDs) using the metapub library. Then, it was written to a new file in which it compiled all the genes or drugs that have been mentioned. These results were then sorted using Excel.
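A hedged sketch of the two routines is given below; it relies on the metapub PubMedFetcher interface named above, while the term-list file, output file, and the reduced number of results are illustrative assumptions rather than the original code.

```python
# Sketch of the text-mining pipeline: routine 1 collects abstract IDs for a
# query, routine 2 counts mentions of reference terms (genes or drugs) in the
# corresponding abstracts. File names and retmax are illustrative.
import string
from collections import Counter
from metapub import PubMedFetcher

fetch = PubMedFetcher()

# Routine 1: list of abstract (PubMed) IDs for the query.
pmids = fetch.pmids_for_query("bipolar disorder genes", retmax=1000)

# Routine 2: strip punctuation from each abstract and count reference terms.
terms = {line.strip().upper() for line in open("gene_terms.txt")}  # e.g., gene symbols
counts = Counter()
for pmid in pmids:
    abstract = fetch.article_by_pmid(pmid).abstract or ""
    words = abstract.translate(str.maketrans("", "", string.punctuation)).upper().split()
    counts.update(term for term in set(words) if term in terms)

with open("gene_counts.txt", "w") as out:
    for term, n in counts.most_common():
        out.write(f"{term}\t{n}\n")
```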
If other scientists were to conduct similar experiments, they would obtain almost identical results, with variation arising from factors such as the search terms used and the number of results examined. Because of this and other factors, the specific way the code is written can vary, and there are slightly different ways to achieve the same goal.
Using CMapPy to Obtain Gene Expression Data
In order to find a new treatment, gene expression data were used to compare how different drugs affect a specific gene. After putting all the data together, the drug producing the highest expression of the target gene for a BD patient would become the selected drug for a new treatment.
After getting information from the previous programs and organizing and validating the data, the next steps were to relate the genes and the drugs and gene expression data. Using the data, one can look at current treatments along with possible new ones to identify the best possible treatment options for BD patients.
CMapPy is a library that allows Python programmers to read specific file types and obtain data from them. By using this library, it is possible to compare the data and determine which drugs would work best to target the most significant gene involved, in this case BDNF. Online tutorials from GitHub make it possible to compile the list of necessary files and inspect the data at hand. As mentioned earlier, code will vary, and it is important to have the required files and follow the tutorials as needed.
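A minimal sketch of this step with cmapPy is shown below; the GCTX file name, the BDNF row id, and the drug signature column ids are placeholders that depend on the particular Connectivity Map release, so they should be treated as assumptions.

```python
# Read the differential-expression scores of BDNF for selected drug signatures
# from an L1000 GCTX file; all identifiers here are hypothetical placeholders.
from cmapPy.pandasGEXpress.parse import parse

bdnf_rid = ["627"]                                                # assumed row id for BDNF
drug_cids = ["lithium_sig", "carbamazepine_sig", "ketamine_sig"]  # assumed column ids

gctoo = parse("level5_trt_cp.gctx", rid=bdnf_rid, cid=drug_cids)
print(gctoo.data_df)  # rows: genes (BDNF), columns: drug signatures
```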
Parsing through the CMapPy files resulted in obtaining the effects on the genes associated with Bipolar Disorder. To find a new, acceptable treatment, gene expression data for both current and possible treatments allowed us to compare them to select the best.
Genes That Cause Bipolar Disorder
After running the first program to obtain the list of Abstract IDs and running the second program, the list of genes was compiled. Then, after organizing the data, the top ten genes, after validation, were the genes most associated with BD. The Brain-Derived Neurotrophic Factor was the most common one that was shown, followed by the Catechol-O-Methyltransferase ( Table 2).
As shown by Table 2, BDNF is the gene most commonly referenced in relation to Bipolar Disorder.
Drugs Currently Used to Treat Bipolar Disorder
After organizing the results from text-mining the genes, the same process was completed for the drugs; however, the code was altered slightly to fit the new parameters. After organizing all the data once both programs were complete, the top ten drugs were chosen and validated as the most common current drug treatments to examine. Lithium was by far the most common drug used in treatment, followed by Carbamazepine (Table 3). Table 3 reports the number of times specific drugs were mentioned in relation to Bipolar Disorder; after receiving the data from the code, the data were organized and then validated as treatment options currently used. The counts were: Lithium (1442), Carbamazepine (488), Quetiapine (347), Lamotrigine (342), Risperidone (267), Aripiprazole (220), Clozapine (169), Haloperidol (152), Ziprasidone (111), and Topiramate (93). As Table 3 shows, Lithium and Carbamazepine are the drugs most commonly referenced in relation to Bipolar Disorder.
Which New Drug Would Work Best
Once the data were organized, the gene expression data helped to identify a drug therapy that would result in optimal treatment. The table shows how well the drugs perform based on the gene expression data, allowing identification of the drug that would work best after validation, which was found to be Ketamine (Table 4). The gene expression data were expressed in terms of magnitude, on a scale of 1-10, with Ketamine having the largest magnitude, showing that it would increase the expression of the BDNF gene the most.
Discussion
Genes such as Dopamine Receptor D2 and Dopamine Receptor D4 are involved in the neurotransmitter signaling that affects Bipolar Disorder. The Brain-Derived Neurotrophic Factor (BDNF) is the most commonly mentioned gene (Table 2). It helps with the survival and growth of neurons, neuroplasticity, communication, and memory [10,11]. BDNF affects bipolar patients, as there is a negative correlation between the severity of the episodes and the levels of BDNF [9,12]. This correlation is the key reason why the BDNF gene needs to be examined. The takeaway from these first results is that the gene to focus on for the remainder of the research is the BDNF gene.
Although Lithium is the chief drug used in modern-day treatment for BD, it has serious flaws, and not much is known about how it works. In addition, Lithium, despite being able to treat depression and acute mania, fails to treat intense mania and rapid cycling [13]. Additionally, taking Lithium every day can lead to Lithium poisoning [14].
Looking at the second item on the list, Carbamazepine, it also has shortcomings as a treatment for BD. Carbamazepine does not work well with rapid cycling and does not target all aspects of bipolar disorder [15]. The takeaway here is to look at Carbamazepine and Lithium, the drugs that came up the most, and to relate them to BDNF.
Lithium and Carbamazepine are drugs that raise BDNF levels but fail to keep them constant. Lithium is flawed in that it does not control all aspects of BD, only the depressive and some manic symptoms. Carbamazepine does not really raise the levels; it only prevents abnormal levels. In conclusion, both drugs work, but they do not seem to work as well as Ketamine would. In addition, new research supports the idea of using Ketamine as a treatment option for BD. The numbers received were fitted to a scale of 1-10 so that they would be easier to understand and to determine which drug works best. The drugs examined in this scenario are the ones most used for bipolar treatment, as found by reading the literature and using the results from the text mining.
Conclusions
Based on the data, Ketamine appears to be a more promising therapy, as it overcomes the shortcomings of previous treatments, which fail to target all aspects of Bipolar Disorder and have side effects that lead to worsened episodes in certain cases.
Ketamine does not have any of the issues that other therapies present, aside from its side effect of nausea. Ketamine raises BDNF levels and keeps them elevated, limiting any further manic or depressive episodes (Figure 2). As shown in Figure 2, Ketamine would bring the BDNF levels back to normal, counteracting the effect of the mutation, leading to more normal b-catenin levels, preventing the antidepressant effect, and limiting the chances of a relapse. The magnitude by which Ketamine raises BDNF is significantly higher than those of Lithium and Carbamazepine, which raise BDNF levels more as a by-product, as they focus on abnormal activities in the brain (Table 1). Ketamine raises BDNF levels by taking advantage of the resting-state functional connectivity of the prefrontal cortex [16]. This is also closely related to the plasma levels, which end up raising the gene levels. By raising the levels, an increase in the production of BDNF occurs, as seen in Figure 2.
A future extension would be running a clinical trial on Bipolar Disorder patients to see the further uses of Ketamine and how it can be leveraged in treatment, allowing for new treatment applications as the idea is developed further and tested.
Conflicts of Interest:
The authors declare no conflict of interest. | 3,511.2 | 2021-12-31T00:00:00.000 | [
"Psychology",
"Biology"
] |
HEPATITIS DISEASES PREDICTION USING MACHINE-LEARNING TECHNIQUES
The importance of research that contributes to the early diagnosis and management of lethal diseases is critical to society, and hepatitis is one of these killer diseases. Hepatitis is a life-threatening condition that develops when the liver becomes enlarged and injured. As a result, the primary goal of this article is to analyze the hepatitis dataset in order to forecast outcomes with good accuracy and dependability. Six machine learning classification methods: Support Vector Machines, Gaussian Naive Bayes, Logistic Regression, Decision Tree, K Nearest Neighbors, and Multilayer Perceptron were tested on the hepatitis dataset, and a confusion matrix was plotted for each of the classification models. The accuracy, precision, and recall criteria were used to make the comparison. For each model, the reliability was assessed using the root mean square error and mean absolute error. The selected algorithms, particularly the Multilayer Perceptron (87%) and Logistic Regression (87%) algorithms, showed high accuracy rates. Furthermore, with a minimal root mean square error of 0.35 and minimal mean absolute errors of 0.12 and 0.13, the two algorithms are the most dependable of all the methods.
INTRODUCTION
Hepatitis is a potentially fatal disease that occurs when the liver becomes inflamed and injured. It is a viral disease that has resulted in a high death rate worldwide (Nilashi, 2019). Hepatitis is transmitted by sewage pollution or direct contact with contaminated bodily fluids (Al-Thaqafy et al., 2013). Viruses, bacteria, medicines, or drugs can also cause this condition (Trishna et al., 2019). Tattoos and piercings, drug abuse, sexual contact with an infected person, hemodialysis, and blood transfusions are also methods by which an infected person can transmit this disease (Metwally et al., 2018). Hepatitis may be acute or chronic (Metwally et al., 2018). Acute hepatitis causes intense and painful symptoms at the start of the disease, making it more painful for patients, but it only lasts a month or two (Trishna et al., 2019). Consequently, there is only minor liver cell disruption and no effect on immune system function. Chronic hepatitis is a form of hepatitis that lasts more than six months and leads to cirrhosis, a condition in which the parenchymal cells of the liver are damaged (Metwally et al., 2018). Hepatitis A, B, C, D, and E are five distinct forms of hepatitis (Ahmad et al., 2019). Hepatitis A and E are acute, while hepatitis B, C, and D are chronic. Despite continuing studies into a treatment for hepatitis C, there is currently no available vaccine for the disease (Bhargav & Kumari, 2018). Early detection, as well as proper diagnosis and treatment, can cure the disease (Yarasuri et al., 2019). Health workers are also at the highest risk of contracting hepatitis (Polat & Günes, 2006), because the diagnosis of hepatitis is mostly based on routine blood tests, which exposes medical personnel to associated risks during diagnosis. Hepatitis medical diagnosis is difficult, since a specialist must weigh several aspects before performing the disease diagnosis process (Nilashi, 2019). As a result, this condition necessitates the creation of automated and reliable diagnostic systems that can aid in the identification of hepatitis for physician decision-making. Machine learning is a valuable technique that clinicians can use in this instance. Machine Learning (ML) is a technique for teaching a system to learn by finding patterns and associations in captured data using various algorithms (AtifKhan et al., 2012). As a result, ML allows for predicting and diagnosing an illness, taking into account two essential factors: parameter collection and the method used to analyze these parameters. This study compares six ML algorithms that are useful for diagnosing hepatitis: Support Vector Machines (SVM), Gaussian Naive Bayes, Logistic Regression, Decision Tree, K Nearest Neighbors (KNN), and Multilayer Perceptron (MLP). The main objective of this paper is to analyze the hepatitis dataset and correctly predict the outcome using the six ML methods. The study makes substantial contributions in the following areas: a) improving the classification accuracy and reliability for predicting hepatitis disease; b) comparing six ML classification algorithms on the hepatitis data set; c) determining the most effective ML algorithm for predicting hepatitis.
Data collection
The hepatitis dataset was retrieved from the University of California, Irvine (UCI) Repository. There are 155 samples in the database, with 20 attributes including the class label attribute. Machine learning algorithms were applied to this dataset to diagnose and identify hepatitis; the specifics can be found in Table 1 below. The dataset was trained and tested using six machine learning algorithms, and a comparison was made based on accuracy, precision, and recall. Loading the data, attributing and preprocessing the data, classifying the data, implementing the ML methodology, and predicting the disease are the critical processes involved in this study. Figure 2 depicts the model method for hepatitis diagnosis, with the phases of the procedure explained in the sections below:
Loading Data
The data came from the UCI repository, which has 155 instances with 20 different attributes. Because machine learning learns from samples, the model requires a sufficiently large and clean amount of data to produce results. Data imputation is performed on the available dataset to obtain a satisfactory amount of data.
Attributing and Preprocessing of Data
Missing data were resolved in order to obtain adequate data for training, validation, and testing by imputing the omitted entries and substituting a single global constant for each missing value. In this hepatitis data set, 75 of the 155 instances have missing values. If missing data in any field are not properly treated, they can lead to erroneous predictions and degrade the quality of performance, so an imputation step such as the one sketched below is applied first.
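As a minimal sketch of this step (assuming the UCI file format with "?" marking missing entries and using the column-wise mean as the global constant):

```python
# Replace every missing value with a single per-column constant (the mean).
import pandas as pd

data = pd.read_csv("hepatitis.data", header=None, na_values="?")  # UCI-style file
data = data.fillna(data.mean(numeric_only=True))                  # one constant per attribute
```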
Classifying the Data.
The data for this analysis were divided using stratified splitting. Before modeling, the data were segmented into training and testing sets using 10-fold cross-validation. The scores collected for each fold are averaged and used as a single score after the 10-fold cross-validation is repeated. This means that for each fold the model is trained on 90% of the data and evaluated on the remaining 10%. This style of cross-validation avoids the bias of training the model primarily on negative or positive data; a short sketch follows.
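A short sketch of this validation scheme with scikit-learn, assuming `data` from the imputation step above with the class label in the first column, is:

```python
# Stratified 10-fold cross-validation; the averaged score is reported.
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression

X = data.iloc[:, 1:].values   # attributes (class label assumed in column 0)
y = data.iloc[:, 0].values    # e.g., 1 = die, 2 = live in the UCI encoding

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores.mean())          # single score averaged over the 10 folds
```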
Using the ML tools to diagnose the disease, training, forecasting, and testing are the three basic steps of the machine learning implementation. The classifier algorithm builds the model from the training dataset during the training phase; the trained model is then used to predict hepatitis disease. The testing data set was used to validate the forecast performance by determining the accuracy, precision, and recall of the prediction. The techniques used in this analysis were SVM, Gaussian Naive Bayes, Logistic Regression, Decision Tree, KNN, and MLP classifiers. SVM is a widely used and practical method for dealing with data classification, interpretation, and prediction issues (Saangyong et al., 2009). SVM maps the input variables to an n-dimensional feature space and, for the classified training outcomes, generates a hyperplane that divides the feature space by class, preventing overfitting (Xiao & Leedham, 2002).
KNN is one of the most fundamental classification algorithms. The algorithm makes its prediction from the k nearest neighbors of a sample, and it is a common machine learning algorithm because of its ability to select neighbors. Choosing k too small or too large will not give the right results, so an optimal value of k must be selected for the algorithm.
Gaussian Naive Bayes assumes that the presence of one feature in a class has no effect on the presence of any other feature. The idea behind the term "naive" is that it reduces the computation to a simple multiplication of probabilities. The primary advantage of GNB is its speed, as it is a simple algorithm in comparison with other classification algorithms; due to this simplicity, GNB can efficiently process datasets with a large number of dimensions. MLP is a form of feed-forward artificial neural network that maps input datasets to a set of suitable outputs. An MLP is made up of multiple layers of nodes in a directed graph, with each layer fully connected to the previous one. Excluding the input nodes, each node is a processing unit with a nonlinear activation function. Back-propagation, a supervised learning approach, was employed to train the MLP on the classification dataset. MLP is a version of the standard linear perceptron that can classify data that are not linearly separable.
Logistic Regression is a computational method for evaluating a data set in which the outcome is determined by one or more independent variables. The aim of logistic regression is to determine the optimal model that describes the relationship between a collection of predictor variables and an observed dichotomous feature. Decision tree algorithms are among the most frequently used classification algorithms (Karthikeyan & Thangaraju, 2013; Twa et al., 2005). A decision tree is a straightforward modeling technique that employs a tree structure to construct classification or regression models. It builds the tree incrementally as the data set is subdivided into smaller subsets, producing a tree with decision nodes and leaf nodes. A decision node has two or more branches, a leaf node represents a classification or decision, and the topmost decision node in the tree is the root node, which represents the best predictor (Soofi & Awan, 2017).
Classification performance measures
The following are the metrics used to evaluate the classifications mentioned above. a) Accuracy is the proportion of correctly classified instances among all instances. b) Precision is the proportion of predicted positive instances that are truly positive. c) Recall is the proportion of truly positive instances that are correctly predicted as positive. d) Mean Absolute Error (MAE) is a measure of the difference between two continuous variables; it is the average absolute distance between each actual value and the corresponding predicted value, MAE = (1/n) Σ |y_i − ŷ_i|. e) Root Mean Square Error (RMSE) is the square root of the average squared distance between the actual score and the predicted score, RMSE = sqrt((1/n) Σ (y_i − ŷ_i)²), where the true score for the i-th data point is denoted by y_i and the predicted value by ŷ_i.
f) Area Under Curve (AUC) is the likelihood that the classifier will score a randomly chosen positive example higher than a randomly chosen negative example. The AUC is based on a plot of the true positive rate, TPR = TP/(TP + FN), against the false positive rate, FPR = FP/(FP + TN), and ranges between 0 and 1.
RESULTS AND DISCUSSION
The classification techniques were implemented in Python. A number of health-related attributes are included in the dataset, as well as the class label, which corresponds to a patient's hepatitis status. The data were separated into two categories: training data and validation data. Using the training data, we trained the six models: SVM, Gaussian Naive Bayes, Logistic Regression, Decision Tree, KNN, and MLP. The models were tested using the validation data, and a confusion matrix was plotted for each of the models; a minimal sketch of this comparison is given below. Table 2 depicts the confusion matrices of SVM, Gaussian Naive Bayes, Logistic Regression, Decision Tree, KNN, and MLP on the hepatitis dataset. The confusion matrix measures how well the classifier makes correct predictions: its count values represent the numbers of accurate and inaccurate classifier predictions. The upper row of the confusion matrix lists the predicted positive events together with the true positives, while the lower row lists the predicted negative events together with the true negatives. The diagonal elements denote the number of predictions whose target class equals the actual target class, while the off-diagonal elements correspond to misclassified or wrongly predicted target classes.
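The comparison can be sketched as follows with scikit-learn; the default hyperparameters and the simple hold-out split are assumptions made for brevity and do not reproduce the exact settings of the study.

```python
# Train the six classifiers and report accuracy, precision, recall, MAE, RMSE
# and the confusion matrix for each (X, y as in the cross-validation sketch;
# pos_label=2 assumes the UCI 1/2 label encoding with "live" as positive).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             confusion_matrix, mean_absolute_error, mean_squared_error)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "SVM": SVC(), "GNB": GaussianNB(), "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(), "KNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=2000),
}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    print(name,
          "acc", accuracy_score(y_test, pred),
          "prec", precision_score(y_test, pred, pos_label=2),
          "rec", recall_score(y_test, pred, pos_label=2),
          "mae", mean_absolute_error(y_test, pred),
          "rmse", np.sqrt(mean_squared_error(y_test, pred)))
    print(confusion_matrix(y_test, pred))
```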
From the matrix, the true positives, true negatives, false positives, and false negatives, along with the true positive rate and false positive rate, were used to calculate the recall, precision, accuracy, and AUC using the specified modules. The recall, precision, and accuracy, which describe the performance of the various classification algorithms when applied to the hepatitis dataset, are displayed in the charts of Figures 2 and 3 together with the ROC graph. The following conclusions may be drawn from the findings: the MLP and Logistic Regression algorithms have the highest accuracy of 87 percent, followed by the Decision Tree algorithm with an accuracy of 85 percent. The KNN comes next, with an accuracy of 82 percent, while the Gaussian Naive Bayes algorithm performs worst, with an accuracy of 72 percent. The ROC curve reveals that the AUC of the MLP model beat all other models on the validation data set, with substantially higher and more stable performance.
Figures 4 and 5 present the mean absolute error and root mean square error analyses for all the models. The lowest mean absolute error of 0.13 was achieved by MLP and Logistic Regression, together with a root mean square error of 0.35. The MAE states the average difference between the actual data values and the values predicted by the models; the lower the MAE and RMSE for a given model, the more closely the model can predict the actual values. Figure 4 shows the comparison graph of the errors for the ML tools. The RMSE and MAE were thus also used to validate the algorithms' predictability.
CONCLUSION
In this paper, an evaluation of performance using classification performance measures was carried out on selected Machine Learning (ML) algorithms. Accuracy, precision, and recall were used to determine whether an individual has hepatitis from the various independent attributes. According to the results of this analysis, the selected algorithms demonstrated good accuracy percentages, especially the MLP (87%), Logistic Regression (87%), Decision Tree (85%), and KNN (82%) algorithms. These algorithms can be applied to determine whether or not hepatitis is present in a person. MLP, in particular, is the most dependable, with a Mean Absolute Error of 0.13 and a minimum Root Mean Square Error of 0.35.
In the future, the data set used to build the model will be enlarged, which will result in more unique rules and better accuracy. Different weighting techniques are suggested to enhance the accuracy. Other classification methods can also be employed to extend the research further.
"Computer Science"
] |
Texture of GaAs Nanoparticles Deposited by Pulsed Laser Ablation in Different Atmospheres
This work analyzes the effect of nanosecond laser pulse deposition of GaAs in an inert atmosphere of Ar and He. The number of pulses and the gas pressure were varied and the effect on the nanoparticles formation was studied by scanning electron microscopy, grazing incidence small angle X-ray scattering, and atomic force microscopy. It is shown that the GaAs nanoparticle sizes and size distributions can be controlled partly by the number of laser pulses applied during their production and partly by the choice of inert gas and its pressure. Our results suggest that He is a more promising working gas producing narrower size distributions and a better size control of the grown nanoparticles.
Introduction
Pulsed laser deposition (PLD) is a simple and convenient method of producing various types of materials among which are also nanoscaled materials [1]. In such materials the quantum confinement becomes a dominant effect and it significantly modifies their properties. PLD using very short pulses is particularly interesting for the deposition of complex multielement films, preserving the stoichiometry of the parent materials [2]. Numerous experiments were carried out and rich scientific information of the ablation process was obtained. Nevertheless, several mechanisms involved in these processes are not yet completely understood. Different results cause continuing discussion about ultrafast melting [3], resolidification dynamics [4], surface structure modification [5], thermal and nonthermal mechanisms of ablation [6], and direct cluster emission. This is the reason for the continuation of extensive studies, both theoretical and experimental, on the dynamics of laser heating, melting, resolidification, and ablation of the target material irradiated by different pulse durations and wavelengths.
The effects of the background gas on the deposition process were considered in a number of papers [7]. The experimental results on ablation in the presence of ambient gas revealed the importance of the ambience gas parameters, though some of them are not of chemical nature. Previous results [8] show that the effect of the background gas depends on the combination of the ambient and laser parameters used in the PLD. In the PLD process, the nature of the phenomena taking place during the plasma expansion depends upon the gas pressure in the ablation chamber [9]. Under low pressure, from vacuum up to 0.5 mbar, atoms and ions are solely present in the plume leading to film formation and growth. At intermediate gas pressure, 1-20 mbar, the emitted species undergo collisions with gas molecules and condensation occurs in the plume leading to the nanosized particles formation. At increasing pressures higher than 100 mbar, the nanoparticles tend to aggregate and large "cauliflower" or "snowflake" structures are observed.
As the most important representative of III-V semiconductors, gallium arsenide (GaAs) is well suited for use in the optoelectronic industry and in particular for high-efficiency solar cell production because of its nearly ideal band gap (1.43 eV) for single-junction solar cells [10]. Having a high optical absorption, it requires a cell only a few microns thick to absorb the sunlight. Because of its resistance to heat and radiation damage, GaAs is also preferred for space applications. Furthermore, GaAs nanoparticles with dimensions of only a few nanometers have an even greater potential if quantum confinement effects can be used to tune their properties. The primary scope of this work is to explore the effects of the PLD deposition parameters and/or of the different working gases used in the deposition process on the morphology of GaAs nanoparticles deposited on a Si substrate. The idea was to deposit and explore isolated nanoparticles instead of continuous films.
Experimental
The GaAs nanostructures were deposited on clean Si substrates in a PLD setup with the fundamental emission at 1064 nm of a Q-switched neodymium doped yttrium aluminum garnet laser with 4 ns pulse length and 5 Hz repetition rate. The substrate was p-type Si (100) wafer and the target was intrinsic GaAs, crystal orientation (100). The details of the deposition process are given in [8]. The deposition was carried out under flow of argon or helium (both of 99.99% purity) at a given pressure with a base pressure in the vacuum chamber of 8 × 10 −5 mbar. The substrate was not heated during deposition in order to avoid interface stress and deformation during cooling. The target was exposed to a laser pulse energy of 140 mJ (pulse fluence was 31 J/cm 2 ), resulting in GaAs particles formation on the Si substrate.
The GISAXS experiments were carried out at the synchrotron facility Elettra (Trieste, Italy) at the SAXS beamline, using radiation with wavelength λ = 0.154 nm (i.e., with a photon energy of 8 keV). A thin Al strip was placed in front of the 2D detector to avoid its saturation in the specular plane direction, where the usually much stronger surface scattering is present. The spectra were corrected for background intensity and detector response. Details about the GISAXS geometry and data analysis are given elsewhere [8].
AFM measurements were performed with a Nanoscope IIIa controller (Veeco Instruments, Santa Barbara, CA) with a vertical engagement (JV) 125 μm scanner, using a silicon tip (TESP, Veeco) with an 8 nm nominal radius.
The scanning electron microscopy (SEM) images were produced using a JEOL JSM-7000F field emission SEM. Figure 1, showing SEM images of two samples deposited with 100 (a) and 1000 (b) pulses in Ar under 1 mbar pressure, illustrates the effect of PLD deposition in two extreme cases. Well-defined, separate particles with diameters of around 10-20 nm are clearly resolved in the 100 pulses sample, while the number of particles is much greater after 1000 pulses and the substrate appears to be covered completely. In addition, a few bigger, agglomeration-like structures are present on both samples. As particle heights are not recoverable from this type of measurement, GISAXS and AFM are applied for a full particle analysis.
Results and Discussion
The results of GISAXS measurements on PLD-deposited GaAs nanoparticles on a monocrystalline silicon substrate, produced by 100, 500, and 1000 laser pulses, are shown in Figure 2. The figure presents the results from He-assisted deposition at a working gas pressure of 1 mbar. Due to the isotropy of the film structure, the measured scattering intensity was symmetrical with respect to the specular plane (q = 0). Making use of this symmetry, the left side of each image shows the measured intensity, while the right side shows the result of the data simulation. The very good symmetry of the two parts confirms the quality of the simulation and the selection of parameters, as discussed below.
The grazing incident angle was set to the critical angle value for GaAs, therefore enhancing the scattering from the deposited thin film, since the X-ray beam within the sample was concentrated close to the surface and the penetration depth was less than 20 nm. The reduced intensity at the central part of the measured intensity (at q = 0 nm −1 ) is due to a semitransparent absorber that was placed there in order to reduce very high intensities appearing in the specular plane. Using the absorber, the dynamic ratio of the measured intensities could be enhanced, and the detector was protected from saturation. The observed intensity maximum, spread horizontally at about q = 0.25 nm −1 , common to all samples, is due to the Yoneda peak [11]. Since the grazing angle was set to the critical angle value, the maximum (peak) intensity in z direction is at this position. This is also where the specular peak is situated (at q = 0 where the grazing angle equals half of the scattering angle). Finally, the scattering from the largest objects on the sample (i.e., on the surface) is concentrated here. The intensity area above the Yoneda peak, where the information about the film topology dominates, is further used in the analysis. Figure 2 clearly shows the changes in shape of the measured intensity distribution (left part of each image) from the top to the bottom as the number of pulses is increased. It is interesting to note that the apparent difference between 100 and 500 pulses (fivefold increment) is less pronounced than from 500 to 1000 (twofold increment). Moreover, similar patterns at the central part (at about q = 0 nm −1 ) of all images suggest that the particles are not closely packed and/or the positions of the particles are uncorrelated.
The numerical fit to the measured data was obtained by calculating, with our own software, the scattering from a distribution of particles located on the surface of a substrate. In the numerical reproduction of the measured intensities, we applied a model of either cylinders or domes, with diameter and height as parameters following a log-normal size distribution. The results of the simulations are displayed in the right part of each image, and the very good symmetry between the left and right sides indicates the quality of the fit.
It is shown that the shape of the modeled scattering patterns could reproduce the measured ones properly only when appropriate particle shapes were applied. The simulation results, shown in Figure 2, were obtained applying a cylindrical shape for 100 and 500 pulses and domes for 1000 pulses.
The numerical results of the simulations are summarized in Figure 3, where the results obtained by PLD in the He atmosphere discussed here are compared with those obtained previously in Ar [8]. The average values obtained for the particle radii and heights are plotted for the different pulse numbers, and the error bars represent the corresponding log-normal distribution half widths. Both the horizontal radii (R) and the vertical heights (H) are shown for Ar (left) and He (right) working gas. There is a clear trend of size reduction as the number of pulses is increased, which is less pronounced for Ar than for He. The horizontal radius falls from 15 nm to 3 nm and the height from 12 nm to 4 nm in the He atmosphere, as opposed to a reduction from 7 nm to 4 nm in radius and from 8 nm to 6 nm in height in the case of Ar. Another remarkable difference between the two working gases lies in the relative half width of the size distribution. The Ar working gas results in broader size distributions: for example, the average horizontal radius for 500 pulses is 4.5 nm ± 4 nm, as opposed to 5 nm ± 2 nm in the He case.
We conclude that for 1000 pulses a homogeneous film has been formed, and the modeled size distribution is describing the surface topology/roughness of this film. Although the shape of the scattering pattern appears to be similar for 100 and 500 pulses (compare [8]), the extreme value of the size distribution width obtained for 500 pulses in the case of Ar background gas indicates that the change of the particle shapes begins already here. The scattering from the 100 and 500 pulses samples contains a rather strong contribution from the substrate surface, but this is mostly restricted to the specular plane and partly hidden below the abovementioned absorber and thus has been ignored in analysis. The in-plane particle-to-particle correlation was not included in the fitting function since no depletion of the intensity in the central part has been observed. Thus correlation can only play a minor role when the size distribution is very wide.
To obtain a better insight into the nanoparticles/film formation, the samples have been also characterized by AFM. Figure 4 shows AFM images of the same samples from Figure 2. A gradual increment of the number of particles as the number of pulses is increased from 100 to 1000 can be seen: in the first two samples (100 and 500 pulses) the particles are dispersed, while in the last they are packed more densely.
Agglomeration of the particles can also be seen after 500 and 1000 pulses in Ar [8]. It is still not completely clear whether this is a result of material transport directly from the target, rather than deposition from the plasma, or whether it can be ascribed to some irregularities in the deposition from the plasma itself, but similar observations have been reported before [12]. Nevertheless, the size of these agglomerates is typically above a few µm, and therefore they could not be detected in our selected GISAXS range.
The watershed algorithm was used to locate and count the particles on the surface (i.e., by numerically simulating water redistribution over the surface due to surface roughness). The distributions of the particle equivalent disk radius and height were determined and are plotted in Figure 5. The equivalent radius of a particle is calculated from the disk area that equals the area of the particle projection on the substrate, while the height is the maximum height value within the particle, regardless of its actual shape. A log-normal size distribution was fitted to the results, and the fit parameters are displayed in Figure 6. The general trend follows the one from the GISAXS results: both the sizes and the distribution widths decrease with the number of pulses. The average particle size is larger by about 4 nm than that obtained from GISAXS; only the value for the 100-pulse sample in He is slightly lower. However, AFM suggests a substantially wider size distribution for all samples. Generally, the size distribution width is narrower in samples grown in He working gas, both in the GISAXS and in the AFM results. The particle heights obtained by AFM, however, are much smaller than those obtained by GISAXS: 1.5 and 2 nm for 100 and 1000 pulses, respectively; only the 500-pulse particles have a height comparable to their radius (see Figure 5).
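For readers who wish to reproduce this type of analysis, the sketch below (an illustration under stated assumptions, not the implementation used here) segments an AFM height map with a marker-based watershed, extracts per-particle equivalent-disk radii and maximum heights, and fits a log-normal distribution to the radii. The pixel size, background threshold and demo data are placeholders.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import lognorm
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def particle_stats(height_map, pixel_nm=2.0, background_nm=0.5, min_distance=3):
    """Return equivalent-disk radii (nm) and maximum heights (nm) of detected particles."""
    mask = height_map > background_nm                      # pixels assigned to particles
    peaks = peak_local_max(height_map, min_distance=min_distance,
                           labels=mask.astype(int))
    markers = np.zeros_like(height_map, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-height_map, markers, mask=mask)    # flood from each local maximum
    radii, heights = [], []
    for lab in np.unique(labels):
        if lab == 0:
            continue
        region = labels == lab
        area_nm2 = region.sum() * pixel_nm ** 2
        radii.append(np.sqrt(area_nm2 / np.pi))            # radius of the equal-area disk
        heights.append(height_map[region].max())
    return np.array(radii), np.array(heights)

def fit_lognormal(values):
    """Fit a log-normal distribution (location fixed at 0); return (median, sigma)."""
    sigma, _, scale = lognorm.fit(values, floc=0)
    return scale, sigma

# Synthetic height map standing in for an AFM scan (heights in nm)
rng = np.random.default_rng(1)
demo = np.clip(ndimage.gaussian_filter(rng.normal(size=(256, 256)), 6), 0, None) * 30.0
radii, heights = particle_stats(demo)
print(len(radii), fit_lognormal(radii))
```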
Autocorrelation functions, calculated from the AFM data for each image, are shown as insets in the upper right corner of the AFM images in Figure 4. The autocorrelation symmetry is of the second or fourth order, which suggests a rather strong influence of the substrate, which is (100)-oriented monocrystalline Si. However, the lattice mismatch between Si and GaAs is substantial, and self-organization was very limited. The samples deposited at 100 and 1000 pulses in Ar atmosphere have an autocorrelation function that displays side maxima in only one direction [8], while the autocorrelation function of the 500-pulse He sample has a symmetry close to the sixth order, suggesting some self-organization driven by an isotropic surface force that is partly present.
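For completeness, a height-height autocorrelation map of this kind can be obtained from an AFM image with a few FFT operations, as in the short illustrative sketch below (periodic boundaries are implicitly assumed; this is not the authors' code).

```python
import numpy as np

def autocorrelation(height_map):
    """Normalized 2D autocorrelation of a height map via the Wiener-Khinchin theorem."""
    h = height_map - height_map.mean()
    power = np.abs(np.fft.fft2(h)) ** 2      # power spectrum
    acf = np.fft.ifft2(power).real           # inverse FFT of the power spectrum
    acf = np.fft.fftshift(acf)               # place zero lag at the centre of the map
    return acf / acf.max()

# The order of the symmetry of the side maxima of autocorrelation(afm_image)
# (second, fourth or sixth) is what is discussed above.
```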
We further investigated the influence of the working gas pressure on the deposition and formation of GaAs nanodots. For this purpose, two series of samples, using Ar and He atmosphere, were prepared under pressures of 0.015, 0.1, 1.0, and 10 mbar, while the number of pulses was always 1000. The corresponding GISAXS intensity distributions are shown in Figure 7. The left and right columns correspond to deposition in Ar and He atmosphere, respectively. The pressure increases from top to bottom. Again, only the left side of each GISAXS image represents the measured data, while the right side is the result of the best fit to the measured values. Note that the third row for He working gas (the 1 mbar data) is a repetition of the last row of Figure 2. For the two lower pressures, the GISAXS data closely resemble those obtained for the lowest number of pulses, suggesting that the film thickness/nature is similar. Indeed, we were able to fit the data successfully using a cylindrical particle shape in our model. The 1 mbar row has already been discussed in connection with Figure 2. Here we had to assume a hemispherical shape to obtain a good fit, and this also holds for the highest-pressure data.
As expected, the difference between the two working gases is minimal for the lowest pressure (0.015 mbar), and it would diminish completely with a further pressure reduction. With increasing pressure, on the other hand, the GISAXS signal becomes more spread over the detector angular range, indicating smaller particle sizes. However, this does not hold for the 10 mbar Ar-deposited sample, where the particle sizes are again larger. The overall trend for the He-deposited samples appears to be a smoother function of the pressure. In addition, at the two lower pressure values, the GISAXS signal exhibits oscillations in the vertical direction, a clear indication of particle height uniformity over the sample surface. Also, the in-plane particle-to-particle correlation was not included in the fitting function, since no depletion of the central intensity part was observed.
Apparently, at low working gas pressure, a rather homogeneous, well-defined film was formed. What we see by GISAXS are actually details of the inner film structure and of the columnar growth, which is approximated by cylinders rather easily during the data fitting; the particle height is then a good measure of the film thickness. As the gas pressure is increased, the growth of particles of diverse size is favored, since the energy of the atoms deposited from the plasma is reduced. In this way, atoms stick to the surface very close to the place of arrival and, since less rearrangement occurs, a less compact film is formed.
The results of the log-normal size distribution fits versus working gas pressure are shown in Figure 8 as plots of H versus R. As in Figure 3, the error bars are the corresponding log-normal distribution half widths, and the left and right graphs display the influence of the Ar and He working gas, respectively.
The horizontal radius has similar values and trends for both working gases: there is a small increase of the radius going from 0.015 to 0.1 mbar, the values are minimal for 1 mbar, and then the radius increases again for 10 mbar, although this increase is significant only in the case of Ar. The half width of the horizontal radius distribution is also minimal at 1 mbar for both working gases, while the maximum is reached at 0.1 mbar.
However, the most striking difference between Ar- and He-assisted deposition can be seen in the particle height values: in He working gas they are half the values obtained for Ar. Otherwise, the trend is similar: the values decrease with pressure up to 1 mbar and then rise again towards 10 mbar, where the maximum value is actually reached. As in the case of the horizontal radii, the minimum is reached at 1 mbar. Height size distributions are generally narrower in He working gas (the 1 mbar value being the only exception), and the difference is more significant than for the sizes: for example, for 0.1 mbar the He size distribution is five times narrower than in the Ar case.

The results of the AFM investigation of the sample texture as a function of the working gas pressure are shown in Figure 9, where AFM images of the same samples are displayed. At lower pressures the mean free path is longer and the particle energy at impact on the surface is higher, allowing for a better rearrangement and a more compact film growth. The particles on the surface appear to be scarcer, and the surface roughness is less than a nanometer (compare the surface profiles for the upper two rows in Figure 9). The good parts of the autocorrelation images show mostly first-order symmetry. As discussed above, the significant lattice mismatch between the Si substrate and the GaAs film should result in poor particle orientation relative to the substrate, and any ordering should be a result of self-assembly. However, this is partly evidenced only in the 0.1 mbar He sample, and even here it is rather long ranged and therefore not seen in GISAXS. As before, the watershed algorithm was used to numerically detect all the particles on the surface, and the results for the particle radii and heights are plotted in Figures 10 and 11, respectively. Generally, all the particle radii vary over a rather broad range, but the average value varies more substantially as a function of the working gas pressure. It is, however, not a simple function of the pressure: it grows with pressure, with the 0.1 mbar Ar and 10 mbar He distribution values as exceptions.
For comparison, the values of the in-plane particle radii and their corresponding size distribution widths are plotted in Figure 12. The particle radius follows the pressure in a similar manner for both gases: it grows with pressure, but there is a drop in value at 1 mbar. The size distribution width, however, remains virtually constant, apart from the drop of the 1 mbar value. As mentioned above, the overall particle sizes are smaller in He working gas, while the size distribution widths are similar for both working gases. The AFM-obtained sizes are similar to those from GISAXS (see Figure 8); only the drop of the 1 mbar values is less pronounced.
The low surface roughness for the lower-pressure growth has already been mentioned above, and it is evident in Figure 11: particle heights are around 1 nm for the two lower pressure values in both working gases. For higher pressures the heights increase substantially, but they do not come close to the values obtained by GISAXS.
This discrepancy can, at least partly, be attributed to the difference in probe size: while the X-ray wavelength is 0.154 nm, the AFM tip radius is about 10 nm. Therefore, in cases where the particle size falls between the two probe sizes, the particle shape measured by AFM is convoluted with the tip shape, while its dimensions are clearly resolved by GISAXS. The discrepancy is most severe in the few-nanometer range, in which most of the particles in our samples fall, and it concerns both the particle size and the interparticle distance. For particles of only a few nm in size, distributed rather evenly over the surface so that the particle-to-particle distance is also typically a few nm, the size of the AFM tip hinders a proper height detection even more than the detection of the lateral size. When a compact film with a well-defined thickness is formed, AFM is sensitive only to sparse irregularities at the surface, and probing by this method does not penetrate into the film itself. The GISAXS signal, on the other hand, comes from within the film as well as from the surface itself; in our case, the penetration depth is larger than the film thickness. A very smooth surface with few irregularities gives a GISAXS signal mostly confined to the specular plane, and this was ignored in our analysis, which concentrated only on the particle scattering.
Conclusion
In conclusion, we have shown that the structure of GaAs nanoparticles obtained by nanosecond PLD on a silicon substrate using Ar or He working gas can be tuned rather widely by varying the deposition parameters, that is, the working gas pressure and the number of pulses. The sizes as well as the size distribution of the deposited nanoparticles are influenced by the number of pulses, as was also found in vacuum [8], even before a compact film is formed. A low working gas pressure results in a relatively compact film with a wide variation in the size of the columnar structures in the film, while the use of higher pressure results in an increased roughness and more spherical particles. Our results suggest that He is a more promising working gas than Ar for a better and smoother control of GaAs nanoparticle growth by nanosecond PLD. In general, using He, a narrower nanoparticle size distribution can be achieved for diverse pulse numbers and working gas pressures. This is most obvious from the particle height distributions formed at pressures lower than 1 mbar.
Keeping in mind the differences between the two methods used to characterize the structure of the films (for example, the 0.154 nm wavelength of GISAXS as compared to the ∼15 nm tip radius of AFM, and the probed area of about 50 mm 2 in GISAXS versus square micrometers in AFM), we can conclude that the correspondence of the obtained results is fair. | 5,594.4 | 2013-10-31T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
Spike buffer: improve deep network performance by offset mechanism
: For a well-designed neural network model, it is difficult to further improve its performance. This study proposes an offset mechanism called the spike buffer, which can effectively improve the performance of designed convolutional neural networks. The spike buffer introduces an offset buffer-bit and a gradient spike function in the convolution channels to enhance the expression of effective features and suppress the extraction of invalid features. Without significantly increasing the computational complexity of deep convolutional neural networks, it improves their feature selection performance, enhances their non-linear mapping ability, and can be easily embedded into various convolutional neural networks. Experiments show that the performance of convolutional neural networks with an integrated spike buffer can be effectively improved.
Introduction
Deepening a convolutional network, widening it, or completely changing the network structure of an intelligent system built on convolutional neural networks may improve performance to some extent. However, destroying the original convolutional neural network structure may change the receptive field obtained by the convolutional layers. It will destroy the original design of the intelligent system, which will eventually affect the performance of the entire system. Moreover, deepening and broadening the convolutional neural network lead to excess computation. There are also several training tricks that can improve the performance of intelligent systems without structural improvements [1,2]. Therefore, we have devised a new way to further improve the performance of intelligent systems with a fixed convolutional neural network structure. The proposed offset mechanism is called a spike buffer (SB). It can be used in conjunction with many tricks that do not rely on structural improvements. SB can be easily embedded into various convolutional neural networks, whether they are still to be trained or already pretrained, and it can effectively improve their performance.
Classical convolutional neural networks perform feature filtering through convolutional layers and non-linear transformation through activation functions. The feature extraction performance of a single convolutional layer depends on the size and number of convolution kernels. The random initialisation of convolution kernel parameters and the random order of the training data lead to features that are hard to interpret, and to an uneven mix of efficient and inefficient convolution kernels. The non-linear characteristics, in turn, affect the fitting ability of deep networks. Traditional convolutional neural networks add a non-linear mapping in the activation layers after the convolutional layers. Based on the learning mechanism of neural networks, an effective buffer-bit is established in the convolutional layer and activated by the gradient spike function. It promotes the enhancement of high-performance convolution kernels and the suppression of low-performance convolution kernels by the convolutional neural network. SB introduces strong non-linearity and improves the expressiveness of the networks without significantly increasing the computational expense.
Spike buffer
We focus on the backpropagation mechanism of neural networks and introduce a buffer-bit and a gradient spike function into the network to form the SB. During forward propagation, the SB assigns weights to the different feature maps according to their importance. During backpropagation, the SB adapts the gradient of each convolution kernel according to the efficiency of that kernel (Fig. 1).
Buffer-bit
The convolutional layer of a convolutional neural network generates new feature maps by convolving the current feature maps with its convolution kernels, where j is the serial number of the current convolutional layer. The number of convolution kernels is set to i, and the output of the convolutional layer with serial number j is denoted by z j . The weight matrix of the first convolution kernel of the convolutional layer with serial number j is denoted by w, and the output of this first kernel, obtained by convolving the previous feature maps of layer j, is denoted x j 1 . The bias of the convolutional layer with serial number j is denoted by b j , Conv(•) is the convolution operation, and f(•) is the activation function of the current convolutional layer. The single-value unit attached after each convolution channel is called a buffer-bit. The buffer-bits combined with the gradient spike function are called the spike buffer, where Bu is the SB with i buffer-bits and g(•) is the gradient spike function.
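Because the defining equations are not reproduced in this excerpt, the PyTorch sketch below is only one plausible reading of the buffer-bit idea, stated as an assumption: one learnable scalar per convolution channel, passed through a placeholder gate g and multiplied onto that channel's feature map, initialised so that the first forward pass of the host network is unchanged.

```python
import torch
import torch.nn as nn

class SpikeBuffer(nn.Module):
    """Hypothetical per-channel buffer-bit gating; g stands in for the gradient spike function."""
    def __init__(self, channels, g=torch.tanh):
        super().__init__()
        self.bu = nn.Parameter(torch.zeros(channels))   # one buffer-bit per channel
        self.g = g

    def forward(self, x):                               # x: (N, C, H, W)
        gate = 1.0 + self.g(self.bu).view(1, -1, 1, 1)  # g(0) = 0 -> identity at initialisation
        return x * gate

class ConvWithSB(nn.Module):
    """Convolution -> spike buffer -> activation, mirroring the layout described in the text."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.sb = SpikeBuffer(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.sb(self.conv(x)))

# quick shape check
out = ConvWithSB(3, 16)(torch.randn(2, 3, 32, 32))      # -> torch.Size([2, 16, 32, 32])
```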
Gradient spike function
Gradient descent is one of the most commonly used methods for unconstrained optimisation problems. When minimising the loss function, the weight values of the model can be obtained iteratively by gradient descent, where E denotes the loss of the convolutional layers. If the SB is added, the gradient of w i changes accordingly, and the gradient of the weight matrix u i of the SB can be easily obtained as well. In order to enhance effective features, suppress invalid features, and further increase the sparsity of convolutional neural networks, we specially design the gradient spike function. During forward propagation, Bu i changes its value according to the importance of the features, so that Bu i effectively becomes the weight of the features. To achieve this, when the neural network learns by gradient descent, we accelerate the descent of the weights with larger gradients and maintain the weights with smaller gradients. Therefore, the gradient of u i in the SB becomes particularly important, and we need to design an activation function g(u) that makes u i sensitive to gradient changes. We call A(u) the gradient spike function. The parameter 'p' is the spike multiple, which represents the action intensity of the spike function. The parameter 'c' is an intermediate variable inside the SB. The parameter 'a' is the central base point, which controls the translation of the spike. The width factor of the spike is 'd', which indicates the recovery position of the spike relative to the base point and effectively controls the impact range of the spike. The transition factor from the spike function to a linear function is 't', which represents the positive offset of the gradient relative to 1 when the spike has recovered. In order not to change the initial state of the convolutional neural networks, we initialise Bu based on the Cauchy integral theorem and formula. From this, the activation function g(u) of the SB is easily obtained. The four parameters 'p', 'a', 't' and 'd' of the SB can be adjusted in practice to achieve better performance. We additionally operate on the anomaly of the SB (ASB) to make its performance more stable (Fig. 2), where the anomaly g am (u i ) is the deviation of g(u i ) from its mean value over the buffer-bits.
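As a small, self-contained illustration of that last operation only (the closed form of the spike function is not given in this excerpt, so a placeholder g is used), the anomaly simply re-expresses each buffer-bit's gate value relative to the mean over all buffer-bits:

```python
import numpy as np

def asb_anomaly(u, g=np.tanh):
    """g_am(u_i) = g(u_i) - mean over i of g(u_i); g is a stand-in for the spike function."""
    gu = g(u)
    return gu - gu.mean()

print(asb_anomaly(np.array([0.2, -0.1, 0.4, 0.0])))
```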
Balanced buffer structure (BBS)
In order to apply the SB to various networks easily, this paper designs a BBS. The BBS introduces the SB in the form of an offset, where Conv out is the output of the current convolutional layers, and Bu a=−1 denotes an SB whose parameter 'a' is set to −1. The BBS returns the initial state of the convolutional layers to zero and automatically adjusts the weight of the participating operations according to the computational requirements.
Experimental results and discussion
We integrated the SB into ResNet [3] and DenseNet [4] for experiments; ResNet and DenseNet are among the best human-designed networks. The classic CIFAR10 [5] benchmark dataset was used, which consists of coloured natural images with 32 × 32 pixels. Unless stated otherwise, parameter 'a' was set to 1, parameter 'd' to 0.5 and parameter 't' to 0.1. We first evaluated, without data augmentation, the impact of different SB parameter configurations on the performance of ResNet-18. We trained with batch size 512 for 60 epochs. The initial learning rate was set to 0.1 and the weight decay to 0.0025. When training reached 66.6 and 83.3% of the total number of epochs, the learning rate was reduced to one-tenth of its value and the weight decay was increased by 0.001 at the same time. The network was trained using stochastic gradient descent (SGD) as the optimiser. As can be seen from Table 1, the network with the offset mechanism has higher Top-1 accuracy than the baseline network. Top-1 accuracy is the accuracy in the traditional sense and the most important performance indicator; Top-5 accuracy counts a prediction as correct if the true class is among the model's five highest-probability predictions. The performance increase differed between parameter configurations. In order to evaluate the performance of the offset mechanism on deeper networks, we experimented with networks that integrate the BBS at the residual junction and set parameter 'p' to 12. To further evaluate the applicability of the offset mechanism, we performed experiments under different parameter conditions. We trained with batch size 512 for 200 epochs. The initial learning rate was 1 × 10 −3 , and it was reduced to 1 × 10 −4 , 1 × 10 −6 , 1 × 10 −9 and 5 × 10 −13 at 80, 120, 160 and 180 epochs, respectively. The network was trained using Adam as the optimiser and the weight decay was set to 0.0001.
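A minimal sketch of the first schedule described above could look as follows (the model definition and data loading are placeholders, and any setting not quoted in the text, such as momentum, is deliberately left out).

```python
import torch

def make_optimizer(model):
    # initial settings quoted in the text: lr = 0.1, weight decay = 0.0025
    return torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=0.0025)

def adjust_schedule(optimizer, epoch, total_epochs=60):
    # at 66.6% and 83.3% of training: divide lr by 10 and add 0.001 to the weight decay
    if epoch in (int(total_epochs * 0.666), int(total_epochs * 0.833)):
        for group in optimizer.param_groups:
            group["lr"] /= 10.0
            group["weight_decay"] += 0.001

# inside the (omitted) training loop, call adjust_schedule(optimizer, epoch)
# at the start of every epoch before iterating over the CIFAR10 batches.
```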
ResNet addresses the degradation problem by introducing a residual structure. Table 2 shows that the deeper networks reach higher performance. Integrating the BBS at the residual junction did not cause the gradients to vanish and, on the contrary, brought a performance improvement. Next, we integrated the BBS into DenseNet; specifically, we integrated the BBS in front of each concatenation layer of DenseNet. We trained with batch size 64 for 200 epochs. The initial learning rate was 0.1, and it was reduced to 0.01 and 0.001 at 100 and 150 epochs, respectively. The network was trained using SGD as the optimiser and the weight decay was set to 0.0001. The experimental results are shown in Table 3.
With the default parameter configuration, the Top-1 accuracy of DenseNet-40 with the integrated BBS was already 0.13% higher than that of DenseNet-40 without it.
In order to evaluate the amount of computation that the SB adds to a convolutional neural network, we measured the training time of ResNet with and without the integrated BBS. Table 4 lists the training times for these networks.
The BBS increases the training time of ResNet by about 10%, so it does not increase the computation time significantly. Most of the increase in training time is probably due to the balanced structure of the BBS: the balanced structure is offset by two residual structures and has one more residual connection than the original network, which is the main reason for the increase in the amount of computation.
Conclusion
The SB is a completely new offset mechanism. Based on the principle of back-propagation, the SB gives neural networks the ability to select neurons. It is equivalent to adaptively adjusting the local learning rate to enhance the expression of effective features, and it further increases the sparsity of the networks. In this paper, various networks with an integrated SB are evaluated under various training configurations, and the results prove the validity and applicability of the SB. The SB is especially important for performance boosts of fixed-structure neural networks. This paper only proposes a feasible idea, and there are still many open problems that require more research: for example, how to configure the parameters to obtain even better convolutional network performance, what kind of offset mechanism different neural networks need, and even whether neural networks can be trained with a better learning mechanism. In addition, since the SB can improve the performance of neural networks with fixed structures, it should in theory be transplantable to various non-convolutional neural networks, which is the goal of the next work. We will verify the feasibility of this hypothesis in future work. | 2,809.8 | 2020-04-24T00:00:00.000 | [
"Computer Science"
] |
Cell-penetrating peptide-conjugated copper complexes for redox-mediated anticancer therapy
Metal-based chemotherapeutics like cisplatin are widely employed in cancer treatment. In recent years, the design of redox-active (transition) metal complexes, such as those of copper (Cu), has attracted high interest as an alternative to overcome platinum-induced side-effects. However, several challenges remain, including optimal aqueous solubility and efficient intracellular delivery, and strategies like the use of cell-penetrating peptides have been encouraging. In this context, we previously designed a Cu(II) scaffold that exhibited significant reactive oxygen species (ROS)-mediated cytotoxicity. Herein, we build upon this promising Cu(II) redox-active metallic core and aim to potentiate its anticancer activity by rationally tailoring it with solubility- and uptake-enhancing functionalizations that do not alter the ROS-generating Cu(II) center. To this end, sulfonate, arginine and arginine-rich cell-penetrating peptide (CPP) derivatives have been prepared and characterized, and all the resulting complexes preserved the parent Cu(II) coordination core, thereby maintaining its reported redox capabilities. Comparative in vitro assays in several cancer cell lines reveal that, while the solubility-targeting derivatizations (i.e., sulfonate or arginine) did not translate into improved cytotoxicity, the increased intracellular copper delivery achieved via CPP-conjugation promoted an enhanced anticancer activity, already detectable at short treatment times. Additionally, immunofluorescence assays show that the Cu(II) peptide-conjugate distributed throughout the cytosol without lysosomal colocalization, suggesting potential avoidance of endosomal entrapment. Overall, the systematic exploration of the tailored modifications provides further understanding of the structure-activity relationships of redox-active metal-based (Cu(II)) cytotoxic complexes, which contributes to rationalize and improve the design of more efficient redox-mediated metal-based anticancer therapy.
Introduction
Metal-based chemotherapeutics have shown high clinical relevance in cancer therapy due to the possibility of tackling diverse (sub)cellular targets and acting via multiple mechanisms of action thanks to the unique metal-ligand interplay (Anthony et al., 2020). Platinum (Pt) complexes such as cisplatin, carboplatin and oxaliplatin have already become first-line treatments for several types of cancer (Rottenberg et al., 2021). Despite their success, the high reactivity, lack of specificity and non-physiological nature of Pt often lead to undesired biological interactions and side-effects (de Luca et al., 2019; Ohmichi et al., 2005; Oun et al., 2018; Wheate et al., 2010; Zhou et al., 2020). To overcome some of these obstacles, the use of metal complexes bearing physiological metals such as copper (Cu) has arisen as a promising alternative. In addition to the enhanced biocompatibility of Cu, its complexes can induce toxicity via diverse mechanisms beyond the common DNA-binding mode of action of Pt(II) complexes, such as DNA cleavage, topoisomerase inhibition and oxidative (mitochondrial) disruption (Santini et al., 2014). In particular, the different biologically attainable oxidation states of Cu enable the preparation of what are broadly known as catalytic metallodrugs, i.e., metal complexes that mediate catalytic reactions in biological environments, including the generation of reactive oxygen species (ROS).
ROS-based chemotherapy has awakened extensive interest due to the possibility of promoting selectivity towards cancer cells, which is based on the higher basal ROS levels in cancer cells than in healthy ones (Liou and Storz, 2010), and of bypassing potential drug resistance mechanisms (Cui et al., 2018). Additionally, the generally more reducing environment of cancer cells (Petrova et al., 2018) can favor the reduction of Cu(II), thereby triggering the catalytic cycle. Several reported Cu (and other metal) complexes have already shown promising therapeutic performance through ROS-based mechanisms of action (Liu et al., 2019; Peña et al., 2019; Simunkova et al., 2019; Sîrbu et al., 2017). Nonetheless, most Cu- and other metal-related biological pathways, including the catalytic processes that generate radical species, occur to a considerable extent at the intracellular level. Additionally, metal complexes often face solubility issues that limit their real applicability (Savjani et al., 2012). Hence, and given that many chemotherapeutics are administered intravenously, solubility and intracellular delivery appear as two important features in the design of potential redox-active (Cu) chemotherapeutics.
To help overcome these challenges, delivery systems or functional moieties that increase solubility and enhance intracellular delivery, such as viral-based vectors, vesicles, nanoparticles, or cell-penetrating peptides (CPPs), can be used (Allen and Cullis, 2004; Dhar et al., 2009; Nakase et al., 2012; Stewart et al., 2018; Tran et al., 2017; Vargason et al., 2021; Zhou and Kopeček, 2013). Concretely, the latter is one of the most common and promising strategies to deliver therapeutics inside cancer cells. CPPs comprise a class of short peptides (<30 amino acids) that have the ability to cross cellular membranes (Schmidt et al., 2010; Guidotti et al., 2017). They have acquired considerable attention due to their high transduction efficiency and low associated cytotoxicity (Guo et al., 2016; Guidotti et al., 2017; Borrelli et al., 2018). More than 80% of the known CPPs are cationic and contain more than five positively charged amino acids, with arginine (Arg)-rich CPPs being by far the most widely studied class (Schmidt et al., 2010; Milletti, 2012; Raucher and Ryu, 2015). Multiple internalization mechanisms, highly dependent on the physicochemical nature and number of amino acid residues, have been reported for CPPs, including translocation, diffusion and endocytosis (Brock, 2014). The positively charged guanidinium groups of the arginine residues have high affinity for the externally faced, negatively charged fatty acids of the cell membrane, mediating the peptide insertion into the cytosol (Rothbard et al., 2004; Herce et al., 2014). One of the key issues in efficient intracellular delivery by CPPs is the subsequent endosomal escape. Most of the reported CPPs are able to promote internalization, but the mechanisms by which they can escape from endosomes are still not completely understood. Among the reported strategies are the use of cationic structures (such as Arg-rich ones) and/or amphiphilic structures that can disrupt endosomal membranes because of their high electrostatic charge and partial hydrophobicity (Brock et al., 2018; Najjar et al., 2017).
Taking all of this into account, we here aim to (1) explore feasible, rational and biologically relevant functionalization strategies that tackle the solubility and intracellular delivery issues that many (redox-active) Cu complexes face and, at the same time, (2) systematically assess (and eventually rationalize) the impact of these specific solubility- and uptake-promoting modifications on the activity of the complex against cancer cells. To this end, we built upon a Cu(II) N,N,O-chelating salphen-like complex (C1), recently reported by our group (Peña et al., 2021). Despite exhibiting some solubility issues, C1 displayed a remarkable redox-mediated cytotoxicity, and a putative discrimination between cancer and healthy cells, through the generation of ROS. We therefore established simple, modular and versatile functionalization methodologies to prepare several tailored derivatives of the parent ligand of complex C1 (i.e., H 2 L1), with ideally no or minimal alteration of the metal-ligand interaction, which is known to play a key role in determining the final (re)activity.
The rationale of our work is schematically depicted in Figure 1A and the specific derivatizations are shown in Figure 1B. First, two different functionalization strategies were explored with the main goal of increasing solubility in aqueous media, namely the addition of a sulfonate group (ligand H 2 L2 and complex C2) and of an arginine (Arg) residue (ligand H 2 L3 and complex C3) (Figure 1B). Both groups are biologically relevant and introduce a charge of opposite sign at physiological pH, i.e., negative for the sulfonate group and positive for Arg. Taking into account that the number of Arg residues affects the internalization ability of CPPs (those with at least six Arg residues being more efficient (Wender et al., 2000; Tünnemann et al., 2008; Raucher and Ryu, 2015; Guidotti et al., 2017)), and in order to tackle potential cellular uptake limitations as well, two additional modifications were also evaluated: a nine-Arg-residue peptide (R 9 , ligand H 2 L4 and complex C4) and its analogous sequence with four glycine (G) residues (G 4 -R 9 , ligand H 2 L5 and complex C5) (Figure 1B). R 9 was selected for its well-known cell-penetrating capabilities (Wender et al., 2000; Guidotti et al., 2017), and the glycine residues were introduced to test the effect of having a linker between the CPP and the metallic core. Additionally, the cationic nature of the CPPs, in combination with the relatively more hydrophobic salphen-based metallic core, may help address possible endo-/lysosomal entrapment issues (Brock et al., 2018). Whilst using C1 here as our model parent scaffold, we expect that the obtained data will contribute to further rationalize the role of the different functionalities in tuning the intracellular delivery and corresponding activity of analogous redox-active Cu(II) complexes, overall providing a starting point for the design of future improved ROS-inducing (Cu) metallodrugs.
FIGURE 1
(A) Schematic depiction of the rationale of this work. (B) The parent scaffold (Peña et al., 2021) and the different H 2 L1 functionalized ligands (H 2 L2-H 2 L5), where R and G in the peptide structures correspond to arginine (Arg) and glycine amino acids, respectively. The name of the Cu(II) complex corresponding to each ligand is specified in parentheses.
Synthetic protocols
Ligand H 2 L1 and its corresponding complex C1 were resynthesized and characterized as previously described (Peña et al., 2021). The synthesis of ligands H 2 L2-H 2 L5 and the corresponding copper(II) complexes (C2-C5) is described in the Supplementary Material S1. Peptides (R 9 and G 4 -R 9 ) were synthesized using a microwave-assisted Biotage ® Initiator + Alstra synthesizer, following standard Fmoc solid-phase peptide synthesis (SPPS) protocols (Chan and White, 2000). They were synthesized on a Rink amide MBHA resin (100-200 mesh) on a 0.25 mmol scale (0.59 mmol/g). The amino acids (4 eq) were assembled using HBTU (3.9 eq) as coupling agent, DIEA (8 eq in N-methyl-2-pyrrolidone (NMP)) as a base and DMF as a solvent. Fmoc deprotection was carried out at room temperature with 20% piperidine in DMF for about 20 min. Couplings were carried out at 75°C for Fmoc-Gly-OH (5 min). For Fmoc-Arg(Pbf)-OH residues, double couplings were carried out at 50°C (2 × 6.5 min). Cleavage and simultaneous removal of the protecting groups were done manually with a TFA/TIS/H 2 O (95/2.5/2.5, v/v/v) mixture for 2 h at room temperature. The resin was filtered off and washed with TFA. The filtrate and TFA washes were combined and the TFA was removed under a nitrogen stream. The final peptides were precipitated in cold Et 2 O, recovered by centrifugation, dissolved in water and lyophilized.
Nuclear magnetic resonance (NMR) spectroscopy
NMR experiments were recorded on BRUKER DPX-250, 300, 360, 400 and 500 MHz instruments at the Servei de Ressonància Magnètica Nuclear (UAB) and the Spectropole facilities (AMU). Deuterated solvents were purchased directly from commercial suppliers. All spectra were recorded at 298 K unless otherwise noted. The abbreviations used to describe signal multiplicities are: s (singlet), d (doublet), dd (double doublet), t (triplet), bs (broad signal) and m (multiplet). All acquired 13 C NMR spectra are proton decoupled.
Mass spectrometry (MS)
Routine electrospray ionization (ESI)-MS measurements were recorded at the Spectropole facility (AMU) on a SYNAPT G2 HDMS (Waters) instrument with a pneumatically assisted atmospheric pressure ionization source (API) and a time-of-flight (TOF) analyzer. Ionization conditions: electrospray voltage of 2.8 kV, capillary voltage of 20 V, dry gas flow at 100 L/h. High resolution (HR)-MS measurements of the dissolved solid complexes were recorded on a MicroTOF-Q (Bruker Daltonics GmbH, Bremen, Germany) instrument equipped with an electrospray ionization (ESI) source in positive mode at the Servei d'Anàlisi Química (UAB). The nebulizer pressure was 1.5 bar, the desolvation temperature was 180°C, the dry gas flow was 6 L min −1 , the capillary counter-electrode voltage was 5 kV and the quadrupole ion energy was 5.0 eV.
Electron paramagnetic resonance (EPR) spectroscopy
EPR measurements were carried out on a BRUKER ELEXSYS 500 X-band CW-ESR spectrometer equipped with a BVT 3000 digital temperature controller. The spectra were recorded at 120 K in frozen DMSO solutions unless otherwise noted. Typical acquisition parameters were as follows: microwave power 10-20 mW, modulation frequency 100 kHz, modulation amplitude 3 G. Simulations were performed using the EasySpin toolbox developed for MATLAB (Stoll and Schweiger, 2006).
Ultraviolet-visible (UV-vis) spectroscopy
All the spectra were recorded at room temperature on an Agilent HP 8453, a Varian Cary 50 Bio, a Varian Cary 60 Bio or a Perkin Elmer Lambda 650 spectrophotometer, using 1 cm quartz cuvettes. Ascorbate consumption experiments were monitored by UV-vis at the maximum absorption band of ascorbic acid (100 µM) at 265 nm for about 45 min. CuCl 2 and the assayed complexes C1-C3 were added at a final concentration of 2 µM in 50 mM NaCl/5 mM TRIS-HCl buffer (pH 7.2), with a maximum of 5% DMSO.
Inductively coupled plasma (ICP) optical emission spectrometry (OES) and mass spectrometry (MS)
ICP-MS was performed on an Agilent apparatus, model 7500ce. ICP-OES was carried out in a Perkin-Elmer, model Optima 4300DV. Both used standard acquisition parameters for copper content.
Stock solutions of complexes C1-C5 for biological assays
Stock solutions of the assayed complexes C1-C5 were prepared by weighing the appropriate amount of complex and dissolving it in the corresponding solvent (DMSO for C1, and H 2 O for C2-C5). Quantification of the copper concentration was carried out at the Servei d'Anàlisi Química (UAB) by ICP-OES. Measurements were performed at least in duplicate.
Cell-viability assays
HeLa and MCF7 cells were seeded into a 96-well plate at a cell density of 3·10 3 cells/well in 100 µL of culture medium and allowed to grow overnight. The next day, the C1-C5 complexes were added to the cells at concentrations ranging from 0 to 200 μM. Working concentrations of complexes C1-C5 (<0.1% DMSO in biological experiments, required for C1) were prepared in DMEM (for HeLa) and DMEM/F-12 (for MCF7) medium. The growth inhibitory effect of the complexes was measured at 24 h using the PrestoBlue Cell Reagent (Life Technologies) assay. Briefly, PrestoBlue (10 μL; a resazurin-based solution) was added to each well. After 2 h incubation (37°C, 5% CO 2 ), the fluorescence of each well was measured at 572 nm after excitation at 531 nm with a Microplate Reader Victor3 (Perkin Elmer). Cytotoxicity was evaluated as the relative cell viability (%) of each sample with respect to the control well. The IC 50 values were calculated from the obtained cell viability results. Each complex was tested in triplicate and averaged over three independent sets of experiments. Blank and complex controls were also considered. For the experiments with C1 and C5 at 30 min and 4 h of treatment, cells were plated and treated following the same protocol. After each treatment time, the supernatant was removed, cells were washed, and fresh medium was added. Cells were then allowed to evolve until a total incubation time of 72 h and the cell viability was measured with PrestoBlue ® , as detailed beforehand.
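As an illustration of how IC 50 values can be extracted from such viability data, the sketch below fits a four-parameter logistic (Hill-type) dose-response curve with SciPy; this mirrors the kind of fit typically performed in GraphPad Prism, and the concentration and viability numbers are placeholders rather than measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.5, 1, 5, 10, 25, 50, 100, 200])   # µM (Cu-normalised), illustrative
viab = np.array([98, 95, 88, 70, 45, 25, 12, 8])      # % viability, illustrative

popt, _ = curve_fit(hill, conc, viab, p0=[5.0, 100.0, 20.0, 1.0], maxfev=10000)
print(f"IC50 = {popt[2]:.1f} µM")
```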
Cellular copper uptake studies
HeLa and MCF7 cells were plated, grown, and allowed to adhere overnight in a 6-well plate (2·10 5 cells/well). Cells were treated for 4 h with the different copper complexes at the desired concentration. Before analysis, the medium was removed, and cells were washed with DPBS (Dulbecco's Phosphate Buffered Saline, Invitrogen) and trypsinized for 10 min. The samples were harvested by centrifugation (1,400 rpm, 5 min) and the cellular pellets were collected and digested with concentrated HNO 3 . Quantification of the intracellular copper was performed using an inductively coupled plasma mass spectrometer (ICP-MS). Measurements were performed at least in duplicate.
Intracellular ROS production assays
HeLa and MCF7 cells were plated and allowed to adhere overnight in a 96-well plate (2·10 4 cells/well). The 2′,7′-dichlorofluorescein diacetate reagent (DCFDA, 25 μM in DMSO) was then added and the cells were incubated at 37°C in the dark for 30 min. The DCFDA solution was removed and cells were treated with C1 and C5 at the corresponding IC 50 values (at 24 h) and incubated for 4 and 24 h. The experiments were run in
Antibody production and titer
Rabbit Poly-Arginine (9R) antibody was produced by Davids Biotechnologie GmbH (Germany) as follows. New Zealand white rabbits were immunized with the newly synthesized Acetyl-RRRRRRRRR-Amide peptide and the antiserum was further purified by affinity purification. The obtained 9R antibody (Anti-R 9 ) was titered by means of ELISA (enzyme-linked immunosorbent assay) to determine the optimal dilution for immunocytochemical detection. Briefly, a MaxiSorp (Sigma-Aldrich, MO, United States) microtiter plate was coated with the Acetyl-RRRRRRRRR-Amide peptide in carbonate/bicarbonate buffer (pH 9.6) overnight at 4°C.
The plate was washed three times with PBS-BSA 0.5% between every step. The remaining binding sites were blocked with PBS-BSA 0.5% for 2 h. The antiserum samples were serially diluted over a range of 1/10-1/10,000 (v/v). The samples were applied in triplicate and incubated for 1 h at room temperature. HRP (horseradish peroxidase)-conjugated anti-rabbit secondary antibody (Bio-Rad) diluted in blocking buffer was incubated with the samples for 1 h at room temperature. TMB (3,3′,5,5′-tetramethylbenzidine, ThermoFisher) solution was added to each well and incubated for 30 min for signal detection. An equal volume of stopping solution (2 M H 2 SO 4 ) was added and the completed reaction was read at 450 nm in a microplate reader (Tecan).
Immunofluorescence assays
HeLa cells were seeded in 24-well plates at a cell density of 5·10 4 cells/well in 1 ml of culture medium and allowed to grow overnight on
SCHEME 1
Synthetic procedures to obtain the H 2 L1 functionalized ligands (H 2 L2-H 2 L5). (A) Condensation reaction to obtain ligands H 2 L2-H 2 L5 from the corresponding precursors 3-6. The arginine in 4, the peptide sequences in 5 and 6, and H 2 L3-H 2 L5 have an amide group instead of a carboxylic group at the C-terminal site. (B) Schematic synthetic procedure to obtain the salicylaldehyde-functionalized precursors 4-6. The arginine side-chains in peptides 11 and 12 (R 9 *) carry a 2,2,4,6,7-pentamethyldihydrobenzofuran-5-sulfonyl (Pbf) protecting group (PG). Colored in blue, the terminal amine functional group employed to form the amide bond with the carboxylic acid (-COOH, in red) following the standard HBTU coupling procedure. TFA refers to trifluoroacetic acid.
Statistical analysis
Results are presented as means ± standard deviation in all figures and in Table 2. Each viability determination was carried out in three independent experiments, while each confocal microscopy and cellular uptake experiment was performed in at least two independent experiments. GraphPad Prism was used for the graphs and the associated calculations.
Results
Synthesis and characterization of the ligands H 2 L2-H 2 L5 and the corresponding copper(II) complexes
Ligand H 2 L1 and its corresponding complex C1 were synthesized as previously reported (Peña et al., 2021). The four newly synthesized ligands H 2 L2-H 2 L5 were essentially prepared by imine bond formation between the monoprotected benzene-1,2-diamine (1) and the corresponding salicylaldehyde precursor (3-6, Scheme 1A). The synthesis of the different ligands and their respective precursors, including the corresponding characterization data, can be found in the experimental section and as part of the Supplementary Material (Supplementary Figures S1-9).
For H 2 L2, aldehyde (3) was obtained through a para-sulfonation of the starting material 2-hydroxybenzaldehyde (2), via an electrophilic aromatic substitution (S E Ar) reaction (Clayden et al., 2012). Standard sulfonation reactions are normally performed at temperatures around 100°C; however, this step was carried out at 40°C to prevent significant oxidation of the aldehyde under such acidic conditions (Kirker, 1980). Regarding the synthesis of the salicylaldehyde precursors of the ligands H 2 L3-H 2 L5 (compounds 4-6), similar procedures were followed for all three, namely attaching the Arg residue (H 2 L3, precursor 4) or the corresponding CPPs (H 2 L4-H 2 L5, precursors 5 and 6, respectively) to the common intermediate 9 through a stable amide linkage via standard coupling procedures (Chan and White, 2000). A Rink amide resin was chosen in all the cases to obtain an amide functional group at the C-terminal position of the Arg or CPP scaffold, in order to avoid competition in the Cu(II) complexation step (vide infra). Deprotection and cleavage from the resin were performed using standard methods, and preparative reversed-phase HPLC purification rendered the pure salicylaldehyde precursors 4-6. Of note, the optimized strategy to synthesize the precursors 4-6 required the use of the imine intermediate benzoic acid 9, which was obtained by a condensation reaction between the MOM-hydroxyl-protected compound 8 and the monoprotected diamine 1. The phenol protection with the MOM group and the imine-bond formation (the latter as a strategy to mask the aldehyde group) were crucial to increase the yield of the coupling reactions between 9 and 10-12, and to avoid the formation of by-products.
TABLE 1 EPR parameters from the X-band EPR spectra of complexes C1-C5. Spectra were recorded at 120 K in frozen DMSO solutions. The g z /A z ratio is a parameter that predicts the distortion of the structure from planarity (g z /A z > 140 cm is taken to indicate such a distortion). Column headings: Complex | g z | A z (10 −4 cm −1 ) | g x /g y | A x , A y (10 −4 cm −1 ).
The Cu(II) complexes C2-C5 were obtained after metalation of the respective H 2 L2-H 2 L5 ligands with copper(II) acetate following the same procedure as previously described for C1 (Peña et al., 2021). The complexes were isolated from the reaction media by precipitation. Experimental evidence of Cu(II) complexation was initially obtained for the complexes with the two simplest modifications (i.e., C2 and C3) by the appearance of metal-to-ligand or ligand-to-metal charge transfer bands (MLCT, λ ~ 420 nm) and Cu(II) d-d bands (λ ~ 650 nm) in the ultraviolet-visible (UV-vis) spectra (Supplementary Figure S10). Additionally, the electronic transitions observed in the functionalized complexes are analogous to those obtained for C1, outlining that the chemical derivatization did not affect the coordination capabilities of the ligands. The presence of the molecular peaks for C2-C5 in high-resolution mass spectrometry (HR-MS, Supplementary Figure S11) corroborated the successful metal coordination in all cases.
Comparison of the EPR spectroscopic features and the calculated characteristic parameters (g and A tensors) of C2-C5 with those of the parent complex C1 (Figure 2; Table 1) confirmed that the newly synthesized complexes maintain the same Cu(II) coordination environment as C1, with the metal center located in an N 2 O 2 in-plane coordination environment (Peisach and Blumberg, 1974) and with the unpaired electron in a d x 2 −y 2 orbital (g z > g x,y > g e ), thus adopting a square-planar or square-pyramidal derived geometry. The calculated g z /A z ratio is lower than 140 cm for all of them, which is indicative of a non- or only marginally distorted structure with respect to planarity (Sakaguchi and Addison, 1979). Consequently, and taking all the above insights together, the different functionalization strategies assayed have not altered the metallic core of C1, which is present in all the new complexes C2-C5.
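As a quick numerical illustration of the planarity criterion used here (the values below are placeholders, not the actual Table 1 entries), the g z /A z ratio is simply g z divided by A z expressed in cm −1 :

```python
def gz_over_az(g_z, a_z_in_1e4_cm):
    """g_z / A_z in cm, with A_z given in units of 10^-4 cm^-1."""
    return g_z / (a_z_in_1e4_cm * 1e-4)

print(gz_over_az(2.24, 180))   # ~124 cm, i.e. below the 140 cm threshold for distortion
```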
Effect of functionalization on the copper-mediated ROS production capabilities
Prior to the biological evaluation, and after proving that the new C2-C5 Cu(II) complexes hold the same metallic core as C1, the impact of the ligand substituents on the catalytic properties and, therefore, on the ROS-generating capabilities (Peña et al., 2021) was assessed with the two simplest functionalized Cu(II) compounds (C2 and C3), as model compounds bearing oppositely charged functional moieties. The consumption of ascorbate was measured by UV-vis spectroscopy, monitoring its characteristic absorbance at 265 nm. The correlation of ascorbate consumption with the generation of ROS has proven to be a reliable method to effectively monitor ROS production (Alies et al., 2013; Chassaing et al., 2013).
In the presence of ascorbate and in an aerobic environment, Cu(II) catalyzes the generation of ROS upon ascorbate consumption (Halliwell and Gutteridge, 1990). Without copper (DMSO control), no decrease in the absorbance at 265 nm was observed (Figure 3). In contrast, free Cu(II) ions (CuCl 2 ) produced a rapid decrease in the absorbance and, after 20 min, the ascorbate was almost totally consumed. For the sake of comparison, the ascorbate consumption capabilities of C1, C2 and C3 were examined at the same time and under the same conditions. C3 consumed ascorbate at rates similar to C1, and just slightly slower rates were observed for C2 (Figure 3), with total consumption after 25 min in all cases.
FIGURE 3
Confirmation of the reactive oxygen species (ROS)-generation capabilities after functionalization using the ascorbate consumption experiment. Ascorbic acid (100 µM) consumption in the presence of CuCl 2 , C1, C2 and C3 (2 µM), monitored by UV-vis spectroscopy at 265 nm in 50 mM NaCl/5 mM TRIS-HCl buffer at pH 7.2.
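One simple way to put numbers on such consumption curves (shown here only as an illustration; the comparison above is reported qualitatively) is to fit the absorbance decay at 265 nm to a pseudo-first-order model and compare the apparent rate constants. The data array below is a placeholder trace.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a0, a_inf, k):
    """Pseudo-first-order decay of the absorbance at 265 nm."""
    return a_inf + (a0 - a_inf) * np.exp(-k * t)

t_min = np.linspace(0, 45, 46)                  # minutes, matching the ~45 min runs
a265 = 0.05 + 0.95 * np.exp(-0.12 * t_min)      # placeholder absorbance trace

(a0, a_inf, k), _ = curve_fit(decay, t_min, a265, p0=[1.0, 0.05, 0.1])
print(f"apparent rate constant k = {k:.3f} min^-1")
```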
Cytotoxicity assays and intracellular copper(II) accumulation
In vitro assays with the complexes C2-C5 were carried out in HeLa and MCF7 cancer cells in order to (1) compare their activity with that of C1 (Figure 4), which we previously reported to be cytotoxic against several cell lines (Peña et al., 2021), and to (2) elucidate the effect of the different derivatizations. All the C2-C5 synthesized complexes are soluble in water (at least 5 mg/ml for C2-C3, and 20 mg/ml for C4-C5) and in biological medium, without the need of any percentage of DMSO (as it was required for the biological evaluation of C1 because of solubility issues). The dose-response curves are shown in Figure 4A, and the obtained IC 50 values are plotted in Figure 4B and summarized in Table 2. To ensure comparability, all the stock solutions of C1-C5 were standardized by ICP-MS prior to biological evaluation and results normalized based on the copper content.
As mentioned beforehand, the first derivatization strategy (sulfonate and Arg) mainly aimed at increasing the solubility of the C2 and C3 complexes with respect to C1. The results underline that, although improved aqueous solubility was attained, neither of the two modifications contributes to enhance the cytotoxicity of the parent complex in cancer cells, with IC 50 values higher than (in HeLa), or in the range of (in MCF7), those of complex C1 (Figure 4B). In contrast, functionalization with the CPPs (complexes C4 and C5) resulted in improved cytotoxicity with significantly lower IC 50 values, particularly in MCF7 (IC 50,C1 -to-IC 50,C4-C5 ratio ≥30, Table 2), thereby suggesting that the intracellular delivery plays a key role in the final activity. Thus, cellular uptake studies were carried out to deepen into the impact of the tailored modifications on the internalization of the five complexes C1-C5. ICP-MS was used to quantify the amount of Cu(II) inside cells (Figure 4C). The intracellular copper content after treatment with complexes C1-C3 was below 150 ng/10 6 cells, with similar uptake values among the three. Importantly, while the sulfonate and Arg derivatizations, with the concomitant increase in solubility of the corresponding C2 and C3 complexes, did not mediate copper uptake, functionalization with the CPPs (C4 and C5) significantly promoted intracellular copper delivery (Figure 4C). The amount of metal able to accumulate inside cells for the CPP-conjugated complexes is at least about twofold the amount observed for C1 (272 ± 15 and 315 ± 6 versus 131 ± 21 ng/10 6 cells in HeLa, and 178 ± 2 and 156 ± 22 versus 88 ± 2 ng/10 6 cells in MCF7, for C4 and C5 versus C1, respectively). Based on the data, the presence of the G 4 linker in C5, which was designed to provide higher conformational flexibility (and ideally a better interaction of the peptide with the cell membrane), did not result in clear differences in uptake or cytotoxicity as compared to C4 (at least in the cancer cell lines assayed thus far).
FIGURE 4
In vitro cytotoxicity and cellular internalization of C1-C5 in HeLa and MCF7 cancer cells. (A) Cell-viability assays for complexes C1-C5 in HeLa and MCF7 cancer cell lines after 24 h treatment. (B) IC 50 values for complexes C1-C5 after 24 h treatment, calculated from the fits of the corresponding cell-viability curves (Figure 4A). (C) Quantification of the copper (Cu) uptake in HeLa and MCF7 cell lines after 4 h treatment with complexes C1-C5 (50 µM). (D) Correlation between the cytotoxicity (log (IC 50 )) and the Cu uptake for HeLa and MCF7, with the cell-penetrating peptide-bearing Cu(II) complexes circled. Concentrations of the complexes are normalized in all cases based on the Cu concentration for comparison purposes. Statistical differences were determined using a one-way ANOVA test, with p values as: without symbol (p > 0.05, non-significant), *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001, ****p ≤ 0.0001. Results average at least three independent experiments (N ≥ 3) for the IC 50 values, and at least two (N ≥ 2) for the Cu uptake. Statistical analysis and goodness-of-fit for linear regression (r 2 ), using an alpha threshold of 0.05, were calculated using GraphPad Prism.
Although there are multiple factors influencing the (re)activity of metal complexes (both from chemical and biological features), we attempted to evaluate how strongly the intracellular delivery is related to the anticancer activity of these compounds. Thus, we correlated the obtained IC 50 values (in log) with the copper uptake (intracellular copper amount). As exemplified in Figure 4D, this resulted in a very good correlation for HeLa cancer cells (with r 2 = 0.9 and p value of 0.02). For MCF7, the correlation was still fairly good (r 2 = 0.7) and, despite not being strictly significant by definition (p value of 0.08), there is a clear tendency outlining intracellular copper delivery as an important design parameter to modulate cytotoxicity in these (and, by extension, likely in similar) Cu(II) complexes.
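The sketch below reproduces the type of analysis behind Figure 4D, i.e., a linear regression of log IC 50 against the intracellular copper content. The HeLa uptake values for C1, C4 and C5 are those quoted above, whereas the C2-C3 uptake and all IC 50 numbers are placeholders, so the printed r 2 and p values will only approximate the reported ones.

```python
import numpy as np
from scipy import stats

cu_uptake = np.array([131, 140, 120, 272, 315])        # ng Cu / 10^6 cells (C1-C5, HeLa); C2-C3 assumed
log_ic50 = np.log10(np.array([60, 120, 100, 15, 12]))   # IC50 in µM, illustrative values

res = stats.linregress(cu_uptake, log_ic50)
print(f"r^2 = {res.rvalue ** 2:.2f}, p = {res.pvalue:.3f}")
```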
FIGURE 5
In vitro reactive oxygen species (ROS) production. Intracellular levels of ROS, measured with the DCFDA assay in HeLa and MCF7 cancer cells for complexes C1 and C5 at their IC 50,24 h values and at two different times (4 and 24 h). H 2 O 2 (100 µM) was used as a positive control. The values are plotted as percentages with respect to the cell control, which is set to 100%.
FIGURE 6
Cellular localization and cytotoxicity efficacy of cell-penetrating peptide-conjugated Cu(II) complex C5. (A) Cell-viability assays for complexes C1 and C5 (100 µM) in HeLa and MCF7 cell lines at 30 min or 4 h treatment times, after which the cell media was replaced by fresh medium and cells incubated for extra time until reaching a total of 72 h. Statistical differences were determined using a one-way ANOVA, with p values as: ****p ≤ 0.0001. (B) Cellular distribution of R 9 peptide (control experiment) and complex C5 after 4 h incubation in HeLa cancer cells. HeLa cells were fixed and stained with Anti-R 9 (red) and anti-LAMP-1 (green, for lysosomes) antibodies, and DAPI (blue, nuclei). Last column shows the merged images.
Effect of cell-penetrating peptide conjugation: Reactive oxygen species, efficiency and cellular distribution
To gain more insight into the behavior of the CPP-bearing Cu(II) complexes, and taking into account the similar profile exhibited by C4 and C5 (with slightly higher toxicity for the latter), C5 was selected for further comparative evaluation against C1. We initially corroborated the in vitro ROS production capability of C5 using the DCFDA assay (Figure 5). The data confirm that the CPP-conjugated Cu(II) complex, at its IC50 value, was able to produce significant levels of ROS in both tested cancer cell lines, reaching values similar to the positive control H2O2 (especially after 24 h incubation), thus reinforcing the ROS-mediated pathway as an important mechanism of action for this set of complexes. It is also worth mentioning that, especially in MCF7 cancer cells, complex C5 was able to trigger the generation of a level of ROS similar to C1 while using a 50-fold lower amount of complex, in line with the enhanced uptake and higher cytotoxicity observed for C5.
We then set out to evaluate the impact of the CPP-mediated internalization on the cytotoxic efficacy of the final complex. To this end, cytotoxicity assays were performed at a shorter treatment time for C1 and C5 (4 h, Figure 6A). After that incubation time, the supernatant (containing the non-internalized complex) was removed, cells were washed, fresh cell culture medium was added, and cells were incubated to complete 72 h of total incubation time. The data show that, whilst there is almost no cell death for C1, C5 drastically reduced cell viability, by more than 60% in HeLa and by almost 100% in MCF7, after the 4 h treatment (Figure 6A). These results reveal that the presence of the CPP not only promoted intracellular delivery but, consequently, also mediated a faster anticancer response, understood as a significantly higher activity than the parent complex C1 at short incubation times. The increased cytotoxicity-to-time ratio observed for C5 would contribute to enhanced therapeutic efficiency, also potentially minimizing the extent and impact of clearance and drug-resistance pathways of tumoral cells owing to the faster action.
Finally, we studied the subcellular localization of C5 in comparison to the CPP moiety alone, in order to elucidate the fate of the conjugated cargo inside the cells. Several biophysical methods have traditionally been used to assess the cellular distribution of CPPs and their cargoes (e.g., fluorescence, radiolabeling, Raman spectroscopy and X-ray scattering), with fluorescence-based approaches being the most extensively used (Bechara and Sagan, 2013). Usually, peptides are covalently coupled to a fluorophore, and monitoring of the fluorescence allows for cellular localization with confocal microscopy. However, besides all the experimental factors that may influence the uptake mechanism and subsequent localization (e.g., cell type, incubation time and temperature), the functionalization of CPPs with dyes has been reported to alter their inherent cellular distribution. This is essentially because fluorophores are generally hydrophobic and can change the solubility, flexibility and conformation of the final CPP-functionalized complexes (Szeto et al., 2005; Puckett and Barton, 2009; Mishra et al., 2011; Hirose et al., 2012). Thus, in order to avoid interference or misleading conclusions, we used immunofluorescence with an anti-R9 antibody (specifically raised against an R9 peptide antigen; acetyl-R9) to provide an initial proof-of-concept of the intracellular distribution of complex C5 in HeLa cells (Figure 6B).
Analysis by confocal microscopy after 4 h of incubation revealed a diffuse distribution throughout the cytosol, with only a few punctate accumulations in the nuclei. These results demonstrate that C5 was able to accumulate inside cells after 4 h, in agreement with what was observed in the copper uptake experiments (Figure 4C). In contrast, incubation with the Arg-rich CPP alone (R9 peptide, Figure 6B), as a control, showed localization in both the cytoplasm and the nuclei, with preferential accumulation in the nuclei, where it most likely remains retained, partly owing to interaction with the negatively charged phosphate backbone of the DNA. In order to elucidate whether the linker modulates subcellular localization, we then also carried out immunofluorescence assays with C4. As observed from the confocal microscopy images (Supplementary Figure S12), the intracellular distribution of C4 was analogous to that of C5 (i.e., mostly diffusely localized in the cytosol), overall suggesting that the linker does not have a clear impact on intracellular distribution either. Promisingly, both the C4 and C5 results show no colocalization of the CPP-conjugates in the lysosomes after 4 h of incubation which, together with the diffuse distribution pattern in the cytosol, could indicate potential endo-/lysosomal bypass and/or escape.
Discussion
Metals and their corresponding complexes have already shown remarkable contributions in oncology, both in diagnosis (e.g., gadolinium or technetium) and therapy (e.g., Pt) (Mjos and Orvig, 2014; Anthony et al., 2020). Despite the significant side-effects encountered, the three worldwide-approved Pt-drugs (cisplatin, carboplatin and oxaliplatin) are still administered to at least 20% of cancer patients nowadays, and they represent first-line treatments for testicular, ovarian, bladder and colorectal cancer, among others, thus underlining the (still not fully unlocked) potential of metal-based anticancer therapy (Casini et al., 2019; Rottenberg et al., 2021; Peña et al., 2022). Recent years have evidenced that complexes based on physiological (catalytic) metals that act via different mechanisms of action, like ROS generation, can offer promising therapeutic responses (Liu et al., 2019; Lovejoy et al., 2011; Luo et al., 2014; Shen et al., 2020; Trachootham et al., 2009; Zhang and Sadler, 2017). Additionally, several studies emphasize the relevance of ROS-based cancer therapy as one of the strategies to avoid and/or bypass Pt-induced resistance while increasing therapeutic selectivity towards cancer cells (Cui et al., 2018; Wang et al., 2021). In this context, Cu(II) complexes have exhibited potential as anticancer agents (Santini et al., 2014). They can combine both features: they encompass a (more) biocompatible metal center and the possibility of triggering ROS production, as we and others have recently reported (Zubair et al., 2013; Ng et al., 2014; Sîrbu et al., 2017; Peña et al., 2019, 2021). However, certain physicochemical properties can limit the anticancer activity and hamper further (pre)clinical evaluation of newly developed (metal) anticancer agents. One crucial aspect relies on the lack of adequate solubility in biological fluids (Savjani et al., 2012) (about 90% of preclinical drug candidates present low water solubility (Kalepu and Nekkanti, 2015)), as was, for instance, observed for our recently reported ROS-producing cytotoxic Cu(II) complex C1 (Peña et al., 2021). A second key parameter in (metallo)drug discovery is the capacity to cross (cellular) membranes (which is related to properties such as lipophilicity) (Vargason et al., 2021).
Over the years, medicinal chemistry has explored and established multiple strategies to overcome such limitations and efficiently deliver chemotherapeutics, many of them involving structural modifications such as the addition of pH-sensitive groups that enhance aqueous solubility or conjugation to CPPs, among others. CPPs are typically made up of 5-30 amino acids, and they can be utilized as molecular transporters to facilitate the passage of therapeutic drugs across physiological barriers (Copolovici et al., 2014; Guidotti et al., 2017). Up to now, (cationic Arg-rich) CPPs have been widely used in many anticancer treatment strategies, successfully addressing both solubility and intracellular delivery issues of a variety of anticancer therapeutic molecules (Zhou et al., 2022).
Our goal in this work was to improve the anticancer activity of the promising ROS-producing Cu(II) complex C1 by employing solubility- and/or uptake-targeting ligand modifications such as CPP-conjugation. However, there is still a lack of rationalization and understanding of how structural features and chemical modifications can influence the activity of the final compound, especially in metal-containing systems. In contrast to purely organic compounds, the (re)activity of metal-based structures is not only dependent on the metal center (and its oxidation state) or on the ligand scaffold, but also (and arguably even more) on the metal-ligand interaction (Anthony et al., 2020; Peña et al., 2022). Consequently, systematically and comparatively exploring several such relevant and widely used derivatizations, especially CPP-conjugation, enabled us to simultaneously gain deeper insight into structure-behavior relationships of Cu-based anticancer agents from different aspects, mostly encompassing physicochemical properties such as solubility, (in vitro) ROS-generation capability, cytotoxicity, intracellular metal delivery and subcellular distribution.
For that purpose, ligand H2L1 was functionalized with a sulfonate group (H2L2) and an Arg residue (H2L3), with the main goal of increasing solubility, and with two variants of a nona-Arg CPP (Guidotti et al., 2017) (without linker, H2L4, and with linker, H2L5), targeting both solubility and intracellular delivery. Our data confirmed that the ligand scaffold was successfully tailored with the different functionalities, with the two CPP variants (for H2L4 and H2L5) linked through a chemical procedure analogous to that carried out for the single Arg residue (H2L3). Notably, conjugating Arg and Arg-rich CPPs to the ligand scaffold via the same chemical protocols underlines the robustness and versatility of this derivatization approach, which can then be analogously extended to future tailoring of this and similar (salphen-based) ligands and metal complexes with other peptides or cancer-targeting moieties containing amine functional groups. All the resulting Cu(II) complexes (C2-C5) exhibited improved water solubility.
Given that the cytotoxic activity of the Cu(II) complex C1 is strongly linked to ROS production (Peña et al., 2021), we specifically designed the functionalization strategies to be located at peripheral positions of the ligand H2L1, distant from the Cu(II)-coordinating atoms, to minimize alterations of the metallic core and preserve its ROS-generation capability. Cytotoxicity assays in HeLa and MCF7 cancer cells highlighted that, while the functionalization with sulfonate (C2) or Arg (C3) groups (i.e., modifications targeting only solubility issues) did not represent any improvement in the final in vitro anticancer activity, the conjugation to CPPs (i.e., complexes C4 and C5) promoted higher, faster and, therefore, enhanced ROS-mediated anticancer activity with respect to C1, overall in line with the increase in intracellular copper levels. The differences observed in copper uptake can thus be directly attributed to the presence of the CPPs, which actively mediate the cellular internalization of C4 and C5 and, correspondingly, impact their cytotoxicity. Importantly, we confirmed that the presence of only one Arg residue (C3) is not sufficient to enhance cellular uptake despite improving aqueous solubility and, by extension, bioavailability. In some cases, it even represented a drawback regarding intracellular copper delivery and cytotoxicity (e.g., in HeLa cancer cells). These results highlight that derivatizations targeting solubility alone do not correlate with enhanced anticancer activity, and that they might indeed negatively impact other key drug properties such as cell-membrane-penetrating capacity. All in all, conjugation with (Arg-rich) CPPs can target and maximize both properties at once, thus underlining the importance of an adequate choice of chemical modifications in a more rational and holistic manner. Further systematic evaluation of these (and similar CPP-conjugated) metal complexes regarding biocompatibility and therapeutic performance in healthy primary cells and in vivo models would provide further insights into the added value of CPP-conjugation strategies in the development of metal-based anticancer agents.
Finally, we set out to explore the subcellular distribution of the CPP-conjugates as compared to the free peptide. For both C4 (without linker) and C5 (with linker), the ubiquitous intracellular distribution in the cytosol, together with the absence of specific lysosomal co-localization, might suggest non-endosomal uptake or rapid endosomal entrapment followed by endosomal release (Guterstam et al., 2009; Liu et al., 2018). One of the commonly reported intracellular delivery pathways of cell-penetrating peptides relies on endocytosis-like mechanisms. This generally results in the entrapment of the peptides and their cargo (i.e., the Cu(II) complexes) inside the endosomes and, subsequently, lysosomal compartments, whose low pH (4-5) and high enzymatic activity can degrade the compound and, hence, impair its activity (Lecher et al., 2017; Pei and Buyanova, 2019; Kondow-McConaghy et al., 2020). Nonetheless, in other cases (e.g., for specific sequences of Arg-rich CPPs, certain cargoes, cell types and at given concentrations), other mechanisms involving (passive) direct translocation or fusion pore formation have also been proposed (Brock, 2014; Allolio et al., 2018). Although it is still complex to univocally elucidate the underlying entry pathways for CPP-conjugates, the comparison of the distribution profiles of the R9 peptide alone and of both C4 and C5 corroborates that the attachment of the CPP to the Cu(II) complex has altered the intracellular biodistribution pattern. Additionally, the overall data suggest that the presence of the linker has no apparent effect on the intracellular distribution either, in line with the similar cytotoxicity and intracellular copper levels observed for both C4 and C5. It is undeniable that further elucidation of the molecular details of the entry and endosomal escape pathways, especially when metal-based compounds are involved, is required to strengthen (metallodrug) structure-function relationships (Brock, 2014).
In conclusion, our data suggest that intracellular copper delivery plays an important role in governing the final cytotoxicity of redox-active Cu(II) complexes, while tuning solubility alone by adding (positively/negatively) charged groups does not translate into increased cytotoxic activity, and may even hinder intracellular delivery and the crossing of (cellular) membranes. Beyond the therapeutic potential shown by the CPP-conjugated complexes (C4 and C5), we expect that the systematic evaluation and understanding of the role of the different tailored modifications will contribute to (1) enhancing the value of (Arg-rich) CPP-conjugation in (metallo)drug discovery, and (2) optimizing the design of these and similar (redox-active) metal complexes for enhanced and faster intracellular delivery and, consequently, improved anticancer activity.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
Author contributions
QP, PB, MC, OP, JL, and OI conceived the work. QP, SR-C, and AS performed the experiments. QP, SR-C, JL, and OI wrote the first draft of the manuscript. AS, PB, MC, and OP provided comments and corrections to the manuscript draft. All the authors have read and approved the final version. | 10,217.2 | 2022-11-15T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Nature-inspired electrocatalysts and devices for energy conversion †
The main obstacles toward further commercialization of electrochemical devices are the development of highly efficient, cost-effective and robust electrocatalysts, and the suitable integration of those catalysts within devices that optimally translate catalytic performance at the nanoscale to practically relevant length and time scales. Over the last decades, advancements in manufacturing technology, computational tools, and synthesis techniques have led to a range of sophisticated electrocatalysts, mostly based on expensive platinum group metals. To further improve their design, and to reduce overall cost, inspiration can be derived from nature on multiple levels, considering nature's efficient, hierarchical structures that are intrinsically scaling, as well as biological catalysts that catalyze the same reactions as those in electrochemical devices. In this review, we introduce the concept of nature-inspired chemical engineering (NICE), contrasting it with the narrow sense in which biomimetics is often applied, namely copying isolated features of biological organisms irrespective of the different context. NICE, in contrast, provides a systematic design methodology to solve engineering problems, based on the fundamental understanding of mechanisms that underpin desired properties, while also adapting them to the context of engineering applications. The scope of the NICE approach is demonstrated via this comparative state-of-the-art review, providing examples of bio-inspired electrocatalysts for key energy conversion reactions and nature-inspired electrochemical devices.
Panagiotis Trogadas has been Assistant Research Professor in the Department of Chemical Engineering at UCL since 2017, working with the Centre for Nature-Inspired Engineering. Upon completion of his diploma in Chemical Engineering at the National Technical University of Athens and his doctorate in Chemical Engineering at IIT, Chicago, he undertook postdoctoral posts in Chemical Engineering at the Georgia Institute of Technology and TU Berlin, where he worked on projects related to hydrogen-powered vehicles. His research interests lie in the exploration of nature-inspired electrochemical components, devices and systems. He has won several research grants and delivered invited lectures and keynotes.
Marc-Olivier Coppens
Marc-Olivier Coppens has been Ramsay Memorial Professor and Head of the Department of Chemical Engineering at UCL since 2012, after professorships at Rensselaer and TU Delft. He founded and directs the UCL Centre for Nature Inspired Engineering, which was granted EPSRC "Frontier Engineering" (2013) and "Progression" (2019) Awards. He is most recognised for pioneering nature-inspired chemical engineering (NICE): learning from fundamental mechanisms underpinning desirable traits in nature to develop innovative solutions to engineering challenges. He is a Fellow of AIChE and IChemE, Corresponding Member of the Saxon Academy of Sciences (Germany), Qiushi Professor at Zhejiang University, and has delivered 450 named lectures, plenaries and keynotes.
Introduction
The genesis of electrochemistry is tied to the discovery that electric charge makes the muscles of a frog's leg move, 1 a phenomenon termed "animal electricity". The "animal electric fluid" was an early example of a conducting electrolyte, showing that the elemental composition of metallic electrodes in contact with the electrolyte is crucial in determining the electrochemical response. 2 These concepts of metallic electrode and electrolyte laid the foundations for the field of electrochemistry.
The growth of this field was slow, as it took at least 200 years to explore the relationship between electricity and chemistry. [3][4][5][6][7] Eventually, in the 20th Century, relations between interfacial properties and rates of electrochemical reactions were established. 8 The term electrocatalysis was introduced in the 1930's, 9 and has been used since to describe the dependence between the nature of electrode materials and electrochemical charge transfer reactions, as the kinetics of the latter vary significantly from one electrode to another. It is a subcategory of heterogeneous catalysis, aiming to increase the rate of electrochemical reactions occurring at the surface of electrodes. It involves the study of electrode kinetics and the determination of the current at an applied electrode potential. [10][11][12] Electrocatalysis is closely related to electrochemical energy conversion, material design and synthesis, and lies at the heart of several technologies related to environmental protection and sustainable development. 13,14 During the past decades, research on the design and synthesis of electrocatalysts for energy conversion has blossomed. Materials employed as electrocatalysts have a dual role: lower the energy barrier for electrochemical reactions, and simultaneously promote the electron charge transfer on their surface. 14-21 A plethora of highly efficient noble or non-noble electrocatalysts for electrochemical devices has been developed, including platinum group metals (PGMs) [22][23][24][25][26][27][28] or alloys, core-shell PGM alloys, 28,[50][51][52][53][54][55][56] shape controlled PGM nanocrystals, 10,35,[55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70] PGM nanoframes, [71][72][73][74][75][76][77][78][79][80] and non-precious metal electrocatalysts (NPMCs). [81][82][83][84][85][86][87][88][89][90][91][92][93][94][95][96][97][98][99][100] The activity and stability issues of PGMs and non-PGMs as well as the high cost of PGM electrocatalysts directed research to the exploration of more sophisticated designs to meet the targets of automotive and other industrial applications. These catalysts are usually supported on carbonaceous materials; as this review is not focused on support materials, readers are referred to other published reviews on this topic. 15,22,[101][102][103][104] Inspiration for the design of electrocatalysts and devices can be derived from nature, which is full of hierarchically scaling structures. This critical review demonstrates the scope of a systematic, nature-inspired chemical engineering approach (NICE) across all scales, and emphasizes the difference between bio-imitation and bio-inspiration in design. The bio-inspired approach does not aim at replicating the exact details of an enzyme's active site or the veins of a leaf by copying seemingly useful features; instead, it thoroughly investigates the structural and functional characteristics of biological organisms to generate innovative and better performing electrocatalysts and devices for the same applications. Examples of bio-inspired electrocatalysts for key energy conversion reactions, as well as nature-inspired electrochemical devices are provided in the following sections.
Nature-inspired, integral design of electrocatalysts and devices
With advances in computational tools and material synthesis techniques, the creation of electrocatalysts has become a progressively more sophisticated craft, achieving optimal sizes of nanoparticles, high surface area, and accurate control of crystal facets and the specific position of individual catalytic atoms within the crystal lattice to increase the number of active sites. 105 However, all electrocatalyst designs discussed hitherto focus on alterations at the nano-scale only. To further improve on the properties and use of these electrocatalysts in devices, a new methodology for hierarchical catalyst design and implementation, which considers all scales, is pivotal. Not only at the nanoscale, but also across larger scales, inspiration can be drawn from nature, which provides us with numerous examples of intrinsically scaling, hierarchical structures and includes bio-catalysts for the same electrochemical reactions as those employed in electrochemical devices. Thorough examination of the structure and dynamics of natural systems can reveal key mechanisms that underpin desired properties, such as efficiency, scalability and robustness. For example, the branching structure of trees, the vascular network and mammalian lungs is not arbitrary, but consists of similarly repeated divisions across many length scales; the specific geometric properties of these natural transport networks result in facile scaling, as well as in extraordinary overall thermodynamic efficiency, through the minimization of entropy production. 101,[106][107][108][109][110][111] At meso-to macroscales, hierarchical transport networks in nature obey fractal scaling relationships, some of which relate to Murray's law, [112][113][114][115] or generalizations thereof, discussed in Section 3.1. Careful observation and characterization of geometrical and dynamic patterns in nature could, therefore, help us to discover commonalities. This analysis should be carried out beyond superficial appearances, by seeking underlying physico-chemical, mechanistic principles, while being conscious of how particular features might depend on the constraints of the natural environment -constraints that could differ in several ways from the context of an engineering application. This way to study and learn from nature is the basis for the nature-inspired chemical engineering (NICE) approach. 106 One example illustrating a basis for nature-inspired design of chemical reactors is a typical tree (Fig. 1). The quest for process intensification is an important one in chemical reactor engineering, as multiphase catalytic reactors in particular are plagued by issues related to scale-up, efficiency and robustness. Addressing such problems requires integration of selective, stable catalysis and transport phenomena across scales, which trees resolve through their design. At the macro-scale, tree roots provide support, and extract water and nutrients from the soil, which are transported via its fractal network throughout its volume. The crown of the tree consists of ramified branches and twigs with increasingly smaller diameters following fractal, self-similar scaling. Twigs bear leaves employing a venation pattern at the meso-scale for efficient chemical transport, while, at the nano-scale, leaves contain molecular complexes to capture sunlight and convert CO 2 into sugars and oxygen via photosynthesis, providing the essential nutrients for the growth of the tree. All scales, and the design at and between scales, matter.
The NICE approach, which we advocate and widely employ at the Centre for Nature Inspired Engineering (CNIE) at UCL in areas ranging from cancer immunotherapy to water purification, bioseparations, and the built environment, allows, in the context of this review's subject, for the design of bio-inspired electrocatalysts and electrochemical devices based on two fundamental mechanisms that are particularly effective in nature, namely, hierarchical transport networks to bridge length scales, and force balancing at individual scales. It should be noted that both principles are relevant to a much broader range of applications in heterogeneous catalysis and reaction engineering.
In terms of hierarchical transport networks, we capitalize on the unique structural characteristics of the lung to design and engineer flow fields for polymer electrolyte fuel cells (PEFCs). The NICE approach to designing PEFCs will be discussed in more detail in Section 5.3, but Fig. 2 summarizes the methodology. The unique features of the lung ("Nature") derive from its ability to bridge length scales and scale up, irrespective of size, while providing uniform distribution of oxygen to the blood cells and minimizing thermodynamic losses across its structure; this is achieved by transitioning from a fractal tree of increasingly narrowing bronchi to a more uniform channel architecture in the acini, at a bronchiole size where the Péclet number is approximately 1, corresponding to the transition between convection-driven and diffusion-dominated gas transport ("Nature-inspired concept"). A mathematical model could then be developed and used in computer simulations, based on these characteristics of the lung, to calculate the required number of fractal generations in a fuel cell flow field to achieve uniform distribution of reactants (when Péclet ≈ 1) on the catalyst layer ("Nature-inspired design"), after which scalable, robust lung-inspired flow fields could be created by various manufacturing techniques, including a form of metal 3D printing called selective laser sintering ("Nature-inspired design"). Lung-inspired flow-field-based PEFCs made out of stainless steel or printed circuit boards (PCBs) exhibited improved performance over commercial serpentine flow-field-based PEFCs ("Experimental realization").
Fig. 1 A tree as an example from nature to inspire the design of chemical reactors, from its scalable, fractal architecture of transport pathways at the macroscale, via the uniform distribution of channels at the mesoscale of the porous catalyst, to the structure of the active sites at the nanoscale. Image of the tree (macro-scale) and active sites during photosynthesis (nano-scale) reprinted with permission from ref. 116. Copyright 2010 American Chemical Society.
Fig. 2 Systematic, step-by-step employment of the NICE approach for the design and engineering of lung-inspired flow fields for PEFCs.
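To make the Péclet-number criterion described above concrete, the following minimal Python sketch estimates after how many branching generations a fractal distributor reaches Pe ≈ 1; all parameter values (inlet flow rate, inlet channel diameter, diffusivity, branching ratio) are illustrative assumptions and are not taken from the geometry or model of the cited work.

import math

D = 2.0e-5        # m^2/s, O2 diffusivity in air (typical order of magnitude)
Q = 5.0e-6        # m^3/s, total inlet flow rate (assumed)
d = 2.0e-3        # m, inlet channel diameter (assumed)
m = 2             # number of daughter channels per branching
ratio = m ** (-1.0 / 3.0)   # Murray-type diameter scaling between generations

generation = 0
while True:
    u = Q / (math.pi / 4.0 * d ** 2)   # mean velocity in one channel
    Pe = u * d / D                     # channel Peclet number
    print(f"generation {generation}: d = {d * 1e3:.2f} mm, Pe = {Pe:.1f}")
    if Pe <= 1.0:
        break                          # switch here to a uniform, diffusion-dominated layer
    generation += 1
    d *= ratio                         # daughter channel diameter
    Q /= m                             # flow rate carried by each daughter channel

In this illustrative setting, Pe falls by a factor of m^(2/3) per generation; the actual number of generations required in a lung-inspired flow field depends on the chosen channel dimensions and operating conditions.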
With regard to the catalyst itself, which this flow plate feeds, the local environment of the active site is crucial to its activity, selectivity, and stability. The NICE approach explores the nanoconfinement effects induced by local curvature, as well as the impact of chemical and geometrical heterogeneity on transport in nanoporous materials, and utilizes these effects in the design of the electrocatalytic layers. A hierarchically porous structure with a network of broad and narrow pores is required to reduce transport limitations and increase overall rates; nature-inspired catalyst design will be discussed in Sections 4 and 5.
This review discusses a range of examples pertaining to these two fundamental mechanisms. Other ubiquitous fundamental mechanisms employed by nature to underpin attractive properties, such as resilience and adaptability, fall outside the scope of this review and will not be discussed here; however, we see these as holding great promise for future electrochemical systems engineering. These mechanisms include the dynamic self-organization and self-healing properties of natural systems, and the structure of natural networks (e.g. in ecosystems and metabolic networks), control mechanisms and modularity; in the CNIE, these two additional mechanisms are already applied to address a range of other engineering challenges, and the validation of the NICE approach will help to set the stage for applications in electrochemical devices and their operation.
In summary, nature's universal, fundamental mechanisms are an excellent guide to redesign processes and catalytic materials, as they epitomize efficient, robust, and scalable hierarchical structures. However, equally important is the appreciation that almost any optimization problem, in nature and technology, is a constrained multi-objective optimization challenge; the differences in constraints need to be considered when adopting natural mechanisms to solve engineering problems. Here, we will discuss systematically when and how a thorough scientific investigation into natural processes and materials in relation to their function and structure can provide nature-inspired solutions for the design of electrocatalysts and devices.
3. Ways to connect nature to electrocatalyst design: inspiration vs. imitation
Thus far, the majority of research on the development of new electrocatalysts that turns to nature for inspiration is, in fact, based on much more direct bio-imitation, which mimics isolated features of biological or non-biological structures, leading to low activity and durability issues. 101,110,111 We use the term bio-imitation to differentiate this direct approach from the systematic, stepwise, nature-inspired engineering methodology discussed in Section 2. Even though the actual mechanistic features or physical processes that govern the biological system are neglected, there are successful examples of bio-imitating electrocatalysts. 101,110,111 A series of electrocatalysts for hydrogen evolution and oxidation reactions were synthesized, following two strategies: (i) utilization of the active metal center of NiFe or FeFe hydrogenase as a catalyst for electrochemical reactions (bio-imitation); 117-120 and (ii) immobilization of enzymes on a carbon support (bio-integration). 121 Based on the first bio-imitation approach, metallic complexes (such as FeFe or NiFe) containing the same metal sites as the active center of hydrogenases were synthesized and demonstrated activity toward the hydrogen evolution reaction. [117][118][119][120]122 One of the most effective catalysts produced by this approach was a mono-Ni bis-diphosphine complex 123 based on the characteristics of FeFe hydrogenase. Even though it could catalyze the hydrogen oxidation and evolution reactions in organic solvents 123,124 or acidic aqueous solutions (pH = 0-6), 125 its high sensitivity to oxygen hinders its use in fuel cell systems. 126 To circumvent this issue, a bio-integration approach was used with the adsorption of hydrogenase on gold and graphite electrodes. [127][128][129][130] Even though this method contributed to the understanding of the catalytic activity of the enzymes, its major drawback was the weak stability of most hydrogenases to oxygen. 131 A decrease in activity was observed over time as a result of the slow modification of the active site by residual traces of O2 and protein reorientation at the interface. 129 Additionally, the large size of the metalloenzyme (≈6 nm) dictated the development of high-surface-area electrodes (such as carbon nanotubes) with high loading to achieve current densities comparable to those of Pt electrocatalysts. 128 While the above examples introduce interesting catalysts and provide useful insights, it is evident that the difference in context between nature and technology, such as in size, medium, and position of the active metal center, is not accounted for. Doing so could help to make further progress, because a genuinely nature-inspired approach is based on the mechanistic understanding of fundamental mechanisms underlying desired traits, without necessarily copying or directly utilizing the source of inspiration; these mechanisms are incorporated into the design and synthesis of artificial systems encompassing the traits of the natural model. 101,[109][110][111] In the following sections, we provide examples of bio-inspired designs of electrocatalysts for key electrochemical reactions for energy conversion at the nanoscale (Fig. 3) and of electrochemical devices at larger scales, including our NICE approach in the design of lung-inspired flow fields for PEFCs.
Nature-inspired electrocatalysts for the oxygen reduction reaction
The overall oxygen reduction reaction (ORR) on an electrocatalyst is a multi-electron, multi-step reaction involving several reaction intermediates. The distinction between the different mechanisms of the ORR relies on the number of proton-coupled electron transfer steps preceding the breaking of the O-O bond. 132 A four-electron (4e−) route is desirable to directly produce H2O (in acidic medium) and OH− (in alkaline medium) as the final products; depending on the electrocatalytic properties, a two-electron (2e−) sub-route can be included, producing H2O2 (in acidic medium) and HO2− (in alkaline medium) as intermediate species (Table 1). 27,59,[132][133][134][135][136][137][138][139][140] The fundamental understanding of the ORR mechanism is still unclear, due to the complexity of its kinetics; 132 the rate-determining step, the sequence of electron and proton additions, 141 as well as the adsorption of intermediates are still under debate. The rate-determining step is considered to be the first electron transfer from the topmost electrode surface to the adsorbed molecular oxygen (O2,ads), 135,[142][143][144][145] while the adsorption of intermediate species, such as oxygen (O*) and hydroxyl (OH*) radicals, is suggested to be the crucial step of the ORR mechanism on metallic catalysts. 136,146,147 For metals that bind oxygen too strongly, the reaction rate is limited by the removal of the adsorbed O* and OH* species. For metal surfaces that bind oxygen too weakly, the reaction rate is limited by the dissociation of O2 or, more likely, the transfer of electrons and protons to adsorbed O2. 135,136 Successful bio-imitation approaches for the design of electrocatalysts for the oxygen reduction reaction have been employed, mimicking the active metal center of enzymes coordinated by carboxyl, amine, thiol, and imidazole groups of the amino acid side chains. 148 Metallophthalocyanines [149][150][151][152][153][154][155][156][157][158] and metalloporphyrins 149,150,[159][160][161][162][163][164][165][166][167][168][169] have been extensively investigated as non-precious metal electrocatalysts for ORR, even though they lack long-term stability in the harsh environment of a fuel cell. 159,170,171 Iron phthalocyanines catalyze the direct reduction of oxygen to water via a four-electron pathway, promoting scission of the O-O bond. 149,150 On the contrary, the ORR activity of metalloporphyrins depends on whether their meso-positions are occupied by aryl, pyridyl, or alkyl substituents. 168,169 For example, cobalt metalloporphyrins adsorbed on graphite electrodes without meso substituents demonstrated two-electron oxygen reduction, whereas the adsorbed cobalt porphyrins with meso substituents directly reduced oxygen to water via a four-electron pathway. 169 This catalytic behavior was attributed to the spontaneous formation of van der Waals dimers facilitating the reduction of oxygen to water via four electrons. 169 The interaction between the cobalt centers of the porphyrin dimer and the atoms of oxygen leads to a transition state where the bridging oxygen molecule is activated, accepting four electrons from the graphite electrode, which causes the scission of the O-O bond. 169 Based on these attractive activity characteristics, metal organic networks employing well-defined metal surfaces decorated with porphyrins or phthalocyanines have been synthesized via supramolecular chemistry. 148,155,166,167,172,173
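For reference, since Table 1 is not reproduced here, the overall routes discussed above are conventionally written as follows (standard textbook form, not a reproduction of the original table):
Acidic medium, 4e− route: O2 + 4H+ + 4e− → 2H2O
Acidic medium, 2e− sub-route: O2 + 2H+ + 2e− → H2O2, followed by H2O2 + 2H+ + 2e− → 2H2O
Alkaline medium, 4e− route: O2 + 2H2O + 4e− → 4OH−
Alkaline medium, 2e− sub-route: O2 + H2O + 2e− → HO2− + OH−, followed by HO2− + H2O + 2e− → 3OH−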
Iron phthalocyanines adsorbed on the surface of Au(111) enhanced its ORR activity in alkaline media (≈0.030 V Tafel slope), allowing the direct reduction of oxygen to water via a four-electron route with cleavage of the O-O bond, in contrast to the oxygen reduction via a two-electron route observed on the bare Au(111) surface. 155 Additionally, cobalt porphyrin chemisorbed on the surface of Au(111) increased its ORR activity in acidic media, 166,167 demonstrating an increase of the reductive current. 166,167 Despite these promising preliminary results, these bio-imitating catalysts are in early development, and challenges concerning long-term stability and activity, as well as the tuning of adsorbate-substrate interactions, have to be resolved. 174 Recently, mononuclear iron porphyrins with distal basic residues (i.e., pyridine, aliphatic and dibenzyl aromatic amines), imitating the proton transfer pathways and hydrogen-bonding groups in cytochrome c oxidase, 149,175-179 demonstrated high selectivity (greater than 90%) and reactivity (rate constant greater than 10^7 M−1 s−1) for ORR at pH = 7 when adsorbed on an edge-plane graphitic electrode. 180 The measured rate constant of oxygen reduction was approximately two orders of magnitude higher than that of the heme/Cu complexes, [181][182][183][184][185][186] suggesting that the distal basic residues stabilize the Fe(III)-OOH intermediate species and enable heterolytic cleavage via protonation of their distal oxygen, promoting a four-electron ORR route. 180 Another promising example of a bio-imitating, porphyrin-based ORR electrocatalyst mimicking the active site of oxygen-activating heme-containing enzymes, such as cytochrome c oxidase, is an axial-imidazole-coordinated iron porphyrin covalently grafted on multi-wall carbon nanotubes (MWCNTs). 187 This electrocatalyst exhibited high activity and stability in acidic and alkaline media; its half-wave potential (E1/2) in both media was ≈38 mV (E1/2 ≈ 0.88 V vs. NHE, 0.1 M HClO4) and ≈47 mV (E1/2 ≈ 0.922 V vs. NHE, 0.1 M KOH) higher than that of commercial Pt/C (E1/2 ≈ 0.842 V and ≈0.875 V vs. NHE in acidic and alkaline media, respectively). Koutecky-Levich plots revealed a four-electron oxygen reduction route, with the production of a minimal amount of hydrogen peroxide (less than 1% H2O2). The stability of this bio-imitating electrocatalyst was evaluated via the US DoE's accelerated test protocol, demonstrating a ≈14 mV loss in E1/2, which was less than half of the decrease exhibited by commercial Pt/C (≈37 mV loss) after 10 000 continuous cycles between 0.6 V and 1 V (vs. NHE) in an oxygen-saturated acidic solution (0.1 M HClO4). 187 The high activity and stability of this electrocatalyst were attributed to its favourable half-wave potential for ORR and, hence, the associated low amount of H2O2 produced, insufficient to deactivate and degrade this electrocatalyst. 187 Furthermore, the ORR activity of metalloporphyrins is affected by their structure and the distance between their active metal centers. 188,189 Hence, crystalline porous materials such as metal organic frameworks (MOFs) or covalent organic frameworks can be used as scaffolds to accurately control the structure of metal-metalloporphyrin frameworks, 190 resulting in efficient ORR electrocatalysts with high surface area and porosity.
However, their major disadvantages are their low electronic conductivity and limited long-term stability, 191 and, thus, further research is needed to effectively utilize these electrocatalysts in fuel cells.
Metallocorroles have also attracted attention as electrocatalysts for the ORR due to their high activity and tunable structure. [191][192][193][194][195][196][197][198][199] Metallocorroles are tetrapyrrolic macrocyclic compounds with one less carbon atom than porphyrins. 191 Their tri-anionic charge stabilizes the metal center and affects the chemistry of the chelated transition metal ions, resulting in low-valent corrole-transition metal complexes, which are much more reactive than their porphyrin analogs. 191 Several transition metal complexes with brominated corrole (β-pyrrole-brominated 5,10,15-tris-pentafluorophenyl-corrole, M(tpfcBr8), where M = Mn, Fe, Co, Ni, and Cu) adsorbed on a high-surface-area activated carbon (BP2000) have been investigated as electrocatalysts for ORR in alkaline media. 194 Iron- and cobalt-based brominated corroles exhibited the highest activity, following a four-electron pathway for the direct reduction of oxygen to water, similar to the ORR activity of commercial Pt/C (20 wt% Pt) tested under the same conditions. 194 In acidic media, the ORR activity increases in the order Co > Fe > Ni > Mn > Cu, with the brominated cobalt-corrole demonstrating the best activity, with an onset potential ≈80 mV lower than commercial Pt/C (20 wt% Pt). 199 In addition to the metal center of the brominated corrole, its ORR activity is suggested to depend on the support, since adsorption of metallocorroles on activated carbon or carbon nanotubes decreases the reaction overpotential. 191,[199][200][201] Further research is needed to fully comprehend this phenomenon.
Despite these promising ORR activity results, the incorporation of these bio-imitating electrocatalysts into PEFCs remains a great challenge. Their activity and stability have been evaluated only at the RRDE level, and not in a membrane electrode assembly under harsh fuel cell conditions. Thus far, these electrocatalysts demonstrate poor stability, and improved catalyst designs would have to be introduced to meet the requirements for practical PEFC applications. 202 As this review is not focused on bio-imitating electrocatalysts, readers are referred to other published reviews for more information on this topic. 148,191,202,203
Since the reduction and evolution reactions occurring in a fuel cell comprise three phases (gas/liquid/solid), electrocatalysts with a hierarchical porous structure are desirable to promote mass transfer and improve reactant utilization. To improve the design of electrocatalysts, the hierarchical pore networks utilized by biological organisms (such as trees, other plants and their leaves, etc.) were used as a source of inspiration. These organisms grow obeying power laws that are generalizations of Murray's law, utilizing precise diameter ratios to connect pores from the macro- to the meso- and micro-scale, which results in the minimization of transport resistance and facile transfer throughout the network. 113,204 According to Murray's law, the cube of the diameter of the parent vessel (d_p) is equal to the sum of the cubes of the diameters of the daughter vessels (d_i, i = 1…n, where n is the number of macro-, meso-, or nano-pores in each particle) at each level of bifurcation: [109][110][111]
d_p^3 = \sum_{i=1}^{n} d_i^3    (1)
Based on eqn (1) and several assumptions presented in detail in the ESI† (Section S1), a series of new eqn (S2)-(S5) (ESI†) correlating the macro-, meso-, and micro-pores of a "Murray-inspired" material were suggested. 205 A layer-by-layer, evaporation-driven self-assembly process utilizing a microporous material (ZnO, in this example) as the primary building block under ambient conditions is employed for the synthesis of these so-called "Murray materials". 205 The synthesis procedure of these materials is a tedious, multi-step process, since various reaction conditions are utilized to produce a combined micro-, meso-, and macro-porous ZnO (Fig. 4). Microporous ZnO is prepared in an inert atmosphere using a mixture of zinc acetylacetonate hydrate and oleylamine. Mesoporous ZnO nanoparticles (≈30 nm in size) are synthesized using the same method with additional calcination at ≈290 °C.
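Before continuing with the assembly procedure, eqn (1) can be illustrated with a short numerical sketch in Python; the parent-pore diameter used below is a hypothetical value chosen for illustration only, not one of the pore sizes of the ZnO Murray materials.

# Illustration of eqn (1): a parent pore of diameter d_p feeding n equal daughter
# pores must satisfy n * d_i^3 = d_p^3, i.e. d_i = d_p / n**(1/3).
d_p = 400.0   # nm, hypothetical parent (macro)pore diameter
for n in (2, 4, 8):
    d_i = d_p / n ** (1.0 / 3.0)
    print(f"n = {n}: d_i = {d_i:.0f} nm, n*d_i^3 = {n * d_i ** 3:.3g} nm^3, d_p^3 = {d_p ** 3:.3g} nm^3")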
To prepare the micro-, meso-, and macro-porous material, micro- and meso-porous ZnO particles are dispersed separately in a volatile solvent (e.g. hexane, ethanol). The first microporous layer is created by drop-casting the microporous ZnO solution onto a silicon wafer and evaporating the volatile solvent. Then, the mesoporous ZnO solution is drop-cast onto the already formed microporous ZnO layers.
The concentration of ZnO in solution directly affects the evaporation rate of the solvent. At low ZnO concentration (≈0.03 mg mL−1), the nanoparticles self-assemble into isolated islands; small macropores gradually appear as the ZnO concentration is increased (≈0.06 and 0.12 mg mL−1) due to the enhanced nucleation and restricted expansion of the holes. A further increase in ZnO concentration (≈0.5 mg mL−1) leads to a porous network with ZnO nanoparticles forming packed structures. 205 These initial experimental results are very promising; however, the general applicability of Murray's law to the synthesis of a wide range of materials for various applications has to be thoroughly investigated. The synthesis method advocated for hierarchical Murray ZnO may not be feasible or appropriate for different materials; hence, alternative synthesis methods should be explored, such as templating, crystallization, the Kirkendall effect, etc. 101 Recently, Murray-like cobalt supported on nitrogen-doped carbon (Co-N-C) electrocatalysts for the oxygen reduction and evolution, and hydrogen evolution reactions in alkaline medium were synthesized using a Prussian blue analogue (PBA) as a precursor. 206 Pyrolysis was used instead of the layer-by-layer drop-casting method (Fig. 5). The 3D cubic porous network of PBA imitated the hierarchical structure obtained by Murray's law, while PBA acted as the carbon, nitrogen, and cobalt source, and polyvinyl pyrrolidone (PVP) as the surfactant. After thermal treatment (pyrolysis at 900 °C in an inert atmosphere), a spherical porous assembly of Co nanoparticles covered by nitrogen-doped graphitic carbon was produced. Acid leaching (2 M HCl solution) was employed to remove the unstable species and optimize the porosity of the material, increasing its electrochemically active surface area (ECSA), as demonstrated by double-layer capacitance measurements (≈8 and ≈7 mF cm−2 for acid- and non-acid-treated Co-N-C, respectively). 206 Physicochemical characterization (X-ray diffraction (XRD), Raman and X-ray photoelectron spectroscopy (XPS), scanning (SEM) and transmission (TEM) electron microscopy) of the material post acid treatment demonstrated that its structure remained intact, while the composition and valence states of Co, N, C, Zn, and O at the surface of the electrocatalyst were similar to those before the acid modification step.
As a result, the acid-treated Murray-type Co-N-C materials exhibited similar ORR activity (diffusion-limited current density of 5.7 mA cm−2) and electron transfer (≈4e−) to commercial Pt/C in alkaline solution (Fig. 6a). In the case of OER and HER (Fig. 6d), acid-treated Murray-type Co-N-C displayed low onset overpotentials of ≈150 and ≈350 mV, corresponding to the hydrogen evolution reaction and oxygen evolution reaction in alkaline solution (1 M KOH), respectively, highlighting its potential to be used in overall water splitting with a total splitting voltage of 1.73 V. 206 Finally, these acid-treated Murray-type electrocatalysts exhibited tolerance to methanol (2 M), in contrast to commercial Pt/C (Fig. 6b), as well as higher stability than the non-acid-treated Murray-type material and commercial Pt/C, with ≈99% retention at 0.5 V (vs. RHE) after a ≈6 h continuous chronoamperometric measurement, compared to ≈93% and ≈84% retention for non-acid-treated Murray-type Co-N-C and commercial Pt/C, respectively (Fig. 6c). 206 Hence, the ORR activity of these electrocatalysts is affected by the N species, whereas the OER and HER activity is determined by the Co species, providing more sites for the adsorption of H* and OH* radicals. 59,207,208 Despite the low water splitting performance of the Murray-type Co-N-C electrocatalyst compared to literature values, 209 these initial results demonstrate the potential of Murray-type electrocatalysts with an optimal porous structure to be utilized in triphase electrochemical reactions. Moreover, inspiration for the design of new electrocatalysts for ORR can be derived from the active metal center of cytochrome c oxidase and laccase, which comprises iron and copper ion complexes. 210,211 However, directly mimicking the Cu2+ or Fe3+ complexes is not a fruitful strategy, as it leads to low activity, due to the absence of mediators for the transfer of electrons and the steric variation of the coordination structures after these complexes are attached onto the electrode. 212 As an example of effective tuning of the electron density of such an active site (Cu2+ based), copper nanocomposites (CPG) were synthesized via the pyrolysis of a mixture of graphene oxide (GO) and Cu(phen)2 (Cu2+ 1,10-phenanthroline). 212 The electron density of the Cu2+ sites was tuned by the electron donation effect from the Cu0 of copper nanoparticles and the nitrogen ligand incorporated into graphene. The electron transfer in CPG was increased via the electronic connection of Cu2+ and nitrogen, as well as Cu2+ and Cu0 in graphene, resulting in enhanced ORR activity. 212 The pyrolysis temperature also affected the activity of CPG, with 900 °C being the optimum in this study. 212 The exhibited ORR onset potentials of ≈0.85 and ≈0.98 V (vs. RHE) and Tafel slopes of 71 and 49 mV dec−1 in 0.5 M H2SO4 and 1 M KOH, respectively, were similar to commercial Pt/C, demonstrating the high activity of this catalyst. 212
Nature-inspired electrocatalysts for the oxygen evolution reaction
The oxygen evolution reaction (OER) is the reverse of the ORR: water is oxidized to oxygen via a 4e− pathway (Table 2). [213][214][215][216][217] The mechanism of the OER is sensitive to the structure of the electrode surface.
The symbol "*" represents a surface with one oxygen vacancy site in the topmost layer, while the symbols "OH2*", "OH*", "O*", and "OOH*" represent the surface with the corresponding chemisorbed species residing in the oxygen vacancy site. 59 The complete OER mechanism in acidic medium consists of four oxidation steps, in each of which a proton is released into the electrolyte. Water is first adsorbed onto the oxygen vacancy site, forming "OH2*" species. "OH2*" then undergoes two subsequent oxidation reactions to form "O*", which reacts with another water molecule to form an "OOH*" intermediate. In the last step, oxygen is released from "OOH*". 59 The oxidation of water to oxygen in nature is catalyzed by photosystem II, located within the thylakoid membranes of plants, algae, and cyanobacteria. [218][219][220] Its active site responsible for water oxidation is the oxygen-evolving complex (OEC), which consists of four manganese ions and a calcium ion (Mn4CaO5) surrounded by a protein. [219][220][221][222] This complex has a sophisticated structure, in which a Mn3CaO4 heterocubane is tethered to a fourth manganese ion via a µ-oxo bridge and a corner oxo-attachment. [223][224][225] Even though the oxygen-evolving complex is still not fully understood at the mechanistic level, metal-based clusters with cubane cores have already been synthesized as efficient catalysts for the oxygen evolution reaction, imitating the Mn4CaO5 active site of photosystem II. 221,224,[226][227][228] To improve the design of these bio-imitating electrocatalysts, fundamental understanding of the OER mechanism on oxygenic multi-metallic clusters is required, along with a significant improvement of their stability under the harsh oxidizing conditions of the OER. 229,231,232 The approach that nature utilizes to circumvent the oxidative degradation of photosystem II, by continuously replacing and repairing the damaged protein subunit 220,229 in its structure, cannot be applied in these artificial electrocatalysts.
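Written out explicitly (one common way of expressing the steps described above; the table in the original work is not reproduced here), the acidic-medium sequence reads:
* + H2O → OH2* (adsorption of water at the vacancy site)
OH2* → OH* + H+ + e−
OH* → O* + H+ + e−
O* + H2O → OOH* + H+ + e−
OOH* → * + O2 + H+ + e−
which sums to the overall reaction 2H2O → O2 + 4H+ + 4e−.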
The state-of-the-art electrocatalysts for the OER are ruthenium oxide (RuO2) and iridium oxide (IrO2), but their high cost prohibits their application at large scale. 230 Hence, extensive research is conducted on the synthesis and design of efficient and cost-effective electrocatalysts based on transition metals and layered double hydroxide materials. 213,[231][232][233][234][235][236][237][238][239][240][241][242][243][244] The design of new electrocatalysts for the OER is focused on three scale levels: (i) at the atomic scale, the alteration of the oxidation state, 245 coordination, 246,247 electron density, 245 and composition of metal composites 245,248 leads to an improvement of their electronic structure and an increase in OER activity; (ii) at the nano-scale, different material combinations (such as metal oxides), 249,250 chalcogenides, 249 borides, 251 nitrides, 252 and phosphides 250 supported on nanostructures (e.g. nanowires, 253,254 nanosheets, 251,254 nanotubes 255,256 ) have increased activity toward the OER due to their high surface area and number of active sites; and (iii) at the meso-scale, the creation of a porous architecture of the support can enhance mass transport of the electrolyte, improving the activity and conductivity of the electrocatalyst. 257 The structure of a leaf has served as a source of inspiration for the design of a 2D/1D cobalt oxide (CoOx) electrocatalyst (Fig. 7). 258 Each leaf has a 2D morphology with optimized surface area, favoring light absorption and surface reactions, while the presence of 1D, hollow tubular structures under the leaf facilitates the transport of nutrients and water to each leaf. 258 The leaf-inspired CoOx electrocatalyst consists of CoOx nanosheets (100-200 nm average diameter and 20-50 µm length) in a nanotube (200 nm average diameter, 20 µm length), which was synthesized via in situ etching of a Cu2O nanowire. 258 Hence, the structure of the CoOx electrocatalyst is improved across different scales; at the atomic scale, the presence of Co2+ in octahedral symmetry increases electron transfer, while the high surface area (≈371 m2 g−1) of the CoOx nanosheets and the 3D porous framework of the CoOx nanotubes provide a high number of active sites and increased ion transport, respectively. 258 The leaf-inspired CoOx electrocatalyst demonstrated an onset potential of ≈1.46 V (vs. RHE), similar to commercial IrO2, a current density of ≈51 mA cm−2 at 1.65 V (vs. RHE) and a Tafel slope of 75 mV dec−1. However, its stability is a concern, since a ≈35% loss in its current density (1.5 mA cm−2 initial current density at 1.5 V vs. RHE) was observed after 2 h of operation in an electrolysis cell. 258
Nature-inspired electrocatalysts for hydrogen oxidation and evolution reactions
The overall reactions for the hydrogen oxidation (HOR) and evolution (HER) reactions involve either protons in acidic media or hydroxide ions in alkaline media (Table 3). 59,259,260 The first step of the reaction mechanism always involves the adsorption of an H intermediate on the electrode surface via a proton and electron transfer from the electrolyte and the surface of the electrode, respectively. In acidic media, H+ is the proton source for the initial Volmer step, whereas, in alkaline media, H2O constitutes the proton source, producing OH− after electron transfer. Recently, it was suggested that the Volmer step of the HOR/HER is the rate determining step for noble metals. 260 There are two different reaction routes for the final step, distinguished by the Tafel slope values obtained from polarization curves: 259 the Heyrovsky reaction, in which the adsorbed hydrogen (H*) combines with an electron transferred from the electrode surface and a proton from the electrolyte to form hydrogen; or the Tafel reaction, in which two adsorbed hydrogen atoms combine to form a hydrogen molecule (Table 3).
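For reference, the Volmer, Heyrovsky, and Tafel steps mentioned above can be written out in standard notation (a generic textbook formulation in acidic and alkaline media, not copied from Table 3):

```latex
\begin{align*}
\text{Volmer (acidic):} \quad & \mathrm{H^{+}} + e^{-} + * \rightleftharpoons \mathrm{H}^{*} \\
\text{Volmer (alkaline):} \quad & \mathrm{H_2O} + e^{-} + * \rightleftharpoons \mathrm{H}^{*} + \mathrm{OH^{-}} \\
\text{Heyrovsky (acidic):} \quad & \mathrm{H}^{*} + \mathrm{H^{+}} + e^{-} \rightleftharpoons \mathrm{H_2} + * \\
\text{Tafel:} \quad & 2\,\mathrm{H}^{*} \rightleftharpoons \mathrm{H_2} + 2\,*
\end{align*}
```

Here * denotes a free surface site; the HER proceeds left to right and the HOR right to left.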
Pt-based electrocatalysts are the state-of-the-art choice for the hydrogen oxidation and evolution reactions in acidic media, as earth-abundant electrocatalysts can neither survive under these highly acidic conditions nor exhibit activity similar to their Pt-based counterparts. 261,262 Even though the HER/HOR activity of Pt-based electrocatalysts is decreased in alkaline media, their ability to catalyze these reactions at moderate overpotentials makes them the most efficient choice. 261 The utilization of alkaline media for the HER and HOR enables the use of non-precious metal catalysts, such as heteroatom-doped carbon, transition metal oxides, sulfides, phosphides and their alloys, [263][264][265][266][267][268] and molecular complexes based on porphyrin and corrole. 150,264,269,270 Less attention has been focused on non-precious metal electrocatalysts for the HOR. [271][272][273][274][275] However, these non-Pt electrocatalysts suffer from poor kinetics and stability; modification of their electronic structure, crystal facets, 265 composition, 273,276 and surface area is required to optimize their hydrogen adsorption free energy, a key descriptor of their HOR/HER activity, 14,[277][278][279] and improve their catalytic performance and durability.
Inspiration for the design of new electrocatalysts for the HER and HOR can be derived from nature, and from hydrogenases specifically. Hydrogenases are metalloenzymes widely used in nature for hydrogen evolution and oxidation in living systems, since they can achieve low overpotentials and high turnover frequencies employing Fe and/or Ni in their active site. 126,280 Hydrogenases comprise three coordination spheres (Fig. 8). 281,282 The inner coordination sphere consists of the metal center and the atoms bonded directly to it. The active site of FeFe hydrogenases has two iron atoms with CN− and CO ligands attached to each of these two atoms (Fig. 9), while the active site of NiFe hydrogenases has nickel ions coordinated by cysteinate ligands, which are in contact with the dicyanocarbonyl iron center (Fig. 10). [283][284][285][286][287][288][289] The most important ligand in this inner coordination sphere is azadithiolate, which plays an important role in hydrogen cleavage and proton transfer between the metal center and the surface of the enzyme. [283][284][285][286][287][288][289] The second coordination sphere contains functional groups incorporated into the ligand structure that interact directly with substrates bound to the metal during a reaction, but do not interact with the metal center. In hydrogenases, pendant bases are positioned in this coordination sphere to facilitate proton transfer between the metal center and the acid or base in the solution. 281 The outer coordination sphere encompasses the remainder of the ligand structure and the solvent surrounding the catalyst. 281 The structure of hydrogenases inspired the design of cost-efficient molecular catalysts oxidizing or producing hydrogen. The most notable are the Dubois catalysts, synthetic nickel bis-diphosphine complexes using pendant amines as a Lewis base. 124,[290][291][292] The pendant amine is positioned near the metal center, functioning as a proton relay promoting the creation or scission of the H-H bond. 124,[290][291][292] As a result, these nickel complexes exhibit high turnover frequencies for H2 production of ~33,000 and 100,000 s−1 in dry acetonitrile and water, respectively. 290 However, these catalysts suffer from severe stability issues when immobilized onto an electrode; 293-295 a ~25% loss in catalytic current within 6 h was observed during chronoamperometric measurements in acidic media (0.1 M HClO4), while their activity toward hydrogen oxidation was lost after 10 min upon the addition of ~2% O2 to the hydrogen gas feed. 294 Incorporating amino acids or peptides directly into the P2N2 ring of hydrogen oxidation catalysts [Ni(PCy2NAminoAcid2)2]n+ (where Cy is cyclohexyl; the amino acid is either glycine with n = 3, or arginine with n = 7) results in efficient Ni-based complexes for hydrogen oxidation and production. 296 An arginine-based Ni complex exhibited a turnover frequency of ~210 s−1 at acidic pH (0), which decreased at higher pH values, reaching ~40 s−1 at pH = 7. 296 This arginine-based Ni complex was immobilized onto single-wall carbon nanotubes (SWCNTs) covalently modified with naphthoic acid groups and used as the anode (0.3 mg cm−2 loading) of an H2/O2 PEFC. Pt/C was used in the cathode (1 mg cm−2 loading) and Nafion® as the polymer electrolyte in the MEA. The PEFC with the arginine-based Ni complex exhibited an OCV of ~0.9 V and ~14 mW cm−2 power density at ~0.5 V (Fig.
11), in comparison to an OCV of ~1 V and 94 mW cm−2 power density for a PEFC employing Pt/C as both the anode and cathode catalyst. These initial in situ PEFC performance results demonstrated the ability of an arginine-based Ni complex to operate under highly acidic conditions, delivering only five times less current or power density at very low loading compared to Pt/C. 297 These preliminary PEFC performance results are superior to the ones reported earlier utilizing arginine-based Ni bis-diphosphine complexes as the anode catalyst. 127,298,299 In the case of a PEFC with cobalt (Co) supported on nitrogen-doped carbon as the cathode and arginine-based Ni bis-diphosphine complexes supported on multi-wall carbon nanotubes as the anode, an ~0.74 V OCV and 23 mW cm−2 maximum power density were reported, whereas an OCV of ~0.85 V and 70 mW cm−2 maximum power density were demonstrated when Pt/C was used as the cathode (operation at 60 °C, ~0.45-0.65 mg cm−2 loading on each side). 300 The utilization of Ni bis-diphosphine complexes as hydrogen oxidation and evolution catalysts is not novel; it was proposed a decade earlier by incorporating these complexes into multi-wall carbon nanotubes via grafting. 127 However, the preliminary MEA testing results for the hydrogen oxidation and evolution reactions were disappointing, since a two orders of magnitude smaller current density was reported for Ni bis-diphosphine complex based MEAs compared to Pt/C MEAs. This discrepancy was attributed to the different catalyst loadings used; Ni bis-diphosphine complex based MEAs had a ~0.06 mg cm−2 loading compared to ~0.5 mg cm−2 loading for Pt/C MEAs. 127 Recently, a revisited design was reported, where ligands were covalently immobilized via amide coupling on the surface of MWCNTs and, then, the nickel metal centers were introduced with [Ni(CH3CN)6]2+ or [Ni(H2O)6]2+. 298 The activity of these immobilized Ni bis-diphosphine complexes on MWCNTs was compared to that of commercial Pt/C (46 wt%) MEAs with 0.05 mg cm−2 loading in a rotating disk electrode (RDE) setup using 0.5 M H2SO4 solution (Fig. 12). The total amount of Pt present in the electrode was ~2.5 × 10−7 mol cm−2, whereas the total amount of Ni present in the electrode was 10-fold lower (~2.5 × 10−8 mol cm−2). At room temperature (~25 °C), the activity of the bio-inspired electrocatalyst was approximately one order of magnitude less than the activity of commercial Pt/C; at 85 °C, though, the bio-inspired catalyst was only ~30% less active than Pt/C for hydrogen oxidation, while it outperformed the commercial catalyst by ~20% for hydrogen production. 298 A chemically inert polymer, P(GMA-BA-PEGMA) (poly(glycidyl methacrylate-co-butyl acrylate-co-poly(ethylene glycol) methacrylate)), was used to immobilize Dubois' Ni bis-diphosphine complex with cyclohexyl and glycine groups, [Ni(PCy2NGly2)2]2+, on the surface of the electrode and prevent electrical contact between the metal complexes within the polymer and the electrode. 280 The thick polymer film creates two distinct regions, where different reactions occur. In the region close to the surface of the electrode, hydrogen is oxidized via the Ni complex, while, in the outer region, which is electrically disconnected from the electrode, preventing re-oxidation of the catalyst, a protonated Ni0 complex reduces oxygen to water by using the electrons produced from the hydrogen oxidation reaction in the inner layer (Fig. 13).
The thickness of the polymer film is crucial for the effective protection of the metal center from oxygen. Even though thinner films produced high current densities for the oxidation of hydrogen, they degraded immediately upon the addition of oxygen to the gas feed (Fig. 14a). For thicker polymer films, the current density for hydrogen oxidation was not affected by the presence of oxygen in the gas feed for the first 7 h; after 24 h of continuous exposure to oxygen, though, the current density decreased to ~75% of its initial value, due to the oxidation of the Ni bis-diphosphine complex and oxidation of the phosphines (Fig. 14b). 280,301

3.4 Nature-inspired electrocatalysts for the carbon dioxide reduction reaction

Electrochemical reduction of carbon dioxide (CO2) has attracted great research interest, as it could contribute to the reduction of the greenhouse effect and generate useful chemicals. It involves multi-proton and multi-electron routes to convert CO2 to an activated form of carbon (with the exception of carbon monoxide), which have sluggish kinetics and require high reducing potentials. 302,303 Theoretically, the formation of different carbon products depends on the applied potential, 304 even though directing the reduction of CO2 toward a specific reaction pathway is difficult to achieve. 305 The initial transfer of a single electron to CO2 to create CO2,ads is considered the rate determining step, due to the high energy input required to enable this step. 305 The reactivity of CO2,ads on the surface of the metal catalyst dictates the reaction pathway that will be followed, leading to different end products. For example, in the presence of a gold or silver catalyst, CO2,ads is converted to CO, whereas copper catalyzes the reaction of CO2 to formic acid (HCOOH), methane (CH4), and ethylene (C2H4) (Fig. 15). 305 Several electrocatalysts for the reduction of CO2 have been developed, 306 including metal nanoparticles and oxides, chalcogenides and carbon-based materials, and their activity has been optimized via modification of the defect density on their surface, 302,307-309 particle size, 310 morphology, 311,312 and electrode thickness. 313 Another important issue contributing to the sluggish kinetics of CO2 reduction is the insufficient concentration of CO2 on the surface of the catalyst layer. The concentration of carbon dioxide near the cathode increases with increasing metal cation size, 315 but this effect is restricted by the solubility of the relevant salts. The application of large potentials can also improve the adsorption of CO2, at the expense of increased hydrogen evolution. 316 On the contrary, nanostructured metal electrodes (Au, Pd nanoneedles) produce a high local electric field at low overpotentials, which concentrates electrolyte cations and enhances the concentration of CO2 near the cathode catalyst layer. 317 High current densities of ~22 mA cm−2 at −0.35 V vs. Ag/AgCl (gold nanoneedles) and ~10 mA cm−2 at −0.2 V vs. Ag/AgCl (palladium nanoneedles) were reported for CO and formate production, respectively. 317 Hence, bio-inspired approaches have been employed to improve the activity of electrocatalysts toward the reduction of CO2, drawing inspiration from biological processes or from the structure of enzymes which catalyze the same reactions. These will now be reviewed.
3.4.1 Conversion of carbon dioxide to carbon monoxide. Electrochemical reduction of carbon dioxide to carbon monoxide (CO) in the presence of oxygen is a challenging task, since oxygen reduction is thermodynamically favored over CO2 reduction. 35 To overcome this obstacle, the stream of CO2 is purified before entering the electrolyzer, a task which requires an additional gas separation system, increasing the overall cost of the reactor setup. 318,319 On the contrary, in nature, reduction of carbon dioxide occurs during photosynthesis, in which enzymes located near the active sites significantly increase the local concentration of CO2 123,320 and thereby minimize the O2 to CO2 ratio, which enhances the conversion rate of CO2 to glucose despite the presence of oxygen. 321 An oxygen-tolerant cathode was recently synthesized, inspired by the process leading to an increase in CO2 concentration during photosynthesis (Fig. 16a). 322 This bio-inspired cathode consists of a carbon fiber gas diffusion electrode with a catalyst layer and a polymer with intrinsic microporosity (PIM) 323 coated on its opposite sides. 322 The PIM imitates the role of the enzymes; it filters oxygen and is highly permeable to CO2. Its size-selective pores reject oxygen molecules (~0.35 nm kinetic diameter) 324,325 and facilitate the transport of the slightly smaller CO2 molecules (~0.33 nm kinetic diameter), 324,326 effectively decreasing the O2 to CO2 ratio of the feed stream reaching the catalyst layer and, hence, favoring the reduction of CO2 to CO (Fig. 16b). 322 The incorporation of this bio-inspired electrode into the electrolyzer resulted in a ~76% faradaic efficiency to CO and ~27 mA cm−2 current density at −1.1 V vs. RHE (feed gas stream with 5% O2). As the oxygen concentration in the gas stream increased, the faradaic efficiency to CO decreased, reaching ~20%, with ~35 mA cm−2 current density, at 35% O2 in the gas stream, indicating that oxygen reduction dominated the electrode reaction at high O2 concentrations (Fig. 16c). 322 During continuous operation over 18 h, stable operation was observed for gas streams containing 5% and 20% O2 (Fig. 16d). Another electrode design employed to increase the concentration of CO2 at the surface of the catalyst layer was inspired by the structure of the alveoli in a mammalian lung. 327 Alveoli consist of thin epithelial cellular membranes with low water permeability and high gas diffusivity. 107,108,328 During pulmonary circulation, inlet air rapidly penetrates through the bronchi to the alveoli and reaches the blood cells, where the haemoglobin protein binds oxygen and releases CO2. 327 The structure of the alveoli-inspired electrode (~20-80 nm thickness) comprises a layer of gold nanoparticles (~0.15 mg cm−2 loading) acting as the catalyst, sputtered on a polyethylene membrane whose hydrophobicity and network of interconnected fibers (~40-500 nm pore size) allow unobstructed diffusion of CO2 toward the catalyst layer. To fully replicate the structure of a closed alveolus with macroscopic tubes (bronchioles) for gas transport to and from the lung, the flat Au-coated polyethylene membrane was rolled and sealed to form a bi-layer pouch-type structure. Electrochemical reduction of CO2 was conducted in an H-type cell using a Selemion anion exchange membrane as the separator and CO2-saturated potassium bicarbonate (0.5 M KHCO3) as the electrolyte.
A faradaic efficiency of ~92% for CO production and a high current density of ~25 mA cm−2 at −0.6 V vs. RHE were achieved. 327 An additional nature-inspired strategy for the design of electrocatalysts for the reduction of CO2 is derived from the structure of dehydrogenases. CO-dehydrogenases (CODH) and formate-dehydrogenases (FDH) catalyze the reversible reduction of CO2 via their single- or multi-metallic active sites composed of earth-abundant metals, such as nickel-iron or molybdenum-sulfur-copper complexes in CODHs, or single tungsten or molybdenum metals in FDHs. [329][330][331][332] In addition to their active metal centers, proton relays in their outer coordination sphere play an important role in the catalytic activity of these enzymes. [329][330][331][332] NiFe CODHs utilizing [NiFe4S4] as the active site are highly active toward CO2 to CO conversion, with ~12 s−1 turnover frequency at low overpotentials (less than 100 mV). [333][334][335][336] Ni acts as a Lewis base transporting an electron to the unoccupied molecular orbital of CO2, increasing the negative partial charges of the oxygen atoms bound to the Fe center, which acts as a Lewis acid. [333][334][335][336] Moreover, a CODH-inspired catalyst containing a cofacial Fe tetraphenylporphyrin dimer (o-Fe2DTPP) demonstrated a high faradaic efficiency (~95%) and turnover frequency (~4300 s−1) at high overpotential (~0.7 V) for the conversion of CO2 to CO in a DMF-water solution (10 wt%). 337 The introduction of electronically different substituents at the porphyrin positions can alter the overpotential and turnover frequency. Attachment of electron-withdrawing groups, such as perfluorophenyl substituents, to the meso-positions of the porphyrin rings of the CODH-inspired catalyst decreased the overpotential to ~0.3 V; however, the withdrawal of electrons also reduced the electron density of the active metal center, resulting in a decrease in turnover frequency. 337 On the contrary, incorporation of electron-donating groups, such as mesityl groups, on the porphyrin rings resulted in high turnover frequencies but high overpotentials. 337

3.4.2 Conversion of carbon dioxide to formic acid, formamide, and formate. In the case of CO2 reduction to formic acid, a cobalt-based CO2-reducing catalyst, CpCo(PR2NR′2), was developed based on the structural characteristics of CODH/FDH. 338 It contained PR2NR′2 diphosphine (1,5-diaza-3,7-diphosphacyclooctane) ligands, with two pendant amine groups acting as proton relays during the reduction of CO2. Four different diphosphine ligands with phenyl or cyclohexyl substituents on phosphorus (PCy or PPh) and benzyl or phenyl substituents on nitrogen (NBn or NPh) were prepared and were active toward the electrochemical reduction of CO2 to formic acid, exhibiting 90 ± 10% faradaic efficiencies. 338 The cobalt catalyst with the (PCy2NBn2) ligand demonstrated the highest activity, with a turnover frequency (TOF) value higher than 1000 s−1. However, these enzyme-inspired molecular catalysts were not stable, as a complete loss of their activity was observed after 1 h of electrolysis. 338 Furthermore, a distinct structural characteristic of iron hydrogenases was used as the source of inspiration for the design of molecular catalysts for the hydrogenation of CO2 to formamide and formate. The ortho-hydroxypyridine present in iron-based hydrogenases is crucial for hydrogen splitting, as it promotes bond scission.
339 Hence, the hydrogenase-inspired catalyst consisted of a manganese complex with a nitrogen ligand containing ortho-hydroxypyridine, namely 6,6′-dihydroxy-2,2′-bipyridine. 340 Manganese was chosen as the metal center due to the formation of Mn hydride intermediates derived from the interaction between Mn and the hydroxy-functionalized ligand, leading to increased activity toward CO2 reduction to HCOOH. 341,342 Turnover numbers of ~6250 (in the presence of DBU, 1,8-diazabicyclo[5.4.0]undec-7-ene) and ~588 (in the presence of the secondary amine diethylamine) were achieved for the hydrogenation of CO2 to formate and formamide, respectively. 340 These initial results show the potential of these enzyme-inspired systems as CO2 reduction catalysts, even though further research is needed to optimize their structure and improve their durability. Thorough reviews of molecular catalysts for the reduction of CO2 are available elsewhere. [343][344][345]
Connecting nano- to macro-scale: hierarchical transport networks in electrocatalysis
In an electrochemical reaction involving a porous electrocatalyst, reactants must move from the gas or fluid stream into the electrocatalyst layer and reach the active sites on its surface. A high specific, internal surface area is desirable to achieve a high concentration of active sites per unit catalyst mass, suggesting the employment of nanoporous electrocatalysts. However, the diffusion of reactants through electrocatalytic layers is greatly influenced by the geometry and organization of their pores. In non-hierarchical structures, there is high mass transfer resistance within the nanopores, resulting in decreased overall rates. 101,111 Hence, it is crucial to minimize the effects of transport limitations to increase yield; this is not a trivial problem, due to the various phases required for transport of molecular reactants and products, ions and electrons.
A nature-inspired approach to reduce diffusion limitations is the utilization of hierarchically structured porous materials with an optimized network of narrow and broad pores. Nature can provide the mechanistic basis to discover the optimal network structure of the pores, as hierarchical transport networks are widely employed in natural systems in plants and animals, such as their circulatory and respiratory systems (Fig. 17). In such systems, convective flow dominates transport at large length scales, while diffusion takes over at small length scales. This transition corresponds to a Péclet number of unity and is accompanied by a quite sudden change in the channel architecture: the fractal distribution of channel diameters at large scales (pressure driven) transitions into a uniform distribution of narrow channels (concentration or chemical potential driven). This optimal structural characteristic is universal, witnessed in lungs, plants, and other biological transport networks. [107][108][109][110][346][347][348][349] A numerical model was developed to demonstrate the significance of the microstructure of the cathode catalyst layer (CCL) for the performance of a PEFC (Fig. 18), because transport limitations in the CCL are responsible for a sharp drop in efficiency when operating at high current density. 351 This CCL was assumed to consist of several spherical Pt/C agglomerates, each surrounded by Nafion ionomer. The composition and size of these agglomerates (~50-5000 nm) depend on the formulation of the catalyst ink, which is sprayed onto the polymer membrane to form the MEA.
This optimized CCL design leads to high Pt utilization, even at very low Pt loadings (~0.01 mg cm−2), as a sufficient oxygen concentration is present along the entire radius of the agglomerate (Fig. 19a). On the contrary, in a conventional design, Pt resides at the center of the agglomerate and is exposed to very low oxygen concentrations, underutilizing the catalyst (Fig. 19b). When this optimized CCL design is incorporated into a lung-inspired PEFC (described in detail in Section 5.3), the DoE target for platinum utilization of ~8 kW gPt−1 is surpassed 352 at N = 4 generations of the fractal flow field and reaches a maximum of ~36 kW gPt−1 at N = 6 generations (Fig. 19c). Hence, these modeling results demonstrate the significance of a rational design based on hierarchically structured electrocatalysts, which can significantly reduce the cost of the MEA. However, the mitigation of Pt degradation must be considered in the design, as, at such low catalyst loadings, the electrochemically active surface area can rapidly decrease, leading to significant PEFC losses.
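To make the utilization figures above concrete, platinum utilization can be read as areal power density divided by areal catalyst loading. A minimal illustrative calculation follows; only the ~0.01 mg cm−2 loading and the ~8 and ~36 kW g−1 values come from the text, while the assumed power density is a placeholder chosen so the arithmetic reproduces the quoted maximum.

```python
# Illustrative Pt-utilization arithmetic; the power density is an assumed placeholder.
pt_loading_mg_cm2 = 0.01        # very low Pt loading quoted in the text (mg cm^-2)
power_density_W_cm2 = 0.36      # assumed cell power density at the operating point (W cm^-2)

pt_loading_g_cm2 = pt_loading_mg_cm2 / 1000.0
utilization_kW_per_g = (power_density_W_cm2 / pt_loading_g_cm2) / 1000.0
print(f"Pt utilization ~ {utilization_kW_per_g:.0f} kW per g Pt")   # ~36 kW g^-1, the N = 6 value
print(f"DoE target (~8 kW g^-1) exceeded: {utilization_kW_per_g > 8}")
```

The same arithmetic explains why halving the loading (or doubling the usable power density) doubles the utilization figure, which is why the agglomerate-scale oxygen availability matters so much at ultra-low loadings.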
In summary, it is evident that there is a need for a new methodology for the design of highly active and cost-efficient electrocatalysts for key, energy conversion related, electrochemical reactions, which can be successfully fulfilled by the implementation of a nature-inspired approach, using the NICE methodology. Even though challenges regarding long-term stability, cost, scale-up, and fundamental understanding of kinetic and reaction barriers in key elementary steps of electrochemical reactions still need to be addressed, the nature-inspired approach is a powerful avenue towards the rapid growth of effective electrocatalyst designs.
Bio-inspired electrochemical devices
In the following section, we provide examples of the nature-inspired approach applied to the design of electrochemical devices (Fig. 20). In the case of bio-fuel cells, the term "bio-inspired" is limited to the different living enzymes or microbes employed as electrodes, and is, in fact, bio-integration, as biological components are explicitly used. In contrast, in the case of other types of fuel cells and batteries, the nature-inspired approach does not require biological components; in addition, the approach aims to redesign electrochemical devices across different scales, encompassing not only the electrodes but the entire structure of the battery or the flow fields of, e.g., polymer electrolyte fuel cells (PEFCs).
Biofuel cells
Biofuel cells (BFCs) are fuel cells using a biogenous fuel or catalyst, or their combination, to convert chemical energy stored in biodegradable substances directly into electricity. 354,355 Biofuel cells are categorized into enzymatic (EFCs) and microbial fuel cells (MFCs), depending on the nature of the biocatalyst employed, and offer a clean energy alternative to fossil fuels due to the utilization of cost-efficient, environmentally friendly, and renewable fuels (such as sugars and components in wastewater) to produce electrical energy. However, the main obstacles toward the commercialization of this technology are its short lifetime and low power density, 355 limiting the range of applications to powering microelectronic systems, such as sensors and actuators, [356][357][358] and to environmental remediation. 359,360 The different biocatalysts incorporated in these two types of biofuel cells lead to large differences between them in volumetric size, power output, and targeted applications. 355,361 EFCs and MFCs use redox enzymes and living microorganisms (e.g. bacteria, yeast cells), respectively, as biocatalysts for power generation. As a result, EFCs can produce high power density and are well-suited to miniaturization, sensors, and wearable or implantable devices fueled by endogenous substances (such as glucose in the blood stream). [362][363][364][365][366] On the contrary, MFCs are mainly used as bioreactors for the purification of wastewater or as long-term power generators for small devices in remote locations. Despite the promising potential of biofuel cells, the commercialization of these systems is limited by their low power density and stability. 355 To circumvent these issues, research has been focused on the development of nanostructured materials as electrodes with tailored pore size (~40-300 nm), high surface area, and high porosity to significantly improve the electron transfer between the biocatalyst and the electrode, and the diffusion of mediators to the electrode. The most common materials used as electrodes in BFCs are carbon- and polymer-based. Carbonaceous materials (carbon nanotubes, graphene, carbon nanoparticles) 355,367-377 possess excellent mechanical and chemical stability, good conductivity and biocompatibility, with the ability to extract electrons from the biocatalysts without being deactivated. Conducting polymers (polyaniline (PANI), polypyrrole (PPy), poly(ethylenimine) (PEI)) have high stability, conductivity, and surface area, making them attractive candidates as electrodes for BFCs. 355,[378][379][380][381] An example of carbon-modified electrodes is magnesium oxide-templated mesoporous carbon (MgOC) with a high surface area (~580 m2 g−1) for efficient biocatalytic reactions in its mesopores and facile mass transport through its macropores. 382 The electrode was coated with a biocatalytic hydrogel composed of a conductive redox polymer, d-FAD-GDH (deglycosylated flavin adenine dinucleotide-dependent glucose dehydrogenase), and a crosslinker. This MgOC-modified electrode produced a ~30-fold higher current density (~3 mA cm−2) than a flat carbon electrode with the same hydrogel loading (~1 mg cm−2), while, after 220 days of testing, only a ~5% loss of the initial catalytic current was observed, indicating that immobilization of the enzyme in the mesopores of the carbon support can significantly improve the enzyme's stability.
382 As these biofuel cells are bio-integrated, rather than bio-inspired, fuel cells, a thorough review of the electrodes employed in biofuel cells is beyond the scope of this manuscript; readers are referred to other reports in the literature. [355][356][357][358][383][384][385]
Bio-inspired batteries
Among the first bio-inspired approaches for the development of electrodes for Li-ion batteries was the utilization of viruses, such as M13. 386,387 A self-assembled layer of virus (M13)-templated cobalt oxide nanowires serving as the active anode material was formed on top of polyelectrolyte multilayers acting as the battery electrolyte, and this assembly was stamped onto platinum microband current collectors. 386,387 The microbattery electrodes exhibited a charging/discharging curve similar to that of commercial cobalt oxide nanoparticles, even at a high charging rate of ~255 nA, demonstrating the potential of this approach toward the design of new electrode materials. 386,387 Nowadays, the rapid development of flexible and wearable electronics creates the need for batteries with high flexibility and energy density. In conventional battery designs these two characteristics are at odds with each other; 388,389 ultrathin batteries may be flexible, but have low energy density and vice versa, requiring the connection of numerous battery units in series to increase their energy density. [390][391][392] A nature-inspired approach to solve this issue is the design and manufacturing of a Li-ion battery based on the structure of the spine. The spine or vertebral column consists of 33 bones (vertebrae), which interlock with each other, providing the main support of the human body and its flexibility. To prevent friction between vertebrae, each vertebra is separated from the others by intervertebral discs.
Based on this structural characteristic of the human spine, flexible Li-ion batteries with high energy density were manufactured (Fig. 21a). 393 Each of the conventional anode/separator/cathode/separator battery stack components was placed in different strips with multiple branches; then, each strip was wrapped around the backbone to form thick stacks of electrodes, corresponding to the vertebral column. The unwound part interconnecting these thick stacks of electrodes functions as the marrow, providing high flexibility to the battery (Fig. 21b). The spine-inspired design of the Li-ion battery demonstrated a ~242 W h L−1 energy density and ~94% capacity retention. At 0.5C, the discharge capacity remained above 125 mA h g−1 under continuous dynamic load. Mechanical load tests revealed the flexibility of this design, since the largest strain on the interconnected joints was ~0.08%, compared to ~1.8% and ~1.1% for a prismatic and a stacked pouch cell, respectively. 393 Another source of natural inspiration is cartilage. Cartilage is a tough, flexible connective tissue, serving as the center of ossification for bone growth and covering the surface of the joints, reducing the friction between adjacent bones and preventing damage (Fig. 22). 394,395 It comprises collagen fibers and a dense extracellular matrix (ECM) with a sparse distribution of cells called chondrocytes. 394 The ECM is composed of water, collagen, proteoglycans, and non-collagen proteins, and retains the water within the cartilage, which is critical for its unique mechanical properties. Cartilage consists of several zones, namely the superficial, middle, deep, and calcified zones, distinguished by the distribution of the collagen and the orientation and shape of the chondrocytes. 394 The superficial zone contributes 10-20% of the cartilage thickness and protects the cartilage from the shear, tensile, and compressive forces imposed by articulation. This zone is responsible for most of the tensile properties of cartilage and consists of collagen fibers tightly packed and aligned parallel to the articular surface. Below the superficial zone lies the middle zone, which contains thicker collagen fibers than the superficial zone to resist compressive forces. 394 Right below the middle zone lies the deep zone, which constitutes the second layer of resistance to compressive forces. It contains collagen fibers that are thicker than in the previous layers and oriented perpendicular to the articular surface, with chondrocytes oriented vertically to the underlying bone. Finally, the calcified zone secures the cartilage to the bone by attaching the collagen fibrils of the deep zone to the subchondral bone. 394 To imitate the nanofiber networks of cartilage, aramid nanofibers were synthesized from Kevlar via layer-by-layer deposition and used as ion-conducting membranes for Li-ion batteries, 396 solid electrolytes for Zn batteries, 397 and separators for redox flow batteries. 398 Ion-conducting membranes are a key component of Li-ion batteries, providing high ionic mobility to Li+ ions, stiffness, and flexibility. The safety issues of Li-ion batteries are related to dendrite growth and anode expansion in the charged state; [399][400][401] the membranes are pierced during dendrite growth, which could lead to battery failure, short-circuiting, and fire. 402 Aramid nanofibers (~1 μm length, ~5-10 nm average diameter) were used as ion-conducting membranes due to their low Ohmic resistance, high mechanical flexibility, ionic conductivity, and resistance to dendrite growth.
396 Investigation of dendrite growth was carried out under high current density (~10 mA cm−2); a copper (Cu) electrode was used to examine the expansion of Cu dendrites (growth zones of ~50-100 nm and ~25 nm tip diameter), 396 because their smaller size compared with Li dendrites makes the suppression of Cu dendrites more challenging than that of Li. The theory of electrochemical dendrite growth 403 indicates that if the local mechanical properties of the ion-conducting membrane are sufficient to withstand the mechanical stress from Cu dendrites, the membrane will also suppress Li dendrites, which are much softer.
Copper electrodes were examined via scanning electron microscopy (SEM) after a total charge of 0.006 mA h cm−2, and dendrites with ~500 nm size had formed on the bare copper electrode. The size of the copper dendrites was significantly reduced to ~100-200 nm after depositing a coating of aramid nanofibers with a film thickness of ~162 nm on the electrode. As the thickness of the coating was increased, the suppression of copper dendrite growth was enhanced as well, and no dendrite formation was observed for coatings with a thickness of ~809 nm. 396 Dendrite growth also has a detrimental effect on the energy density and cyclability of zinc (Zn) batteries, as Zn dendrites can easily traverse the inter-electrode space, piercing existing separators. Solid electrolytes can provide a solution to this issue, at the expense of low mobility of divalent ions, leading to low energy density. 397 A cartilage-inspired composite of aramid nanofibers, poly(ethylene oxide) (PEO), and zinc trifluoromethanesulfonate (Zn(CF3SO3)2) can serve as a solid electrolyte providing facile ion transport and excellent mechanical properties. 397 PEO and Zn(CF3SO3)2 act as the ion transport components of the solid electrolyte; the composition of PEO/Zn(CF3SO3)2/aramid nanofibers was optimized with respect to Zn2+ conductivity and mechanical properties, with an optimal ratio of 9 : 3 : 1. These composites were thinner (~10 μm) than commercial separators for Zn or Li batteries (~30-200 μm), and the interfilament distances in the aramid nanofiber network were ~10-20 and ~2-4 times smaller than the average diameters of the stems (~1-2 μm) and growth points (~200 nm) of Zn dendrites, respectively. Their Zn2+ conductivity was ~2.5 × 10−5 S cm−1, 10-fold greater than that of the original Li battery membranes (~2.5 × 10−6 S cm−1). 397 This bio-inspired composite enabled Zn batteries to be rechargeable and reformable, due to the plasticity of Zn anodes and the reconfigurability of the cartilage-inspired fiber network. After 50 cycles at 0.2C, the battery retained ~96% of its highest achievable capacity (~123 mA h g−1) and still exceeded 90% after 100 cycles at 0.2C (Fig. 23a). 397 The plasticity of this bio-inspired composite increased battery safety, since it was less prone to mechanical damage and could withstand elastic deformation from bending as well as plastic deformation. It also enabled shape modification to improve the ability of the battery to carry a load; various shapes of these bio-inspired batteries were tested as load-bearing and charge-storage elements in small drones (Fig. 23b and c), demonstrating their promising potential in the transportation industry. 397 The feasibility of using aramid nanofiber-based films as ion-conducting separators for non-aqueous redox flow batteries has also been explored.
An ~8.5 μm film was obtained via layer-by-layer assembly of 20 layers of aramid nanofibers, each ~425 nm thick. The dense network of nanofibers comprising this multilayer structure reduced the pore size of the film, while the pore network for ionic transport remained intact (Fig. 24a and b). 398 The average pore size was approximately 5 nm, much smaller than the pore size of a commercial separator such as Celgard 2325 (~25 μm thick), which has ~390 nm pores on its surface (Fig. 24c and d). The small pores of the aramid nanofiber-based film impeded the mobility of vanadium ions, resulting in lower ionic conductivity (~0.1 mS cm−1) and permeability (~0.8 × 10−7 cm2 s−1) than Celgard 2325 (~0.6 mS cm−1 and ~7 × 10−7 cm2 s−1, respectively). 398 To further decrease the permeability of the aramid nanofiber-based separator without sacrificing its conductivity, its surface was functionalized with PDDA (poly(diallyldimethylammonium chloride)) and PSS (poly(styrene sulfonate)) polyelectrolytes. The addition of these charged PDDA/PSS layers on the surface of the aramid nanofiber-based separator enabled the Donnan exclusion of the positively charged vanadium ions, 404 further reducing the permeability of the functionalized aramid nanofiber-based separator to ~0.003 × 10−7 cm2 s−1, while its conductivity remained constant. 398 The low permeability of these functionalized separators translated into high coulombic efficiency (~95%, compared to ~55% for commercial Celgard) during cycling (5 h duration) and high stability, exhibiting minimal degradation after 100 h of cycling. 398 Another feature in nature that has been used as inspiration for improving the stability of electrodes in Li batteries is self-healing, an important survival feature of biological organisms that increases their life expectancy. High-capacity electrode materials, such as silicon and sulfur, suffer from rapid capacity fading and a short lifetime. In a commercial silicon anode, silicon particles are surrounded by a polymer binder, which binds them to the current collector to maintain electrical contact. Upon cycling, the stress generated by the volumetric changes throughout lithiation and delithiation of the silicon particles fractures the particles and the polymer layer, leading to a loss of electrical contact and, hence, a loss of capacity. Thus, if damage to these electrodes can be repaired spontaneously, the cycle life of the negative electrodes of Li batteries will be significantly increased. 405 Silicon particles of the anode of a Li-ion battery were coated with a hydrogen-bonding-directed self-healing polymer, 406 which allows cracks to heal autonomously and repeatedly. 407 A cycle life ten times longer than that of commercial silicon anodes and a high capacity (~3000 mA h g−1) were achieved when silicon anodes were modified with the self-healing polymer. 405 However, the same self-healing strategy is not effective in positive electrodes (such as sulfur, oxygen, and carbon dioxide electrodes), since they undergo conversion electrochemistry, i.e. multi-electron reactions and drastic phase transfer (e.g. solid sulfur to soluble polysulfides) with rapid diffusion and uncontrolled deposition of intermediate polysulfides, [408][409][410] severely altering their structure during electrochemical reactions.
[408][409][410][411][412] This uncontrolled phase transfer between solid materials (sulfur and lithium disulfide) and liquid polysulfides is the main contributor to the poor cycling stability and reversibility of positive electrodes.
Hence, a different self-healing approach was utilized for the positive electrodes of Li-S batteries, inspired by the fibrinolysis reaction within blood vessels (Fig. 25). 413 The uncontrolled deposition and accumulation of inactive solid products in Li-S batteries is similar to the coagulation of a thrombus, which obstructs the blood flow in healthy vessels. During fibrinolysis, the thrombus is transformed into soluble fibrin fragments, and plasmin solubilizes these thrombus fragments. The analogue of plasmin in Li-S batteries is polysulfides, the self-healing agent employed here, responsible for transferring solid polysulfide compounds into solution, where they re-participate in the electrochemical reactions. The cycling performance of sulfur particle electrodes containing ~0.3 M Li2S5 as the self-healing polysulfide agent was significantly extended, to 7500 cycles at 1.2 mA cm−2. The average coulombic efficiency was above 99%, with a very low decay rate of 0.01% per cycle. 413 Thus, novel healing agents that are smart, sustainable, and rapidly responsive hold future promise. This bio-inspired approach can also be readily implemented in other high-energy electrochemical storage and conversion systems, such as metal-O2 batteries and fuel cells.
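As a quick sanity check on what the decay rate quoted above implies, a constant 0.01% loss per cycle can be compounded over the full 7500 cycles. This is purely illustrative arithmetic, not a model of the actual cell:

```python
# Compound a constant per-cycle decay rate over many cycles (illustrative only).
decay_per_cycle = 0.0001      # 0.01% per cycle, as quoted in the text
cycles = 7500

retention = (1.0 - decay_per_cycle) ** cycles
print(f"Fraction retained after {cycles} cycles ~ {retention:.1%}")   # roughly 47%
```

Even such a small per-cycle decay therefore accumulates appreciably over thousands of cycles, which is why the low decay rate achieved with the polysulfide healing agent is notable.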
Bio-inspired fuel cells
In terms of fuel cells, nature-inspired or bio-inspired design is utilized in flow fields to circumvent the issue of uneven reactant distribution across the catalyst layer, which leads to losses in fuel cell performance. The majority of the reports in the literature are based on the imitation of apparent features of biological structures (such as leaves, veins, etc.), without providing a clear theoretical foundation that allows one to capture the underlying physical phenomena. [414][415][416][417][418] As a result, the designs and channel geometries of these bio-imitating flow fields cannot be systematically reproduced or scaled up, and are prone to disappointing fuel cell performance.
A true bio-inspired design was employed to improve the design of commercial flow fields in fuel cells, whose primary role is the distribution of reactants across the catalyst layer, electron transfer, as well as water and heat management. The human lung serves a similar role in nature; air is uniformly transported through its complex architecture to the bloodstream to oxygenate the blood cells. Its architecture comprises two regions: the dense upper region (bronchi) with 14-16 generations decreases the convective gas flow rate from the bronchial to the acinar airways located in the lower region (7-9 generations), which is dominated by diffusion-driven transport, resulting in the production of constant entropy across each level in both regions and, hence, in minimal overall entropy production of the entire lung. 101,[107][108][109][110][111]419 Prior to the manufacturing of lung-inspired flow fields, modeling simulations were conducted to calculate the optimum number of generations required to ensure uniform distribution of reactants and minimal entropy production, or, in other words, the number of generations required for the convection-driven flow to be equal to the diffusion-driven flow (Péclet number = 1). A detailed model was built in COMSOL, revealing that the ideal number of generations, N, for minimum overall entropy production is equal to 4-7; for N less than 4, gas flow is dominated by convection, whereas for N higher than 8, gas transport is driven by diffusion (Fig. 26C and D). 109,352 The same PEFC operating conditions were used in the model and in the experimental measurements, employing flow fields with 10 cm2 surface area, a constant fuel cell temperature (70 °C), and three different RH values (50%, 75%, and 100%).
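A minimal sketch of the kind of scaling estimate behind choosing N is given below: assume each generation splits a channel into four daughters whose width shrinks by a fixed factor, and find the generation at which the channel Péclet number (Pe = u·w/D) drops to order unity. All parameter values are illustrative assumptions, not the inputs of the COMSOL model in refs. 109 and 352.

```python
# Estimate the generation at which convection (Pe > 1) hands over to diffusion (Pe < 1).
# All numbers below are illustrative assumptions, not the values used in the cited model.

Q = 1.0e-6          # total volumetric gas flow into the distributor, m^3 s^-1 (assumed)
w0 = 1.0e-3         # inlet channel width, m (assumed square cross-section)
D = 2.0e-5          # gas diffusion coefficient, m^2 s^-1 (typical order of magnitude)
branching = 4       # each channel splits into 4 daughters per generation (assumed)
shrink = 0.5        # channel width halves at each generation (assumed)

for n in range(0, 9):
    n_channels = branching ** n
    w = w0 * shrink ** n
    u = Q / (n_channels * w * w)          # mean velocity in one channel of generation n
    Pe = u * w / D                        # channel Peclet number
    note = "  <- Pe ~ 1: convection/diffusion crossover" if 0.5 <= Pe <= 2 else ""
    print(f"generation {n}: Pe = {Pe:8.2f}{note}")
```

With these placeholder values Pe falls steadily with generation number, and the crossover generation plays the role of the optimal N discussed above; the real optimum (N = 4-7) depends on the actual flow rate, channel dimensions, and gas diffusivity of the PEFC.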
These modeling results served as the basis for the engineering of lung-inspired flow fields with N = 3, 4, and 5 generations via 3D printing (selective laser sintering), a method that creates 3D objects from successive layers of sintered steel. Flow fields (10 cm2) with 4 generations exhibited a ~30% increase in current and power density at 50 and 75% relative humidity (RH) compared to commercial single serpentine flow fields (Fig. 27A-D). 109 The positive effect of a fractal structure was also evident in the resulting pressure drop, with the lung-inspired flow fields exhibiting ~75% lower values than their commercial counterparts (less than 2 kPa and ~5 kPa for lung-inspired and serpentine flow fields, respectively) at each RH tested, minimizing the parasitic losses and enhancing the fuel cell performance. The same pressure drop (less than 2 kPa) was measured when larger lung-inspired flow fields were constructed with 25 cm2 surface area, whereas the high pressure drop of ~25 kPa measured in serpentine flow fields (25 cm2) was detrimental to their performance. 109 Lung-inspired flow field based PEFCs with N = 3 generations demonstrated the worst performance under all experimental conditions tested, due to the large spacing between adjacent outlets, resulting in insufficiently high reactant concentrations across the catalyst layer. 109 In turn, lung-inspired flow field based PEFCs with N = 5 generations exhibited a lower performance than commercial single serpentine flow field based PEFCs, since the narrow channels in their final generation were prone to flooding. At high RH conditions (100%), all lung-inspired flow fields were vulnerable to flooding and fuel cell performance deteriorated. 109 Their susceptibility to flooding at high humidity conditions (100% RH) was evaluated via neutron radiography of a lung-inspired PEFC (N = 4) during galvanostatic operation at various current densities (0.3, 0.5, and 0.6 A cm−2). 420 Neutron images revealed significant water accumulation in the interdigitated outlet channels of the fractal distributor, due to limited convective water removal as a result of the narrow channels and slow gas flow in the lung-inspired flow fields. Flooding was mitigated at high current densities (0.5 and 0.6 A cm−2), since faster gas flow and a higher pressure drop enhanced the rate of water removal, resulting in an instantaneous increase in the potential of over 200 mV (at 0.6 A cm−2 operating current density). 420 Such a significant increase in fuel cell performance emphasizes the importance of an unobstructed inlet structure of the lung-inspired flow fields, as any defect in the fractal channel network, and especially in the outlet channels of the fractal distributor, renders the inlet channels susceptible to clogging. 420

Fig. 27 Lung-inspired flow-field based PEFCs demonstrate improved performance at 50 and 75% RH (N = 4) compared to conventional, serpentine flow-field based PEFCs (A and B); 10 cm2 flow field area. When scaled up (25 cm2 flow field area), similar results are obtained for fractal flow fields at 50 and 75% RH (C and D), even though the performance of serpentine flow field based PEFCs improves, due to an order of magnitude higher pressure drop than fractal flow field based PEFCs. Reproduced from ref. 109 with permission from the Royal Society of Chemistry.
It is evident that, apart from a carefully crafted internal structure, the adoption of a water management strategy is required to ensure reliable operation of lung-inspired flow field based PEFCs. Recently, we developed a novel strategy 421 for water removal in PEFCs based on the utilization of capillary arrays laser-drilled in the lands of the flow fields, which allow the supply or removal of water depending on the demand of the electrode. A parallel flow field modified with capillaries exhibited ~95% and ~7% improvements in peak power density over the conventional parallel and serpentine flow fields, respectively. 421 This effective water management strategy ensures reliable fuel cell operation and is currently being incorporated into lung-inspired flow fields.
The complex 3D structure of lung-inspired flow fields, produced from stainless steel, faces manufacturing and cost challenges, as it necessitates the employment of expensive and time-consuming laser sintering. Most recently, printed circuit boards 422 were used instead as an alternative, cost-efficient material for the rapid manufacturing of lung-inspired flow fields, resulting in uniform distribution of reactants across the catalyst layer, increased performance compared to single serpentine flow fields, and improved water management at high RH.
Conclusions and perspective
The design of electrocatalysts with optimized electronic and ionic mobility as well as kinetics at multiphase boundaries is gaining enormous interest in the context of renewable energy and more efficient electrochemical conversions. However, to improve their properties, it is imperative to control their architecture. Nanopores are characterized by a short diffusion length and high multiphase contact, which enlarges the electrochemically active surface area, while macropores enhance the transport properties and kinetics. Hence, the identification and practical use of systematic design principles that guide the chemical properties and structure of electrocatalytic materials to increase activity and stability is crucial.
This necessitates the development of multi-scale models that properly consider geometrical, physical, and chemical phenomena across all scales. Nature can be an excellent guide to rational design, as it is full of hierarchical structures and biological catalysts that are scalable, efficient, and robust. However, the majority of research that aims to learn from nature to develop new electrocatalysts and devices is based on straightforward biomimicry or "bio-imitation", which mimics isolated features of biological or non-biological natural structures. This risks leading to sub-optimal activity and stability, because the difference in context between nature and technology is not accounted for and the actual physical processes that govern the biological organism or system are neglected.
On the contrary, the nature-inspired engineering approach advocated in this review is based on maximizing the fundamental mechanistic understanding of the principles that underpin desired traits, and of their context, followed by their appropriate incorporation into the design of new electrocatalysts and electrochemical devices. Nano-, meso-, and macro-scale levels are considered in the design, resulting in the engineering of robust, highly efficient, and scalable electrochemical devices and the synthesis of highly active electrocatalysts with increased surface area, number of active sites, and enhanced charge and mass transport.
At the nanoscale, the structure and function of metalloenzymes is the most popular source of inspiration currently, since they catalyze the same reactions as in electrochemical devices. Even though their synthesis procedure is tedious and time-consuming, the synthesized electrocatalysts exhibit high activity and stability, with enhanced transport properties. The advocated nature-inspired approach is also utilized in the design of electrochemical devices, where both the meso-and macroscale matter. The fractal network of the human lung serves as the basis for the design of lung-inspired flow fields for polymer electrolyte fuel cells, leading to uniform reactant distribution across the catalyst layer and improved fuel cell performance, compared to commercial single serpentine flow field based PEFCs. In batteries, the thorough study of the structure of the cartilage results in the creation of hierarchical, porous electrodes with high ionic mobility, mechanical flexibility, and resistance to dendrite growth. The investigation of the fibrinolysis reaction within blood vessels serves as the template for the creation of selfhealing anodes for Li-S batteries with high faradaic efficiencies and cycling stability, using polysulfides as the healing agent. Table 4 illustrates the potential of nature-inspired engineering to transform the design of electrocatalysts and electrochemical devices. It summarizes a number of examples on nature-inspired design discussed in this review, including advantages and outstanding challenges.
All these examples demonstrate the diversity of applications of nature-inspired engineering in the electrochemical domain, and its innate ability to provide innovative solutions to engineering challenges, leveraged by parallel advances in synthesis techniques, additive manufacturing, and computational tools. Nature-inspired chemical engineering (NICE) facilitates this process through a systematic methodology for design and innovation, translating nature-inspired concepts to computationally assisted designs, prototypes and implementations.
We are only at the beginning. Because nature contains examples much ahead of current technology in terms of material properties, scalability, and efficiency, with a need to satisfy multiple objectives all at once, it is worthwhile to thoroughly investigate the underlying properties to inspire innovation, and NICE offers a systematic methodology to accomplish this goal.
"Chemistry"
] |
Influence of sequencing depth on bacterial classification and abundance in bacterial communities
Microbial diversity is the most abundant form of life. Next-generation sequencing technologies provide the capacity to study complex bacterial communities, in which the sequencing depth and the bioinformatic tools can influence the results. In this work we explored two different protocols for bacterial classification and abundance evaluation, using 10 bacterial genomes in a simulated sample at different sequencing depths. Protocol A consisted of metagenome assembly with Megahit and Ray Meta and taxonomic classification with Kraken2 and Centrifuge. Protocol B consisted of taxonomic classification only. In both protocols, rarefaction, relative abundance and beta diversity were analyzed. In protocol A, Megahit produced a mean contig length of 1,128 nucleotides and Ray Meta of 8,893 nucleotides. The number of species correctly classified across all depth assays was 6 out of 10 for protocol A, and 9 out of 10 for protocol B. The rarefaction analysis showed an overestimation of the number of species in almost all assays regardless of the protocol, and the beta diversity analysis indicated significant differences in all comparisons. Protocol A was more efficient for diversity analysis, while protocol B estimated relative abundance more precisely. Our results do not allow us to suggest an optimal sequencing depth at the species level.
Microbial diversity is composed of a great variety of unicellular organisms (prokaryota, archaea, protozoa, fungi and viruses) and is the most abundant form of life present on the planet [1]. Advances in next-generation sequencing (NGS) technologies have allowed us to reach unprecedented levels of genomic analysis [2], [3], and have provided us with the possibility to analyze non-culturable communities, whether they are animal tissue, air or soil samples [4].

Two sequencing methods are usually used in metagenomic studies, 16S and shotgun. The first focuses on the sequencing of hypervariable regions of the 16S rRNA gene, and the second consists of the sequencing of complete genomes from a sample [5]. Although the 16S method is restricted to the prokaryote kingdom, its resolution is limited and it has a lower sensitivity in the identification of genus and species [7], [8]. The shotgun method increases resolution, sensitivity at the genus, species or bacterial strain level, and bacterial co-abundance in microbiome studies [9]. However, the generation of precise results depends not only on the sequencing platform, but also on the depth and data analysis employed [10]. It is important to highlight that there is no consensus about the right protocol for processing and analysis of the sequencing data, due to the existence of different approaches and the great variety of bioinformatics tools [8], [11].

In microbiome studies, the main bioinformatics toolsets are metagenome assemblers (MetaSPAdes, Megahit, Ray Meta) and taxonomic classifiers (Kraken and Centrifuge) that allow us to estimate the diversity and relative abundance of the microorganisms present in a sample [12]-[14].
The knowledge of the precise abundance at the genus and species level is important to better understand the ecological composition, the possible interactions, as well as the associations between pathology and specific microorganisms [15]. The aim of this work was to explore two different approaches for bacterial classification and abundance evaluation, using a simulated bacterial community sample at different sequencing depths.
Computational resources
The present work was carried out on a server with the Linux CentOS operating system, 35 processors and 62 GB of RAM.
For the "in-silico" analysis, we used a simulated reference sample composed of 10 different bacterial genomes belonging to the main phyla that make up the human gut microbiota. Bacterial genomes were downloaded from the National Center for Biotechnology Information (NCBI) database. To constitute the reference sample, we assigned an arbitrary abundance to each of the selected species, considering a total of 25 genome copies distributed among the 10 species. In this context, the 25-genome-copy microbiome corresponds to the 1X depth for the reference sample. Table 1 shows the list of selected bacterial species, the number of copies of each genome, their genome size, abundance and the corresponding phylum. Metagenome assembly was performed with the Megahit [4] and Ray Meta V 2.3.1 [16] programs, while bacterial classification and estimation of bacterial abundance were evaluated with Kraken2 V 2.1.2 [16] and Centrifuge V 1.0 [13]; the Microbial-RefSeq and Bacteria-Archea databases were used, respectively.
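As a rough illustration (the copy numbers below are invented for the sketch and are not taken from Table 1), the reference sample and the depth scaling described above could be encoded as follows:

```python
# Hypothetical sketch of the reference sample: 25 genome copies distributed
# over 10 species define the 1X depth (copy numbers below are invented).
reference_copies = {
    "Escherichia coli": 5, "Bacteroides vulgatus": 4, "Akkermansia muciniphila": 3,
    "Bifidobacterium bifidum": 3, "Alistipes finegoldii": 2, "Lactobacillus reuteri": 2,
    "Fusobacterium nucleatum": 2, "Desulfovibrio vulgaris": 2,
    "Shigella flexneri": 1, "Dialister invisus": 1,
}
assert sum(reference_copies.values()) == 25   # 25 copies correspond to 1X depth

def expected_copies(depth):
    """Expected copy number per genome at a given depth (25 copies x depth)."""
    return {species: n * depth for species, n in reference_copies.items()}

print(expected_copies(50))  # expected copies in the 50X assay
```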
Using the reference sample, we simulated shotgun sequencing in silico for the Illumina HiSeq2500 platform and, by means of the ART program [17], we obtained the simulated reads for each depth assay. Taxonomic classification, rarefaction and bacterial abundance analyses were assessed by means of two sequence analysis protocols. Protocol A is divided into 4 stages: assembly, taxonomic classification and rarefaction, bacterial abundance, and statistical analysis, while protocol B includes all of these steps but does not consider the assembly step (Figure 1). Each de novo assembly was performed using k-mers of 31, 51, 79 and 109 nucleotides.
The classification results were obtained after filtering the information from each classifier, considering only the information related to the 10 species that constitute the reference sample in each depth assay (Table 1). In contrast, for the analysis of total diversity, we included the whole number of species identified without any filtering step. The estimated number of copies for each genome in the different depth assays was calculated by multiplying 25 (the number of initial copies) by the depth value of each test. The relative abundance represents the number of copies of each species in relation to the total number of reads, expressed as a percentage, for each one of the depth assays. Finally, the alpha diversity present in the reference and in each test was used for the calculation of beta diversity using the Sorensen index [18]. The statistical results are summarized in Table 2. The taxonomic classification in protocol A was equally efficient when Centrifuge or Kraken2 was used. 9 out of the 10 species from the reference sample were consistently well classified in all assays (Akkermansia muciniphila, Alistipes finegoldii, Bacteroides vulgatus, Bifidobacterium bifidum, Desulfovibrio vulgaris, Escherichia coli, Fusobacterium nucleatum, Lactobacillus reuteri and Shigella flexneri). The remaining species was not classified (Dialister invisus). a: Centrifuge classifier. b: Kraken2 classifier.
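A minimal sketch (hypothetical counts, not the study's data or pipeline) of how the relative abundance and the Sorensen index used for beta diversity could be computed:

```python
# Hypothetical species counts per depth assay; not the authors' actual data.
reference = {"A. muciniphila": 2, "E. coli": 5, "L. reuteri": 3}
assay_10x = {"A. muciniphila": 21, "E. coli": 48, "L. reuteri": 29, "unexpected_sp": 2}

def relative_abundance(counts):
    """Copies of each species relative to the total, expressed as a percentage."""
    total = sum(counts.values())
    return {sp: 100.0 * n / total for sp, n in counts.items()}

def sorensen(species_a, species_b):
    """Sorensen similarity between two species sets: 2*shared / (|A| + |B|)."""
    a, b = set(species_a), set(species_b)
    return 2 * len(a & b) / (len(a) + len(b))

print(relative_abundance(assay_10x))
print(sorensen(reference, assay_10x))
```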
The classification of the Megahit assemblies identified a higher total number of species than that of the Ray Meta assembler, and this number increased with each depth assay. In both cases, the richness was greater than that present in the reference sample (Table 3 and Figure 4).
On the other hand, when the data are analyzed with protocol B, without including a previous assembly process, the total number of species identified considerably exceeds the theoretical value of the reference sample in each depth assay, regardless of the classifier used (Table 3 and Figure 5). Figure 5 shows the rarefaction curves for the total number of identified species using protocol B. The number of species identified with protocol B is much higher than the number of species reported for protocol A. This growth pattern is maintained for each one of the depth assays and holds the same trend with either of the two classifiers. For the specific number of species identified, see Table 3.

Relative abundance

The relative abundances obtained from the classification of the Megahit assemblies differed from those present in the reference sample; these results were observed in all depth assays (Figure 6). On the other hand, the classification of the Ray Meta assemblies generated the most imprecise relative abundances regardless of the taxonomic classifier (Figure 7).

Beta diversity analysis
The beta diversity analysis at the species level between the different depth assays and the reference in protocol A showed significant differences (p<0.005) (Figures 6 and 7). The same result was obtained for protocol B (p<0.005) (Figure 8). When analyzing the beta diversity between each one of the depth assays and all the others, we observed a significant difference (p<0.005).
In protocol A, at the genus level, the use of the Centrifuge classifier with the Megahit or Ray Meta assemblers showed no significant difference from the 50X depth assay onwards, and the results did not improve after this point (p>0.005). In contrast, using protocol B, there were no significant differences from the 25X assay onwards with Centrifuge (p>0.005).
This work has focused on exploring some of the tools commonly used in the characterization of microbial communities using shotgun sequencing that do not require extensive knowledge or abilities in bioinformatics [19], [20]. Considering this, in this work we did not explore in depth the mathematical basis or algorithms of these tools. The use of a simulated sample in this work helps to eliminate the negative influence of errors or low sequencing quality of a real sample [21]. In this study, we determined the diversity and abundance in each assay against a known composition, which gives greater certainty in the results obtained.
The results of the metagenome assemblies showed that about 90% of the contigs had a size below 10,000 nucleotides in protocol A. This size is smaller than the genome of Dialister invisus (1.8 Mb), the species in the sample with the smallest genome. In this work we used Megahit and Ray Meta; these are de novo assemblers that are both based on de Bruijn graphs [4], [16]. This type of graph allows the efficient assembly of short reads [22], but when the reads are divided into k-mers of the length defined for the assembly, they can be susceptible to errors [12]. During the assembly process of a mixed genome, two types of errors often occur. The first is the presence of the same k-mer in different regions of the same genome, giving rise to chimeric connections between the nodes in the de Bruijn graph that differ from the real sequence. This can result in erroneous assemblies and short contigs. This error increases when assembling a metagenome because a k-mer can be present in different genomes. The second error occurs in regions with low coverage; however, this error was excluded because a simulated sample was studied [23]. The presence of short contigs in this study is the result of erroneous assemblies, and the use of k-mers of different lengths did not prevent the generation of these errors.
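The following toy example (invented sequences, not part of the study) illustrates why a k-mer shared between genomes can create a chimeric connection point in a de Bruijn graph:

```python
# Toy illustration (not the authors' pipeline): shared k-mers between two
# genomes create ambiguous junctions in a de Bruijn graph.
def kmers(seq, k):
    """Return the set of k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

genome_a = "ATGCGTACGTTAGC"
genome_b = "TTACGTACGTGGCA"   # shares the substring 'CGTACGT' with genome_a

k = 5
shared = kmers(genome_a, k) & kmers(genome_b, k)
print(shared)  # any shared k-mer is a potential chimeric connection point
```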
Currently, next-generation sequencing platforms such as PacBio and Nanopore can sequence fragments of 30-50 kb and 100 kb, respectively [24].
This may be favorable in the study of complex bacterial communities, because larger read sizes can generate more precise assemblies, even with shared genomic regions. The developers of the assemblers used here reported better performance with respect to other assemblers, and they gave greater relevance to the avoidance of assembling repeated reads and ramifications, and to the generation of longer assemblies. However, these parameters are not a guarantee of their efficiency, as we observed in the assemblies performed in this study [4], [12], [16]. In addition, their performance evaluation was not carried out with a sample of known composition.
The classification of the assemblies showed a lower number of species than the number of contigs obtained in each assay; this was due to the absence of the genomic information necessary for their classification. In contrast, the number of species was overestimated in each of the assays, and this influenced the beta diversity and relative abundance, which were different from the reference. The results obtained in this work related to the assemblies show the need to discard short assemblies, since they may be misclassified and influence the results of rarefaction, relative abundance, and beta diversity.
The omission of metagenome assembly and the exclusive use of taxonomic classifiers is common in microbiota studies where shotgun sequencing is used [10], [25]; this can be explained by the high computational requirements and processing time of the assemblers [25]. Our results from protocol B reflect a relative abundance different from the reference and an overestimation of species much higher than with protocol A in each of the depth assays. This is of utmost importance since bacterial richness and relative abundance are two of the most important results in microbiota studies.
The results obtained with both protocols indicate that neither of the bioinformatics tools used in this study is completely accurate, but Ray Meta generated the largest assemblies and avoided the generation of short contigs, while Centrifuge had the lowest number of classification errors, although this was influenced by the type of database, which includes genomic information characteristic of bacterial species and omits redundancies [13].
This makes the classification of species with shared genomic regions more efficient.
Regarding the influence of depth, we were not able to obtain information on the optimal metagenome sequencing depth for reliable taxonomic results. However, at the genus level, we did not observe an improvement in the results beyond the 50X depth when using Ray Meta with Centrifuge.
It is important to note that the sample analyzed in this study is minimal and cannot be compared to the composition of a real, complex bacterial community sample. For metagenome assembly and diversity analysis, protocol A (Ray Meta and Centrifuge) was more efficient, while for relative abundance, protocol B (Centrifuge) was better.
Our results do not allow us to suggest an optimal sequencing depth. | 3,150 | 2022-01-04T00:00:00.000 | [
"Environmental Science",
"Biology",
"Computer Science"
] |
Pulsed power network with potential gradient method for scalable power grid based on distributed generations
: The potential gradient method is proposed for the system scalability of pulsed power networks. The pulsed power network has already been proposed for the seamless integration of distributed generations. In this network, each power transmission is decomposed into a series of electric pulses located at specified power slots in consecutive time frames synchronized over the network. Since every power transmission path is pre-reserved in this network, distributed generations can transmit their power to individual consumers without conflicts with other paths. In network operation with the potential gradient method, each power source selects the target consumer that has the maximum potential gradient among all consumers. This gradient equals the division of the power demand of the consumer by the distance to its location. Since each target consumer selection is shared among the power routers within the power transmission path, the processing load of each system component is kept reasonable regardless of the network volume. In addition, a large-scale power grid is autonomously divided into soft clusters according to the current system status. Owing to these properties, the potential gradient method brings system scalability to pulsed power networks. Simulation results are described that confirm the performance of soft clustering.
Introduction
As one of the neo-futuristic schemes for the smart grid, a pulsed power network is already proposed [1,2]. In this scheme, each power transmission is decomposed into a series of electric pulses located at specified power slots in consecutive time frames that are synchronised over the network. The power slots are pre-reserved throughout the power transmission path from the power source to the consumer (from now on, the power transmission path is simply called power path). This reservation of power slots preceding determination of power paths is executed autonomously by individual nodes of the power source, consumer, and intermediate power routers. Their procedures follow inherent algorithms that refer to information exchanged among adjacent nodes.
In contrast, current smart grid models mainly focus on the structure of the information network covering the power system and the strategy for system control based on information exchanges [3]. On the other hand, power transmission itself is based on the conventional scheme where a continuous sinusoidal waveform conveys power. In this scheme, because the power lines are always filled with the sinusoidal waveform, a distributed generation should adjust the phase and voltage of its reverse power flow to the power line. This may become difficult because of conflicts with other generations located nearby. Moreover, because the whole of the power system is electrically connected, a partial system failure may propagate and cause a wide-area power outage.
The pulsed power network was initially proposed to solve these problems in conventional power systems. In particular, the scheme is applicable to power systems where distributed generations are the fundamental source of power.
The advantages of the pulsed power network over conventional power systems are itemised as follows: (i) Affinity with distributed generations. This means the ease with which generations can connect to the power network. When an owner of a generation intends to sell power to some consumer, he can reserve currently vacant power slots throughout the power path to the consumer and transmit electric pulses without any conflict with other power transmissions. (ii) High reliability of the power system. This means that when a partial system failure occurs, the failure does not cause propagative troubles such as a system blackout [4]. This is because, first, the system is controlled with a decentralised algorithm installed in each node individually and no centre station exists. Second, the algorithm instantaneously complements the partial failure with bypassing power paths established by neighbouring power routers. (iii) Energy colouring [5,6] is possible by each consumer based on auxiliary information received from the power source. The information may include the power source classification, the distance to its location, and the charge of each electric pulse. With this energy colouring, individual power trading becomes possible between any specified pair of power sources and consumers.
These advantages of pulsed power networks may be available also in the energy packet networks already proposed [7][8][9][10]. In these proposals, energy packets are composed of energy payload and additional signals for packet routing like conventional data packet structure. At each router, the routing information is extracted from the packet, and the energy itself is stored until the link to the next hop becomes vacant.
In contrast to these conventional schemes, the pulsed power network is, first, based on direct relaying in networking. Second, electric pulses and information signals are transmitted separately and operated individually. As no energy storing is necessary throughout the power transmission, and the construction of the power routers is accordingly simple [An example of the power router construction is demonstrated in IEEE GCCE2017 [11].], low loss in the power transmission and high reliability of the routers are obtained in the pulsed power network.
One of the problems of this scheme is that the system operation method is yet unclear. Considering the property of the pulsed power network where every power path is established by a power source targeting a consumer, the system operation method should satisfy the following requirements: (i) Every power path establishment and release is triggered by alterations of consumer demand. (ii) Every consumer can receive power from multiple power sources simultaneously. Inversely, every power source can transmit power to multiple consumers simultaneously. [In this context of 'simultaneous', the time resolution range equals the synchronised frame length. Therefore, power transmissions by electric pulses at different power slots in each frame are recognised as simultaneous. Details are explained in the next section.] (iii) The power network with the system operation method provides system scalability.
Among these requirements, the third one, system scalability, is especially important for a neo-futuristic smart grid where a large amount of distributed generations may be dispersed over an extended area of the power system [12]. In this power system, the generations and an even larger number of consumers form a power market based on point-to-point trading utilising energy colouring, which is the third advantage of a pulsed power network.
A system operation method for the pulsed power network has already been proposed [13] that satisfies the first and second requirements itemised above. However, because the method focuses on localised power systems with limited system extension, the third requirement is not satisfied.
In this paper, the potential gradient (PG) method is proposed for the scalable system operation of pulsed power networks. In this method, each power source selects its target consumer that has the maximum value of 'PG' among all ones. This gradient equals the division of the current power demand of the consumer by the length of the power path to the consumer. This target selection scheme emulates the behaviour of water. The water tends to flow along the slope with the maximum gradient at each branch point, and finally, a preferable destination is selected naturally. Similarly, in the proposed method, the target selection is done step by step at power routers along the power paths from the candidates, and therefore the network volume scarcely affects the processing load of each system component.
This property of the scheme brings the component scalability that concerns the processing load of each component. Moreover, with this scheme, the power distribution over the system performs as an aggregation of individual clusters. Each cluster transforms adaptively according to the localised status of power sources and consumers regardless of the whole network extent. This adaptive clustering (called 'soft clustering') brings network scalability to pulsed power networks. In this paper, the system scalability consists of component scalability and network scalability.
In Section 2, the overview of the pulsed power network is explained. In Section 3, the proposed PG method is described. In Section 4, the overall operation procedure of the pulsed power network is described including the PG method as the core element. In Section 5, soft clustering is explained first, and then the results of computer simulations are presented that confirm the performance of the pulsed power network with the PG method focusing on the soft clustering. The final section concludes this paper with residual discussions.
In the Appendix, some details are described of advancement achieved in this paper compared to previous papers [2,13] that concern the operation of the pulsed power network.
Overview of the pulsed power network
The pulsed power network is configured with power sources, power consumers, power routers, and power communication links (from now on, power consumers and power routers are called simply consumers and routers, respectively). The system operation is based on a synchronised frame structure and direct relaying of routers. The overview of these subjects is explained in this section. Fig. 1 shows an example of a pulsed power network. This network consists of end users A-I and routers J-N. Among the end users, A and G in grey are power sources that supply the system with electric power. The other users are consumers. These nodes are connected by power communication links. As shown in the figure, each of the links consists of two components: a power link and a communication link. The former conveys power itself with electric pulses and the latter transmits control signals concerning the system operation such as network routing. From now on, directly connected nodes with power communication links are called adjacent nodes.
Synchronised frame
In the pulsed power network, the time axis is equally divided into consecutive frames. These frames are synchronised over the network. [Global positioning system (GPS) time signal is one of the available standards for this time synchronisation [14].] Each synchronised frame is equally subdivided into N power slots. An example of the frame structure is shown in the upper part of Fig. 2. Fig. 2 also shows two cases of the electric pulse flow. In one case, two pulses occupy individual slots and are transmitted from power source A to consumer F (indicated by solid contours) [The pulse is shaped as a biased cosine wave for the purpose to suppress the radio noise intensity around the power link [15]. The contour of each pulse is one cycle of an inversed cosine wave. The electromagnetic field analysis shows that the field strength caused by the biased cosine wave current satisfies the weak radio signal tolerance in Japan. Details are given in [15].]. In the other case, one pulse is transmitted from G to E at another power slot (dashed contour) [The locations of these nodes are indicated in Fig. 1. As Fig. 1 shows, node E locates next to router L. The electric pulse flows are observed at router M.].
Assuming that one electric pulse conveys 100 J and frame length equals 1 s, 200 W is transmitted by two pulses in the former case, and 100 W by one pulse in the latter case [Detailed parameters of the pulses are not specified. In case that the system takes over already existing power lines for cost-saving, the parameters should follow the conventional ones including the voltage level.].
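As a quick sanity check of these numbers, a hedged illustrative calculation using only the example values quoted in the text (not a general system specification):

```python
# Average transmitted power = pulses_per_frame * energy_per_pulse / frame_length.
def average_power(pulses_per_frame, energy_per_pulse_j=100.0, frame_length_s=1.0):
    """Average power in watts delivered by a series of electric pulses."""
    return pulses_per_frame * energy_per_pulse_j / frame_length_s

print(average_power(2))  # 200.0 W, source A -> consumer F
print(average_power(1))  # 100.0 W, source G -> consumer E
```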
For the purpose of smoothing the received electric pulses and storing electric power for a short time, every consumer is assumed to be equipped with a small storage battery [Almost no power dissipation occurs from this short-time power storing provided a large capacitor is adopted [16].]. Owing to this power storing, the time resolution range of power reception at each consumer expands beyond the duration of the synchronised frame.
Power router
Each power path is configured through direct relaying by the routers within the path. Two cases of direct relaying are shown in Fig. 2. In one case, the power path is configured from power source A to consumer F with simple relay switches indicated by solid lines. In the other case, the power path is configured from power source G to E via router L, indicated by dashed lines. For these relay switches, power semiconductors such as power metal-oxide-semiconductor field-effect transistors are available [17]. Focusing on the second and third power slots in the upper part of Fig. 2, each relay switch connects the specified terminals. Therefore, electric pulses are directly transmitted from power source A to consumer F at these second and third power slots. On the other hand, the relay switch of the dashed line in router M executes another relaying. This switch is associated with the power transmission from power source G to consumer E via router L. This relaying is executed at the fifth power slot in each frame, as shown in the upper part of Fig. 2.
PG method
As described in the first section, every power path should be established or released depending on the alterations of consumer power demand in pulsed power networks. For the power path establishing, the power source decides the target consumer, power path routing, and power slot selection based on the PG method.
Owing to the scalability of this method, the pulsed power network is able to contain a large number of consumers and power sources dispersed over the network.
In this section, the details of the PG method are given. The overall operation procedure of the pulsed power network with the PG method is described in the next section.
Overview of the PG method
The PG (PG ij) is defined at a power source or a router i in relation to a consumer j as PG ij = P j / D ij, where P j means the power demand of consumer j and D ij means the length of the power path from i to j. When i is a power source, it selects the consumer m as the target provided PG im is maximum among every PG ij. This target selection process is not executed only by the power source i but is shared among many routers in the network. Owing to this process sharing, the processing load at each node is always kept reasonable regardless of the network volume. This property assures the scalability of the pulsed power network with the PG method.
The basics of target selection shared by routers are explained by Fig. 3.
In the upper part of Fig. 3, a pulsed power network is indicated with power source A, consumers B-D, and routers E and F. Power communication links are indicated by grey thick lines. The power demand of each consumer is indicated in watts. The distance between adjacent nodes is indicated in metres. In this case, power source A selects consumer B as its target among the others. This selection takes three steps, shown in Fig. 3, where the PG of node Y at node X is denoted as PG xy. In this example, the target selection process at power source A among the three consumers B-D is shared by the two routers E and F. Each router individually selects its target among candidates that include the target of an adjacent router or a consumer itself. In this target selection, the power demand of the consumer and the distance to its location are considered evenly. Therefore, A selects the close consumer B even though the other consumers have larger power demands.
As explained later in Section 4, this target selection repeats for every synchronised frame. With this frame progression, the number of electric pulses in the frame to the target B increases. Accordingly, the power demand of B decreases. Therefore, PG ab will fall behind PG ac at some point of time. At this critical point, power source A changes its target to C and transmits power to C afterwards. This critical point may become earlier provided other power sources exist. For example, if another source is connected to router F and has the same target B, the critical point approaches twice as fast.
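A minimal sketch of this PG-based target selection at a single node is shown below; the demands and distances are hypothetical and are not taken from Fig. 3.

```python
# Minimal sketch of the PG-based target selection at a single power source
# (hypothetical demands and distances; not taken from the paper's figures).
def potential_gradient(demand_w, distance_m):
    """PG_ij = power demand of consumer j divided by the path length from i to j."""
    return demand_w / distance_m

# candidate consumers seen by a power source: name -> (demand [W], path length [m])
candidates = {"B": (200.0, 50.0), "C": (500.0, 200.0), "D": (300.0, 400.0)}

target = max(candidates, key=lambda c: potential_gradient(*candidates[c]))
print(target)  # 'B': 4.0 W/m beats C (2.5 W/m) and D (0.75 W/m)
```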
PG table
Besides the target selection explained in the previous subsection, the shortest power path to the target and the available power slots throughout the path need to be determined in the pulsed power network operations. To manage this essential information collectively, the PG table is adopted in the PG method.
Every node in the pulsed power network has its own PG table and updates the table repeatedly according to the inherent algorithm with information exchanged among adjacent nodes.
Examples of PG table are shown in Fig. 4 referring nodes A and D in Fig. 3. The upper part of the figure shows the PG table of power source A, whereas, the lower part shows the table of consumer D.
As shown in the figure, each PG table consists of six parts (i)-(vi). In the case of power source A (upper part of the figure), these parts are as follows: (i) Indicates the target consumer for which the current value of PG is maximum among all ones. (ii) Indicates the power demand [kW] of the target consumer. (iii) Indicates the length of the power path to the target. (iv) Indicates the PG value, i.e. the division of (ii) by (iii). (v) Indicates the power path from the owner node to the target. (vi) Indicates the status of the power slots throughout the power path: a circle means that the corresponding power slots within the power path are not reserved. On the other hand, the cross means that at least one power slot at some node is reserved.
In the case of router (E or F in Fig. 3), the definitions of these contents are the same. Whereas, they are somewhat different in the case of consumer D (lower part of the figure) as follows: (i) Indicates the consumer itself. This means the target is identical to the owner node.
(ii) Indicates the power demand [kW] of the consumer itself.
(iii) Equals zero because the target is itself.
(v) Indicates only the consumer. No power path exists.
(vi) Indicates which power slots are in use at the consumer. At the power slots denoted by crosses, the consumer may currently receive electric pulses from some power sources in the network.
PG table update process
The PG table update is executed at every node in the network simultaneously. In this subsection, each update process at an individual node is described. The process is defined according to the node classification.
In the case of the consumer, it updates the contents in part (ii) and (vi) in the lower part of Fig. 4. No need to change other parts. On part (ii), its current power demand is written. On part (vi), the current status of each power slot is marked.
On the other hand, in the case of router or power source, the PG table update process is repeated until the target of every power source is determined according to the power demand of every consumer [The required repetition time for PG table update may have some relevance to the network volume. However, a detailed investigation of the network behaviour assures the network scalability with the PG method. This subject is discussed in the final section.]. One process in this repetition consists of synchronised two stages as follows.
At the first stage, every node makes a copy of its PG table (this copy is called PG buffer). Next, the node refers to all of the PG tables of adjacent nodes through the communication links. Then, based on the referred information, parts (i)-(vi) of the PG buffer are updated.
At the second stage, the node overwrites its own PG table with the updated PG buffer. At this point, the PG table update is accomplished.
Since these stages are synchronised over the network [As mentioned in Section 2, GPS time signal is available for this synchronisation [14].], and the PG buffer is adopted for the temporary table update, the referred information from adjacent nodes is kept stable during the first stage.
The PG buffer update in the first stage is broken down into the following three processes (the owner of the PG buffer is called the owner): First, among all adjacent nodes, the owner selects candidates and discards the others. The requirements for this selection are as follows: (i) The candidate should be a consumer or a router. In contrast, power sources are discarded. (ii) The power path (v) indicated in the PG table of the candidate should not include the owner. (iii) The logical product of the power slot status (vi) indicated in the PG table of the candidate and the status of the owner [This status is not part (vi) in the PG table of the owner. However, the status of the owner itself is similar to part (vi) of consumer D in Fig. 4.] should have at least one true. Here, the logical product is derived by replacing circle and cross in each slot with true and false, respectively. Therefore, a derived true means the slot is reservable towards the target.
Among the above requirements, the second one avoids the meaningless power path loop occurring. The third one assures the power path to the target via the candidate with at least one reservable slot.
Second, the owner calculates all of the PGs of the targets in the PG tables of the selected candidates. In this calculation, the power path length is the addition of that indicated in the PG table and the link length from the owner to the candidate (examples are shown in Fig. 5). Among these derived PGs, the maximum is selected and the associated adjacent node is determined (this finally selected node is called the next node [Since the node is the next hop to the target.] and its PG table is called the next PG table).
Finally, the owner updates its PG buffer with the next PG table as follows: (i) Parts (i) and (ii) of the PG buffer are replaced with that of the next PG table.
(ii) Part (iii) is replaced with the addition of that of the next PG table and link length from the owner to the next node. (iii) Part (iv) is replaced with the division of (ii) by (iii). (iv) Part (v) is once replaced with that of the next PG table. Then, the owner node is added as the first node of power path.
(v) Part (vi) is replaced with the logical product of that of the next PG table and the status of the power slots of the owner node.

Example of PG table update

Fig. 5 shows an example of a PG table update. The upper part of the figure shows a part of the network that consists of consumers F-H and routers A-E. In this network, focusing on router A, an example of its PG table update process is described [As described before, this process is executed similarly by every power source and router in the network simultaneously.] as follows:
(i) At the beginning of the update (the first synchronised stage described in the previous subsection), router A makes the PG buffer and refers to the PG tables of the four adjacent nodes B, C, D, and G. These tables are shown in the lower part of Fig. 5. According to the requirements described in the previous subsection, router A discards the PG tables of C and D. In the case of C, the power path includes A itself (indicated by a dashed circle), whereas in the case of D, the logical product of the slot status (dashed frame) and the status of A (grey) leaves no reservable slot. (ii) Since the targets of the selected nodes B and G are F and G, respectively, router A calculates PG af and PG ag. Considering the distances of B and G from A (indicated in the figure), PG af = 6.4
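The following sketch shows, under assumed and simplified data structures (the paper's six-part PG table is reduced to a few fields, and the consumer/router/source distinction of candidates is omitted), how one PG-buffer update step at a router could look:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class PGTable:
    target: str                                      # (i) target consumer
    demand: float                                    # (ii) power demand of the target
    path_length: float                               # (iii) length of the power path
    path: List[str] = field(default_factory=list)    # (v) nodes along the path
    slots: List[bool] = field(default_factory=list)  # (vi) True = slot still reservable

def update_buffer(owner: str, owner_slots: List[bool],
                  neighbours: Dict[str, PGTable],
                  link_length: Dict[str, float]) -> Optional[PGTable]:
    """Select the next node among candidate neighbours and build the PG buffer."""
    best, best_pg = None, -1.0
    for name, tbl in neighbours.items():
        if owner in tbl.path:                        # requirement: avoid power path loops
            continue
        product = [a and b for a, b in zip(tbl.slots, owner_slots)]
        if not any(product):                         # requirement: a reservable slot remains
            continue
        pg = tbl.demand / (tbl.path_length + link_length[name])
        if pg > best_pg:
            best_pg, best = pg, (name, tbl, product)
    if best is None:
        return None
    name, tbl, product = best
    return PGTable(tbl.target, tbl.demand,
                   tbl.path_length + link_length[name],
                   [owner] + tbl.path, product)
```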
Operation procedure of pulsed power network
Based on the PG method described in the previous section, the pulsed power network is operated according to the power demand alterations of consumers. In this section, the total system operation procedure of the network is explained, including power path establishment and release.
The system operation procedure consists of the repetition of the preparation process and the repetition of the execution process. These processes run concurrently and their time interval is adjusted to the synchronised frame of the pulsed power network.
In the preparation process, every power path establishment and release is planned for the execution process of the next time interval. Accordingly, every schedule of electric pulse transmission, reception, and relaying for the next interval is determined at the individual nodes of power source, consumer, and router, respectively.
Whereas, in the execution process, every node executes the scheduled task determined in the previous preparation process. Therefore, actual power transmissions through the power paths, and their releases begin with this execution process.
The rough time chart of the preparation process is shown in Fig. 6 over one interval of a synchronised frame.
(i) At the beginning point (a) of the interval, every consumer examines its current power consumption and power reception. If the former exceeds the latter, the difference means the power demand [This power shortage is temporarily complemented by the short-time power storage described in Section 2.]. On the other hand, if the power reception of the consumer exceeds its consumption, the difference means the power excess [This power excess may be used to charge the power storage.]. In the case of power excess, the consumer releases the excess power paths as described in item (ii). (ii) During the interval (b), every consumer with power excess releases the excess power paths. Every consumer is assumed to store information on all the power paths that the consumer currently terminates. The information includes the power source and the intermediate relay nodes to the source. During the interval (b), the consumer with power excess communicates with the power sources and releases the power slots throughout the excess power paths. (iii) At point (c), every node resets its PG table. Each consumer records its current power demand [No power excess remains because the excess power paths are released during the interval (b).] and the current status of its power slots on the table. Power sources and routers clear all of the contents of their PG tables. In addition, every node reconstructs its adjacency list, which consists of the indexes of adjacent nodes. This process is necessary because the network topology may become unstable owing to unexpected node troubles, including synchronisation failures [Power transmission breakages caused by these troubles are soon restored through the reconstructed network.]. (iv) During the interval (d), every router and power source updates its PG table. They repeat the two synchronised stages described in Section 3.3. The repetition time is predetermined in relation to the network volume.
(v) At point (e), every power source decides its target consumer according to its PG table. At the same time, the power path to the target and the reservable power slots throughout the path are determined. (vi) During interval (f), every power source reserves the power path with one power slot to the target consumer. This elemental power path is called an elemental path. Through this elemental path, one electric pulse is transmitted every synchronised frame to the target. The power transmitted by one pulse every frame is called elemental power. The reservation process of the elemental path to the target begins with reservation signal transmission through the path. If the reservation fails and an error signal returns, the power source retries with another reservable slot. Details of this power path reservation are described in the previous paper [2].
Computer simulation
In this section, the results of computer simulations are presented that confirm the performance of the pulsed power network with the proposed PG method.
Among the requirements for the system operation method itemised in Section 1, the first one (consumer demand priority) is obviously satisfied in the PG method as described in Section 3. On the other hand, the second requirement (power transmission simultaneousness) is already confirmed by simulations [13] as the inherent property of pulsed power networking.
In this section, the third requirement (system scalability) is focused on and associated system performances of the pulsed power network with the PG method are confirmed by simulations.
As described in Section 3.1, the target selection sharing among routers assures the scalability of pulsed power networks. On the other hand, the PG method additionally assures the network scalability by soft clustering. In the following, first, the soft clustering and system scalability is explained. Second, the simulation model is introduced with a moderately large volume and the results of the simulations are described concerning the soft clustering.
Soft clustering
When pulsed power distribution with the PG method is applied to a large-scale power grid, the grid is autonomously divided into soft clusters, where each one consists of a central power source and surrounding consumers that receive power from the centre node. This means that power transfers are almost completed within each cluster. The word 'soft' means that the circumference of each cluster is adaptively modified and may overlap with neighbouring ones because of the property of the PG method.
No matter how large the power grid is designed, the power distribution over the system performs as an aggregation of individual soft clusters. Owing to this soft clustering, first, system scalability is obtained. Second, high system reliability is assured against partial failures within the system. Fig. 7 shows an example of soft clustering. In this figure, three clusters A, B, and C exist as neighbours. These clusters include power sources A s , B s , and C s , respectively. The other nodes D-H are consumers. Within cluster C, the nearby consumers G and H receive power from the centre node C s . Whereas, because consumer F is located intermediately between B s and C s , F becomes the target of both power sources. Therefore, clusters B and C share this node and their circumferences partially overlap each other.
The system reliability against a partial failure is explained by cluster A. The dashed circumference and arrow mean that cluster A disappears due to the failure of A s . In this case, because the power demand of consumer D increases with no power supply, D becomes the target of B s . Accordingly, the circumference of cluster B is modified and includes consumer D as shown in the figure.

Simulation model

(i) The average P avr and deviation P dev of the consumer power demands P i are initially set; then the power demand P i of each consumer i is assigned randomly between P avr ± P dev .
(ii) After the simulation begins, every node in the network operates itself following the procedure described in the previous section. Accordingly, every power source decides its target consumer at point (e) in Fig. 6 and increases the elemental power to the target. As a result, the target decreases its power demand [In the simulations, only elemental path increases and the associated behaviour of the soft clusters are observed. Elemental path releases during (b) in Fig. 6 are not simulated.].
(iii) As the power demand of a consumer decreases, its PG at the power source also decreases. Therefore, the target of a power source may change to another consumer frequently. For this reason, the power demand of every consumer decreases almost uniformly and finally becomes zero.

Simulation results

Fig. 9 shows a simulation result that indicates the increase of power paths from the four power sources A s -D s to area A. The horizontal axis represents the number of synchronised frames counted from the beginning of the simulation. The actual elapsed time is derived as the product of this number and the frame duration time of 5 s. The vertical axis represents the number of power paths established.
At the beginning, P avr is set to 10, whereas P dev is set to 0 or 10. The results of the former and latter cases are indicated by solid and dashed lines, respectively.
In this simulation, focusing on area A only, the following properties of the soft clustering are expected: (i) According to the PG method, where each power source considers the power demand of consumers and the distance to their locations, consumers in area A receive power almost exclusively from A s , especially when the initial deviation P dev of power demand equals 0.
(ii) Even when P dev = 10, this initial deviation may decrease because of the property of the PG method whereby a consumer with a large power demand tends to be supplied with power first. Therefore, the influence of P dev may gradually become insignificant.
These properties are confirmed by the simulation results indicated in Fig. 9. First, as the line 'A s → area A' (closely gathered solid and dashed lines) indicates, almost all power demand in area A is satisfied by power transmissions from A s only. Since the total power demand in this area equals 350 (the average power demand of 10 multiplied by 35 consumers), the number of power paths does not exceed this value. This maximum point appears when the synchronised frame count reaches 350, as shown by the dashed vertical line. This is because A s is assumed to add one elemental path to its target every frame and the total power demand in area A is 350. This frame count of 350 indicates the fulfilment time at which every consumer's power demand is satisfied.
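A back-of-the-envelope check of this fulfilment time, using only the numbers quoted in the text:

```python
# 35 consumers in area A, average demand 10, one elemental path added per frame,
# frame duration 5 s (values quoted in the text).
consumers, avg_demand, frame_s = 35, 10, 5
total_paths = consumers * avg_demand            # 350 elemental paths are needed
fulfil_frames = total_paths                     # A_s adds one path per frame
print(fulfil_frames, fulfil_frames * frame_s)   # 350 frames, i.e. 1750 s of elapsed time
```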
However, a slight deviation from these descriptions appears in Fig. 9, caused by a small contribution from other power sources. In the case of an initial power demand deviation P dev = 0, power paths from B s and D s appear at about 150 on the horizontal axis and increase slightly [Owing to the geometrical symmetry in relation to area A, the distinction between B s and D s is omitted.]. Whereas, in the case of P dev = 10, the appearance point moves forward to about 100. However, the difference between these two cases decreases as the simulation proceeds. This confirms the second point itemised above: the influence of P dev may gradually become insignificant. Fig. 10 shows a simulation result that confirms the system reliability based on soft clustering. In this simulation, power source A s in Fig. 8 is assumed to have failed and the other sources B s -D s transmit power to consumers in area A as substitutes. In other words, concerning the clustering image, the failure of A s incurs the gradual replacement of cluster A by the expanding neighbours B-D, until finally the whole region of cluster A is replaced by the neighbours. In this case, although the network's ability of power distribution decreases because of the failure of A s , and the fulfilment time of the consumers is therefore delayed, every consumer power demand in area A is finally satisfied.
The horizontal and vertical axes of Fig. 10 represent the same quantities as in Fig. 9. Differing from the previous simulation, power transmissions to area A from B s to D s increase noticeably, and the fulfilment time of consumer power demands increases up to 471 or 455 frames when P dev = 0 or 10, respectively. This confirms the substitution of the failed power source A s by other sources that initially belong to neighbouring clusters. Owing to the geometrical position of C s , power transmission from this source to area A always remains lower than from the other sources.
Conclusion
The PG method is proposed for the system scalability of pulsed power networks. In pulsed power networks, each power source transmits power to its target consumer by a series of electric pulses located on pre-reserved power slots in synchronised frames. With the proposed PG method, each power source selects the target consumer based on the PG that equals the division of power demand of the consumer by the distance to its location. The system scalability is brought to pulsed power networks by two properties of the PG method: process sharing of target consumer determination at each power source with other nodes, and soft clustering that autonomously divides extended power grid depending on the current system status.
Simulations are executed to confirm the performance of a pulsed power network with the PG method, where a moderately large simulation model is adopted that is divided equally into four areas A-D. The simulation results are as follows: (i) Consumers in area A are almost entirely satisfied in their power demand by the central power source of the area, especially when the initial deviation of power demand is set low. (ii) When the initial deviation is set high, the circumferences of the clusters are modified autonomously and power transmissions to area A from neighbouring areas increase. (iii) When the power source in area A fails, the cluster surrounding the failed source disappears and its region is divided among the neighbouring areas. As a result, consumers in area A satisfy their power demand.
The first and second results confirm the autonomy and flexibility of soft clustering. The third one confirms the reliability of the pulsed power network with the PG method.
In Section 3.3, the relevance between the network volume and the required repetition time for the PG table update is referred to. This relevance possibly impairs the scalability of pulsed power networks. However, concerning the soft clustering in actual system operations, this problem may not seriously affect the scalability. In usual network configurations, where power sources are dispersed almost evenly, PG data of nodes outside a cluster scarcely arrive at the central power source. Such distant data may be discarded on the circumference of the cluster. Therefore, in this case, the repetition time for the PG table update is roughly determined by the average cluster size, or several times larger, and there is no need to account for the network volume itself.
However, in exceptional cases, such as when most power sources fail because of a serious disaster and a limited number of surviving power sources transmit power to distant consumers, the required repetition time for the PG table update may exceed the pre-determined value. This problem should be investigated further in future studies. | 8,652 | 2020-10-06T00:00:00.000 | [
"Physics"
] |
MEASUREMENT TECHNIQUES USED FOR ANALYSIS OF THE GEOMETRIC STRUCTURE OF MACHINED SURFACES
Received: 5 May 2014. Accepted: 10 June 2014. Abstract: The quality of machined surfaces, resulting from the manufacturing process and conditioning their functionality, is determined by the surface geometric structure (SGS). There is a close relationship between surface properties, shape, qualitative imaging of the surface topography, and the technique and technology employed for machining purposes [1, 2]. If a given surface is to have practical applications in engineering, the correct technological process needs to be chosen. In the paper, various techniques used for measuring the surface geometric structure were described. The results of the study, which were obtained from different measuring devices such as Atomic Force Microscopy (AFM), Scanning Electron Microscopy (SEM) and the Optical Interferometer (WLI), were presented. Optical Microscopy (OM) was shown to be a helpful device for analysing some aspects of surface topography. Each measuring technique provided different, yet complementary, data on the topography of the machined surfaces. Owing to this, a full characterization of the geometric surface structure of the machined surfaces was enabled, including surface properties resulting from the applied technological process. Based on the measurements made, the characteristics of the chosen devices (measurement techniques) were defined, with an indication of how they can be applied to the analysis of the surface geometric structure (SGS). The devices which are considered to give the best view of the examined surfaces and allow a thorough analysis of their irregularities were then indicated.
Introduction
The surface geometric structure (SGS) is the outcome of the machining (manufacturing) process of products [3]. Therefore, the quality of machined surfaces can be judged by the surface geometric structure.
The analysis of the SGS is necessary and essential for assessing the surface features.
The analysis of the SGS consists of three parts: describing measurement methods (techniques), presenting a surface, and conducting a parametric assessment of the surface.
The basis for the analysis of the topography of a given surface is the selection of appropriate measurement techniques that will enable proper description of this surface and subsequent evaluation of its shape based on obtained images and geometric parameters.
There are many techniques for measuring the surface geometric structure (Fig. 1). None of them, however, if used alone, can give a complete description of the examined surfaces. It is advisable to employ a variety of techniques to obtain complementary information on the surface topography, which will facilitate the interpretation of obtained results [6,7].
The presentation of a machined surface involves connecting the measured/scanned points so that the obtained image represents the tested surface [8]. There are two ways to present a measured surface: it can be shown with the use of a contour map as well as using an isometric view created with an axonometric projection.
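A minimal sketch (synthetic height data, not a measured surface) of the two presentation modes mentioned above, a contour map and an isometric view:

```python
# Present a synthetic height map as a contour map and as an isometric 3-D view.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
z = 0.2 * np.sin(25 * x) * np.cos(18 * y)          # synthetic surface heights

fig = plt.figure(figsize=(9, 4))
ax1 = fig.add_subplot(1, 2, 1)
ax1.contourf(x, y, z, levels=20)                   # contour-map presentation
ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.plot_surface(x, y, z, cmap="viridis")          # isometric (axonometric) view
plt.show()
```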
The assessment of a machined surface can be quantitative as well as qualitative. A quantitative assessment requires the determination of the parameters describing the measured surface. This is possible due to the developed characteristics of the surface geometric structure (3D), which, similarly to the 2-dimensional profile (2D), were divided into functions and parameters; the details were discussed, inter alia, in references [2,9,10].
A quality assessment is based on the analysis of images which are obtained from surface measurements taken with the use of a variety of devices (measurement techniques).
Characteristics of research materials
The surfaces of elements made of tool steel (material Type A) and oxide ceramic (material Type B) were studied.
The surfaces of the elements made of tool steel were subject to electric discharge machining (further referred to as EDM). The EDM process was performed using copper electrodes; cosmetic kerosene was used as the dielectric liquid. Pulses were delivered by a transistor-controlled generator which allowed the energy of single discharges to be controlled.
The surfaces of the elements made of oxide ceramic were subject to an abrasive process (lapping). Diamond micropowder lapping paste was used as the abrasive. During the machining process, the granulation of the diamond micropowder was changed until the desired surface had been achieved.
Methodology of research
The geometric structure of the machined surfaces obtained from the machining processes (erosive and abrasive) was tested with the use of the following four research devices: atomic force microscopy (AFM), scanning electron microscopy (SEM), white light interferometry (WLI), and optical microscopy (OM). The tests were done in the Institute for Sustainable Technologies - National Research Institute (Department of Tribology) and the Institute of Metallurgy and Materials Science - Polish Academy of Sciences.
Atomic force microscopy (AFM), Fig. 2, allows images of surfaces to be captured with a resolving power of the nm order, thanks to the use of the interatomic van der Waals forces. The surface is scanned by a sharp tip attached to the end of a flexible lever (the cantilever). In this method, the laser beam is reflected off the back of the cantilever and collected by the photodiode detector [8,11,12]. AFM works in two modes: contact and non-contact. The operating principle of the AFM is based on the measurement of the interaction forces between the cantilever and the tested surface while it is being scanned.
The advantage of the AFM is a very good resolution in the z-axis and the high quality of images; whereas its drawback is the small measurement range -the scanning area is no larger than 100×100 µm.
The parameters during research: non-contact mode, the scanning area 30×30 µm.
Scanning electron microscopy (SEM), Fig. 3, allows, among other things, a qualitative analysis of surface irregularities. The working principle of the SEM is the emission of secondary electrons from a sample, which is excited by the incident electron beam directed onto the tested area. The secondary electrons are formed by collisions of the incident electrons with the sample atoms, which release electrons with lower energy [11,12].
A large depth of field, high resolution [12] and the quality of obtained images are the advantages of SEM. The drawbacks are the necessity of using a vacuum and a small range in the Z-axis.
The parameters during research: non-contact mode, magnification ×200 for material Type A and ×2000 for material Type B.
The optical interferometer allows the surface geometric structure to be captured with an ultra-high vertical resolution, up to 10 pm (regardless of the applied magnification) [8,12]. Its operating principle is based on one of the varieties of white light interferometry (WLI), Fig. 4, so-called scanning broadband interferometry (SBI). The advantage of the WLI is a large measuring range as compared with the aforementioned devices, great accuracy of scanning, and a good resolution. The disadvantage, however, is a relatively small measurement area.
The parameters during research: the sensitivity in the Z-axis is 0.01 nm, the scanning area 1.65×1.65 mm, the objective lens (Mirau [13]) ×10.
Optical microscopy (OM) with digital video recording allows images of sample surfaces to be captured at different magnifications and consecutive fields of view to be recorded directly.
The advantage of the OM is that, compared with other techniques, it allows observation of large areas of a surface. On the other hand, it fails to show the features of surfaces described by low roughness parameters, or machined surfaces characterized by high technological quality, which may be considered a disadvantage of the device.
The use of different measurement devices (techniques) made it possible to collect complementary information on the surface characteristics (including irregularities) formed in the machining process, as well as enabling the analysis and interpretation of the results.
For the purpose of a quantitative assessment of the machined surfaces, sophisticated metrology software was used (the TalyMap v.6.1 and Motic Images Plus v.2.0 programs).
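For illustration, areal roughness parameters such as Sa and Sq can be computed from a measured height map as in the minimal sketch below (generic textbook definitions on synthetic data, not the TalyMap implementation):

```python
# Minimal sketch of two areal roughness parameters (Sa, Sq) computed from a
# height map; generic definitions, not the TalyMap algorithm.
import numpy as np

def areal_roughness(z):
    """Return (Sa, Sq) for a 2-D array of surface heights."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                  # reference plane: mean height
    sa = np.abs(z).mean()             # arithmetic mean deviation
    sq = np.sqrt((z ** 2).mean())     # root-mean-square deviation
    return sa, sq

heights = np.random.normal(0.0, 0.5, size=(256, 256))  # synthetic surface heights
print(areal_roughness(heights))
```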
Results and discussion
The machined surfaces obtained from the manufacturing process were analyzed qualitatively and quantitatively. Selected results are shown in four figures (Figs. 5-8) and in Table 1. Table 1 presents the parameters describing the condition of the machined surfaces, which allows their quantitative assessment. The images in the figures show the differences between the machined surfaces that result from the different treatment methods.
On the surfaces obtained from electric discharge machining (Type A), various types of surface irregularities can be seen. They take the form of craters (cupped concavities), cavities (empty or filled with treatment products), remelted areas, burrs (material elements resembling droplets), a few cracks (surface discontinuities), and spheroids (balls of material).
The surface geometric structure formed in the EDM process is the result of mutually overlapping craters (resembling spherical bowls) and the other irregularities mentioned above.
On the surfaces subjected to lapping (Type B), characteristic scratches left by the hard abrasive diamond micrograins can be seen. There are also some traces of the previous processing (grinding), resulting from the short lapping time of the samples. In addition, many crumbled bits were observed on the machined surface, which results from the material properties (high hardness and thus increased brittleness).
During the research, it turned out that not every type of machined surface could be measured with every device. To some extent, this is related to the topography of the machined surfaces (too rough or too smooth) and the limitations of the measuring devices. For this reason, no measurement results were obtained from AFM for the surface of Type A (too high surface roughness) or from OM for the surface of Type B (too smooth a surface, with barely visible surface defects).
To gather information and conduct a quantitative assessment of the machined surfaces, two types of metrology software were used. From the data presented in Table 1 and the images shown in Figs. 5-8, it can be inferred that the surfaces have different geometric structures.
The roughness parameters of the machined surface measured with the WLI were obtained with the TalyMap v6.1 program.
On the surface measured with the WLI, we cannot see the surface irregularities that emerged when the same surface was measured with scanning electron microscopy (SEM). Measuring these irregularities (features) was possible thanks to the Motic Images Plus v2.0 program.
White light interferometry (WLI) and scanning electron microscopy (SEM), providing complementary information on the samples, together allowed a comprehensive analysis of the machined surface. SEM gives a real image of the measured surface with all its irregularities, which allows a qualitative assessment of the machined surfaces, whereas a quantitative assessment of these surfaces is enabled by WLI combined with the metrology software.
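To make the quantitative side concrete, the sketch below (not part of the original study) computes the basic areal roughness parameters Sa, Sq, and Sz from a height map, which is the kind of calculation packages such as TalyMap perform; the demo surface and its dimensions are invented.

```python
import numpy as np

def areal_roughness(height_map_um):
    """Compute basic ISO 25178 areal parameters from a height map (micrometres).

    Sa: arithmetic mean deviation, Sq: root-mean-square deviation,
    Sz: maximum height (peak-to-valley). A real instrument also removes
    tilt and applies filtering; this sketch only subtracts the mean height.
    """
    z = np.asarray(height_map_um, dtype=float)
    z = z - z.mean()                      # crude levelling: subtract mean height
    sa = np.mean(np.abs(z))               # arithmetic mean deviation
    sq = np.sqrt(np.mean(z ** 2))         # RMS deviation
    sz = z.max() - z.min()                # maximum height of the surface
    return sa, sq, sz

# Hypothetical WLI scan represented as a small random height field (um).
rng = np.random.default_rng(0)
demo_surface = rng.normal(loc=0.0, scale=0.8, size=(256, 256))
sa, sq, sz = areal_roughness(demo_surface)
print(f"Sa = {sa:.3f} um, Sq = {sq:.3f} um, Sz = {sz:.3f} um")
```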
Conclusions
This paper offers a short overview of selected measurement devices (techniques) useful in the analysis of machined surfaces. Some capabilities of metrology software facilitating the analysis and assessment of the surface geometric structure (SGS) were shown as well. It should be noted that:
• Atomic force microscopy (AFM): shows small surface areas, providing high-quality images, and enables viewing details of a machined surface within the measured areas (see the authors' other works [14,15]).
• Scanning electron microscopy (SEM): allows measuring and imaging of the surface microstructure; if metrology software is used for the analysis, surface irregularities can be measured easily, including small defects that could not be captured with AFM or WLI (see the authors' other work [14]).
• Optical interferometer (WLI): allows measurement of all types of surfaces and enables an accurate quantitative assessment of the measured surfaces using specialized software; defects in the form of cavities, hills, or wear products deposited on machined surfaces can be measured with high accuracy (see the authors' other works [14,15]).
• Optical microscopy (OM): allows measurement of surfaces characterized by large roughness; furthermore, it shows large surface areas, thus exposing more defects, including wear products deposited on these surfaces (see the authors' other work [16]).
We would like to gratefully thank the employees of the Tribology Department of the Institute for Sustainable Technologies -National Research Institute in Radom for all their help and guidance in carrying out the research.
Figures 5 to 8 contain the images obtained with the four research devices. The results make it possible to evaluate the quality of the machined surfaces.
Table 1
Measurement results - metrology software. | 2,826.6 | 2014-06-01T00:00:00.000 | [ "Engineering", "Materials Science" ] |
2018 - The design of smart notification on Android gadget for academic
Abstract. In this article, we design the architecture of a smart notification system using an Android gadget for academic notification in college. Academic notification in colleges currently relies on bulletin boards and online media such as websites or social media. The problem faced is the high cost and resources required to deliver academic notifications. Another problem is whether the information delivered reaches the students who actually need it. We propose the architecture of a smart notification system that can reduce the cost, and the information delivered can be right on target to the students in
Related applications include queuing-time notification systems that utilize data-sensing [3], personal activity monitoring [4], and personal security tracking [5]. Notification systems are also used in the health area [6-8] and for applications relating to moving objects [9,10]. This research is also related to our previous studies: earlier we focused on the design of academic notification software, whereas here the focus lies in the design of the academic notification system architecture. Related work includes an Android-based notification system used to analyze wireless network services such as access speed and internet bandwidth [11], and a very useful study that uses an Android-based notification system to indicate the safest and fastest route out of a disaster area [12]. The amount of research utilizing Android-based notification systems shows the current trend of technology usage, so we consider research on notification systems based on Android gadgets to be still highly relevant.
Research Method
In designing the notification system architecture, we define the logical steps required to achieve the stated objectives. The steps needed to design the architecture of the academic notification system can be seen in Figure 2. The initial phase of the research starts with finding and studying related research and publications by other researchers. By conducting a literature study, we gain an understanding of the state of the art, of the gaps that can be treated as opportunities, and of the weaknesses of the research done before. Understanding the current state is very important as the starting position of the research. To gain this understanding of strengths, weaknesses, and research opportunities, a research mechanism is needed.
Analysis is the mechanism that must be carried out to gain an understanding of a condition or a system. The result of the analysis is a set of requirements and specifications for the notification system architecture to be developed. With the requirements and system specification in hand, researchers can begin to design the system architecture, from the basic architecture to a final architecture ready to use. The addition of features and the maintenance process can then be carried out continuously, in accordance with the needs of the system and end users.
Proposed Design
The main problem faced by the provider of academic notification services in a college is whether the academic information sent actually reaches the students who need it. The accuracy of delivering academic information to the students who need it is strongly influenced by the media used by the notification service provider in the college. To overcome this problem, while maintaining the coverage that can be achieved - which affects the time and cost required - we first define the basic architecture of the academic notification system. Designing this architecture requires an understanding of its components, listed in Table 1:
• Back-end app: usually a database on a college server, controlled and monitored by college admins and operators; it sends and manages academic notifications and is authorized to view, add, or remove students.
• Cloud (Internet): an operator that provides the notification service appropriate to the operating system used by the student, for example Firebase Cloud Messaging [13]; it receives a notification registration request and forwards it to the back-end app.
• Mobile Device Platform: the component used to broadcast academic notifications.
• Android Gadget: the device used to receive the academic notification messages sent by the operator.
After knowing and understanding the required components, we started to design the basic architecture of the academic notification system. So based on the design results obtained basic architecture as in Figure 3, basic architecture consists of back-end apps in the form of databases, mobile device platforms, the cloud or Internet, and Android-gadgets, such as smartphones, smart-glass, tablet pc, netbook or smartwatch.
The flow of academic notification services in the basic architecture is as follows. First, a student registers for the academic notification service through a gadget device with the notification service provider (cloud); at this point the notification service is still generic. The notification service provider then forwards the request to the back-end server for approval, and the student is added to the academic notification system database in the college. Student data are stored under a unique ID, which differentiates each student from the others. After the storage process is complete, the back-end app forwards the service confirmation through the notification service provider. To send the confirmation and subsequent academic information, the notification service provider uses the Mobile Device Platform, which sends the confirmation and academic notification messages to the students. If many students register at the same time, confirmation messages can be broadcast simultaneously. Students receive the confirmation message and thereby obtain the academic information notification service. A minimal sketch of the broadcast step is given below.
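As an illustration of the broadcast step, the sketch below assumes the back-end app uses the firebase-admin Python SDK to push a topic message through Firebase Cloud Messaging [13]; the key file, topic name, and message text are placeholders, since the paper does not prescribe a concrete implementation.

```python
# Hypothetical broadcast step for the back-end app, using the firebase-admin
# Python SDK and Firebase Cloud Messaging (FCM) topics. File names, topic
# names and message text are placeholders, not part of the original paper.
import firebase_admin
from firebase_admin import credentials, messaging

cred = credentials.Certificate("service-account.json")  # placeholder key file
firebase_admin.initialize_app(cred)

def broadcast_academic_notification(title: str, body: str, topic: str = "academic") -> str:
    """Send one notification to every gadget subscribed to the given topic."""
    message = messaging.Message(
        notification=messaging.Notification(title=title, body=body),
        topic=topic,
    )
    return messaging.send(message)  # returns the FCM message ID

message_id = broadcast_academic_notification(
    "Exam schedule", "The final exam schedule has been published.")
print("Sent:", message_id)
```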
The basic architecture can be used for a notification system with a small number of gadget devices, because it has some limitations in accommodating a large number of them. One condition that must be fulfilled is that the gadget devices must be homogeneous: if there are several different types of devices, the notification message must be delivered several times, which is not effective and requires much time and effort just to send an academic notification message. The basic architecture design is nevertheless needed as a first step toward a better architecture - in other words, to build an integrated system we have to build the smaller subsystems first [14]. To overcome the limitations of the basic architecture, we propose a better architectural design involving some additional components, as shown in Figure 4. On the one hand, the additional components add to the costs incurred, but they pay off when used to broadcast academic notification messages. The additional components in the advanced architecture design have the following functions, as shown in Table 2. The presence of a hub and of additionally connected gadget devices is optional, depending on the needs and conditions in the college: for a college with a very large number of students the use of a hub will be very helpful, but for a college with few students and homogeneous gadget devices a hub can be wasteful, and as more devices are connected, the operational cost of maintenance grows.
• Secure-Link: ensures that data traffic contains legitimate information and does not come from a third party spreading hoaxes, advertisements, or spam. Because the Internet always leaves room for criminal action, the secure link must always be active; it may be a firewall or another security method that can detect non-legitimate information, as in our previous research [15].
• Hub: when the operating systems of the connected devices do not come from a single vendor, a hub is needed as a medium that can translate notification messages from the mobile platform, so that the messages can be recognized by the various connected devices. In a college, students usually use gadget devices from various vendors; without a hub, one mobile platform is needed for each type of gadget device (five device types would require five different mobile platforms), which is ineffective. The hub in the advanced architecture overcomes this limitation of the mobile platform devices.
Results and Discussion
Next, we perform a simple calculation-based simulation to evaluate the performance of the proposed architectural design; the result is shown in Figure 5. We simulate the performance of the notification system architecture in terms of the cost required to perform a single broadcast of a notification message. The calculation is repeated 10 times for reliable results, choosing the value of each variable at random. The variables used in the calculation are the number of gadgets, the number of messages, and the cost per message. The results show that the total cost required for a one-time delivery of academic notification messages with the advanced architecture is lower than with the basic architecture. As the number of gadgets of different types increases, the proposed architecture can still reduce the cost required to broadcast the academic notification message.
We use randomly selected variable values for each test. In the first through tenth simulations, even as the number of notification messages sent to students grows and the cost per message soars, the proposed design keeps the total cost at a minimum. In the 3rd simulation, with 220 messages, the cost needed for the broadcast is less than ten thousand IDR; in the 8th simulation the same holds, and with 750 messages the proposed architecture still keeps the cost as low as possible. The basic architecture, on the contrary, has greater complexity and cost: in the initial conditions of the simulation the cost difference is not large, but as the number of messages sent increases, the difference in the costs incurred grows. The sketch below illustrates one way such a comparison could be computed.
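The paper does not state its cost formula, so the sketch below uses an assumed model: without a hub, each gadget type requires its own broadcast, whereas the hub lets a single broadcast (with a small overhead) reach all types; all variable ranges are illustrative.

```python
import random

def basic_cost(n_gadget_types: int, n_messages: int, cost_per_message: float) -> float:
    # Assumed model: without a hub, each gadget type needs its own broadcast,
    # so every message is sent once per gadget type.
    return n_gadget_types * n_messages * cost_per_message

def advanced_cost(n_gadget_types: int, n_messages: int, cost_per_message: float,
                  hub_overhead: float = 0.10) -> float:
    # Assumed model: the hub translates one broadcast for all gadget types,
    # at a small fixed relative overhead per message.
    return n_messages * cost_per_message * (1.0 + hub_overhead)

random.seed(1)
for run in range(1, 11):                      # ten simulation runs, as in the paper
    types = random.randint(2, 5)              # number of different gadget types
    messages = random.randint(100, 1000)      # notification messages per broadcast
    unit_cost = random.uniform(10, 50)        # cost per message, IDR (illustrative)
    print(f"run {run:2d}: basic = {basic_cost(types, messages, unit_cost):>10.0f} IDR, "
          f"advanced = {advanced_cost(types, messages, unit_cost):>10.0f} IDR")
```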
Conclusion
Based on the simulation testing that has been carried out, the proposed notification system architecture can meet the need to send notification messages in massive numbers at low cost, while maintaining the originality of the notification messages sent and serving several different types of gadget devices. We realize that implementing the proposed architecture still requires more in-depth research, because several external factors that have an influence were not considered in this study. | 2,367.4 | 2019-02-01T00:00:00.000 | [ "Computer Science" ] |
Inhibition of Human Endogenous Retrovirus-K10 Protease in Cell-free and Cell-based Assays*
A full-length and a C-terminally truncated version of human endogenous retrovirus (HERV)-K10 protease were expressed in Escherichia coli and purified to homogeneity. Both versions of the protease efficiently processed HERV-K10 Gag polyprotein substrate. HERV-K10 Gag was also cleaved by human immunodeficiency virus, type 1 (HIV-1) protease, although at different sites. To identify compounds that could inhibit protein processing dependent on the HERV-K10 protease, a series of cyclic ureas that had previously been shown to inhibit HIV-1 protease was tested. Several symmetric bisamides acted as very potent inhibitors of both the truncated and full-length forms of HERV-K10 protease, in the subnanomolar or nanomolar range, respectively. One of the cyclic ureas, SD146, can inhibit the processing of in vitro translated HERV-K10 Gag polyprotein substrate by HERV-K10 protease. In addition, in virus-like particles isolated from the teratocarcinoma cell line NCCIT, there is significant accumulation of Gag and Gag-Pol precursors upon treatment with SD146, suggesting the compound efficiently blocks Gag processing in these cells.
The human genome contains a large number of endogenous retroviral sequences that are virtually all highly defective because of multiple termination codons, deletions or the lack of a 5Ј long terminal repeat (1,2). It is assumed that at some time during the course of human evolution, exogenous progenitors of human endogenous retroviruses (HERVs) 1 integrated into the cells of the germ line and thereby obtained the ability to be inherited by offspring of the host as a mendelian trait (3).
HERVs are grouped into at least a dozen single and multiple copy number families and are classified according to the tRNA that they use as primer for reverse transcription (1,4). The retroviral element that carries a primer binding site complementary to the 3Ј end of a lysine tRNA is called HERV-K. HERV type K represents the biologically most active form of a variety of retroviral elements present in the human genome (1,5). Although the HERV-K group comes closest of all known HERVs to containing infectious virus, no corresponding replication-competent virus has so far been described (1,3). Although humans harbor several dozen proviral copies of HERV type K per haploid genome (4,6,7), some of which code for the characteristic retroviral proteins Gag, Pol, and Env (8,9), recent studies raised a suggestion that no complete proviral copy of HERV-K exists (10,11); the issue remains to be clarified. In terms of infectious virion production, HERV-K could be defective at multiple levels, including the observed arrest during budding, inefficient RT enzyme activity, and incomplete Env expression and processing (1).
HERV-K elements exhibit restricted cell type expression, observed mainly in germ cell tumors (including testicular teratocarcinoma cell lines) and their testicular precursor lesions (8,12,13). Typically the coding regions of HERV-K elements are far less disrupted by mutations than other HERV families, and protein synthesis has been observed for all the main retroviral genes. The HERV-K Gag precursors are cleaved into major core, matrix, and nucleocapsid components (14 -16), presumably by HERV-K protease, because functional activity has been demonstrated for this enzyme (15,17).
Detailed electron microscopic surveys have revealed the existence of retrovirus-like particles in breast carcinoma and teratocarcinoma cell lines (18 -20). The phenotype of human teratocarcinoma-derived retrovirus particles has been correlated with complex mRNA expression of HERV-K sequences in those cells, reminiscent of the mRNA expression pattern observed after exogenous retrovirus infection with, for example, lenti-or spumavirus strains (8,9).
Several hypotheses have so far been proposed about possible implication of HERV expression in certain pathogeneses, including autoimmune diseases such as insulin-dependent diabetes mellitus (21), tumor development, and even cardiovascular disease (22). In addition, numerous possible roles have been proposed for HERVs in reproductive physiopathology (reviewed in Ref. 23). In the study published by Sauter et al. (16), authors reported that HIV-1-infected patients and especially patients with seminomas exhibit elevated titers of anti-HERV-K10 Gag antibodies. Towler et al. (25) reported that HERV-K10 protease is highly resistant to a number of clinically used HIV-1 protease inhibitors, including ritonavir, indinavir, and saquinavir.
They reported the protease to be a homodimer with a pH optimum at 4.5 and with a higher enzymatic activity and stability at elevated ionic strengths. The authors raised an interesting speculation that HERV-K protease might somehow complement HIV-1 protease under conditions where the latter activity is impaired because of either the presence of drug resistance mutations or the presence of potent HIV-1 protease inhibitor.
The aim of this study was to identify potent inhibitors of HERV-K10 protease and to demonstrate their action in virusproducing cells. The results shown in this report indicate that some members of the cyclic urea class can act as very potent inhibitors of this protease in a nanomolar range and are capable of blocking processing of HERV Gag in vitro as well as in the teratocarcinoma cell line NCCIT.
EXPERIMENTAL PROCEDURES
Cloning of Truncated Version of HERV-K10 Protease - Genomic DNA was extracted from the buffy coat fraction of fresh human blood (24). DNA coding for the core region of HERV-K10 protease (25) was then amplified by polymerase chain reaction with Taq DNA polymerase (PerkinElmer Life Sciences). Oligonucleotides 5′-CTAGGAAGCTTCATATGGACTATAAAGGCGAAATTCAA-3′ (PRT-A) and 5′-GCTGTGGATCCTTACTACATGGTGATTTCCGCACC-3′ (PRT-B) were used as sense and antisense primers, respectively. The PCR product was cloned into the mammalian expression vector pcDNA3.1(+) (Invitrogen) via HindIII and BamHI restriction sites. DNA sequencing of several clones revealed the presence of substantial polymorphism. The clone with the DNA sequence identical to that published by Ono et al. (6) was chosen for further experiments. This clone was subjected to another round of PCR amplification, this time with oligonucleotides 5′-AGACTGGATCCGACTATAAAGGCGAAATTCAA-3′ and 5′-ACAGATCTCGAGCATGGTGATTTCCGCACC-3′. The amplification product was cloned into the Escherichia coli expression plasmid pET21a(+) (Novagen) via BamHI and XhoI restriction sites.
Cloning of the Full-length Version of HERV-K10 Protease - The cloning of the full-length version of HERV-K10 protease into pET21a(+) and its site-directed mutagenesis were described previously (25).
Expression and Purification of 13-kDa Form of HERV-K10 Protease-Luria-Bertani broth (1 L) supplemented with ampicillin (200 g/ ml) was inoculated with 5 ml of overnight culture of E. coli BL21(DE3) expression strain (Novagen) harboring pET21a(ϩ)/HERV-K10 protease construct. When an A 600 value of 0.6 was reached, the expression of HERV-K10 protease was induced by addition of isopropyl-1-thio--Dgalactopyranoside (Sigma) to a final concentration of 0.4 mM. After 3 h at 37°C the bacterial cells were pelleted by centrifugation at 6000 ϫ g for 10 min. The cells were resuspended in 50 ml of 5ϫ TE buffer (0.1 M Tris/HCl, 5 mM EDTA, pH 7.5) and subjected to sonication (6 ϫ 30 s, 40 W, microtip). The soluble fraction was discarded. Inclusion bodies were washed twice with 20 ml of 5ϫ TE buffer and then dissolved in 100 ml of 8 M urea, 0.1 M Tris/HCl, pH 7.5, 1 mM DTT. Refolding of HERV-K10 protease was achieved by dialyzing the solution against 4 liters of 20 mM PIPES, pH 6.5, 1 M NaCl, 1 mM DTT, at 4°C for 3 h and then against 4 liters of fresh buffer overnight. During the renaturation procedure the precursor form of HERV-K10 protease (20 kDa) completely autoprocessed to give rise to the mature, catalytically active 13-kDa form. The solution was centrifuged for 10 min to eliminate the precipitated proteins and then further clarified by filtration through a 0.45-m membrane. The solution was then mixed 1:1 with buffer A (50 mM PIPES, pH 6.5, 1 M NaCl, 1 mM EDTA, 1 mM NaK tartrate, 10% glycerol). Pepstatin A-agarose suspension (Sigma) was then added, and the flask was left overnight at 4°C. An Amersham Pharmacia Biotech column was packed with the slurry and then connected to a fast protein liquid chromatography system (Ä KTA, Amersham Pharmacia Biotech). The column was washed with 5 column volumes of buffer A at 1 ml/min. The bound proteins were eluted with buffer B (0.1 M Tris/HCl, pH 8.0, 1 mM NaK tartrate, 10% glycerol, 5% ethylene glycol). The protease containing fractions were pooled and concentrated with Amicon stir cell over YM3 membrane to about 2 ml. Protease concentration was determined with UV spectrophotometry (27). Calculated molar absorption coefficient of 29850 M Ϫ1 cm Ϫ1 was used. The protein solution was aliquoted and stored at Ϫ80°C.
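For reference, the concentration determination above is a direct application of the Beer-Lambert law; the sketch below shows the arithmetic with a hypothetical absorbance reading and a 1-cm path length, using the molar absorption coefficient quoted above.

```python
def protein_concentration_uM(a280: float, epsilon_M_cm: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert law: c = A / (epsilon * l), returned in micromolar."""
    return a280 / (epsilon_M_cm * path_cm) * 1e6

# Hypothetical A280 reading for the 13-kDa protease (epsilon = 29,850 M^-1 cm^-1).
print(f"{protein_concentration_uM(0.45, 29850):.1f} uM")  # approximately 15.1 uM
```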
Expression and Purification of Full-length Forms of HERV-K10 Protease-E. coli BL21(DE3) strain was transformed with expression plasmids containing either wild type form or active site mutant (D26N) of the 18-kDa version of HERV-K10 protease. Overnight culture was diluted 1:50 into 1 liter of LB broth. At an A 600 value of 0.6, 1 mM isopropyl-1-thio--D-galactopyranoside was added, and the culture was then incubated in a 37°C shaker for 1 h. Cells were spun down and washed with 50 mM Tris/HCl, pH 8.0, 5 mM EDTA. Cells were then resuspended in 25 ml of lysis/wash buffer (40 mM phosphate buffer, pH 7.0, 0.3 M NaCl, 20 mM imidazole). Lysozyme was added to 0.2 g/ml, and the suspension was incubated on ice for 30 min. Cells were then sonicated (6 ϫ 30 s) and then centrifuged at 10,000 ϫ g for 30 min. Supernatant was filtered through 0.45-m syringe filter, and the pellet was discarded. One ml of Qiagen nickel-nitrilotriacetic acid Superflow resin was put in a 10-ml Bio-Rad disposable column and then equilibrated with 10 ml of lysis/wash buffer. The soluble fraction was applied to the column and allowed to enter by gravity flow. 20 ml of lysis/wash buffer were used to wash the resin. The protease was eluted with 6 ml of elution buffer (40 mM phosphate buffer, pH 7.0, 0.3 M NaCl, 300 mM imidazole) and then further purified by ion exchange chromatography. Nickel-nitrilotriacetic acid purified material was dialyzed against 1 liter of buffer C (20 mM sodium acetate, 2 mM EDTA, 2 mM DTT, pH 5.0) and then applied to an Amersham Pharmacia Biotech MonoS HR 5/5 column. The resin was washed with buffer C, and the protease was eluted with a linear NaCl gradient from 0 -1 M NaCl. The fractions containing protease were pooled. Protein concentration was determined with UV spectrophotometry. Molar absorption coefficients of 33,690 and 34970 M Ϫ1 cm Ϫ1 were used for wild type and active site mutant forms, respectively.
Expression and Purification of HIV-1 Protease-HIV-1 protease was expressed in E. coli and then renatured from inclusion bodies as described previously (28).
N-terminal Amino Acid Sequence Analysis-The N-terminal sequence was determined using the Hewlett Packard G1005A protein sequencing system with on-line PTH analysis. All methods, reagents, and consumables used were those recommended by the manufacturer.
Mass Spectrometry-Matrix-assisted laser desorption ionization mass spectrometry data were obtained on a PerSeptive Biosystems Voyager DE-Pro mass spectrometer. The spectra were acquired in the linear mode with delayed extraction. External calibration was performed using calibrant 3 supplied by the manufacturer. The sample was diluted 1:10 in sinapinic acid matrix solution. The matrix was prepared by dissolving 10 mg/ml sinapinic acid in aqueous 30% acetonitrile containing 0.3% trifluoroacetic acid.
Generation of Anti-HERV-K10 Protease Antiserum - 1 mg of the truncated version of HERV-K10 protease was loaded on SDS-PAGE, and the band was excised from the gel. The gel slice was covered with phosphate-buffered saline and emulsified with a syringe through a 23-gauge needle. The emulsion was then used directly to immunize rabbits with 100 µg/dose.
Enzyme Assay-To measure the inhibitory potency of compounds, the discontinuous HPLC method described in Erickson-Viitanen et al. (34) was used. The synthetic fluorescent cationic peptide substrate 2-aminobenzoyl-Ala-Thr-His-Gln-Val-Tyr-Phe(NO 2 )-Val-Arg-Lys-Ala (28) was incubated with truncated or full-length HERV-K10 at 25°C in an assay buffer containing 50 mM MES, pH 5.0, 1 M NaCl, 20% glycerol, 1 mM EDTA. The synthesis of the substrate has been described elsewhere (28). The enzymatic reaction was terminated with 0.2 M ammonium hydroxide. Enzymatic hydrolysis of the substrate yielded the fluorescent anionic product, (2-aminobenzoyl)-ATHQVY. The extent of hydrolysis was determined using anion-exchange HPLC. An Amersham Pharmacia Biotech HR5/5 MonoQ column eluted at 1.0 ml/min with 0 -70% buffer B for 10 min was used to separate the fluorescent cleavage product from the fluorescent substrate. The mobile phase buffer A contained 20 mM Tris/HCl, 0.02% sodium azide, and 10% acetonitrile at pH 9.0, whereas buffer B consisted of buffer A plus 0.5 M ammonium formate at pH 9.0. The column was washed with 100% buffer B for 5 min and then stepped down to 0% buffer B to recycle the gradient for the next injection. The cleavage product was measured at an emission wavelength of 430 nm and excitation wavelength of 330 nm. Linearity of enzymatic activity with time was first established, and based on the results, reactions involving the truncated or full-length HERV-K10 protease were quenched after 20 in or 40 min, respectively (Fig. 1).
The K_m values were determined with a fixed enzyme concentration (0.5 nM) and substrate concentrations of 0.5-50 µM; the data were fitted directly to the Michaelis-Menten equation with GraFit software version 4.0.10 (Erithacus Software Ltd.). In the next step, potent inhibitors were identified as described below. The active-site concentrations of the proteases were determined by titrating the enzymes with different concentrations of SD146. These data then enabled us to convert v_max values into those for k_cat. Inhibition Kinetics - Samples of the HIV-1 protease inhibitors indinavir (MK-639), saquinavir (Ro 31-8959), and ritonavir (ABT-538) were synthesized at DuPont Pharmaceuticals. Pepstatin A was purchased from Sigma-Aldrich. The cyclic ureas were prepared as described elsewhere (29-32). All inhibitors were dissolved in dimethyl sulfoxide and stored at −20 °C. Their chemical structures are shown in Fig. 2. The activity of the proteases was measured in the absence and presence of seven different concentrations of inhibitor at a fixed concentration of both enzyme and substrate. The proteases were preincubated for 5 min at 25 °C with inhibitors. Substrate was then added to a final concentration of 2 µM, and the assay was carried out as described above. Fractional activities ranging from 0.2 to 0.8 relative to the uninhibited control were fitted directly to the Morrison equation for tight-binding inhibition (33):

v_i / v_o = 1 − ([E_t] + [I] + K_i(app) − sqrt(([E_t] + [I] + K_i(app))^2 − 4[E_t][I])) / (2[E_t])

In this equation, [I] is the inhibitor concentration, [E_t] is the concentration of active enzyme, v_i is the activity at a particular inhibitor concentration, v_o is the activity of the uninhibited enzyme, v_i/v_o is the fractional activity, and K_i(app) is the estimated apparent inhibition constant. On the basis of previous studies with HIV-1 protease (34), the mode of inhibition was assumed to be competitive. To verify this assumption, dose-response data were obtained for SD146 as a representative compound at four substrate concentrations; IC_50 values increased linearly with increasing substrate concentration, indicating the competitive nature of the inhibition (35).
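As an illustration of this fitting step, the sketch below fits hypothetical fractional-activity data to the Morrison equation with SciPy; the enzyme concentration and data points are invented, and the original work used its own fitting software rather than this code.

```python
import numpy as np
from scipy.optimize import curve_fit

E_T = 0.5e-9  # active enzyme concentration in M (illustrative value)

def morrison(i, ki_app, e_t=E_T):
    """Fractional activity v_i/v_o for a tight-binding competitive inhibitor."""
    s = e_t + i + ki_app
    return 1.0 - (s - np.sqrt(s ** 2 - 4.0 * e_t * i)) / (2.0 * e_t)

# Hypothetical dose-response data: inhibitor concentration (M), fractional activity.
inhibitor = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4]) * 1e-9
activity = np.array([0.83, 0.71, 0.55, 0.38, 0.22, 0.12, 0.06])

(ki_app_fit,), _ = curve_fit(morrison, inhibitor, activity,
                             p0=[1e-10], bounds=(0, np.inf))
print(f"K_i(app) = {ki_app_fit * 1e9:.2f} nM")
```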
Effect of SD146 on the Cleavage of in Vitro Translated HERV Gag Polyprotein by HERV-K10 Protease-Plasmid pcDNA3.1(ϩ)/HERV-K10 gag was used as template in TnT® Quick Coupled Transcription/Translation (Promega) reactions to produce [ 35 S]methionine-labeled HERV Gag polyprotein that then served as a substrate for HERV-K10 protease. The in vitro translation product was incubated together with 0.54 M HERV-K10 protease (truncated form) and various concentrations of SD146 (0 -1 M) in 20 mM PIPES, pH 6.5, 0.1 M NaCl, 1 mM DTT, 10% glycerol, for 1 h at 37°C. The substrate and cleavage products were separated on NuPage SDS-polyacrylamide gel (Novex) and autoradiographed. Subsequently, the dried gel was scanned for radioactivity with a Bio-Rad Molecular Imager FX, and the HERV Gag polyprotein bands were quantitated using QuantityOne software (Bio-Rad).
Mammalian Cell Cultures and Collection of Particulate Material-Human teratocarcinoma cell lines NCCIT, PA-1, and NTERA-2, as well as the embryonic kidney line 293 (all purchased from American Type Culture Collection) were cultured in Dulbecco's modified Eagle's medium (Life Technologies, Inc.) supplemented with 10% fetal calf serum, 2 mM glutamine, and antibiotics (100 units/ml penicillin, 100 g/ml streptomycin). Cell cultures were subcultured routinely twice per week. NCCIT cell line was treated with several concentrations of SD146 (up to 2 M) or left untreated, and aliquots of culture supernatants were taken at time 0 and after 1 day. HERV-K particles were recovered by ultracentrifugation. After a 10-min centrifugation in a Sorvall RT6000B table top centrifuge at 1500 rpm to remove unbroken cells and large cell debris, the samples were centrifuged for 3 h in a Sorvall RC80 ultracentrifuge at 78,000 ϫ g at 8°C. Medium was discarded. The virus pellets were resuspended in a minimal volume of 1% SDS, 1% mecaptoethanol, and 7% glycerol, heated at 56°C for 1 min, and loaded onto 10% polyacryamide gels.
Immunoblotting-Protein samples were separated with SDS-PAGE and transferred to Immobilon-P polyvinylidene diflouride membranes (Millipore) using semidry method. The membranes were probed with either anti-HERV-K10 protease antiserum at a dilution of 1:250 or polyclonal anti-HERV-K Gag antiserum (15) at a dilution of 1:10,000. Blots were stained indirectly by using horseradish peroxidase-conjugated donkey anti-rabbit antibodies and subsequent chemiluminescence detection (PerkinElmer Life Sciences).
Viral RNA Isolation and RT-PCR-RNA was isolated from concentrated virus particles with QIAamp Viral RNA Mini Kit (Qiagen). Eluted RNA was treated with RNase-free DNase I to digest any contaminating cell genomic DNA and repurified with the same kit. RNA was eluted in 60 l of diethylpyrocarbonate-treated water. Reverse transcription was performed in a volume of 20 l containing 5 l of viral RNA, 0.5 mM dNTP mix, 10 units of RNAsin, 100 ng of primer PRT-B, and 4 units of Omniscript reverse transcriptase (Qiagen). The reaction was carried out for 1 h at 37°C. 5 l of RT reaction was used in PCR amplification, together with 0.1 M primers PRT-A and PRT-B, 1.5 mM MgCl 2 , 0.2 mM dNTP mix, and 1 unit of Taq DNA polymerase (PerkinElmer Life Sciences). The reaction mix was initially denatured at 94°C for 5 min and then subjected to 30 cycles of denaturation at 94°C, annealing at 50°C, and extension at 72°C. An aliquot of PCR reaction was used directly in DNA sequencing, with either PRT-A or PRT-B as a primer.
Molecular Modeling of HERV-K10 Protease-The three-dimensional homology model of the truncated version of HERV-K10 protease was constructed using coordinates of HIV-1 protease complexed with SD146 as a template (Protein Data bank file 1QBT.pdb; Ref. 36) with the program Molecular Operating Environment (Chemical Computing Group Inc.). The sequence was aligned initially maximizing the homology but later adjusted to accommodate the insertion at position 39 (HIV numbering) at the elbow of the flap and the insertion at position 80 (HIV numbering) at the active site, mimicking the three-dimensional structure of feline immunodeficiency virus protease. The homology algorithm of the Molecular Operating Environment software created 10 models, each of which was generated by making a series of Boltzmannweighted choices of side chain rotamers and loop conformations from a set of protein fragments of high resolution protein structures. An average model was potential energy-minimized using AMBER forcefield.
Expression and Purification of HERV-K10 Proteases - Two versions of HERV-K10 protease were expressed in E. coli. The amino acid sequences of the polypeptide chains that were expressed are shown in Fig. 3. The C-terminal boundary of the truncated version was chosen on the basis of sequence homology with the mature HIV-1 protease (25). An additional 58 amino acid residues included at the N-terminal end of the protein were expected to be cleaved off in an autocatalytic manner. This N-terminal flanking portion was expressed to allow us to readily monitor autoprocessing activity (28). The nucleotide sequence of the clone that was chosen for E. coli expression was in complete agreement with the cDNA sequence of the HERV-K10 protease ORF published in Ono et al. (6). The truncated, "core" protease was expressed as a 185-amino acid precursor at a high level in the form of insoluble cytoplasmic inclusion bodies. During the renaturation step with dialysis, all of the precursor (20 kDa) was autocatalytically processed to give rise to the mature, enzymatically active 13-kDa form. The site of N-terminal autoprocessing was determined by N-terminal amino acid sequencing. It was shown to be GKAAY-WASQ, with the dash designating the scissile bond, in agreement with previous findings (25). When analyzed by mass spectrometry, the protein showed a molecular mass that was in agreement with the expected size and that also suggested that no C-terminal autoprocessing occurred (data not shown). In addition to the monomer, a peak representing the mass of a dimerized protease was present. Affinity chromatography with pepstatin A as an immobilized ligand was used to efficiently purify the 13-kDa form of HERV-K10 protease (37). The method described by Wondrak et al. (37) for HIV-1 protease purification was adjusted to ensure the best yield of HERV-K10 protease. Several NaCl and (NH4)2SO4 concentrations in the pepstatin A binding buffer were tested. The majority of HERV-K10 protease was bound to pepstatin A in the presence of 0.5 M NaCl and the complete absence of (NH4)2SO4. Bound protease was eluted with a no-salt buffer and appeared to be homogeneous as assessed by SDS-PAGE, isoelectric focusing, and native PAGE (data not shown).
The expression plasmid for full-length HERV-K10 protease was constructed so that only five additional amino acids (in addition to T7 tag) were present at N terminus because the presence of longer flanking region was observed not to be necessary for proper autoprocessing. At the C terminus the protease extends all the way to the termination codon that is present in full-length HERV-K10 provirus (6), which accounts for additional 50 amino acid residues not present in the truncated, core protease version. Nucleotide and deduced amino acid sequence of the clone that coded for full-length protease differ from that published by Ono et al. (6) as described previously (25). The full-length version differs from the truncated form also in the residue at position 65 (mature HERV protease numbering; Fig. 3); this residue is not positioned close to the active site or in the flaps and is believed not to be important for substrate or inhibitor binding. Metal chelation chromatography and subsequent ion exchange chromatography were used to purify both mature wild type full-length HERV-K10 protease and its active site mutant (D26N). Soluble fraction of E. coli cells was applied to nickel resin, and His tag-containing protease was bound. After elution with high imidazole buffer, the protease was further purified to homogeneity with cationic ion exchange chromatography on MonoS column. The mature wild type enzyme had a molecular mass of 18.2 kDa (including His tag), whereas active site mutant showed a molecular mass of 20 kDa because of the presence of T7 tag and remaining N-terminal pentapeptide that was not cleaved off because of lack of enzymatic activity of the protein.
Enzymatic Activity of HERV-K10 Proteases-Enzymatic activity of the enzymes was quantitatively assessed by determining kinetic constants for the hydrolysis of 2-aminobenzoyl-Ala-Thr-His-Gln-Val-Tyr-Phe(NO 2 )-Val-Arg-Lys-Ala. First, the K m values were determined. After identifying compounds with potent inhibitory activity, the active sites of the protein preparations were titrated, and the k cat values were then calculated from v max . As can be seen in Table I, the K m value for the truncated version of HERV-K10 protease was about 20 times lower than that of the full-length counterpart. Similar ratio was observed previously for the hydrolysis of a different peptide substrate and under slightly different reaction conditions; in that report the K m for the truncated version was about 10 times lower than that of the full-length enzyme (25). The turnover capacity (k cat ) of the full-length protease was about 10 times higher than that of the truncated form, resulting in a catalytic efficiency that was twice higher for the 13-kDa protein than what one could observe with the 18-kDa form. The ratio of k cat values of both protease forms differs from that in a previous report (25); the difference is probably to be attributed to different substrates and to slightly different reaction conditions that were used in the assays.
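A Michaelis-Menten fit of the kind used to obtain K_m, and k_cat after active-site titration, could look like the sketch below; the substrate concentrations span the range quoted in the methods, but the rates, units, and enzyme concentration are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Hypothetical substrate concentrations (uM) and initial rates (uM of product per min).
s_uM = np.array([0.5, 1, 2, 5, 10, 20, 50], dtype=float)
rates = np.array([0.9, 1.6, 2.7, 4.4, 5.6, 6.5, 7.1])

(vmax, km), _ = curve_fit(michaelis_menten, s_uM, rates, p0=[7.0, 5.0])
enzyme_uM = 0.5e-3                 # 0.5 nM active enzyme (from active-site titration)
kcat = vmax / enzyme_uM            # turnover number, min^-1, under the assumed units
print(f"Km = {km:.1f} uM, Vmax = {vmax:.2f} uM/min, kcat = {kcat:.0f} min^-1")
```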
Enzymatic activities of both versions of HERV-K10 protease were then tested against polyprotein substrate. Radioactively labeled HERV-K10 Gag polypeptide was shown to be successfully cleaved by both versions of the protease, as well as by recombinant HIV-1 protease (Fig. 4A). The specificities of fulllength and truncated forms seemed to be identical as suggested by similar cleavage patterns. HIV-1 protease, however, cleaved HERV-K10 Gag polypeptide at different sites, suggesting different substrate specificity under the reaction conditions that we used in our assay. Active site mutant of full-length HERV-K10 protease was not enzymatically active, as expected. These results seem to be consistent with differential cleavage of HIV-1 Gag and Pol precursors by HERV-K10 protease in the context of chimeric virions, where the HERV enzyme cleaved HIV-1 polyproteins at both apparently authentic as well as nonauthentic sites (17,38).
Identification of Potent HERV-K10 Inhibitors - To evaluate the capacity of potent HIV-1 protease inhibitors to inhibit HERV-K10 protease, K_i(app) values for a series of P2,P2′-substituted cyclic ureas were determined. In addition, pepstatin and three Food and Drug Administration-approved HIV-1 protease inhibitors, ritonavir, saquinavir, and indinavir, were tested. The apparent inhibition constants for both versions of HERV-K10 protease are shown in Table II, together with previously reported values for inhibition of wild-type HIV-1 protease (39). Although potent inhibitors of wild-type HIV-1 protease activity, the three Food and Drug Administration-approved compounds turned out to be weak inhibitors of both versions of HERV-K10 protease. The linear peptidyl mimetic inhibitors had K_i(app) values ranging from 0.6 to 5.7 µM.
A series of 13 compounds of the cyclic urea class was tested, all of them being P2,P2Ј-substituted. The symmetric substituted cyclic ureas in general fared better in inhibition assay than the five asymmetric compounds tested. From the latter, compound Q8467 exhibited the weakest activity, with the apparent inhibition constants being 16 and 61 nM for truncated and full-length HERV-K10 protease, respectively. The remaining asymmetric ureas (SD152, SD145, XW805, and XV651) did not differ significantly from each other, their K i (app) values being in the range of about 3-8 nM for 13-kDa protein and about 30 -40 nM for 18-kDa form. Among cyclic C 2 symmetric ureas the compound with the smallest, cyclopropyl side groups, XK234, fared the worst, the K i (app) values being about 0.7 and 1.9 M. This compound had also turned out to be less efficient in inhibiting HIV-1 protease than the bulkier members of this group. XM412, also known as DMP450, containing m-aminomethylbenzyl groups, exhibited more inhibitory potency toward HERV-K10 proteases, although with apparent inhibition constants of about 90 and 400 nM, it was still much less potent than the remaining six cyclic ureas. XV643, XV644, SD146, XV648, and XV652 were capable of inhibiting 13-kDa protease in subnanomolar range, with the K i (app) values ranging from 0.10 nM for XV648 to 0.52 nM for XV652. The group of these five compounds inhibited the 18-kDa enzyme in nanomolar range; apparent inhibition constants were 2.3-4.3 nM. In general, K i (app) values for the full-length form of HERV-K10 protease were about 3-20 times higher than those for the truncated counterpart; however, the compounds that acted as weak inhibitors with one version of the protease were also weak with the other and vice versa. The differences in K i (app) values between both versions of the protease were consistent with the lower K m value obtained for the 13-kDa form and were ob- served also in a previous report where compounds KNI-227 and KNI-272 were measured (25). The differences are very likely to be attributed to 50-amino acid C-terminal extension present in the full-length enzyme; however, in the absence of x-ray data it is not possible to provide a more detailed explanation. Inhibition of HERV-K10 Gag Processing-SD146 was previously reported (40) to have potent activity in cells to block HIV-1 Gag processing by a variety of HIV-1 protease mutants. Because of this and its excellent potency against HERV-K10 proteases (Table II), it was chosen for detailed studies of HERV-K10 Gag processing. To estimate the range of concentration at which which SD146 inhibits processing of HERV-K Gag polyprotein, we first tested the system with recombinant HERV-K10 protease and in vitro translated HERV-K10 Gag polyprotein substrate. SD146 inhibited the processing of HERV Gag, with the dose response data shown in Fig. 4B. On the basis of the quantitation of substrate disappearance, the IC 50 was estimated to be 0.35 M.
Among the cell lines we examined, the only one in which we could detect synthesis/release of HERV Gag polypeptides was NCCIT (Fig. 5), although PA-1 appeared to express small quantities of complete and partially processed intracellular HERV Gag (not shown). NCCIT cells released HERV-K Gag polypeptides that were detected mainly at 30 kDa, although also observed were varying amounts of larger polypeptides of 39 and 76 kDa (Fig. 5A) (1, 16). When 1 or 2 M SD146 was added to the cells, and cells were incubated for 1 day or more, the pattern changed drastically, with the released particles containing little or no 30-kDa polypeptide and correspondingly greater amounts of the 76-kDa full-length Gag precursor (Fig. 5A). Also seen in inhibitor-treated NCCIT cells and virus particles were forms of Gag-related antigens larger than 76 kDa, suggesting possible accumulation of Gag-Pol precursors. This was concurrent with disappearance of the processed 30-kDa core polypeptide. The difference in cell lysates was present but much less obvious than that in virion samples, mainly because only a small quantity of processing of HERV Gag is intracellular. In cells treated with 1 M SD146 we could observe the disappearance of p30 (Fig. 5B). Dose response data were obtained for the inhibition of HERV Gag and HERV protease processing (Fig. 5, C and D, respectively). The size of the mature HERV-K protease seemed to be slightly higher than 18 kDa (Fig. 5D). On the basis of the quantitation of product appearance, we estimated the IC 50 to be 0.37 M in the case of Gag processing and 0.42 M in the case of protease maturation. Taken together, the results in Fig. 5 show that the HIV-1 protease inhibitor SD146 is able to effectively block HERV-K10 Gag processing, both in a teratocarcinoma cell line and in the released particles, as predicted from our enzyme inhibition cell-free results (Table II and Fig. 4B).
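The IC50 values above come from quantifying band intensities over a range of SD146 concentrations; the sketch below shows one way such dose-response data could be fitted (hypothetical normalized intensities, simple two-parameter logistic), not the procedure actually used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc_uM, ic50_uM, hill):
    """Fraction of Gag processing remaining as a function of inhibitor concentration."""
    return 1.0 / (1.0 + (conc_uM / ic50_uM) ** hill)

# Hypothetical normalized band intensities (fraction of Gag processed vs. untreated).
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])          # SD146 concentration, uM
processed = np.array([0.95, 0.85, 0.65, 0.45, 0.28, 0.12])

(ic50, hill), _ = curve_fit(dose_response, conc, processed, p0=[0.4, 1.0])
print(f"IC50 = {ic50:.2f} uM (Hill slope {hill:.2f})")
```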
RT-PCR and DNA Sequencing of NCCIT-derived Virions - To verify that the particles derived from the NCCIT cell line are indeed HERV-K encoded, viral RNA was isolated from the cell culture medium and its protease region was RT-PCR amplified. A single product of the expected size (~500 base pairs) was obtained. Direct DNA sequencing of the PCR product resulted in a single sequence and revealed that this region differs from the HERV-K10 clone published by Ono et al. (6) in 2 nucleotides. Neither of the substitutions (T3545C and C3572T; numbering as in Ref. 6) leads to an amino acid change. When a BLAST search was performed against all nucleotide sequences deposited in GenBank to that date, the amplified region of the RNA of NCCIT-derived HERV particles completely matched only the HERV protease region of a recently deposited Homo sapiens chromosome 5 clone CTB-69E10 (GenBank accession number AC016577).

DISCUSSION

Retroviral proteins are synthesized in the form of Gag or Gag-Pol precursors that are then processed by the action of a virus-encoded aspartic protease. The existence of a functional HERV-K protease was inferred from the presence of processed Gag proteins in teratocarcinoma cells (5). Direct evidence for functional protease activity came from the expression of different clones in E. coli (15,17,25,38).
Recently, a hypothesis has been proposed that HERV-K10 encoded aspartic protease might complement HIV-1 protease during infection and thereby interfere with clinical antiviral therapy because it is highly resistant to currently approved HIV-1 protease inhibitors (25). To identify low molecular weight compounds that could inhibit proteolytic activity of this enzyme, we first expressed two versions of this enzyme in an E. coli expression system. The N termini of both enzymes were the result of autocatalytic processing by the protease. The C terminus of smaller, core form was chosen on the basis of sequence homology with mature HIV-1 protease. The C-terminal boundary of full-length version corresponds to that found in prt-ORF of proviral DNA (6); this version has 50 additional amino acids on its C terminus. Whether it is the full-length enzyme that is biologically relevant or additional C-terminal processing occurs to give rise to smaller molecular species remains to be seen. Initial studies suggest that some limited cleavage of 13 amino acid residues at the C terminus occurs after prolonged incubation (25,38).
The DNA sequence of HERV-K10 protease ORF strongly suggests that this protease belongs to the group of aspartic proteases, because the ORF contains sequence motif LVDT-GAXX(T/S)(V/I). Furthermore, a sequence GLVGIG, a so-called "flap," is found downstream of the active center. In addition, the sequence GRDLL conserved in aspartic proteases, is found at nucleotide position 3723-3737 (Ref. 15; numbering as in Ref. 6). Schommer et al. (17) showed that presence of high concentration of HIV-1 protease inhibitor Ro 31-8959 (saquinavir) can inhibit autoprocessing of HERV-K10 protease in E. coli expression broth, suggesting a similarity between active sites of the two viral proteases. We therefore decided to test a series of our cyclic ureas, second generation HIV protease inhibitors (reviewed in Ref. 41), for their ability to inhibit HERV-K10 protease. Although as a whole the cyclic urea class has relatively poor pharmacokinetic properties, mostly because of low water and oil solubility (41), these compounds are extremely potent against HIV-1 protease in vitro, and some of them have very good resistance profiles. At least one of the cyclic ureas, DMP450 (42), is presently in human clinical trials versus HIV. Several symmetric bisamides exhibited high potency against both versions of HERV-K10 protease. In the absence of any available structural data, we built a homology three-dimensional model of the 13-kDa form of this enzyme to be able to understand the mode of action of the compounds.
The cyclic urea substituents at P1, P1Ј, P2, and P2Ј are optimized for good potency against HIV-1 protease. In this enzyme, P1 and P1Ј residues form van der Waals' contacts with Pro 81 , Val 82 , and Ile 84 , whereas P2 and P2Ј groups form contacts with Ile 47 , Ile 50 , and Ile 84 . Cyclic urea inhibitors with smaller P2 and P2Ј were shown to be less potent against HERV-K10 protease than HIV-1 protease (XK234, XM412). However, cyclic urea amides containing P3 and P3Ј groups are as potent against HERV-K10 protease as HIV-1 protease (e.g. XV652, XV643, XV644, SD146, and XV648). Most of the hydrogen bond contacts between the cyclic urea amide inhibitor and HIV-1 protease complexes are predicted to be maintained in the cyclic urea amide and HERV-K10 protease complexes (Fig. 6). The potency of cyclic ureas increase with the increasing potential of forming hydrogen bonds. For example, SD146 (HERV-K10 K i (app) ϭ 0.15 nM), which is capable of forming 12 hydrogen bonds, is ϳ4500 times more potent than XK234 (HERV-K10 K i (app) ϭ 670 nM). Besides the interaction with hydrogen bonds, the hydrophobic interaction is predicted to be important for the good potency of the cyclic urea amides. For instance, the substitution of Ile 47 in HIV-1 protease for Leu 52 in HERV-K10 protease is predicted to result in loss of van der Waals' interactions with P2 or P2Ј groups, but at the same time this change results in increased van der Waals' interactions between Leu 52 HERV and P3, P3Ј groups of cyclic urea amides. A similar effect caused by the hydrophobic interactions was observed previously in case of double mutant V82F/I84V of HIV-1 protease (40,42).
The question of activity of the cyclic ureas in cells was addressed. In this study we demonstrated that HERV-K Gag processing in a cell environment can be blocked by synthetic protease inhibitors, as could be seen by substantially reduced proportion of HERV Gag precursor being cleaved to smaller polypeptides in NCCIT cell line treated with SD146 (Fig. 5A). To our knowledge, this is the first report of inhibition of HERV-K Gag maturation in cell milieu. Given the inability of cyclic ureas to inhibit cellular proteases (34), our results strongly support a model in which the aspartic protease of HERV-K10 processes homologous Gag polypeptides in human teratocarcinoma cells. Much of the HERV-K10 Gag within NCCIT cells is unprocessed. This is different from the case with HIV-1-infected cells, where a significant percent of HIV-1 Gag is cleaved. In contrast, processing of extracellular HERV Gag appears to be efficient, implying the HERV-K10 protease is inactive or unavailable except in maturing virions. HIV-1 protease is toxic to a variety of mammalian cells (43), but clearly human cells, including some teratocarcinoma cell lines, are not damaged by endogenous retroviral proteases. The question of whether the cells have evolved to resist the action of the endogenous viral proteases or the enzymes are sequestered/inactive until packaging/exit should be addressed.
The extracellular particle yield, as estimated by Western blotting of total viral proteins, was roughly the same in presence of the protease inhibitor, indicating that HERV-K Gag polypeptide processing is not a limiting step for particle release. Similar results obtained with viral RNA isolation from the particulate material of NCCIT cell culture medium and its subsequent quantification with RT-PCR amplification support this observation (data not shown). These data are consistent with the observation that HIV-1 protease inhibitors block the processing of Gag and Gag-Pol precursor polyproteins in HIV-1-infected cells but do not markedly alter either the number of particles released from the infected cells (44,45) or the amount of packaged viral RNA (46,47). In addition to using antigenspecific immunoblotting, we verified the identity of NCCIT released virions by checking the nucleotide sequence of packaged RNA. The sequence of the 500-nucleotide protease region that was RT-PCR amplified unequivocally shows that the virions belong to HERV-K family. However, additional regions would have to be sequenced for an exact clone number to be assigned, especially with regard to the fact that the recent estimates based on BLAST searches and phylogenetic analyses show that there could be as many as 170 HERV-K elements present in human genome (4). The protease amino acid sequence deduced from the obtained nucleotide sequence was identical to that of HERV-K10 clone (6).
Although cyclic ureas act as potent inhibitors of HIV-1 and HERV-K protease, they do not inhibit mammalian, nonretroviral cellular aspartic proteases (34). However, the question arises of whether cellular processes could be affected by HERV-K protease inhibition. The fact that HERVs remain a constitutive part of the genome, and the notion that ORFs for all major viral proteins exist and have retained coding capacity despite the extensive deleterious effects normally associated with endogenization of retroviruses, suggest that they may confer certain positive traits to the host (48). HERV-encoded proteins, including HERV-K protease, might well be involved in normal cell physiology and pathophysiology. Our finding that HERV protease activity and viral protein processing can be efficiently inhibited in teratocarcinoma cells may help to clarify the role of HERVs in cell physiology.
Acknowledgments-We acknowledge Beverly C. Cordova for providing us with the human lymphocyte fraction and Drs. Ralf Tönjes and Reinhard Kurth (Paul-Ehrlich-Institut, Langen, Germany) for kindly supplying the pcG3gag clone. We are especially grateful to Ronald M. Klabe and Dr. James L. Meek for excellent technical advice with the HPLC enzyme assay. We thank Leah A. Breth and Jennifer E. Kochie for raising the anti-HERV-K10 protease antiserum, Jeanne I. Corman for N-terminal amino acid sequencing and mass spectrometry analysis, and Wilfred Saxe for help with modeling of HERV-K10 protease. Thanks also to Drs. Lee T. Bacheler and Robert A. Copeland for helpful discussions and to Dr. Susan K. Erickson-Viitanen for continuing support and useful suggestions.
FIG. 6. Schematic representation of hydrogen bonds between HIV-1/HERV-K10 protease and SD146. Hydrogen bonds between HIV-1 protease and SD146 were determined by X-ray crystallography (36), and those for HERV-K10 protease were modeled. In the model of HERV-K10 protease complexed with SD146, all hydrogen bonds are predicted to be preserved except that between the side chain of Asp30 and a ring nitrogen atom of the inhibitor (thicker line), because Asp30 is replaced with Val31 in HERV-K10 protease. HERV residues are in parentheses and in bold type. All distances are in Å. | 9,557.8 | 2001-05-18T00:00:00.000 | [
"Biology"
] |
Cancer-associated fibroblasts promote non-small cell lung cancer cell invasion by upregulation of glucose-regulated protein 78 (GRP78) expression in an integrated bionic microfluidic device
The tumor microenvironment is comprised of cancer cells and various stromal cells and their respective cellular components. Cancer-associated fibroblasts (CAFs), a major part of the stromal cells, are a key determinant in tumor progression, while glucose-regulated protein (GRP)78 is overexpressed in many human cancers and is involved in tumor invasion and metastasis. This study developed a microfluidic-based three-dimensional (3D) co-culture device to mimic an in vitro tumor microenvironment in order to investigate tumor cell invasion in real time. This bionic chip provided significant information regarding the role of GRP78, which may be stimulated by CAFs, in promoting non-small cell lung cancer (NSCLC) cell invasion in vitro. The data showed that CAFs induced migration of NSCLC A549 and SPCA-1 cells in this three-dimensional invasion microdevice, which was confirmed using the traditional Transwell system. Furthermore, CAFs induced GRP78 expression in A549 and SPCA-1 cells to facilitate NSCLC cell migration and invasion, whereas knockdown of GRP78 expression blocked A549 and SPCA-1 cell migration and invasion capacity. In conclusion, these data indicate that CAFs might promote NSCLC cell invasion by up-regulation of GRP78 expression, and that this bionic chip microdevice is a robust platform to assess the interaction of cancer and stromal cells in tumor microenvironment studies.
INTRODUCTION
Lung cancer is one of the leading causes of cancer incidence and mortality in the world, accounting for more than 1.6 million new cases and 1.3 million deaths annually [1]. Histologically, non-small cell lung cancer (NSCLC) represents approximately 80% of all lung cancer cases. NSCLC is usually diagnosed at an advanced stage of disease, and metastases are often present [2]. Thus, elucidation of the molecular mechanisms involved in NSCLC invasion and metastasis could lead to the emergence of novel diagnostic and therapeutic approaches. To date, accumulated evidence indicates that tumor lesions are composed of tumor parenchyma and stroma, two discrete but interactive cellular compartments that cross talk and promote tumor growth [3,4]. Indeed, tumor stroma plays a significant role in cancer evolution [5] by promoting tumorigenesis [6], cancer progression [7], invasion [8], and chemoresistance [9] through a variety of mechanisms. As a major component of the tumor stroma and microenvironment, cancer-associated fibroblasts (CAFs) are thought to be activated by tumor cells. CAFs are characterized by upregulated expression of α-smooth muscle actin (α-SMA), Vimentin, and fibroblast activation protein (FAP) [10][11][12]. Activation of CAFs from regular fibroblasts induces multiple functional changes that promote cancer development, such as facilitating angiogenesis, epithelial-mesenchymal transition (EMT) [13], dysfunction of the local immune system [14], and tumor cell proliferation, invasion, and metastasis [10][11][12]. However, the underlying molecular mechanisms by which CAFs promote tumor cell invasion and metastasis are poorly understood.
For example, the tumor microenvironment can mediate tumor cell growth by triggering stress responses through accumulating levels of the unfolded and/or misfolded proteins in the endoplasmic reticulum (ER) lumen, subsequently resulting in the unfolded protein response (UPR) [15]. The glucose-regulated protein GRP78, a stress-induced endoplasmic reticulum (ER) chaperone, is able to regulate the ER stress signaling pathways to induce the UPR by facilitating the folding and assembly of proteins, targeting misfolded proteins for ER associated degradation (ERAD), and regulating calcium homeostasis; thus, GRP78 serves as an ER stress sensor. GRP78 protein is usually expressed at the basal level in normal adult organs, such as the brain, lung and liver, but is significantly upregulated in various human cancers [16]. Moreover, overexpressed GRP78 in cancer cells is associated with tumor progression, a reduction in apoptosis, resistance to chemotherapy, and poor prognosis of several cancers [17][18][19][20]. Recently, studies have demonstrated that GRP78 expression was associated with invasion and metastasis of different types of cancer cells, such as gastric, prostate, and breast cancers [21][22][23].
Current cell models for studying tumor cell interactions with stromal cells in vitro are limited. For example, traditional in vitro studies of three-dimensional (3D) tumor cell invasion were performed using a commercially available Transwell chamber by measuring the number of cells migrating vertically through a gel into a filter [24]. This system lacks real-time observation, and it is inherently difficult to assess tumor cell interaction with stromal cells directly. Thus, there is an urgent need to develop a reliable and efficient in vitro culture model that closely mimics the in vivo microenvironment of cancer metastasis. To this end, microfluidics brings a novel opportunity to spatially and temporally control tumor cell growth and stimuli, and microfabricated devices have been used to address research needs concerning the biology of cells [25][26][27][28][29]. The successful reconstitution of the lung tissue architecture on a microfluidic device indicates that biomimetic microsystems may potentially serve as a replacement for animal experiments [28]. Thus, in this study, we developed a microfluidic-based 3D co-culture device to recreate an in vitro tumor microenvironment to investigate the invasion capacity of cancer cells with respect to tumor cell interactions with stromal cells in real time. This bionic chip could provide insightful information regarding the role of GRP78, stimulated by CAFs, in promotion of lung cancer cell invasion capacity.
Construction of the bionic invasion microfluidic device
The microfluidic device was constructed to contain six chip units and was used to assess cancer and stromal cell interactions and tumor cell invasion in vitro, mimicking in vivo conditions (Figure 1A). In this study, we applied this microfluidic device to assess the interaction between lung cancer (A549 or SPCA-1) cells and WI38 fibroblasts by culturing them in cell chambers A and B, respectively, for 72 h. Cell viability staining demonstrated that cells grew well within this device (Figure 1C). The migration channel filled with BME enabled formation of a stable concentration gradient. FITC was added into the growth medium and allowed to gradually diffuse into the basement membrane extract, spreading to Channel C after 2 h and leading to a stable concentration gradient that was maintained for over 4 h in the basement membrane extract. Figure 1D-E shows these measurements over 5 h, together with data on tumor cell invasion over 48 h. We therefore replaced the CAFs' medium in Channel C to maintain the concentration gradient across the basement membrane extract and thereby monitor cell invasion capacity.
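The roughly 2 h needed for the FITC gradient to reach Channel C is consistent with a simple one-dimensional diffusion estimate across the 2 mm migration channel. The sketch below uses an assumed effective diffusivity for FITC in the basement membrane extract (this value is not reported in the study), so it is only an order-of-magnitude check rather than a model of the device.

```python
# Rough 1D diffusion estimate for gradient formation across the migration channel.
# Assumptions (not from the paper): free diffusion, channel length L = 2 mm,
# effective diffusivity of FITC in basement membrane extract D ~ 3e-10 m^2/s.
L = 2e-3            # channel length (m), from the device description
D = 3e-10           # assumed effective diffusivity of FITC in BME (m^2/s)
t = L**2 / (2 * D)  # characteristic diffusion time, t ~ L^2 / (2D)
print(f"Characteristic diffusion time: {t:.0f} s (~{t/3600:.1f} h)")
```

With these assumed numbers the estimate is on the order of 2 h, in line with the observed time for the gradient to become established.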
Transformation of fibroblasts into CAFs using the chip
In order to obtain CAFs for our in vitro study, we activated normal fibroblasts to CAFs by seeding A549 and SPCA-1 cells into the cell chambers in the chip unit part I and human lung fibroblasts WI38 into Chamber B ( Figure 1A) and culturing them for 72 h. The data of the immunofluorescence assay revealed that WI38 cells cocultured with NSCLC cells showed positive α-SMA and Vimentin expression compared to WI38 alone culture ( Figure 2A). Moreover, we confirmed expression of these two markers in the Transwell WI38 co-cultured system using Western blot ( Figure 2B).
CAF-induced A549 and SPCA-1 cell migration and invasion through a three-dimensional invasion microdevice
This microdevice contains two parts, Part I and Part II. Part I contains two chambers, which separate A549 and SPCA-1 cells from the WI38 cells in Chamber B. After WI38 cells had transformed into CAFs during co-culture in Part I, we cultured NSCLC (A549 and SPCA-1) cells as the control group and GRP78-knockdown NSCLC cells as the siRNA experimental group in Chamber C. After the cells adhered to the chamber, secretions from upstream flowed into the secretion chamber. Multiple inducers (IMDM, WI38 secretion, or co-culture medium) were added into the secretion chamber, and tumor cell migration was recorded using an inverted phase contrast microscope over a period of 48 h. We found that when treated with co-culture secretions, tumor cells migrated towards the microchannels where the inducer concentration was highest, digested the BME, and then invaded towards the secretion chamber. Tumor cell migration and invasion behavior first appeared at 6 h after addition of the co-culture secretions, but was not observed in tumor cells treated with IMDM medium or WI38 secretion. The migration and invasion capacity of NSCLC cells was lower after knockdown of GRP78 expression (Figure 3A and Figure 4A). Quantitative data showed that tumor cells induced by co-culture secretions migrated faster, whereas tumor cells with GRP78 knocked down showed lower numbers and shorter migration or invasion distances than control cells (Figure 3B and Figure 4A).
CAF induction of GRP78 expression in A549 and SPCA-1 cells
To explore the underlying mechanism, we examined whether CAF regulates expression of GRP78 protein.
After knocking down GRP78 expression in tumor cells using siRNA, Western blot and immunofluorescence data revealed that this siRNA was effective ( Figure 5AB). However, when we cultured these cells with CAF secretion for an additional 24 h, Western blot and immunofluorescence data showed that CAF secretion induced a pronounced increase in GRP78 expression in these tumor cells compared with the corresponding controls (both negative control and siRNA-only control) ( Figure 5AB). These data indicate that CAF was able to induce GRP78 expression in A549 and SPCA-1 cells.
Traditional Transwell assay to confirm findings from microdevice data
To verify that the microdevice is reliable, we repeated the tumor cell invasion assay using a traditional Transwell system. As shown in Figure 6A, the Transwell assay revealed that NSCLC cells induced by co-culture conditioned medium had a significant increase in invasion capacity compared to the control medium, whereas the number of invading NSCLC cells after down-regulation of GRP78 expression was lower than that of controls (Figure 6A-B).
DISCUSSION
The tumor microenvironment favors tumor immune privilege as well as inducing proliferation and resistance to apoptosis, and it comprises cancer cells and various stromal cells, such as fibroblasts, vascular cells and inflammatory cells [33]. A previous study demonstrated that CAFs, activated by cancer cells, can secrete ECM components, a variety of growth factors, and chemokines to promote tumor cell growth, invasion, and metastasis, in addition to interacting directly with cancer cells [34]. To date, the molecular mechanisms by which CAFs promote tumor invasion and metastasis remain to be defined. Compared to normal tissue, many human cancer cells show upregulation of GRP78 protein expression. GRP78 is also implicated in oncogenesis, cancer progression, and drug resistance [35]. To further explore the mechanisms of NSCLC progression, our current study developed a bionic chip to allow the co-culture of human lung fibroblast WI38 cells with human lung adenocarcinoma cells, to mimic NSCLC cell migration and invasion in vivo. We first activated normal human fibroblasts to CAFs using this microdevice and showed increased levels of the myofibroblast markers α-SMA and Vimentin, consistent with previous studies [12]. These findings indicated that our bionic chip is an excellent platform to investigate the cell-cell interactions that mimic the in vivo environment.
We then assessed CAF-conditioned growth medium for its ability to promote NSCLC cell invasion and demonstrated that NSCLC cells exhibited increased migration. Our findings suggest that CAF-secreted factors influence the motility of NSCLC cells more strongly than those of normal fibroblasts. Furthermore, we found that GRP78 expression was also upregulated in NSCLC cells after culture with CAF-conditioned growth medium. The number, migration, and invasion distance of invading NSCLC cells after GRP78 knockdown were significantly lower than those of control cells. We also found that both the number and distance of SPCA-1 cell invasion were less than those of A549 cells, indicating that the migratory capacity of different tumor cells induced by CAFs differs. Having used this novel microfluidic device to test tumor cell invasion capacity, we also confirmed our microdevice data using the traditional Transwell system. We demonstrated that our microdevice has several advantages over the traditional Transwell assay; for example, our chip is able to monitor tumor cell migration across the BME in real time. Moreover, our chip greatly reduces the number of cells and the amount of reagents needed. Our microdevice can also process variously treated cells under multiple inducers simultaneously, which can greatly reduce experimental error. Our current microdevice data are consistent with previous studies showing that GRP78 protein is a regulator of tumor invasion in many kinds of human cancers [36,37]. In this sense, we propose that CAFs play a key role in the progression of human lung adenocarcinoma cells via an increase in GRP78 expression.
However, our current study does have some limitations. This bionic chip culture system only examined the interaction of CAFs with NSCLC cells, and future research should incorporate additional stromal cell types into the system to study the cell-cell interactions taking place. Moreover, the underlying molecular mechanism by which CAFs interact with NSCLC cells still requires further study. Like all other techniques, this novel microfluidic chip has its own limitations, so it should be regarded as a complement to, rather than a replacement for, existing technologies. In summary, we developed an integrated co-culture bionic chip to assess tumor cell invasion in real time. Our current data showed that conditioned growth medium obtained from co-culture of cancer-associated fibroblasts and NSCLC cells promoted NSCLC cell invasion mediated by the up-regulation of GRP78 expression. These findings suggest that GRP78 may be a novel target in the future treatment of NSCLC.
Design and fabrication of the bionic invasion chip
The schematic illustration of the integrated microenvironment chip is shown in Figure 1A and the manufacturing process was described in our previous study [30]. Specifically, this microchip was fabricated with poly-dimethylsiloxane (PDMS) (Dow, Corning, MI, USA) using standard soft lithography methods with replica molding of PDMS against the masters. The upper and lower layers of this microchip and a glass slide were irreversibly bonded together in a sequence via oxygen plasma surface treatment (150 mTorr, 50 W, 20 s) [31,32].
This device contains six chip units, and each unit is composed of two parts, one for cellular co-culture (Part I) and the other for cell invasion (Part II). Part I consists of two layers of PDMS, between which a 5-μm-pore polycarbonate membrane (Nuclepore, Whatman, Buckinghamshire, UK) segments the space into two cell culture chambers (A and B). Non-small cell lung cancer A549 and SPCA-1 cell lines were seeded into Chamber A at approximately 1 × 10³ cells/cm², while fibroblast WI38 cells were seeded into Chamber B at 2 × 10³ cells/cm², to reach a ratio of fibroblasts to NSCLC cells of 2:1. Part I is structured as a non-contacting co-culture system to simulate tumor and stromal cell interactions and mimic the tumor microenvironment in vivo. The gap of pillar A is 8 μm, which is smaller than the size of WI38 cells and thus effectively blocks WI38 cells from crossing between the cell chambers, whereas macromolecules secreted from WI38 cells can easily cross. Part II is composed of two chambers (the secretion chamber and cell Chamber C) and a migration channel. The migration channel has dimensions of 40 μm × 500 μm × 2 mm (H × W × L), with several micropillars with 20-μm gaps embedded along both edges. This design allows the Cultrex basement membrane extract (BME; R&D Systems, Minneapolis, MN, USA) to fill the migration channel without flowing into the adjacent chambers. Non-small cell lung cancer A549 and SPCA-1 cell lines were seeded into Chamber C and, following cell adherence to the base, secretions from upstream flowed constantly into the secretion chamber, which could induce tumor cells to digest the BME and invade into the secretion chamber. This chip provides a scaffolding structure within which cancer cells and fibroblasts interact in a three-dimensional (3D) mode [26] but without direct contact with each other. The cell-basement membrane extract (BME) mixture was seeded into each cell culture chamber to separate these types of cells. In order to form a stable concentration gradient of CAF secretions in this chip to induce tumor cell invasion, the chip was designed with an input and a syringe pump linked to each cell culture chamber to control the flow of culture medium from the upper chamber to the downstream chamber, thereby forming a concentration gradient of the conditioned medium or tracer (such as a fluorescent dye). A finished integrated bionic microfluidic device is shown in Figure 1B. Cells cultured in Part I were stained with Hoechst/propidium iodide (PI; Figure 1C). We followed the diffusion process with FITC-conjugated goat anti-mouse IgG at a dilution of 1:200 (Jackson ImmunoResearch, West Grove, PA, USA) and reviewed it under a fluorescence microscope.
Cell lines and culture
Human lung adenocarcinoma A549 and SPCA-1 cell lines were obtained from the American Type Culture Collection (ATCC; Manassas, VA, USA) and cultured in RPMI-1640 medium (Gibco, Long Island, NY), while human lung fibroblast WI38 cells, also obtained from ATCC, were cultured in IMDM (Gibco) at 37°C in a humidified atmosphere of 5% CO₂. All cell culture media were supplemented with 10% fetal bovine serum (FBS; Hyclone, Logan, UT, USA), penicillin (100 U mL⁻¹), and streptomycin (100 μg mL⁻¹).
Immunofluorescence
We performed immunofluorescence to detect expression of the specific CAF biomarkers α-SMA and Vimentin, and of GRP78, in A549 and SPCA-1 cells cultured in this chip and in the Transwell, respectively. In brief, cells were rinsed in phosphate-buffered saline (PBS) three times, fixed in 4% paraformaldehyde for 15 min, and then permeabilized in 0.1% Triton X-100 (AppliChem, Switzerland) for 20 min. After three washes, cells were blocked in 5% bovine serum albumin (BSA; Sigma, St. Louis, MO, USA) in PBS for 1 h at 37°C. To confirm that fibroblasts had transformed into CAFs, WI38 cells were incubated with an anti-α-SMA antibody (Abcam, UK) and an anti-Vimentin antibody (Abcam), respectively, for 12 h at 4°C.
To detect the level of GRP78 expression after induction by CAFs, A549 and SPCA-1 cells were incubated with an anti-GRP78 antibody (Abcam) and subsequently with an Alexa Fluor® 488-conjugated secondary antibody (donkey anti-mouse IgG; Invitrogen, Carlsbad, CA, USA) at 37°C for 1 h. Cells were then counterstained with DAPI (Sigma) and reviewed under a fluorescence microscope with a confocal imaging system (Confocal Laser Scanning Microscope CLSM, Leica TCS SP5 II, Germany).
Production of CAF-conditioned growth medium using the transwell assay
To assess tumor-stromal cell interactions in vitro, we utilized an indirect-contact co-culture system consisting of a Transwell apparatus with a 0.4-μm-pore membrane (six-well plate; Corning, Corning, NY, USA). Tumor cells were added into the upper chamber of the Transwell insert and WI38 cells were added into the lower chamber. After incubation for three days, the co-culture medium was collected and centrifuged to remove cellular debris, and the supernatants were frozen at −80°C and used as a chemoattractant for the tumor cell invasion assay. Co-cultured WI38 cells were collected for Western blot analysis of α-SMA and Vimentin proteins.
Protein extraction and western blot
Cells were harvested and lysed for 30 min in RIPA buffer freshly supplemented with 10 mg/ml phenylmethanesulphonyl fluoride and 1% (v/v) protease inhibitor cocktail (Sigma). The protein concentration was determined by the BCA assay. Protein lysates were then separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto nitrocellulose membranes (Millipore, Billerica, MA, USA). For Western blotting, the membranes were blocked in 5% fat-free dry milk in PBS and then incubated with primary antibodies against GRP78.
RNAi interference
The siRNA used to knock down GRP78 expression was designed and synthesized by Invitrogen (Shanghai, China), and the sequences of the siRNA duplex were 5'-CCAAAGACGCUGGAACUAUTT-3' and 5'-AUAGUUCCAGCGUCUUUGGTT-3'. Transfection was conducted using Lipofectamine™ 3000 (Invitrogen) according to the manufacturer's instructions. Briefly, cells were plated in 6-well plates and cultured for 24 h, and then transfection complex containing 5 μg siRNA was added into the cell culture; cells were further cultured for 72 h at 37°C. The level of GRP78 expression was determined by Western blot.
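As a quick consistency check of the duplex listed above, the two strands should be reverse complements over their 19-nt cores, with the customary 3' TT overhangs left unpaired. The short sketch below verifies this; it is only an illustration of the sequence relationship, not part of the published protocol.

```python
# Sanity check that the two listed siRNA strands form a duplex:
# each 19-nt core should be the reverse complement of the other,
# with the 3' dinucleotide (TT) overhangs left unpaired.
sense     = "CCAAAGACGCUGGAACUAUTT"
antisense = "AUAGUUCCAGCGUCUUUGGTT"

def reverse_complement_rna(seq: str) -> str:
    comp = {"A": "U", "U": "A", "C": "G", "G": "C", "T": "A"}
    return "".join(comp[b] for b in reversed(seq))

core_sense, core_antisense = sense[:-2], antisense[:-2]  # strip TT overhangs
assert reverse_complement_rna(core_sense) == core_antisense
print("19-nt cores are reverse complements; duplex geometry is consistent.")
```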
Tumor cell transwell invasion assay
Matrigel (Corning) was used to pre-coat the 8-μm-pore filters separating the upper and bottom chambers of the Transwell apparatus (Corning). After the Matrigel solidified at 37°C, cancer cells were seeded into the upper chambers, and co-culture medium, WI38 cell secretion, or IMDM was added into the bottom Transwell chamber. Cells were incubated at 37°C overnight. Cells remaining on the upper surface of the filter were removed with a cotton swab, and cells that had invaded to the lower surface of the filter were fixed with 100% methanol for 10 min, stained with Giemsa stain (Sigma Chemical Co.) for 10 min, and then washed with distilled water. The number of cells that had invaded to the lower surface of the polycarbonate filter was counted at 100× magnification under a light microscope. Each type of cell was assayed in triplicate, and the assay was repeated at least twice. Invading cancer cells were then collected for immunofluorescence and Western blotting.
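The triplicate invaded-cell counts obtained this way are the raw data behind the group comparisons described in the following statistical analysis section. A minimal sketch of such a comparison is shown below; the counts are purely hypothetical and stand in for the three inducer groups (IMDM, WI38 secretion, co-culture medium), not for values reported in this study.

```python
# Hypothetical example of comparing triplicate invaded-cell counts across
# inducer groups with a one-way ANOVA, as described in the statistics section.
# The numbers below are illustrative only, not data from the study.
from scipy.stats import f_oneway

imdm       = [12, 15, 10]   # invaded cells per field, hypothetical
wi38       = [18, 22, 20]
co_culture = [55, 61, 58]

f_stat, p_value = f_oneway(imdm, wi38, co_culture)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # p <= 0.05 -> significant difference
```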
Statistical analysis
Data are expressed as means ± standard deviation, and differences among groups were analyzed by analysis of variance (ANOVA) using SPSS 13.0 for Windows software (SPSS, Chicago, IL, USA). A p value of ≤ 0.05 was considered statistically significant. | 4,848.2 | 2016-03-21T00:00:00.000 | [
"Biology"
] |
Semiclassical wave packet dynamics in Schrodinger equations with periodic potentials
We consider semiclassically scaled Schrödinger equations with an external potential and a highly oscillatory periodic potential. We construct asymptotic solutions in the form of semiclassical wave packets. These solutions are concentrated (both in space and in frequency) around the effective semiclassical phase-space flow obtained by Peierls substitution, and involve a slowly varying envelope whose dynamics is governed by a homogenized Schrödinger equation with time-dependent effective mass. The corresponding adiabatic decoupling of the slow and fast degrees of freedom is shown to be valid up to Ehrenfest time scales.
Introduction
1.1. General setting. We consider the semiclassically scaled Schrödinger equation (1.1), with initial data ψ^ε|_{t=0} = ψ^ε_0, where d ≥ 1 is the spatial dimension and ψ^ε = ψ^ε(t, x) ∈ C. Here, we have already rescaled all physical parameters such that only one semiclassical parameter ε > 0 (i.e. the scaled Planck's constant) remains. In the following we shall be interested in the asymptotic description of ψ^ε(t, x) for ε ≪ 1. To this end, the potential V_Γ(y) ∈ R is assumed to be smooth and periodic with respect to some regular lattice Γ ≃ Z^d, generated by a given basis {η_1, . . . , η_d}, η_ℓ ∈ R^d, i.e. V_Γ(y + γ) = V_Γ(y) for all γ ∈ Γ and y ∈ R^d.
In addition, the slowly varying potential V is assumed to be smooth with bounded derivatives of order two and higher (Assumption 1.1); note that this implies that V(x) grows at most quadratically at infinity. Equation (1.1) describes the dynamics of quantum particles in a periodic lattice potential V_Γ under the influence of an external, slowly varying driving force F = −∇V(x). A typical application arises in solid state physics, where (1.1) describes the time-evolution of electrons moving in a crystalline lattice (generated by the ionic cores). The asymptotics of (1.1) as ε → 0+ is a natural two-scale problem which is well studied in the physics and mathematics literature. Early mathematical results are based on time-dependent WKB type expansions [3,15,37] (see also [7] for a more recent application in the nonlinear case), which, however, suffer from the appearance of caustics and are thus only valid for small times. In order to overcome this problem, other methods based on, e.g., Gaussian beams [11], or Wigner measures [13,14], have been developed. These approaches yield an asymptotic description for time-scales of order O(1) (i.e. beyond caustics). More recently, so-called space-adiabatic perturbation theory has been used (together with Weyl pseudo-differential calculus) to derive an effective Hamiltonian governing the dynamics of particles in periodic potentials V_Γ under the additional influence of slowly varying perturbations [22,39]. The semiclassical asymptotics of this effective model is then obtained in a second step, invoking an Egorov-type theorem.
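The displayed equation (1.1) itself is not reproduced above. Based on the scaling just described (a single parameter ε, the periodic potential sampled at x/ε, and a slowly varying external potential), it presumably takes the following standard form; the precise constants and sign conventions below are an assumption on our part.

```latex
% Presumed form of the semiclassically scaled equation (1.1), reconstructed
% from the surrounding description; constants and signs are assumptions.
\begin{equation*}
  i\varepsilon\,\partial_t \psi^{\varepsilon}
  = -\frac{\varepsilon^{2}}{2}\,\Delta \psi^{\varepsilon}
    + V_{\Gamma}\!\Big(\frac{x}{\varepsilon}\Big)\psi^{\varepsilon}
    + V(x)\,\psi^{\varepsilon},
  \qquad
  \psi^{\varepsilon}\big|_{t=0} = \psi^{\varepsilon}_{0},
  \quad x\in\mathbb{R}^{d},\; t\in\mathbb{R}.
\end{equation*}
```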
On the other hand, it is well known that in the case without periodic potential, semiclassical approximations which are valid up to Ehrenfest time t ∼ O(ln 1/ε) can be constructed in a rather simple way. The corresponding asymptotic method is based on propagating semiclassical wave packets, or coherent states, i.e. approximate solutions of (1.1) which are sufficiently concentrated in space and in frequency around the classical Hamiltonian phase-space flow. More precisely, one considers approximate solutions of the form (1.3), where the purely time-dependent function S(t) denotes the classical action (see §1.3 below). The right-hand side of (1.3) corresponds to a wave function which is equally localized in space and in frequency (at scale √ε), so the uncertainty principle is optimized. In other words, the relevant localization quantities all have the same order of magnitude, O(1), as ε → 0. The basic idea for this type of asymptotic method can be found in the classical works of [16,27] (see also [4,29] for a broader introduction). It has been developed further in, e.g., [8,9,34,35,38], and in addition has been proved to be applicable in the case of nonlinear Schrödinger equations [6] (a situation in which the use of Wigner measures or space-adiabatic perturbation theory fails). Asymptotic results based on such semiclassical wave packets also have the advantage of giving a rather clear connection between quantum mechanics and classical particle dynamics and are thus frequently used in numerical simulations (see e.g. [12]). Ehrenfest time is the largest time up to which the wave packet approximation is valid, in general. Without any extra geometric assumption, the coherent structure may be lost at some time of order C ln 1/ε, if C is too large. See e.g. [5,10,30,32,33] and references therein.
Interestingly enough, though, it seems that so far this method has not been extended to include highly oscillatory periodic potentials V_Γ(x/ε), and it will be the main task of this work to do so. To this end, it will be necessary to understand the influence of V_Γ(x/ε) on the dispersive properties of the solution ψ^ε(t, x). In particular, having in mind the results quoted above, one expects that in this case the usual kinetic energy of a particle, E = |k|²/2, has to be replaced by E_m(k), i.e. the energy of the m-th Bloch band associated to V_Γ. In physics this is known under the name Peierls substitution. We shall show that under the additional influence of a slowly varying potential V(x), this procedure is in fact asymptotically correct (i.e. for ε ≪ 1) up to Ehrenfest time, provided the initial data ψ^ε_0 is sufficiently concentrated around (q_0, p_0) ∈ R^{2d}. Remark 1.2. Indeed, we could also allow for time-dependent external potentials V(t, x) ∈ R, measurable in time, smooth in x, and satisfying the analog of Assumption 1.1 uniformly in time. Under these assumptions, it is straightforward to adapt the analysis given below. For the sake of notation, we shall not do so here, but rather leave the details to the reader.
1.2. Bloch and semiclassical wave packets. In order to state our result more precisely, we first recall some well-known results on the spectral theory of periodic Schrödinger operators, cf. [31,40]: Denote by Y ⊂ R^d the (centered) fundamental domain of the lattice Γ, equipped with periodic boundary conditions, i.e. Y ≃ T^d. Similarly, we denote by Y* ≃ T^d the fundamental domain of the corresponding dual lattice, usually referred to as the Brillouin zone. Bloch-Floquet theory asserts that H_per admits a fiber decomposition into operators H_Γ(k), defined for k ∈ Y*. It therefore suffices to consider the corresponding spectral problem on Y, where E_m(k) ∈ R and χ_m(y, k), respectively, denote an eigenvalue/eigenvector pair of H_Γ(k), parametrized by k ∈ Y*, the so-called crystal momentum. These eigenvalues can be ordered increasingly, E_1(k) ≤ E_2(k) ≤ · · ·, where each eigenvalue is repeated according to its multiplicity (which is known to be finite). The set {E_m(k); k ∈ Y*} is called the m-th energy band (or Bloch band). The associated eigenfunctions χ_m(y, k) are Γ*-periodic w.r.t. k and form a complete orthonormal basis of L²(Y). Moreover, the functions χ_m(·, k) ∈ H²(Y) are known to be real-analytic with respect to k on Y*\Ω, where Ω is a set of Lebesgue measure zero (the set of band crossings). Next, we consider for some m ∈ N the corresponding semiclassical band Hamiltonian, obtained by Peierls substitution, i.e. h^sc_m(k, q) = E_m(k) + V(q),
and denote the semiclassical phase-space trajectories associated to h^sc_m by (1.6): q̇(t) = ∇_k E_m(p(t)), q(0) = q_0; ṗ(t) = −∇_x V(q(t)), p(0) = p_0. This system is the analog of (1.4) in the presence of an additional periodic potential.
that is, a shift with constant speed ω = ∇E m (p 0 ).
In order to make sure that the system (1.6) is well defined, we shall from now on impose the following condition on E_m(k). Assumption 1.4. We assume that E_m(p(t)) is a simple eigenvalue, uniformly for all t ∈ R, i.e. there exists a δ > 0 such that the distance of E_m(p(t)) from the rest of the spectrum of H_Γ(p(t)) is at least δ, for all t ∈ R. It is known that if E_m(k) is simple, it is infinitely differentiable, and thus the right-hand side of (1.6) is well defined. Under Assumption 1.4, we consequently obtain a smooth semiclassical flow (q_0, p_0) → (q(t), p(t)), for all t ∈ R. In addition, one can choose χ_m(y, k) to be Γ-periodic with respect to y and such that (y, t) → χ_m(y, p(t)) is bounded together with all its derivatives. Example 1.5. By compactness of Y*, Assumption 1.4 is satisfied in either of the following two cases:
Main result.
With the above definitions at hand, we are now able to state our main mathematical result. To this end, we first define a semiclassical wave packet in the m-th Bloch band (satisfying Assumption 1.4) by (1.8), with q(t), p(t) given by system (1.6) and u(t, z) ∈ C a smooth, slowly varying envelope which will be determined by an envelope equation yet to be derived (see below). In addition, the ε-oscillatory phase is given by (1.9), where S_m(t) ∈ R is the (purely time-dependent) semiclassical action, with L_m denoting the Lagrangian associated to the effective Hamiltonian h^sc_m.
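The displayed formulas (1.8) and (1.9) are not reproduced above. Based on the role of the envelope u, the Bloch function χ_m, and the superposition formula appearing in Section 5, a plausible reconstruction is the following; the ε^{-d/4} normalization and sign conventions are assumptions, not taken verbatim from the text.

```latex
% Plausible reconstruction of the wave packet (1.8), the phase (1.9), and the
% semiclassical action; conventions below are assumptions.
\begin{align*}
  \varphi^{\varepsilon}(t,x)
    &= \varepsilon^{-d/4}\,
       u\Big(t,\tfrac{x-q(t)}{\sqrt{\varepsilon}}\Big)\,
       \chi_m\Big(\tfrac{x}{\varepsilon},\,p(t)\Big)\,
       e^{\,i\phi(t,x)/\varepsilon},
    \qquad
    \phi(t,x) = S_m(t) + p(t)\cdot\big(x-q(t)\big),\\[2pt]
  S_m(t) &= \int_0^t L_m\big(p(s),q(s)\big)\,ds,
  \qquad
  L_m(p,q) = p\cdot\nabla_k E_m(p) - E_m(p) - V(q).
\end{align*}
```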
Remark 1.6. Note that this is nothing but the Legendre transform of the effective Hamiltonian h^sc_m. As in classical mechanics, one associates to a given Hamiltonian H(p, q) a Lagrangian via L(p, q) = p · q̇ − H(p, q).
The function ϕ^ε given by (1.8) generalizes the usual class of semiclassical wave packets considered in e.g. [16,27]. Note that in contrast to the two-scale WKB approximation considered in [3,15,37], it involves an additional scale of order 1/√ε, the scale of concentration of the amplitude u. In addition, (1.8) does not suffer from the appearance of caustics. Nevertheless, in comparison to the highly oscillatory Bloch function χ_m, the amplitude is still slowly varying and thus we can expect an adiabatic decoupling between the slow and fast scales to hold on (long) macroscopic time-scales. Indeed, we shall prove the following result: Theorem 1.7. Let V_Γ be smooth and V satisfy Assumption 1.1. In addition, let Assumption 1.4 hold, and let the initial data be given by a wave packet of the form (1.8) at t = 0, with q_0, p_0 ∈ R^d and some given profile u_0 ∈ S(R^d).
Then there exists C > 0 such that the solution of (1.1) can be approximated by the wave packet ϕ^ε given by (1.8), with an envelope u related, via the Berry phase term β(t) ∈ iR, to a profile v ∈ C(R; S(R^d)) satisfying the homogenized Schrödinger equation (1.12). In particular there exists C_0 > 0 such that the approximation remains valid for times t ≤ C_0 ln(1/ε). Remark 1.8. In fact it is possible to prove the same result under less restrictive regularity assumptions on u_0 and V_Γ. Indeed, Proposition 5.1 shows that it is sufficient to require that u_0 belongs to a certain weighted Sobolev space. Concerning the periodic potential, it is possible to lower the regularity considerably, depending on the dimension. For example, in d = 3 it is sufficient to assume V_Γ to be infinitesimally bounded with respect to −∆. This, together with several density arguments (to be invoked at different stages of the formal expansion), is enough to justify the analysis given below. Theorem 1.7 provides an approximate description of the solution to (1.1) up to Ehrenfest time and can be seen as the analog of the results given in [16,27,8,9,18,34,35,38], where the case of slowly varying potentials V(x) is considered. The proof does not rely on the use of pseudo-differential calculus or space-adiabatic perturbation theory and can thus be considered considerably simpler from a mathematical point of view. In fact, our approach is similar to the one given in [18], which derives an analogous result for the so-called Born-Oppenheimer approximation of molecular dynamics. Note however, that we allow for more general initial amplitudes, not necessarily Gaussian. Indeed, in the special case where the initial envelope u_0 is a Gaussian, its evolution u remains Gaussian and can be completely characterized; see §4.3. Also note that in contrast to the closely related method of Gaussian beams presented in, e.g., [11], we do not need to include complex-valued phases and, in addition, obtain an approximation valid for longer times.
The Berry phase term is an example of a so-called geometric phase in quantum mechanics. It is a well-known feature of semiclassical approximations in periodic potentials; see, e.g., [28] for more details and a geometric interpretation. The homogenized Schrödinger equation features a rather unusual dispersive behavior described by a time-dependent effective mass tensor, namely the Hessian of E_m(k) evaluated at k = p(t). To our knowledge, Theorem 1.7 is the first result in which a Schrödinger-type equation with time-dependent effective mass has been rigorously derived (see also the discussion in Remark 3.1).
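The displayed form of the homogenized equation (1.12) is not reproduced above. Consistent with the description of a Hessian-valued effective mass tensor and the quadratic potential obtained from the Taylor expansion of V around q(t), it plausibly reads as follows; the precise numerical constants are an assumption.

```latex
% Plausible form of the homogenized envelope equation (1.12); constants are
% assumptions consistent with the surrounding description.
\begin{equation*}
  i\,\partial_t v
  = -\frac{1}{2}\sum_{j,\ell=1}^{d}
      \partial^{2}_{k_j k_\ell} E_m\big(p(t)\big)\,
      \partial^{2}_{z_j z_\ell} v
    + \frac{1}{2}\big\langle \nabla^{2}_{x} V\big(q(t)\big)\,z,\,z\big\rangle\, v ,
  \qquad v\big|_{t=0}=u_0 .
\end{equation*}
```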
Remark 1.9. Let us also mention that the same class of initial data has been considered in [1] for a Schrödinger equation with locally periodic potential V_Γ(x, y) and corresponding x-dependent Bloch bands E_m(k; x). In this work, the authors derive a homogenized Schrödinger equation, provided that ψ^ε_0 is concentrated around a stationary point (x_0, p_0) of the semiclassical phase-space flow.
This implies q(t) = q 0 and p(t) = p 0 , for all t ∈ R, yielding (at least asymptotically) a localization of the wave function. We observe the same phenomenon in our case under the condition V (x) = 0 and ∇ k E m (k) = 0 (see Example 1.3).
This work is now organized as follows: In the next section, we shall formally derive an approximate solution to (1.1) by means of a (formal) multi-scale expansion. This expansion yields a system of three linear equations, which we shall solve in Section 3. In particular, we shall obtain from it the homogenized Schrödinger equation. The corresponding Cauchy problem is then analyzed in Section 4, where we also include a brief discussion on the particularly important case of Gaussian profiles (yielding a direct connection to [16]). A rigorous stability result for our approximation, up to Ehrenfest time, is then given in Section 5.
Remark 1.10. We expect that our results can be generalized to the case of (weakly) nonlinear Schrödinger equations (as considered in [6,7]). This will be the aim of a future work.
Formal derivation of an approximate solution
2.1. Reduction through exact computations. We seek the solution ψ^ε of (1.1) in the form (2.1), where the phase φ(t, x) is given by (1.9) and the function U^ε = U^ε(t, z, y) is assumed to be smooth, Γ-periodic with respect to y, and to admit an asymptotic expansion (2.2). Note that due to the inclusion of the factor ε^{−d/4}, the L²(R^d) norm of the right-hand side of (2.1) is in fact uniformly bounded with respect to ε, whereas the L^∞(R^d) norm in general will grow as ε → 0. The asymptotic expansion (2.2) therefore has to be understood in the L² sense. Taking into account that, in view of (1.9), ∇_x φ_m(t, x) = p(t), we compute the action of the equation on this ansatz, where in all of the resulting expressions the various functions are understood to be evaluated at the appropriate rescaled arguments. Ordering equal powers of ε, we obtain a hierarchy of equations. So far, we have used neither the fact that q(t), p(t) are given by the Hamiltonian flow (1.6), nor the explicit dependence of φ_m on time. Using these properties allows us to simplify the hierarchy. Now, recall that in the above lines U^ε is evaluated at the shifted spatial variable z = (x − q(t))/√ε. Taking this into account, we notice that the above hierarchy has to be modified accordingly. Next, we perform a Taylor expansion of V around the point q(t), which terminates at second order up to a controlled remainder, since V is at most quadratic in view of Assumption 1.1. Recalling that h^sc_m(p, q) = E_m(p) + V(q), the terms involving V(q) cancel out in b^ε_0, the terms involving ∇V(q) cancel out in b^ε_1, and thus we finally obtain: Lemma 2.1. Let Assumptions 1.1 and 1.4 hold and let ψ^ε be related to U^ε through (2.1). Then the expansion stated above holds, with a remainder r^ε(t, z, y) bounded by a constant C > 0 independent of t, z, y and ε.
2.2.
Introducing the approximate solution. We now expand U^ε in powers of ε, according to (2.2). To this end, we introduce (time-dependent) linear operators L_0, L_1, L_2. In order to solve (1.1) up to a sufficiently small error term (in L²), we need to cancel the first three terms in our asymptotic expansion. This yields the system of equations L_0 U_0 = 0, L_0 U_1 + L_1 U_0 = 0, and L_0 U_2 + L_1 U_1 + L_2 U_0 = 0. Assuming for the moment that we can do so, this means that we (formally) solve (1.1) up to errors of order ε^{3/2} (in L²), which is expected to generate a small perturbation of the exact solution (in view of the ε in front of the time derivative of ψ^ε in (1.1)). We consequently define the approximate solution accordingly, where the remainder terms r^ε_1, r^ε_2 are given by r^ε_1(t, z, y) = L_2 U_1(t, z, y), r^ε_2(t, z, y) = L_1 U_2(t, z, y), and r^ε satisfies a bound in which the constant C > 0 is independent of t, z, y and ε.
Derivation of the homogenized equation
3.1. Some useful algebraic identities. Given the form of L 0 , the equation Before studying the other two equations, we shall recall some algebraic formulas related to the eigenvalues and eigenvectors of H Γ . First, in view of the identity (1.5), we have Taking the scalar product in L 2 (Y ) with χ m , we infer Since H Γ is self-adjoint, the last term is zero, thanks to (1.5). We infer Differentiating (3.2) again, we have, for all j, ℓ ∈ {1, . . . , d}: Taking the scalar product with χ m , we have:
3.2.
Higher order solvability conditions. By Fredholm's alternative, a necessary and sufficient condition to solve the equation L_0 U_1 + L_1 U_0 = 0 is that L_1 U_0 be orthogonal to ker L_0, that is, condition (3.5). Given the expression of L_1 and the formula (3.1), we compute L_1 U_0 explicitly; in view of (3.3), we infer that (3.5) is automatically fulfilled. We thus obtain that u_1^⊥, the part of U_1 which is orthogonal to ker L_0, is found by inverting an elliptic equation. Note that the formula for L_1 U_0 can also be written in terms of χ_m(y, p(t)) · ∇_z u(t, z); thus, taking into account (3.2), we simply have u_1^⊥(t, z, y) = −i∇_k χ_m(y, p(t)) · ∇_z u(t, z). At this stage, we shall, for simplicity, choose u_1 = 0, in which case U_1 becomes simply a function of u: (3.6) U_1(t, z, y) = −i∇_k χ_m(y, p(t)) · ∇_z u(t, z).
As a next step in the formal analysis, we must solve the third equation of the hierarchy. By the same argument as before, we require that its source term be orthogonal to ker L_0. With the expression (3.6), we compute the relevant terms, among them + i u ṗ(t) · ∇_k χ_m(y, p(t)).
Recalling that ṗ(t) = −∇V(q(t)), and making the last sum symmetric with respect to j and ℓ and using (3.4), we finally obtain the homogenized Schrödinger equation (3.8) with time-dependent effective mass, where β(t) denotes the so-called Berry phase term. From ‖χ_m‖_{L²(Y)} = 1, we infer that ⟨χ_m, ∇_k χ_m⟩_{L²(Y)} ∈ iR, and thus iβ(t) ∈ R acts like a purely time-dependent, real-valued potential. Thus, invoking a unitary change of variable implies that v(t, z) solves (1.12). Equation (3.8) models a quantum mechanical time-dependent harmonic oscillator, in which the time dependence is present both in the differential operator and in the potential.
This equation has been derived in [2] (see also [36,20,23] for similar results). Note, however, that in the quoted works the scaling of the original equation (1.1) is different (i.e. not in semiclassical form).
The envelope equation
We examine the Cauchy problem for (3.8), with special emphasis on the large time control of u.
4.1.
The general Cauchy problem. Equation (3.8) can be seen as the quantum mechanical evolution problem corresponding to a time-dependent Hamiltonian (4.1). Under Assumptions 1.1 and 1.4, this Hamiltonian is self-adjoint, smooth in time, and quadratic in (z, ζ) (in fact, at most quadratic would be sufficient). Using the result given in [24, p. 197] (see also [25]), we directly infer the following existence result: Lemma 4.1 (From [24]). For d ≥ 1 and v_0 ∈ L²(R^d), consider the equation (4.2). If the coefficients a_jk and b_jk are continuous and real-valued, such that the matrices (a_jk)_{j,k} and (b_jk)_{j,k} are symmetric for all time, then (4.2) has a unique solution v ∈ C(R; L²(R^d)), and the L² norm of v is conserved. Moreover, if v_0 ∈ Σ^k for some k ∈ N, then v ∈ C(R; Σ^k).
In particular, this implies that if u 0 ∈ Σ k , then (1.12) has a unique solution v ∈ C(R; Σ k ). As a consequence, (3.8) has a unique solution u ∈ C(R; Σ k ) such that u |t=0 = u 0 . Remark 4.2. It may happen that the functions a jk are zero on some non-negligible set. In this case, (4.2) ceases to be dispersive. Note that the standard harmonic oscillator is dispersive, locally in time only, since it has eigenvalues. We shall see that this is not a problem in our analysis though.
4.2.
Exponential control of the envelope equation. To prove Theorem 1.7, we need to control the error present in Lemma 2.2 for large time. In general, i.e. without extra geometric assumptions on the wave packet, exponential growth in time must be expected: if u_0 ∈ Σ^k, then the solution u to (3.8) satisfies u ∈ C(R; Σ^k), and there exists C > 0 such that ‖u(t)‖_{Σ^k} grows at most exponentially in time. Proof. The result can be established by induction on k. The constant C must actually be expected to depend on k, as shown by the case of the harmonic oscillator: there, the fundamental solution is explicit (generalized Mehler formula, see e.g. [21]), and we check that ‖u(t)‖_{Σ^k} behaves like e^{kt}. For k = 0, the result is obvious, since in view of Lemma 4.1 the L²-norm is conserved. The case k = 1 illustrates the general mechanism of the proof, and we shall stick to this case for simplicity. The key remark is that even though the operators z and ∇_z (involved in the definition of Σ^1) do not commute with the Hamiltonian (4.1), the commutators yield a closed system of estimates. First, multiplying (3.8) by z and then applying ∇_z, we obtain, in view of Assumption 1.1, two differential inequalities. Summing the two inequalities and using the conservation of mass, Gronwall's lemma yields the proposition in the case k = 1. By induction, applying (z, ∇_z) to (3.8) k times, the defects of commutation always yield the same sort of estimate, and the proposition follows easily.
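The two displayed inequalities in the k = 1 step are not reproduced above. A sketch of what they presumably look like is the following, where the constants are generic and the precise form of the right-hand sides is an assumption based on the commutator structure of (3.8).

```latex
% Sketch of the closed system of estimates alluded to in the k = 1 case;
% the exact constants and lower-order terms are assumptions.
\begin{align*}
  \frac{d}{dt}\,\| z\,u(t)\|_{L^2}
    &\lesssim \big\|\nabla^{2}_{k} E_m\big(p(t)\big)\big\|\;\|\nabla_z u(t)\|_{L^2}, \\
  \frac{d}{dt}\,\|\nabla_z u(t)\|_{L^2}
    &\lesssim \big\|\nabla^{2}_{x} V\big(q(t)\big)\big\|\;\| z\,u(t)\|_{L^2}
       + \|u(t)\|_{L^2}.
\end{align*}
```

Summing the two bounds, using the conservation of ‖u(t)‖_{L²}, and applying Gronwall's lemma then yields the asserted exponential control of the Σ¹ norm.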
Gaussian wave packets.
In the case where the initial datum in (3.8) is a Gaussian, we can compute its evolution and show that it remains Gaussian, by following the same strategy as in [16] (see also [17,18]). As a matter of fact, the order in which we have proceeded is different from the one in [16], since we have isolated the envelope equation (3.8) before considering special initial data. As a consequence, we have fewer unknowns. Consider (3.8) with initial datum where the matrices A and B satisfy the following properties: A and B are invertible; where the matrices A(t) and B(t) evolve according to the differential equations (4.10) In addition, for all time t ∈ R, A(t) and B(t) satisfy (4.5)-(4.8).
Proof. The argument is the same as in [16] (see also [17,18]): One easily checks that if A(t) and B(t) evolve according to (4.10), then u given by (4.9) solves (3.8). On the other hand, it is clear that (4.10) has a global solution. Finally, since ∇ 2 k E m (p(t)) and ∇ 2 x V (q(t)) are symmetric matrices, it follows from [16, Lemma 2.1] that for all time, A(t) and B(t) satisfy (4.5)-(4.8).
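The matrix ODEs (4.10) are not displayed above. One plausible form, following Hagedorn-type conventions for Gaussian wave packets adapted to the effective kinetic energy E_m, is the following; the signs and normalizations here are assumptions and are not taken from the text.

```latex
% One plausible form of the matrix ODEs (4.10) governing the Gaussian
% parameters A(t), B(t); conventions (Hagedorn-type) are assumptions.
\begin{equation*}
  \dot A(t) = i\,\nabla^{2}_{k} E_m\big(p(t)\big)\,B(t),
  \qquad
  \dot B(t) = i\,\nabla^{2}_{x} V\big(q(t)\big)\,A(t),
  \qquad A(0)=A,\quad B(0)=B .
\end{equation*}
```

In the standard case E_m(k) = |k|²/2 these reduce to the classical Hagedorn system Ȧ = iB, Ḃ = i∇²V(q(t))A, which is consistent with the references cited above.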
Stability of the approximation up to Ehrenfest time
As a final step we need to show that the derived approximation ψ ε app (t) indeed approximates the exact solution ψ ε (t) up to Ehrenfest time.
The a priori L² estimate yields the desired bound on the difference between the exact and the approximate solution. The assertion then follows from Lemma 2.2 (establishing the needed properties for the functions r^ε, r^ε_1 and r^ε_2), Proposition 3.2, and Proposition 4.3. With this approach, we need to know that r^ε is in L²_z, so U_0, U_1 and U_2 must have three moments in L²_z: in view of Proposition 3.2 and Proposition 4.3, this amounts to demanding u_0 ∈ Σ^5. This asymptotic stability result directly yields the assertion of Theorem 1.7.
Remark 5.2. The construction of the approximate solution ψ ε app has forced us to introduce non-zero correctors U 1 and U 2 , given by elliptic inversion. Therefore, we had to consider well-prepared initial data for ψ ε app . This aspect is harmless as long as one is interested only in the leading order behavior of ψ ε as ε → 0. As a consequence, our approach would not allow us to construct arbitrary accurate approximations for ψ ε (in terms of powers of ε), unless well-prepared initial data are considered, i.e. data lying in so-called super-adiabatic subspaces, in the terminology of [28] (after [26]). This is due to the spectral analysis implied by the presence of the periodic potential V Γ , and shows a sharp contrast with the case V Γ = 0.
Of course the above stability result immediately generalizes to situations where, instead of a single ϕ^ε, a superposition of finitely many semiclassical wave packets is considered, i.e. a finite superposition of terms u_n((x − q_n)/√ε) χ_{m_n}(x/ε, p_n) e^{i p_n·(x−q_n)/ε}.
Since the underlying semiclassical Schrödinger equation (1.1) is linear, each of these initial wave packets will evolve individually from the rest, as in Theorem 1.7. Up to some technical modifications, it should be possible to consider even a continuous superposition of wave packets, yielding a semiclassical approximation known under the name "frozen Gaussians", see [19]. | 6,349 | 2011-01-17T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Crystallinity of FRCM/GPM with High PB through Microbial Growth
Fiber reinforced composite (FRC) requires a process of grinding, mixing and compounding natural fibers from cellulosic waste streams into a polymer matrix that creates a high-strength fiber composite. In this situation, the specified waste or base raw materials used are waste thermoplastics and different types of cellulosic waste, including rice husk and saw dust. FRC is a high-performance fiber composite achieved and made possible through a proprietary molecular re-engineering process by interlinking cellulosic fiber molecules with resins in the FRC material matrix, resulting in a product of exceptional structural properties. In this feat of molecular re-engineering, selected physical and structural properties of wood are effectively cloned and obtained in the FRC component, in addition to other essential qualities, in order to produce performance properties superior to conventional wood. The dynamic characteristics of composite structures are largely derived from the reinforcing fibres. The fiber, held in place by the matrix resin, contributes to tensile strength in a composite, enhancing the performance properties in the final part, such as strength and rigidity, while minimizing weight. This review also discusses the advantages of composites for advanced manufacturing and recycling, and offers a future perspective on how these materials may support the next generation of science and technology.
Introduction
Fiber reinforced composite materials play a significant role in technology today through the design and manufacture of advanced materials capable of achieving higher stiffness/density and strength/density ratios.
Of specific significance is the issue of damage initiation and growth in fiber-reinforced metal matrix composite plates. Although the literature is rich in new developments in composite materials technology, it largely lacks a consistent examination of damage mechanisms in composite materials. During the previous two decades, researchers have been using micromechanical methods in the investigation of composite materials. The advantages of using such methods are that local effects can be accounted for and distinct damage mechanisms can be identified [1]. Composite materials consist of at least two microconstituents that differ in structure and chemical composition and which are insoluble in one another. However, many composite materials are made of just two phases: one termed the matrix, which is continuous and surrounds the other phase, often termed the dispersed phase. The goal of having at least two constituents is to exploit the superior properties of both materials without settling for the shortcomings of either [1]. In a fiber reinforced composite, the fibers carry the bulk of the load and the matrix serves as a medium for the transfer of the load. The matrix can be metal, polymer, or ceramic. The fibers likewise can be metal, ceramic, glass or polymer.
A portion of the upsides of composites is high explicit quality, high explicit solidness or modulus, great measurement soundness, an abnormal mix of properties not effectively possible with amalgams, and so forth [2]. Surface completion assumes a significant job in numerous territories and is a factor critical in the assessment of machining precision. Albeit numerous elements influence the surface state of a machined part, cutting boundaries, for example, speed, feed, profundity of cut, and apparatus nose span affect the surface harshness for a given machine instrument and workpiece set-up. The utilization of fiber-fortified composite materials (FRCM) in industry has developed significantly as of late because of a few attractive properties that these materials have. Be that as it may, different characteristics inalienable in these materials, for example, anisotropy and weakness, are hindering both structure and assembling applications. In particular, for gap age procedures, harm marvels, for example, spalling, delamination, edge chipping and split development, are an aftereffect of the previously mentioned attributes of these materials. A few opening creation forms, including and so on, have been proposed for an assortment of monetary and quality reasons, however traditional boring is as yet the most broadly utilized strategy in industry today [3]. As of late, there has been an expanding enthusiasm for the investigation of keen or astute structures. Actuators and sensors are two key segments in these structures and piezoelectric composite is one of the dynamic materials which is maybe most broadly utilized as actuators and sensors. As a rule, piezoelectric composite materials are thought to be homogenous with successful electro-flexible properties which are subject to their constituent properties and microstructural geometry. So as to tailor a piezoelectric composite to the particular prerequisites of its job in a shrewd structure, it is important to build up a proficient logical model to foresee its viable electro elastic properties, and in this way to examine the impacts of the composite constituent properties and microstructural geometry on the general composite properties [4]. Composite cylinders can be strengthened with nonstop filaments. At the point when such tubes are exposed to pulverizing loads, the reaction is mind boggling and relies upon collaboration between the various systems that control the devastating procedure. The methods of pounding and their controlling systems are portrayed.
Additionally, the subsequent squashing procedure and its productivity are tended to [5]. Boring is one of the major machining tasks which are as of now did on fiber-fortified composite materials. There are common place issues experienced when penetrating fiber-strengthened composites. These issues incorporate the delamination of the composites, quick instrument wear, fiber pullout, nearness of fine chip [6]. One of the deterrents confronting FRC material use is an absence of data with respect to the impact of plan boundaries on the mechanical presentation of a prosthesis. Engineering composite creators have figured out how to control mechanical properties in FRC structures by changing fiber direction, fiber content, and geometry, commonly alluded to as cross-sectional course of action or design. The greater part of dental fiber-strengthened composite (FRC) materials are manufactured utilizing hand lay-up forms, wherein the expert may decide the mechanical properties of the conclusive reclamation by the manner in which the composite constituents are organized in the last structure [7]. Improvement of the tar pre-impregnated FRC frameworks has prompted the expanded use of FRCs in the manufacture of research center made single crowns and incomplete or full inclusion fixed fractional false teeth, just as seat side periodontal bracing, glue fixed halfway false teeth, post center frameworks, and in orthodontic applications. Resin pre-impregnated FRC has been appeared to have satisfactory flexural modulus and flexural solidarity to work effectively in the oral cavity. The presentation of the FRC framework relies upon the polymer lattice just as fiber type, volume part, and the nature of the fiber-polymer grid interface. Notwithstanding mechanical execution, the arrangement of the polymer lattice and filaments likewise has a significant job in the holding capacity of Composites are propelled materials establishing of at least two synthetically particular constituents on a large scale, having an unmistakable interface isolating them. At least one broken stages are in this way, installed in a consistent stage to frame a composite. In the majority of the circumstances, the irregular stage is generally harder and more grounded than the ceaseless stage and is known as the fortification though, the constant stage is named as the grid. The framework material can be metallic, polymeric or can even be ceramic. When the grid is a polymer, the composite is called polymer lattice composite (PMC).
The reinforcing phase can be either fibrous or non-fibrous (particulate) in nature.
Fiber-reinforced polymers (FRP) consist of fibers of high strength and modulus embedded in or bonded to a matrix with a distinct interface between them. In this form, both fibers and matrix retain their physical and chemical identities. In general, the fibers are the principal load-bearing members, while the matrix keeps them in the desired location and orientation, acts as a load transfer medium between them, and protects them from environmental damage [10].
FRC/AMM
Among advanced fiber-reinforced composite (FRC) materials combined with advanced manufacturing methods (AMM), additive manufacturing of carbon fiber is one example.
Additive Manufacturing of Carbon Fiber
Additive manufacturing (AM) is defined as a process of joining materials to make objects from 3D model data, usually layer upon layer, as opposed to subtractive manufacturing approaches. AM technologies make it possible to build a large range of prototypes or functional components with complex geometries that cannot, or only with difficulty, be produced by conventional means. The extent of internal damage was determined using measured changes in the dynamic properties of the system (loss factor, dynamic stiffness and mode shape); to obtain the response data at higher frequencies, a modal analysis system was built around the performance characteristics of a laser Doppler vibrometer (LDV) and an electronic speckle pattern interferometer (ESPI).
These two devices provided complementary information for the determination of the dynamic characteristics of every vibration mode. With this system, damage-induced changes in the dynamic characteristics of composite materials were measured at frequencies up to 10 kHz. The results of this investigation demonstrated the following. Torsion modes provide the greatest sensitivity to localized internal damage. The evaluation of higher-frequency NDI data requires the ability to correlate the measured loss factor and resonant frequencies with the actual mode shape. The data obtained over the frequency range of the test could be reduced to a series of slopes that give a sensitive indication of the material condition. The sensitivity of the dynamic technique to localized damage is limited by the measurement of the loss factor [18]. Two interferometric techniques were used to obtain the dynamic information from each target.
Laser Doppler vibrometry (LDV) was used to measure the frequency response of the target, while an electronic speckle pattern interferometer (ESPI) recorded the associated mode shapes. Each of these interferometric techniques provides sensitivity for dynamic measurements that is better than more traditional measurement approaches, as also represented in Figure 2.
Effects of carbon fiber content on tensile properties (including tensile strength, Young's modulus, toughness, yield strength and ductility) of CFRP composite specimens: boxplots were used to express the distributions of the data at each carbon fiber content. The box covers the 25% to 75% percentile confidence interval, and the mean value in the box was used to express the trends of the effect of carbon fiber content on tensile properties. In the following sections, the stress-strain curves were selected in the same way. To maximize the damage-induced changes in the loss factor, two conditions must be met. First, the damage should be concentrated in a zone of high curvature (i.e., high strain energy). This condition follows from the definition of the loss factor measurement. Second, the distance between amplitude peaks in the standing wave should be smaller than the physical extent of the damage to be detected. When the peak-to-peak distance is larger than the damage zone, the contribution of the defect to the damping associated with that vibration cell is reduced. The sensitivity conditions imply that there is an optimal mode shape for producing the maximum indication of a particular damage zone.
Likewise, the optimal resonance frequency will be the lowest one that meets the two sensitivity criteria. The sensitivity conditions indicate that the changes in the dynamic characteristics of torsional modes are larger than those for higher frequencies. The result is then a negative torsional slope for the damaged bar.
The use of fiber-reinforced composites is the key to lightweight construction, which is essential for modern life: aerospace, automotive, shipbuilding or railways, rotor blades for wind energy generators, or sports equipment are typical applications. A known effect when using glass microspheres in epoxy formulations is that
cyclic fatigue performance can be improved substantially, by several hundred percent [19].
There were similar relationships between carbon fiber content and toughness and between carbon fiber content and yield strength, as shown. It can be seen that both mean values of toughness and yield strength had a generally decreasing trend when carbon fiber content increased from 0 wt% to 10 wt% (apart from a slight rise at 5 wt% carbon fiber content). Toughness and yield strength of the parts with carbon fiber content lower than 10 wt% were larger than those of the parts with carbon fiber content higher than 10 wt%. When carbon fiber content exceeded 10 wt%, these two mechanical properties increased again. The largest mean values for both toughness and
Plant Cell Wall Surfaces with FRC
Using extensometry and polarization confocal microscopy, we present here ... Low, intermediate and high fluorescence intensities are indicated as blue, green and red, respectively. Sections of the two model tissues were subjected to uniaxial strain experiments in two directions in the plane of the outer (periclinal) wall.
For the anisotropic onion epidermis these directions were parallel and perpendicular to the mean cellulose orientation. The upper clamp was attached to a vibration exciter (Figure 4 shows the unidirectional tensile strength).
Novel FRCM for Sustainable Geopolymer Matrix
Geopolymers represent the most promising green and eco-friendly ... (ATI) has been involved in examining the recovery of these aircraft for several years. While early work was performed on standard composite materials, the availability of actual production samples of the materials used in the new aircraft has added another dimension to the recycling issue.
Recycling of Fiberglass
The wind energy industry is one of the fastest-growing application areas of ... (the figure shows the fibre reinforcement cost per kg with increasing reinforcement performance).
The Economic and Mechanical Potential of RCP
Current civil aircraft applications have concentrated on replacing the secondary structure with fibrous composites, where the reinforcement media has been carbon, glass, Kevlar or hybrids of these. The matrix material, a thermosetting epoxy system, is either a 125 °C or a 180 °C curing system, with the latter becoming dominant because of its greater tolerance to environmental degradation (Figure 10 shows the recycling process for FRCM). The deformation response of solid structures subjected to external forces can be obtained by treating the structure as a continuous body, or continuum, without considering its atomistic structure. Hence, it is possible to perform both static and dynamic analyses of large structures within a reasonable amount of time. The classical approach used to analyze solid structures is known as "classical continuum mechanics" and has been successfully applied to many problems in the past. Within the classical continuum mechanics framework, it is assumed that the continuous body is composed of an infinite number of infinitesimal volumes, called material points. These material points interact with one another only if they are within the nearest neighborhood of one another, in other words, through a direct interaction (contact) (Figure 11 shows the different bonds existing in FRCM).
A group of materials whose modulus of elasticity can be tailored to specific requirements is fiber-reinforced composites (FRCs). The modulus of elasticity and other mechanical properties depend on the type, arrangement and quantity of reinforcing fibers. The bonding ability of FRC material depends mostly on the kind of polymer matrix between the fibers. A thermosetting polymer matrix, like those obtained with light-polymerized dimethacrylate resins, results in a highly cross-linked polymer matrix that can be adhesively attached to the composite luting cement; a matrix with free radical polymerization, i.e. a polymer without cross-linked polymer chains, can be partially dissolved by the monomers of composite resin luting cements and form an interpenetrating polymer network (IPN) bond during polymerization [27]. Thin laminated composites have become an important group of materials for modern technology. Composite material systems allow engineered heterogeneity to be designed into the material to achieve specific engineering functions [28]. The important problem of finding a connection between a macroscopic damage variable and the process of damage accumulation within a material is addressed in this article. Monotonic and cyclic deformation behavior has been studied in a randomly distributed glass-reinforced polyester matrix composite [29].
The applications of fiber composite materials are broad because of several advantages they possess, for example high specific strength and stiffness. However, their performance under impact loading is generally poor, which limits their applications. A detailed literature review of the mechanics of impact and the performance of composite materials is given in [30]. Polymeric materials are finding increased application under conditions in which they may be subjected to solid particle erosion. For instance, erosive failure has been reported in polyethylene gas piping systems due to contaminant particulate matter [31].
Thermal Expansion of FRCM
The complexities of composite materials are due to features such as chemical compatibility, wettability and adsorption characteristics. There are different kinds of cores, such as foam cores, honeycomb cores, corrugated cores and truss cores. Honeycomb cores show a highly anisotropic response during in-plane and out-of-plane crushing [42].
In vertebrates, in order for controlled bodily movement to be possible, force produced by the activation of contractile molecules within muscle fibers must be applied onto the bony skeleton of the body. A first step of this force transmission process is to allow force to cross the cell membrane of the muscle cell [43]. Many fiber-reinforced composite materials offer a combination of strength and modulus that is either comparable to or better than conventional metallic materials.
Due to their low densities, the specific strength and specific modulus of these composite materials may be markedly superior to those of metallic materials [44]. Furthermore, the fatigue strength-to-weight ratios, as well as the fatigue damage tolerance, of many composite laminates are excellent. Hence, fiber-reinforced composites have emerged as a major class of structural material [45].
Safety requirements are prescribed for the circuit, the vehicle and the pilot equipment. In particular, the vehicles are subjected to various tests, both static and dynamic, to ensure that the required level of safety performance is accomplished [47].
The fracture behavior of nanotube-reinforced composites is expected to be characterized, like that of their fiber-reinforced composite analogs, by a combination of complex microdamage events such as fiber breakage, interface debonding and matrix failure, to mention a few. An important material characteristic contributing to the complexity of the failure process in composites is the failure mode of each of the composite constituents, which depends strongly on the material microstructure and other significant parameters, such as the nature of the fiber/matrix interface [48].
Craniofacial bone reconstruction is chosen to address large skull bone defects arising from the treatment of tumors, infection, trauma, intracranial hemorrhage, or localized necrosis. These defects cause both functional and aesthetic discomfort to patients. In cases of intracranial hemorrhage or necrotic tissue, these conditions may cause swelling of brain tissue, and decompressive craniotomy can therefore be a life-saving procedure. There is a long history of reconstructing large skull bone defects with autogenous bone, and it remains the gold standard of treatment. An autogenous bone graft is usually harvested from the calvarium, iliac crest, tibia, or fibula. During craniotomy, the extracted skull bone flap can also be placed in the abdominal region or cryopreserved. Drawbacks of reconstruction are related to possible infection of the bone graft, donor-site morbidity, and handling of the bone graft, which is often time-consuming [49]. The global market is rapidly moving towards energy conservation and energy reduction. In general, natural fibers are commonly used to reduce the weight of components, i.e. the fibers are reinforced with a suitable matrix. In terms of cost, renewability and biodegradability, natural plant fibers have many advantages compared with synthetic fibers. Several authors have carried out research in the area of natural fibers [50].
Microbial Growth on FRC
Microorganisms may be responsible for physical and chemical changes in composite materials. Inoculation of a fungal consortium onto pre-cleaned ... [54]. Glass fiber-reinforced composite materials are important materials and are used in the construction industry and in automotive and sports goods manufacturing due to the advantages of their mechanical properties. They have high specific stiffness and strength, high damping, good corrosion resistance and low thermal expansion [55].
Choosing a material basis for high-temperature structural materials faces a key difficulty. Oxide materials show a propensity to creep at high temperatures, restricting their application in energy facilities and the aviation industry.
Non-oxide materials, which have increased creep stability at high temperatures, are not thermodynamically stable in air and react with oxygen and/or moisture. Those which form protective oxide layers either contain oxide-type grain boundary phases, promoting creep and increased internal oxygen diffusion at higher temperatures, or have lower toughness values [56]. Optimal mechanical performance of glass fiber reinforced epoxy matrix composites usually depends on the suitability of the interfacial properties to the applied loading conditions. The contrasting interfacial requirements for glass fiber composites in ballistic versus structural applications require multi-component structures that lead to an increase in the system's weight, cost and complexity. Therefore, interfacial modifications that yield a multifunctional composite performance are necessary in order to simultaneously provide maximal energy absorption under dynamic loading rates and high resistance to delamination under static loading rates [57]. Although fully renewable resource-based materials are more eco-friendly, such materials may not satisfy the performance requirements of certain industrial applications. The polymers and materials derived from mixed renewable and fossil sources not only show strong promise in alleviating fossil-fuel dependence but also have the added advantage of delivering the desired performance from a more sustainable feedstock [58].
Three-dimensional (3D) printing is an additive manufacturing process for building 3D objects from a digital model that has received much consideration ... although the actual impact is highly dependent on the design standards. Moreover, recycled carbon fiber components have low in-use energy consumption because of mass reductions and the associated decrease in mass-induced fuel consumption [62].
Synthetic polymers are currently combined with various biodegradable reinforcing fibers in order to improve mechanical properties and obtain the characteristics demanded in real applications. Research is continuing to replace synthetic fibers with lignocellulosic fibers as reinforcement. Compared to various synthetic fibers, lignocellulosic fibers (corn stalk, rice husk, palm, coir, jute, abaca, wheat straw, grass, etc.) are lightweight, reduce wear in the machinery used for their processing, and are readily available, renewable, biodegradable and economical [63]. The crystalline morphology of semicrystalline polymers markedly affects the mechanical and physical performance; hence, it is beneficial to control the crystalline morphology with the aim of advancing the properties and functionalities [64].
Engineered composite materials have become an important foundation of many of today's technologies and structures. Of these, fiberglass-reinforced composites in particular have found applications in fields like the automotive and aerospace industries, which demand lightweight materials with high rigidity and strength. To satisfy the various technological demands, a balanced and adequate interfacial bond between fibers and the resinous matrix is of vital importance.
Adhesion largely depends on molecular interfacial structures and molecular interfacial interactions. Thus, fiber surface modification by coatings has been recognized as a practical way to affect and control the adhesion and compatibility of the constituent materials [65]. Many biological materials are composites, with examples ranging from the mineral-protein composites of bone to the polymer-polymer composites making up the plant cell wall. Despite the constituent materials often having poor properties, Nature manages to produce bulk materials with remarkable properties that emerge through striking control of the interfaces between the constituents. It is hoped that a fundamental understanding of interfacial structure in nature will give rise to new ideas for future applications [66].
Crystalline Morphology of FRCM
Carbon fiber reinforced polymers (CFRP), made of carbon fibers (CFs) reinforcing a resin matrix, are vital to the production of stronger yet lighter components for aircraft, vehicles, trains, shipping containers and wind turbines [67]. The ever-increasing requirements for cost efficiency and environmental solutions prompted the gradual migration from metallic to composite structures in these industries, as the reduction in weight lessens fuel consumption and greenhouse emissions. The high specific in-plane mechanical properties of CFs are the foundation of the superior performance of CFRPs [68]. Lightweight nanocomposites reinforced with carbon nanotube (CNT) assemblies raise the prospects for a range of advanced engineering applications. However, the relationship between their heterogeneous chemical structure and the spatial organization of the nanotubes should be clearly understood in order to maximize their performance. Here, we apply the advanced imaging capabilities of atomic force microscopy combined with near-field infrared spectroscopy (AFM-IR) to examine the intricate chemical structure of CNT fiber-reinforced thermoset nanocomposites [69].
The crystalline morphology of semicrystalline polymers markedly affects the mechanical and physical performance; it is therefore beneficial to control the crystalline morphology with the aim of advancing the properties and functionalities [64].
Composite materials made by combining two or more components achieve properties which could not be attained with the separate constituents.
Generally, reinforcing fillers have a positive effect on mechanical properties and reduce the cost of the final products. Currently, various inorganic materials, such as powder, clay, calcium carbonate and glass fiber, are frequently used as reinforcing fillers in composite materials [70]. Approximately 95% of the composites used today are manufactured from glass fibers. The properties of a composite material depend on the properties of its constituent parts, particularly on the interaction at the interfaces between the reinforcement and the matrix. The shape, size, surface activity and volume fraction of any filler will influence the final composite properties [71]. Polymer-matrix fiber-reinforced composites may absorb around 3 percent by weight of water during environmental exposure, due mostly to diffusion through the matrix. The potential effects of this water on the mechanical properties of the composite are significant [72].
To reduce the overall weight and improve the efficiency of vehicles, an ever-increasing number of metal parts are being replaced by polymer composite materials. In contrast to metals, particularly in compression, most composites are generally characterized by a brittle rather than ductile response to load [73]. On the whole, random chopped-fiber composites are still regarded as relatively new materials in the field and often lack the detailed material property and performance characterization required before they can be used widely in various applications. Interested users can refer to our work on automotive crashworthiness, in which we have looked at the specific energy absorption in a compression-molded random chopped carbon fiber [74]. At present, the surgical procedures used to alleviate this condition include disc excision, chemonucleolysis and spinal fusion. While these methods have a high rate of success in relieving symptoms, none is without secondary problems, including narrowing of the disc space and degeneration at adjacent segments [75]. | 6,307 | 2020-10-12T00:00:00.000 | [
"Materials Science"
] |
Calibration sample for arbitrary metrological characteristics of optical topography measuring instruments
Areal optical surface topography measurement is an emerging technology for industrial quality control. However, neither calibration procedures nor the utilization of material measures are standardized. State of the art is the calibration of a set of metrological characteristics with multiple calibration samples (material measures). Here, we propose a new calibration sample (artefact) capable of providing the entire set of relevant metrological characteristics within only one single sample. Our calibration artefact features multiple material measures and is manufactured with two-photon laser lithography (direct laser writing, DLW). This enables a holistic calibration of areal topography measuring instruments with only one series of measurements and without changing the sample. © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement. OCIS codes: (120.6660) Surface measurements, roughness; (150.1488) Calibration; (110.6895) Three-dimensional lithography; (180.6900) Three-dimensional microscopy.
References and links
1. R. Leach, Characterisation of areal surface texture (Springer, 2013), Chap. 1.
2. K. Stout, Development of methods for the characterisation of roughness in three dimensions (Penton, 2000).
3. L. Blunt and X. Jiang, Advanced techniques for assessment surface topography: Development of a basis for 3D surface texture standards SURFSTAND (Kogan Page Science, 2003).
4. J. Seewig and M. Eifler, "Calibration of areal surface topography measuring instruments," Proc. SPIE 10449, 1044911 (2017).
5. International Organization for Standardization, "Geometrical product specifications (GPS) – Surface texture: Areal – Part 1: Indication of surface texture," ISO 25178-1 (2016).
6. International Organization for Standardization, "Geometrical product specifications (GPS) – Surface texture: Areal – Part 601: Nominal characteristics of contact (stylus) instruments," ISO 25178-601 (2010).
7. International Organization for Standardization, "Geometrical product specifications (GPS) – Surface texture: Areal – Part 600: Metrological characteristics for areal-topography measuring methods," ISO/DIS 25178-600 (2018).
8. International Organization for Standardization, "Geometrical product specifications (GPS) – Surface texture: Areal – Part 701: Calibration and measurement standards for contact (stylus) instruments," ISO 25178-701 (2010).
9. International Organization for Standardization, "Geometrical product specifications (GPS) – Surface texture: Areal – Part 700: Calibration and verification of areal topography measuring instruments," ISO 25178-700.3:2016, WD (2016).
10. C. L. Giusca, R. K. Leach, F. Helary, T. Gutauskas, and L. Nimishakavi, "Calibration of the scales of areal surface topography-measuring instruments: part 1. Measurement noise and residual flatness," Meas. Sci. Technol. 23(3), 035008 (2012).
11. C. L. Giusca, R. K. Leach, and F. Helery, "Calibration of the scales of areal surface topography measuring instruments: part 2. Amplification, linearity and squareness," Meas. Sci. Technol. 23(6), 065005 (2012).
12. C. L. Giusca and R. K. Leach, "Calibration of the scales of areal surface topography measuring instruments: part 3. Resolution," Meas. Sci. Technol. 24(10), 105010 (2013).
13. R. K. Leach, C. L. Giusca, and P. Rubert, "A single set of material measures for the calibration of areal surface topography measuring instruments: the NPL Areal Bento Box," in Proceedings of Met and Props, 406–413 (2013).
14. R. K. Leach, C. L. Giusca, K. Rickens, O. Riemer, and P. Rubert, "Development of material measures for performance verifying surface topography measuring instruments," Surf. Topogr. Metrol. Prop. 2(2), 025002 (2014).
15. P. de Groot, "Progress in the specification of optical instruments for the measurement of surface form and texture," Proc. SPIE 9110, 91100M (2014).
16. P. de Groot, "The Meaning and Measure of Vertical Resolution in Optical Surface Topography Measurement," Appl. Sci. 7(1), 54 (2017).
17. J. K. Hohmann, M. Renner, E. H. Waller, and G. von Freymann, "Three-Dimensional μ-Printing: An Enabling Technology," Adv. Optical Mater. 3(11), 1488–1507 (2015).
18. M. Eifler, J. Seewig, J. Hering, and G. von Freymann, "Calibration of z-axis linearity for arbitrary optical topography measuring instruments," Proc. SPIE 9525, 952510 (2015).
19. F. Ströer, J. Hering, M. Eifler, I. Raid, G. von Freymann, and J. Seewig, "Ultrafast 3D High Precision Print of Micro Structures for Optical Instrument Calibration Procedures," Additive Manufacturing 18, 22–30 (2017).
20. M. Eifler, J. Hering, G. von Freymann, and J. Seewig, "Manufacturing of the ISO 25178-70 material measures with direct laser writing – a feasibility study," Surf. Topogr. Metrol. Prop., in press.
21. G. V. Samsonov, Handbook of the Physicochemical Properties of the Elements (Springer, 1968).
22. R. Krüger-Sehm, P. Bakucz, L. Jung, and H. Wilhelms, "Chirp-Kalibriernormale für Oberflächenmessgeräte (Chirp Calibration Standards for Surface Measuring Instruments)," Techn. Mess. 74(11), 572–576 (2007).
23. J. Seewig, M. Eifler, and G. Wiora, "Unambiguous evaluation of a chirp measurement standard," Surf. Topogr. Metrol. Prop. 2(4), 045003 (2014).
24. International Organization for Standardization, "Geometrical product specifications (GPS) – Surface texture: Areal – Part 603: Nominal characteristics of non-contact (phase-shifting interferometric microscopy) instruments," ISO 25178-603 (2013).
25. International Organization for Standardization, "Geometrical product specifications (GPS) – Surface texture: Areal – Part 2: Terms, definitions and surface texture parameters," ISO 25178-2 (2012).
26. International Organization for Standardization, "Geometrical product specification (GPS) – Surface texture: Areal – Part 70: Material measures," ISO 25178-70 (2014).
27. M. Eifler, "Modellbasierte Entwicklung von Kalibriernormalen zur geometrischen Produktspezifikation," Kaiserslautern: Technische Universität Kaiserslautern (2016).
28. Deutsches Institut für Normung, "Terms and definitions used on ageing of materials – Polymeric materials," DIN 50035 (2012).
29. K. Klauer, M. Eifler, F. Schneider, J. Seewig, and J. C. Aurich, "Ageing of roughness artefacts – impact on the measurement results," in Proceedings of euspen's Int. Conf. & Exhibition 17, 403–404 (2017).
30. D. W. Hoffman and J. A. Thornton, "Internal stresses in sputtered chromium," Thin Solid Films 40, 355–363 (1977).
31. J. S. Oakdale, J. Ye, W. L. Smith, and J. Biener, "Post-print UV curing method for improving the mechanical properties of prototypes derived from two-photon lithography," Opt. Express 24(24), 27077–27086 (2016).
Introduction
In the past decades, areal surface topography measurement has emerged (see e.g. [1-3] for a historical overview). Based on the increasing industrial application, the WG 16 "Areal and profile surface texture" of the ISO Technical Committee 213 started to work on the standard ISO 25178 "Geometrical product specifications (GPS) - Surface texture: Areal" in 2003 [4,5]. For the calibration of areal surface topography measuring instruments, the ISO 25178-6xx series (see e.g. [6]) defines the metrological characteristics: part 600 in a general way and the other parts for specific measuring principles.
A metrological characteristic is defined as a characteristic "which may influence the results of a measurement" [7] and thus should be considered during the measuring process. ISO 25178-600 therefore currently defines the following basic metrological characteristics for a surface topography measurement [7]: (i) the amplification coefficient of each of the three axes describes the slope of the response function of the axis. (ii) The linearity deviation of an axis is the maximum local difference between the straight-line fit of the response function and the measured response function itself. (iii) The flatness deviation is the maximum deviation between an ideal plane and its measured topography. (iv) The measurement noise and (v) the topographic spatial resolution are characteristics of the height axis. (vi) The x-y mapping deviation describes the local deviations of the lateral axes, including their perpendicularity. (vii) The topography fidelity characterizes whether the measuring instrument adequately transfers topographic features.
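The amplification coefficient (i) and linearity deviation (ii) reduce to a straight-line fit of an axis response function. The short Python sketch below illustrates this; it is not part of the ISO text, and the function and the ripple model in the example are purely illustrative.

```python
# Illustrative sketch: amplification coefficient and linearity deviation of one
# axis from its measured response function, as paraphrased above.
import numpy as np

def axis_calibration(target, measured):
    """Return (amplification coefficient, linearity deviation) for one axis."""
    target = np.asarray(target, dtype=float)
    measured = np.asarray(measured, dtype=float)
    slope, intercept = np.polyfit(target, measured, 1)   # straight-line fit
    fit = slope * target + intercept
    alpha = slope                                         # amplification coefficient
    linearity = np.max(np.abs(measured - fit))            # max. local deviation
    return alpha, linearity

# Example: a z-axis reading 0.2 % high with a small nonlinear ripple (invented data).
z_nominal = np.linspace(0.0, 10.0, 50)                    # µm
z_measured = 1.002 * z_nominal + 0.01 * np.sin(z_nominal)
print(axis_calibration(z_nominal, z_measured))
```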
As the metrological characteristics can be determined by performing measurement tasks, they are subject to the calibration of topography measuring instruments, which is described in the ISO 25178-7xx series (see e.g. [8]). The structure is identical to the 6xx series: part 700 [9] gives a basic framework for general calibration procedures, whereas the other parts define calibration tasks which are specific to a certain measuring principle. The calibration tasks are performed with material measures (also known as measurement standards or calibration artefacts) that feature defined metrological characteristics. State of the art is the calibration of metrological characteristics with multiple material measures and samples. A practical approach for the determination of all general metrological characteristics has been introduced by Giusca et al. [10-12]. In order to perform a cost- and time-efficient calibration of all characteristics, it is essential to use as few material measures as possible that feature and calibrate all relevant metrological characteristics reliably. A set of five single material measures for measuring all metrological characteristics was introduced with the NPL Bento Box [13] in 2013, which also includes material measures for the calibration of the measurement of areal surface texture parameters [14]. Additionally, the practical specification of topography measuring instruments based on the metrological characteristics is the subject of research work and aims at achieving acceptance of the new definitions within industrial application [15,16].
Until now, however, a set of multiple material measures is still necessary for a holistic calibration of the metrological characteristics and thus a high amount of work is required for an instrument calibration. An easier calibration procedure and less measuring effort would result if only one sample featuring all metrological characteristics could be used. In the following, we propose an approach which allows the calibration of arbitrary metrological characteristics with only one sample featuring several material measures. We use direct laser writing (DLW) to fabricate the corresponding samples [17]. The general suitability of DLW for the manufacturing of calibration geometries has already been proven in previous studies [18-20].
Sample fabrication
DLW offers the possibility of fabricating almost arbitrary 3D structures: an acrylate-based negative-tone photoresist (IP-S, Nanoscribe GmbH) is scanned by the focus of a femtosecond pulsed laser beam (λ = 780 nm, 63x objective, NA = 1.4). Owing to two-photon absorption, polymerization takes place only within the focal volume, allowing for the generation of true 3D structures. We fabricate all samples with a Photonic Professional GT (Nanoscribe GmbH). According to the technical specification of this device, the lateral resolution and minimal feature size are 500 nm and 200 nm, respectively, while a very high fabrication velocity (>10000 µm/s) is ensured. Thus, the device fulfills all necessary requirements for a fast and highly precise fabrication of various (surface) structures. In order to be applicable for the manufacturing of material measures, the fabrication process needs to be at least as precise as the measuring instruments to be calibrated. We realize all desired sample geometries with Matlab and export the sample files to a stereolithography format (.stl) (see section 3). Subsequently, those files are separated horizontally (hatching) and vertically (slicing) into x, y and z coordinates for the deflection of the laser focus. For all geometries, those parameters, as well as scan speed and excitation power, are optimized iteratively, controlling the outcome with a light microscope (Olympus BX60, Olympus K.K.).
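The geometry-definition and export step can be pictured with a minimal sketch: build a height map z(x, y) and write it out as an ASCII STL surface mesh. The authors used Matlab; the Python version, grid size, file name and the placeholder facet normals below are illustrative only.

```python
# Minimal, hedged sketch of the geometry definition and .stl export step.
import numpy as np

def write_ascii_stl(filename, X, Y, Z, name="material_measure"):
    """Triangulate a regular height-map grid and write it as ASCII STL."""
    with open(filename, "w") as f:
        f.write(f"solid {name}\n")
        ny, nx = Z.shape
        for i in range(ny - 1):
            for j in range(nx - 1):
                p00 = (X[i, j], Y[i, j], Z[i, j])
                p10 = (X[i, j + 1], Y[i, j + 1], Z[i, j + 1])
                p01 = (X[i + 1, j], Y[i + 1, j], Z[i + 1, j])
                p11 = (X[i + 1, j + 1], Y[i + 1, j + 1], Z[i + 1, j + 1])
                for tri in ((p00, p10, p11), (p00, p11, p01)):
                    # Normal left as a placeholder; readers recompute it from the vertices.
                    f.write("  facet normal 0 0 1\n    outer loop\n")
                    for v in tri:
                        f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
                    f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Example: a 100 µm x 100 µm radial sinusoidal surface (ARS-like geometry,
# 10 µm period, 3 µm amplitude as in section 3).
x = np.linspace(0.0, 100.0, 201)                   # µm
X, Y = np.meshgrid(x, x)
R = np.hypot(X - 50.0, Y - 50.0)
Z = 3.0 * np.sin(2.0 * np.pi * R / 10.0)
write_ascii_stl("ars_100um.stl", X, Y, Z)
```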
As we use the samples for the calibration of optical topography measuring instruments, they must feature reflective surface properties. Because metallic materials currently cannot be directly fabricated with DLW, we coat the surface after manufacturing. As a standard coating material in optics, gold (Au) is chosen and sputtered with a layer thickness of 20 nm. Due to the weak bonding between the (glass) substrate and the gold layer, a thin (10 nm) chromium layer (Cr) is required as an adhesion-promoting agent, and thus chromium itself is analyzed as a second coating material as well. Generally, the samples should be mechanically stable to allow for a tactile sampling. Hence, iridium (Ir) is chosen as another coating material, as it possesses a high Vickers hardness (see [21]).
Regarding the calibration procedure for low magnification objectives (e.g. 5x), some of the desired measures have to be on a scale of almost 1 mm². Since the scanning field of the DLW system is limited to approximately 300 µm x 300 µm, those large geometries are stitched together. Here, single fields of smaller dimension (max. 120 µm x 120 µm) are chosen to minimize the influence of vignetting. The positioning accuracy of the stage is approximately 1 µm. Thus, for the elimination of possible stitching gaps, a 2 µm overlap of the separated fields is applied.
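The stitching layout can be sketched as a simple tiling with a fixed overlap; the field sizes follow the values stated above, while the function itself and its return format are not from the paper.

```python
# Illustrative sketch: tile a large write field with 120 µm sub-fields that
# overlap by 2 µm so the ~1 µm stage positioning error cannot open gaps.
def stitching_grid(total_size_um, field_size_um=120.0, overlap_um=2.0):
    """Return the lower-left corners of the sub-fields covering a square area."""
    step = field_size_um - overlap_um          # effective advance per field
    corners = []
    y = 0.0
    while y < total_size_um - overlap_um:
        x = 0.0
        while x < total_size_um - overlap_um:
            corners.append((x, y))
            x += step
        y += step
    return corners

# An 800 µm x 800 µm geometry written with 120 µm fields and 2 µm overlap:
fields = stitching_grid(800.0)
print(len(fields), "sub-fields, e.g.", fields[:3])
```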
In the end, the fabrication time of all geometries necessary for a complete calibration procedure of an optical measuring device using the most common objectives (5x, 10x, 20x, 50x, 60x and 100x magnification) and, thus, the fabrication time of this "universal calibration artefact" sums up to less than 10 hours and can be realized easily overnight.
Design of a "universal calibration artefact" and calibration strategies
For the calibration of optical 3D topography measuring instruments, the basic metrological characteristics as introduced in section 1 are taken into account. Figure 1(a) visualizes the simulated manufacturing data of the required six target geometries. Depending on the microscopic magnification, the objective's field of view determines the required size of the structure. Samples of 100 µm x 100 µm, 200 µm x 200 µm, 400 µm x 400 µm and 800 µm x 800 µm are considered in order to provide typical sample sizes for 100x, 60x, 50x, 20x, 10x and 5x magnifications.
As a pre-processing step, the inner 80% of each measured structure is extracted and aligned, and the parameters defined within ISO 25178-2 and -70 [25,26] are calculated. Afterwards, the individual parameters of each geometry are examined. For a holistic calibration, the following material measures and evaluation routines are required.
The Siemens star geometry (type ASG [26]) may be used to obtain a width metric related to the topographic spatial resolution as described in ISO 25178-600 (W_R) [12]. This parameter is indicated with the term "ASG width metric". For the target data set, 16 petals featuring a height of 1 µm are chosen. The lateral resolution limit of the measuring instrument is determined as described by Giusca and Leach [12]. Furthermore, the surface texture parameters S_a and S_q are evaluated in order to additionally characterize the height axis of the examined measuring device.
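The pre-processing and the S_a/S_q evaluation used throughout the paper can be summarized in a short sketch: crop the inner 80% of a measured topography, remove a least-squares plane, and compute the areal parameters of ISO 25178-2. The array names and the synthetic test surface are illustrative only.

```python
# Hedged sketch of the pre-processing and height-axis roughness parameters.
import numpy as np

def preprocess_and_roughness(z, crop=0.8):
    """Return (S_a, S_q) of the levelled inner crop of a height map z [µm]."""
    ny, nx = z.shape
    my, mx = int(ny * (1 - crop) / 2), int(nx * (1 - crop) / 2)
    zc = z[my:ny - my, mx:nx - mx]                       # inner 80 %
    yy, xx = np.mgrid[0:zc.shape[0], 0:zc.shape[1]]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(zc.size)])
    coeff, *_ = np.linalg.lstsq(A, zc.ravel(), rcond=None)
    residual = zc.ravel() - A @ coeff                    # plane-levelled heights
    s_a = np.mean(np.abs(residual))                      # arithmetic mean height
    s_q = np.sqrt(np.mean(residual ** 2))                # root mean square height
    return s_a, s_q

# Example: a tilted sinusoidal surface with 3 µm amplitude (invented data).
x = np.linspace(0, 100, 256)
X, Y = np.meshgrid(x, x)
Z = 3.0 * np.sin(2 * np.pi * X / 10.0) + 0.01 * X        # tilt removed by levelling
print(preprocess_and_roughness(Z))
```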
For the measurement of the topography fidelity T_FI, a chirp standard (type CIN [22]) is used, as suggested by Seewig et al., who introduced a corresponding evaluation and calibration routine [23]. However, there is not yet an agreement on the standardization of the fidelity calibration and the determination of corresponding parameters. The chirped sample features sinusoidal profiles with an amplitude of 3 µm and 20 different wavelengths between 9.46 µm and 0.47 µm (considering the 100 µm x 100 µm sample). The sample is also characterized by S_a and S_q. Additionally, the small scale fidelity limit ssf [23] is calculated, corresponding to a transmission of the target amplitude of ± 50%.
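The ssf evaluation can be paraphrased as follows: for each nominal chirp wavelength, estimate the transmitted amplitude from the measured profile and take the smallest wavelength whose amplitude stays within ± 50% of the 3 µm target. The sketch below is a simplified reading of the routine in [23], not the authors' code; segment handling and the toy data are assumptions.

```python
# Simplified, hedged sketch of the small scale fidelity (ssf) evaluation.
import numpy as np

def transmitted_amplitude(x, z, wavelength):
    """Least-squares amplitude of a sinusoid of known wavelength in profile z(x)."""
    A = np.column_stack([np.sin(2 * np.pi * x / wavelength),
                         np.cos(2 * np.pi * x / wavelength),
                         np.ones_like(x)])
    c, *_ = np.linalg.lstsq(A, z, rcond=None)
    return np.hypot(c[0], c[1])

def small_scale_fidelity(segments, target_amplitude=3.0):
    """segments: list of (wavelength, x, z) tuples, one per chirp period."""
    ok = []
    for wavelength, x, z in segments:
        amp = transmitted_amplitude(x, z, wavelength)
        if abs(amp - target_amplitude) <= 0.5 * target_amplitude:
            ok.append(wavelength)
    return min(ok) if ok else None

# Toy example: ideal transmission down to 2 µm, strong attenuation below.
segments = []
for lam in (9.46, 5.0, 2.0, 1.0, 0.47):
    x = np.linspace(0.0, lam, 64, endpoint=False)
    gain = 1.0 if lam >= 2.0 else 0.3
    segments.append((lam, x, gain * 3.0 * np.sin(2 * np.pi * x / lam)))
print(small_scale_fidelity(segments))   # -> 2.0
```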
A flatness standard (type AFL [26]) allows for the calibration of the measurement noise N_M and the residual flatness FLT_z of the height axis [10]. Here, S_a and S_q can be applied to evaluate the metrological characteristics, as they are directly correlated with noise and flatness. Using a radial sine wave (type ARS [26]), an integral calibration of the measuring device with a period length of 10 µm (considering again the 100 µm x 100 µm sample) and an amplitude of 3 µm is possible. The integral transfer characteristics of the measuring device are evaluated with S_a and S_q as described in ISO 25178-70 [26].
For the lateral axes, a cross-grating (type ACG [26]) allows for the calibration of the x- and y-axis mapping deviation by a determination of the linearity deviations l_x, l_y, the amplification coefficients α_x, α_y and the perpendicularity ΔPER_xy of the axes, as suggested by Giusca et al. [11]. In doing so, the entire calibration of the lateral axes is possible with the aid of one calibration geometry. A pitch length of 10 µm (considering again the 100 µm x 100 µm sample), an amplitude of 3 µm and a groove width of 6 µm are chosen. The pitch lengths of the grating in both lateral directions and the angle β between the gratings in x- and y-direction are evaluated for the determination of the described characteristics.
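One possible reading of this lateral-axis evaluation is sketched below: the mean pitch along one direction is taken from the dominant peak of a profile's FFT, and the perpendicularity deviation follows from the measured angle β. The functions, the test signal and the choice of an FFT-based pitch estimate are assumptions, not the procedure of [11].

```python
# Hedged sketch of a pitch and perpendicularity evaluation on the cross-grating.
import numpy as np

def mean_pitch(profile, spacing_um):
    """Estimate the dominant period of a 1D profile sampled every spacing_um."""
    z = profile - np.mean(profile)
    spectrum = np.abs(np.fft.rfft(z))
    freqs = np.fft.rfftfreq(len(z), d=spacing_um)
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    return 1.0 / freqs[k]

def perpendicularity_deviation(beta_deg):
    """Deviation of the measured grating angle from the ideal 90 degrees."""
    return abs(90.0 - beta_deg)

# Example: 10 µm pitch grating sampled every 0.1 µm, beta measured as 89.7 deg.
x = np.arange(0.0, 100.0, 0.1)
profile = 3.0 * np.sin(2 * np.pi * x / 10.0)
print(mean_pitch(profile, 0.1))              # close to 10 µm
print(perpendicularity_deviation(89.7))      # 0.3 deg
```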
Considering the calibration of the height axis, we apply a modification of the current ISO 25178-60x series. For example, in ISO 25178-603 [24], which describes the metrological characteristics for phase-shifting interferometers, the determination of the linearity l_z and the amplification coefficient α_z of the height axis is suggested based on the response function of the axis [24]. Usually, this response function is estimated by measuring various step height artefacts [11]. Instead, we chose an irregular roughness calibration geometry (type AIR [26]) with a deterministic structure and a linear Abbott curve as the corresponding material measure. Previously, we showed that this allows for a similar calibration of the linearity l_z and the amplification coefficient α_z of the height axis [18,27]. Additionally, an integral calibration of the device based on the 3D roughness parameters is possible with the type AIR geometry [18]. The surface topography is based on an actual engineering surface and thus enables a practical calibration of multiple properties of surface topography measuring instruments. The aforementioned linear Abbott curve allows an almost stepless calibration of the height axis to be achieved [18,27]. In addition to specific linearity criteria that compare the n·m topography heights of the measured Abbott curve C_meas(Mr) with the target Abbott curve C_tar(Mr), we use an alternative analysis to derive α_z and l_z, the parameters described in the ISO 25178-60x series [24]. In doing so, the response function of the axis is not determined with a series of step height measurements but with all height values of the Abbott curve of the proposed type AIR material measure, i.e. with the point pairs (C_tar(Mr), C_meas(Mr)), where C_meas(Mr) represents the measured Abbott curve and C_tar(Mr) that of the virtual (target) sample [27]. In order to neglect possible outliers, the straight-line fit of the transmission is performed with the inner 80% of the height values and leads to a least-squares fit with slope m and intercept t [27]. These parameters are used to determine the ISO criteria of amplification coefficient and linearity deviation [24,27]: the fitted slope equals the measured amplification coefficient, α_z = m, whereas the largest pointwise deviation between the fit and the original data set of the response function, l_z = max |C_meas(Mr) − (m·C_tar(Mr) + t)|, is the linearity deviation as defined within, for example, ISO 25178-603 [24]. The proposed method uses a large number of topography points for the determination of the z-axis linearity instead of only a small number of step heights for the response function estimation. In addition to the parameters based on the response function, S_a and S_q of the sample can be evaluated for the integral calibration [18,27].
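A minimal sketch of this Abbott-curve-based evaluation, assuming equal-length resampled curves, is given below: pair the measured and target Abbott curves point by point, fit the inner 80% of the heights with a straight line, read off α_z from the slope and l_z from the largest pointwise deviation. This paraphrases the published procedure [18,24,27]; it is not the authors' implementation, and the resampling and example data are assumptions.

```python
# Hedged sketch: z-axis amplification coefficient and linearity deviation from
# the Abbott curves of the type AIR material measure.
import numpy as np

def abbott_curve(z, n=1000):
    """Heights over material ratio: sorted descending and resampled to n points."""
    zs = np.sort(np.asarray(z, dtype=float).ravel())[::-1]
    mr = np.linspace(0.0, 1.0, len(zs))
    return np.interp(np.linspace(0.0, 1.0, n), mr, zs)

def z_axis_from_abbott(z_measured, z_target, inner=0.8):
    c_meas = abbott_curve(z_measured)
    c_tar = abbott_curve(z_target)
    lo = int((1.0 - inner) / 2 * len(c_tar))
    sel = slice(lo, len(c_tar) - lo)                          # inner 80 % of heights
    m, t = np.polyfit(c_tar[sel], c_meas[sel], 1)             # response-function fit
    alpha_z = m                                               # amplification coefficient
    l_z = np.max(np.abs(c_meas[sel] - (m * c_tar[sel] + t)))  # linearity deviation
    return alpha_z, l_z

# Example: an instrument reading 1 % high with a small nonlinearity (invented data).
rng = np.random.default_rng(1)
z_tar = rng.uniform(-1.5, 1.5, size=(256, 256))               # linear Abbott curve
z_meas = 1.01 * z_tar + 0.02 * z_tar ** 2
print(z_axis_from_abbott(z_meas, z_tar))
```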
The resulting "universal calibration artefact" is shown in Fig. 1(b). There, a scanning electron microscope (SEM) image of one type ARS sample is shown as an example. DLW allows all of the aforementioned geometries to be fabricated on one single sample. Thus, a holistic calibration is possible without changing and adjusting the sample several times. Even for varying microscopic magnifications only one sample is required: the geometries are defined in a way that scaling under consideration of those different microscopic magnifications is possible. Therefore, the coordinates of the lateral axes are scaled with the factors two, four and eight to additionally realize material measures with sizes of 200 µm x 200 µm, 400 µm x 400 µm and 800 µm x 800 µm on the same sample. The amplitudes of the different material measures are not changed by the scaling. Table 1 (Appendix) provides the evaluation parameters of the different geometries for their use with different fields of view and magnifications. Generally, due to the high versatility of the manufacturing, it is possible to perform almost any scaling of the proposed geometries in order to adapt the sample towards a specific measuring principle or instrument.
Subsequently, we examine the aging and scaling of these varying geometries in order to qualify the manufactured samples for practical calibration applications.
Aging study
In order to determine the time-dependent stability and the aging behavior of the manufactured samples, we perform an artificial aging study. Three identical samples, as illustrated in Fig. 1, are manufactured and a 30 nm metal coating is sputtered on top of the polymeric surface in order to achieve suitable optical properties for surface topography measuring instruments. As described, Au, Cr (Univex 450C, Oerlikon GmbH for both) and Ir (Leica EM ACE600, Leica Microsystems CMS GmbH) are used in order to compare the varying coating materials and their respective aging behavior. Since aging is a time-dependent process [28], a climate chamber is used to emulate an accelerated artificial aging of the samples. In previous studies with material measures which were manufactured with the aid of ultra-precision cutting, it has been shown that this examination method is suitable for the description of aging effects [29]. Therefore, this approach is used to investigate these effects for the direct laser written geometries as well.
First, we take a reference measurement of the structures with a size of 100 µm x 100 µm. Then, the samples are stored at 80 °C within the climate chamber and additional measurements are performed after t = 1, 2, 5, 8, 12 and 19 days of storage time. Here, dry air conditions are realized with the aid of a drying agent. For each measurement, the described evaluation parameters are examined and their time-dependent behavior is investigated. Sample measurements are taken with a NanoFocus µSurf confocal microscope with 100x magnification. The instrument is linked to a traceability chain based on other certified material measures.
Figure 2 summarizes the results for the chirped standard (type CIN) as an example: the evaluation parameters S_a and S_q of the coatings gold, chromium and iridium are compared to their respective target values shown in Table 1. Additionally, the absolute values of the small scale fidelity are provided. It can be observed that the measurement results of both lateral resolution parameters, the small scale fidelity limit determined by the chirped standard and the ASG width metric determined by the Siemens star (Appendix), are influenced by the aging process. The aged surfaces enable a measurement of narrower areas of the petals or the sinusoidal structures, respectively. This is also corroborated by the roughness parameters S_a and S_q, which tend to decrease during the first few days.
Fig. 2. Aging study shown exemplarily for the type CIN material measure. Evaluated parameters: (a) small scale fidelity limit (ssf) as defined in [23], (b) deviation of arithmetic mean roughness (S_a − S_a,tar)/S_a,tar and (c) deviation of root mean square roughness (S_q − S_q,tar)/S_q,tar.
All parameters stabilize after a certain amount of time and thus indicate a stationary aging result. This effect is more significant for the chirp standard, as more areas with narrow structures and steep angles are present (see Fig. 2). When the varying coatings are compared, comparable results are achieved. However, the Au coating does not provide as stable roughness parameters as the other materials for the chirped standard.
The significant change of the parameter values can be additionally explained when profiles are extracted from the measured data sets after different aging times, e.g. for the sample with Ir coating. This is shown in Fig. 3. There, an extracted profile which was used for the chirp evaluation is displayed before the aging commences (t = 0 days) and after storage times of t = 5, 8 and 19 days in the climate chamber. The large initial deviations of the amplitude roughness parameters can be explained as follows: due to sharp edges there are many optical artefacts before the aging, which lead to an increased measured roughness. During the aging process, the sharp edges are smoothed and the structures become optically more cooperative for the measurement. In consequence, the massive change in the roughness parameters is not only caused by the shrinking itself but much more strongly by the reduced number of optical artefacts. An exemplary evaluation of the small scale fidelity limit is illustrated in Fig. 4. There, the extracted profile from the geometry with the Ir coating after a storage time of t = 19 days is shown, as well as its evaluation with the application of the ssf. In Fig. 4(a), the fit of the nominal sinusoidal surface structures of the chirp geometry is illustrated, whereas Fig. 4(b) shows the determination of the ssf based on these fits: the fitted amplitudes are plotted for the different estimated period lengths of the fit. The ssf can be determined as the smallest period length where the amplitude of the fit deviates less than 50% from the target amplitude [23]. In the given example, this limit is determined at a period length of 1.91 µm, which is estimated as 1.86 µm. Thus, it can be concluded that both the laser lithographic manufacturing and the measuring process maintain a small scale fidelity down to the micrometer scale. It cannot be distinguished whether the manufacturing or the measuring is the limiting process here. This can be clarified with further studies using high-resolution AFM measurements. Examining the other ISO-based geometries, the given observations are well confirmed. All results are given in the Appendix (Figs. 10, 11, 12, and 13). The roughness values of the flat surface (type AFL) decrease with increasing time, as the structure is also smoothed through shrinking or warping effects. The parameters of the radial sine wave (type ARS) do not change significantly, as the structure exhibits only small angles and is thus not heavily influenced by the changes. This is in contrast to the chirp structure, which exhibits large areas with narrow structures. The observations of Fig. 3 support the explanation: the aging leads to significant changes in the areas with steep angles as they cause optical artefacts. After the aging has occurred, the surface becomes more optically cooperative. When the cross-grating (type ACG) is evaluated, the lateral pitch lengths in both x- and y-direction do not change throughout the aging process, nor does the perpendicularity between both axes. The parameters only scatter statistically. Thus, it can be concluded that mainly the height axis is influenced by the aging.
This effect is further examined with the type AIR material measure. The results of the evaluation are displayed in Fig. 5. Similar to the previous observations, the surface texture parameters decrease during the aging process. As many steep angles are present in the rough surface, the effect is significant. After a few days, a stationary behavior is also achieved and no more significant changes occur. This is also valid when the ISO 25178-60x parameters of the height axis, amplification coefficient and linearity deviation, are observed. The first one approaches the target value of 1 and the latter one decreases as well. The results are more or less independent of the coating material, with a small offset only when the linearity deviation for the Cr coating is examined. For the evaluation, we calculate these parameters as suggested in previous work [18,27] (see section 3): all height values of the measurement data are compared with the linear target Abbott curve and the inner 80% of the height values serve for the response function estimation. This evaluation method leads to a high correlation of the amplification coefficient and the S_a value. Figure 6 shows the resulting topographies and measured Abbott curves before the aging (t = 0 days) and after t = 19 days storage time in the climate chamber for the sample with the Ir coating. Again, the optical artefacts due to steep angles influence the measured metrological characteristics. This is also illustrated by the comparisons of the measured and the target Abbott curve.
Fig. 5. Aging study. Evaluation of the parameters: (a) deviation of arithmetic mean roughness (S_a − S_a,tar)/S_a,tar, (b) deviation of root mean square roughness (S_q − S_q,tar)/S_q,tar, (c) deviation of the amplification coefficient α and (d) linearity deviation l_z as defined in the ISO 25178-60x series, for the type AIR material measure.
Analyzing the three coatings of the samples with a light microscope, we observe that the gold coating appears friable and the chromium coating features fissures, especially for the larger geometries. Because of these effects, as well as to assess the uncertainty of those parameters, an additional investigation is carried out for both materials.
In order to verify these results, two structures, CIN and ARS (100 µm x 100 µm each), serve for the additional examination and are measured 12 times after every time step of a second aging study. Based on the repeated measurements it is also possible to obtain information regarding the measurement uncertainty. For the determination of the areal roughness parameters, standard deviations are found to be in the nanometer range. Besides, the results are in good agreement with the first study: the sample with many steep angles (type CIN) shows a decrease of the areal roughness parameters within the aging process, whereas the sample with few steep angles (type ARS) does not exhibit any change of the roughness parameters. However, the roughness parameters of the chirped standards do not change as significantly as in the first study. When the new samples are examined with a light microscope, the aforementioned quality issues of the coatings can be observed as well: the chromium coating features fissures, especially for the larger samples, and the gold coating appears to be friable. Since chromium grows under high internal stresses during the sputtering process, subsequent shrinkage effects of the polymerized photoresist (which increase with an increasing polymerized area) presumably lead to those cracks when re-expanding under atmospheric conditions [30]. On the other hand, gold tends to show a microcrystalline growth, which can be an explanation of the observed granularity. As the sample with the iridium coating features a better quality and exhibits not only similar optical properties to gold but also similar results in the aging study considering the examined parameters, this material is chosen for further examinations. Iridium is also the hardest material of the examined coatings and should therefore be suitable for a tactile sampling.
Scaling study
In order to perform a calibration of varying microscope magnifications, the structures are scaled to cover the different fields of view of typical optical surface topography measuring instruments.This scalability is characterized by measurements with the aforementioned confocal microscope using varying objectives.The structures feature sizes of 100 µm x 100 µm to 800 µm x 800 µm.Figure 7 illustrates exemplary the measured Ir-coated ARS material measure with 100x, 60x and 20x magnifications.It can be seen that a scaling of the geometry is generally possible.However, when the samples with edge lengths of 400 µm or larger are observed, stitching errors of the manufacturing process become visible which are caused by the described positioning accuracy of the stage (see section 2).
Additionally, the previously described roughness parameters can be evaluated in order to describe their scale-dependent effects.When the Siemens-Star (type ASG) is examined, it becomes observable that the ASG width measure is changing with the microscopic magnification as the field of view and the sampling discretization change as well.The 100x and 60x objective magnifications feature a resolution of a few micrometers; the 20x magnification has a higher resolution limit.All results are displayed in the Appendix.Figure 8 displays the results of the chirp standard (type CIN).The scaling of the chirp structure results in varying wavelengths of the sinusoidal structures.When the small scale fidelity limit is compared between the different measurements, it can be observed that larger samples lead to a bigger small scale fidelity limit.When the roughness parameters are compared, the 200 µm sample features the largest deviations.Since this sample is not stitched, vignetting obviously introduces stronger deviations than the following stitching errors.The 100 µm sample features the smallest deviations.small scale fidelity limit, (b) deviation of arithmetic mean roughness (S a -S a,tar ) / S a,tar , (c) deviation of root mean square roughness (S q -S q,tar ) / S q,tar .I 100 µm sample measured with 100x magnification, II 200 µm sample, 60x magnification, III 400 µm sample, 20x magnification, IV 400 µm sample, 60x magnification, V 800 µm sample, 20x magnification, VI 800 µm sample, 60x magnification.
When the flat surface (type AFL) is examined (Appendix), the roughness values are very small for the 100 µm and 200 µm samples, whereas the larger material measures, which are stitched during the manufacturing process, show higher values of the roughness parameters. It can be observed that the stitching in the manufacturing process does have an influence on the averaged surface roughness (see also Fig. 7). As the objective of the AFL sample is to represent a perfectly smooth surface (with target values S a = S q = 0), the stitching effects are best visible here. In contrast, the areal sinusoidal material measure (type ARS) features very stable results, independent of scaling and objective magnification (see Appendix). This is caused by the very smooth surface featuring no steep angles that cannot be transmitted by the measuring instrument. As the cross-grating (type ACG) is characterized by smooth surface structures and is used for the calibration of the lateral axes, the target values of the different pitch lengths and the perpendicularity are also imaged properly here, independent of magnification and scaling of the sample.
The irregular roughness structure (type AIR) features steep surface angles and more complex structures. Thus, the scaling effects become more visible. Figure 9 shows the results. It can be observed that an increased scaling of the geometry leads to a better compliance with the target values. The roughness values are smaller when larger structures are measured, as the slope values decrease with an increasing scaling factor. Thus, the impact of the optical artefacts is significantly reduced when the larger structures are examined. This also becomes visible when the linearity deviation and the amplification coefficient are compared, as shown in Fig. 9(c) and 9(d). With a larger lateral scaling of the surface, the linearity deviation becomes significantly smaller and the amplification coefficient tends towards its target value of 1. This indicates that the transmission characteristics improve with smaller surface slopes.
Fig. 9. Scaling study. Evaluation of the type AIR material measure. Evaluated parameters: (a) deviation of arithmetic mean roughness (S a - S a,tar ) / S a,tar , (b) deviation of root mean square roughness (S q - S q,tar ) / S q,tar , (c) deviation of the amplification coefficient α and (d) linearity deviation l z as defined in the ISO 25178-60x series.
Summary and conclusion
The general manufacturing feasibility of material measures with DLW has been demonstrated in previous examinations. Here, a new sample was proposed that can calibrate all relevant metrological characteristics with just one set of measurements and one sample. This "universal calibration artefact" was examined regarding its practical abilities. In doing so, the aging behavior was investigated for different material coatings. It was shown that the structures, and in particular their steep angles, tend to smooth within the first days in a climate chamber. After that, however, a stationary state is achieved. Thus, the sample is suitable for practical application, as aging improves the quality of the surface. As the iridium coating showed the best surface quality, it was selected as the coating material for the following studies. For future work, it will be examined whether the aging process can be accelerated by a UV post-processing step in order to achieve stable samples more quickly. In the study of Oakdale et al. [31] it was observed that the development of the polymer can be accelerated with this method.
A second criterion for the practical application, the scalability of the geometries, was examined as well. It was shown that various microscope magnifications can be calibrated with the proposed set of material measures when the lateral sizes of the structures are adapted to the respective field of view. It was also shown that the samples with fewer steep angles that resulted from the scaling were easier to measure. However, larger structures that needed to be manufactured by stitching did show some differences, which, e.g., had an influence on the respective calibration properties of the smooth flat surface material measure. Other surfaces like the type AIR material measure featured smaller deviations with a larger scaling because the surface slopes decreased. For the quantitative characterization of
Fig. 1. (a) Target geometries imaged with a size of 100 µm x 100 µm (simulated data), (b) overview of all target geometries featuring varying sizes - the final "universal calibration artefact" (simulated data and SEM image of the type ARS material measure).
Fig. 3. Aging study, exemplary profiles for the Ir-coated type CIN material measure. (a) Extracted profile before aging (t = 0 days), (b) extracted profile after t = 5 days of aging, (c) extracted profile after t = 8 days of aging, (d) extracted profile after t = 19 days of aging.
Fig. 4. Aging study based on the small scale fidelity limit, exemplarily shown for the Ir-coated type CIN material measure after t = 19 days. (a) Fit (red) of the measured (blue) chirp geometry, (b) small scale fidelity limit (ssf) determination.
Fig. 7. Scaling study. Evaluation of the type ARS material measure with the above-mentioned iridium coating. The 100 µm x 100 µm material measure is analyzed with a CM using (a) 100x magnification, whereas the 800 µm x 800 µm geometry is imaged using (b) 60x and (c) 20x magnification.
Fig. 11. Results of the type AFL material measure. (a)-(b): aging study; (c)-(d): scaling study. I-VI as described in Fig. 8. Because the target roughness parameters for the flat surface are S a = S q = 0, the measured parameters are plotted as absolute values.
Fig. 13. Results of the type ACG material measure. (a)-(b): aging study. Pitch lengths l x , l y and deviation of the angle β between the x- and y-grating; (c)-(d): scaling study. I-VI as described in Fig. 8.
Funding: Deutsche Forschungsgemeinschaft (DFG), Collaborative Research Center 926. | 8,542.8 | 2018-06-14T00:00:00.000 | [
"Materials Science"
] |
A new approach to solubility and processability of polyaniline by poly(aniline-co-o-anisidine) conducting copolymers
Homopolymers of aniline and o-anisidine and their copolymers were synthesized by chemical oxidative polymerization using different ratios of monomers in the feed in H2SO4 medium. The synthesized polymers were characterized by Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) techniques to understand the details of their structure. The morphology, thermal behaviour and electrical conductivity of the as-synthesized polymers were also studied by scanning electron microscopy (SEM), thermogravimetric analysis (TGA) and dc electrical conductivity measurements, respectively. Rod-shaped nanoparticles were observed for PANI and spherical nanoparticles for the copolymers. A three-step thermal degradation was observed for all the polymers. The electrical conductivities of the copolymers are lower than that of PANI, and at higher temperature the conductivities of all the polymers are more or less the same. The copolymers show better solubility but lower conductivity than PANI.
synthesis, good environmental stability, and ability to dope with protonic acids. 1,2 However, the conductive form of polyaniline is difficult to process, since it is insoluble in common organic solvents and unstable at melt-processing temperatures, which has restricted its applications. To overcome this, several substituted polyaniline homopolymers (a single type of monomer involved) soluble in organic solvents have been prepared, such as alkyl 3 and alkoxy 4 as well as alkyl-N-substituted polyanilines. 5 Another approach followed is copolymerization (more than one type of monomer involved) of aniline with a suitable substituted aniline. Using the latter method, Chen and Hwang 6 synthesized the first water-soluble self-acid-doped polyaniline, poly(aniline-co-N-propanesulphonic acid aniline). Heeger et al. 7 prepared soluble polyaniline in its conducting form by doping it with functionalized surfactants such as camphorsulphonic acid and dodecylbenzenesulphonic acid. Among substituted polyanilines, polytoluidines and polyanisidines have attracted considerable attention since they exhibit better solubility 8 in many organic solvents and better processability 9 than polyaniline, with moderate to good conductivity. However, polyanisidines show lower conductivity than polyaniline, which may be due to steric constraints imposed by the methoxy group that could induce additional deformation along the polymer backbone as well as an increase in the inter-chain distance. Both factors reduce the mobility of the charge carriers, and as a result lower conductivity is exhibited. In order to combine the high conductivity of polyaniline with the good solubility of the polyaniline derivatives, copolymerization has received greater attention as it helps to tailor-make a material with specifically desired properties, such as excellent electrical, optical and mechanical properties.
There are reports on copolymers of aniline with anisidine [10][11][12]; however, these copolymers were only partially characterized, and the studies did not focus on the morphology. Moreover, the homopolymers and copolymers were reported to be soluble in solvents like dimethyl sulphoxide (DMSO), N-methylpyrrolidone (NMP), N,N-dimethylformamide (DMF) and tetrahydrofuran (THF), which are hazardous, costly and comparatively viscous. The main aim of our study was to obtain copolymer salts with improved solubility and better processability compared with PANI and the earlier reported PANI derivatives. We synthesized copolymers which are soluble in a common, economical, non-viscous, comparatively safe and non-hazardous solvent, ethyl alcohol, which helps in making films. In the present research program, homopolymers of aniline and anisidine and their copolymers [poly(aniline-co-o-anisidine)] of different compositions have been synthesized by chemical polymerization in acidic (H 2 SO 4 ) medium. The resulting homopolymers and copolymers were characterized by Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) techniques. We evaluated their electrical (dc electrical conductivity), thermal (thermogravimetric analysis (TGA)) and morphological (scanning electron microscopy (SEM)) properties. We also made an effort to understand the effect of the electron-donating methoxy group in the polymer chain and noted the differences between the homopolymers and copolymers.
Materials
Aniline and o-anisidine (Merck) were distilled twice, and all other chemicals (analytical grade) were used as procured. Double-distilled water was used for the preparation of the required solutions.
Synthesis of homopolymers (polyaniline (PANI))
In a typical experiment, an aqueous solution of 0.1 M oxidizing agent (ammonium persulfate) was added dropwise into a 1.0 M H 2 SO 4 solution containing 0.1 M aniline at a temperature of 0-5 °C. The oxidation of aniline is highly exothermic, and therefore the rate of addition of the oxidant was adjusted to prevent any increase in the temperature of the reaction mixture. After the addition of the oxidant, the reaction mixture was left stirring at constant temperature for 4 h. The precipitated polyaniline was filtered and then washed with distilled water until the washing liquid was colourless. In order to remove oligomers and other organic byproducts, the precipitate was washed with acetone until the solution was colourless. Finally, the resulting polymer salt was dried at 100 °C to constant mass. Polyaniline base was prepared by dedoping the polyaniline-sulfate salt (1 g) with constant stirring at ambient temperature in 100 mL of sodium hydroxide solution (1 M) for 12 h. The resultant solid was filtered and washed with water, followed by acetone, and finally dried at 100 °C to constant mass.
Synthesis of copolymers
Copolymers of aniline with o-anisidine were synthesized in various molar fractions using ammonium persulphate as the oxidizing agent and H 2 SO 4 as the acid medium. A typical procedure for the preparation of a copolymer with a 50:50 ratio of aniline and o-anisidine is as follows:
Synthesis of copolymers, poly(aniline-co-o-anisidine) (PAPOA)
An aqueous solution of ammonium persulphate (0.1 M) was added dropwise to a 1 M H 2 SO 4 solution containing o-anisidine (0.05 M) and aniline (0.05 M) maintained at 0 °C, stirred for 4 h and then kept at room temperature for about 15 h. The green precipitate of the copolymer salt obtained was filtered, washed with distilled water several times and then with acetone, and subsequently dried at 100 °C to constant mass (Scheme 1).
The copolymer base was obtained by stirring 1 g of the salt in 100 mL of 1.0 M NH 4 OH for 12 h. The resultant solid was filtered and washed with water, followed by acetone, and finally dried at 100 °C to constant mass.
Characterization techniques and studies used
A weighed amount (10 mg) of the homopolymer or copolymer was added separately to 2 mL of the solvent with stirring. Additional solvent was added at a rate of 1 mL per 10 min up to 10 mL; a polymer that dissolved completely during this period was taken as soluble, whereas polymers which did not dissolve completely during this period were taken as partially soluble.
The FT-IR spectra of the polymers were recorded on a JASCO FTIR-5300 instrument in the range 4000-400 cm⁻¹ at a resolution of 4 cm⁻¹ using KBr pellets. The XRD patterns were obtained on a JEOL JDX-8p instrument using Cu Kα radiation (λ = 1.54 Å). The X-ray generator was operated at 30 kV and 20 mA, and the scanning range (2θ/θ) was selected accordingly. The morphologies of the polymers were studied using a JSM-840A scanning electron microscope operated at 20 kV. The thermogravimetric analysis (TGA) measurements were made using a Mettler Toledo Star System at a heating rate of 10 °C per min under nitrogen atmosphere. Conductivity measurements were done at room temperature by the two-probe method on pressed pellets obtained by subjecting the powder to a pressure of 50 kN. The error in resistance measurements under galvanostatic conditions, with a Keithley model 220 programmable current source and a Keithley model 195A digital voltmeter, was less than 2%.
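For context, the conductivity of a pressed pellet measured across its faces follows from the measured resistance and the pellet geometry as σ = t/(R·A). The short Python sketch below shows only this arithmetic; the numerical values are placeholders, not measurements from this work.

```python
import math

def pellet_conductivity(resistance_ohm, thickness_cm, diameter_cm):
    """DC conductivity (S/cm) of a cylindrical pellet measured across its faces."""
    area = math.pi * (diameter_cm / 2.0) ** 2            # contact area, cm^2
    resistivity = resistance_ohm * area / thickness_cm   # rho = R*A/t, ohm*cm
    return 1.0 / resistivity                              # sigma = 1/rho, S/cm

# Placeholder numbers for illustration only:
print(pellet_conductivity(resistance_ohm=2.5e3, thickness_cm=0.12, diameter_cm=1.0))
```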
RESULTS AND DISCUSSION
The homopolymer of aniline is denoted as PANI and that of o-anisidine as POA. The copolymers, poly(aniline-co-o-anisidine) (PAPOA), synthesized with various molar fractions of aniline and o-anisidine in the feed (0.25, 0.5 and 0.75 M), are denoted here as PAPOA13, PAPOA11 and PAPOA31, respectively.
Yield
Homopolymers PANI and POA and the copolymers were obtained in good yields (90% to ) by keeping an oxidant-to-monomer ratio of 1:1. The yields of the copolymers were found to increase with an increase in the amount of aniline in the feed for most of the copolymers. The steric hindrance in anisidine, which predominates over the effect of the electron-donating group, may slightly reduce the yield of the copolymers when the amount of substituted aniline in the monomer feed is higher¹³.
Solubility
PANI is soluble, giving a dark bluish-violet colour, in polar solvents like DMSO, NMP, DMF and THF, partially soluble with a light bluish colour in less polar solvents like chloroform, and insoluble in ethyl alcohol and benzene. The substituted polyaniline homopolymer POA was clearly soluble in all the above-mentioned solvents as well as in common solvents like ethyl alcohol and chloroform, with colours ranging from a viscous dark bluish violet to a non-viscous blue. Among the copolymers, PAPOA31 is sparingly soluble because of the high aniline ratio in the feed, while PAPOA11 and PAPOA13 are clearly soluble. The good copolymer solubility results from the presence of a large number of methoxy substituents on the aniline rings and an amorphous structure, which increases the distance between the macromolecular chains and thus significantly reduces the interaction between the copolymer chains. 11 In addition, PANI is insoluble in ethyl alcohol, POA is soluble in ethyl alcohol, and the copolymers PAPOA11 and PAPOA13 are completely soluble; this better solubility is evidence that the polymerization product is indeed a copolymer containing the two monomers rather than a simple mixture of the two homopolymers 14 .
FT-IR spectroscopy studies
The characteristic IR peaks of PANI, POA and the copolymer salt PAPOA11 are shown in Figure 1. An accurate quantitative determination of the compositions of the copolymers is difficult due to the overlap of the absorption bands where the spectra of polyanisidine differ from those of PANI 12 , especially for copolymers with low anisidine contents. The characteristic bands in the IR spectrum of the PANI salt occur at 1562, 1487, 1302, 1240, 1107 and 798 cm⁻¹. A broad band at 3440 cm⁻¹ is assigned to the free N-H stretching vibration. The bands at 2920 and 2850 cm⁻¹ are assigned to vibrations associated with the NH part in C 6 H 4 NH 2 C 6 H 4 . The high-frequency bands at 1562 and 1487 cm⁻¹ are assigned to the C=C ring stretching vibrations of the benzenoid ring and the C-N stretching of the quinoid ring, respectively. The bands at 1302 and 1240 cm⁻¹ correspond to the N-… The remaining bands at 1107 and 798 cm⁻¹ could be attributed to the in-plane and out-of-plane C-H bending modes, respectively. The C-H out-of-plane bending mode has been used as a key to identify the type of substituted benzene. For the polyaniline salt, this mode was observed as a single band at 798 cm⁻¹, which is close to the range 800-860 cm⁻¹ reported for 1,4-substituted benzene. 16 The IR spectrum of POA is similar to that of the PANI salt, with the bands showing slight shifts to higher frequencies. A new band which is not present in PANI appears in the spectrum of POA at 1167 cm⁻¹ and can be attributed to the -OCH 3 rocking mode. Approximate estimations of the copolymer compositions can be made by utilizing the intensity of this new band at around 1160 cm⁻¹. Thus, the infrared spectra of the copolymer salts help us to roughly estimate the amounts of anisidine present in the copolymers. The spectral characteristics of the copolymers are similar to those of polyaniline and polyanisidine (Fig. 1). The IR absorptions at 1487-1574 cm⁻¹ are associated with aromatic ring stretching: the peak at 1574 cm⁻¹ is assigned to the quinoid ring and the peak at 1487 cm⁻¹ to the benzenoid ring, and the intensity of the peak near 1160 cm⁻¹ increases as the amount of o-anisidine in the feed increases. Our findings are consistent with those of Umare et al. 11 .
X-ray diffraction studies
Figure 2 shows the X-ray diffraction patterns of the homopolymers and copolymers. PANI exhibits three broad peaks at 2θ angles around 10.3°, 19.5° and 25.5°; the peak at 2θ = 25.5° is characteristic of the van der Waals distance between stacks of phenylene rings (polyaniline ring). 15,17 These broad peaks indicate crystalline domains in the amorphous structure of PANI. The X-ray diffractograms of the copolymers show broadening of the peaks, which indicates that the copolymers are of amorphous nature while PANI is crystalline. It is observed that the d-spacing increases and the coherence length decreases as the fraction of o-anisidine in the copolymer composition increases. On increasing the amount of o-anisidine in the copolymer the crystallite size decreases, which may be due to the presence of the methoxy group on the aromatic ring, which increases disorder and decreases the crystal size. Thus, in the copolymers the increased charge localization may be due to the reduction of interchain diffusion of charge, the decrease of the interchain bandwidth caused by the large transverse unit-cell length, and the decrease in coherence between the chains caused by greater disorder in the interchain separation within the crystalline regions, owing to the existence of the side group (-OCH 3 ) along the main copolymer chain.
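The d-spacings and coherence lengths (crystallite sizes) discussed here are conventionally estimated from the Bragg and Scherrer equations. The Python sketch below illustrates that arithmetic for Cu Kα radiation; the peak position and width used in the example are illustrative, not values extracted from Figure 2.

```python
import math

WAVELENGTH_A = 1.54  # Cu K-alpha wavelength in angstrom

def d_spacing(two_theta_deg, wavelength=WAVELENGTH_A):
    """Bragg's law: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

def scherrer_size(two_theta_deg, fwhm_deg, k=0.9, wavelength=WAVELENGTH_A):
    """Scherrer equation: D = K * lambda / (beta * cos(theta)), beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength / (beta * math.cos(theta))

# Illustrative peak at 2theta = 25.5 degrees with a 2 degree FWHM:
print(d_spacing(25.5), scherrer_size(25.5, 2.0))
```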
Morphological studies
SEM images of the homopolymers PANI and POA and their copolymer salt (PAPOA11) are shown in Figure 3(a-c), respectively. In Figure 3a, bundles of agglomerated nanorods of PANI with typical sizes of around 100 nm to 200 nm are observed. POA and the copolymer salts exist as highly agglomerated globular particles with typical sizes of around 100-500 nm.
Thermogravimetric analysis
Thermogravimetric analysis of PANI, POA and their copolymers was performed in air at a heating rate of 10 °C min⁻¹. The TGA curve for the PANI salt shows a three-step weight loss. The weight loss of 6% up to 150 °C is due to the loss of moisture. The weight loss of 4.5% occurring up to 395 °C is attributed to the loss of the dopant H 2 SO 4 . The final step starts at around 400 °C and leads to the complete degradation of the polyaniline salt. 9 Figure 4 shows the representative TGA trace of the copolymer PAPOA11. Thermograms of the POA and copolymer salts show a similar three-step degradation, but at lower temperatures than that of PANI.
Conductivity measurements
The conductivities of the homopolymers and copolymers were measured using the two-probe method. Figure 5 shows the conductivity measurements for PANI, POA and PAPOA11. It was noticed that as the amount of anisidine in the copolymers increases, the conductivity decreases; the presence of even 0.25 M anisidine in the feed decreases the conductivity. However, the copolymers show moderately high conductivity when compared with that of POA. The higher conductivity of the copolymers could be explained by the formation of block copolymers, which facilitates faster charge transport by bipolarons. 10 At higher temperatures, however, all the polymers tend to have almost the same conductivity.
CONCLUSION
In order to make soluble and processable PANI, substituted polyaniline homopolymers and copolymers were synthesized by chemical polymerization with easily available reagents, which in addition ensures a good reaction yield. The resulting products exhibit reasonable conductivity with excellent solution-processing properties. The conductivity of the copolymers is lower than that of PANI but higher than that of POA. At higher temperatures, all the polymers tend to have almost the same conductivity. The morphology of the PANI salt showed nanorods with an average diameter of 100-200 nm, whereas the copolymers showed particles of 100-500 nm size. This processable form of the polyaniline salt and its copolymers could be widely applicable to coatings, thin films, the preparation of clay composites and solution blending with other commodity polymers. Still, much research work is necessary to improve the quality of the materials and to make commercially viable products.
"Materials Science"
] |
Fractional variational problems depending on fractional derivatives of differentiable functions with application to nonlinear chaotic systems
In the present work, we formulate a necessary condition for functionals with Lagrangians depending on fractional derivatives of differentiable functions to possess an extremum. The Euler-Lagrange equation we obtained generalizes previously known results in the literature and enables us to construct simple Lagrangians for nonlinear systems. As examples of application, we obtain Lagrangians for some chaotic dynamical systems.
Introduction
The calculus with fractional derivatives and integrals of noninteger order started more than three centuries ago, when Leibniz proposed a derivative of order 1/2 in response to a letter from l'Hôpital [1]. This subject was also considered by several mathematicians such as Euler, Fourier, Liouville, Grunwald, Letnikov, Riemann, and others up to nowadays. Although fractional calculus is almost as old as the usual integer-order calculus, only in the last three decades has it gained more attention due to its applications in various fields of science (see [2][3][4][5][6][7] for a review). Fractional derivatives are generally nonlocal operators and are historically applied to study nonlocal or time-dependent processes, as well as to model phenomena involving coarse-grained and fractal spaces. As an example, applications of fractional calculus in coarse-grained and fractal spaces are found in the framework of anomalous diffusion [8][9][10] and field theories [11][12][13][14][15][16].
The fractional calculus of variations was introduced in the context of classical mechanics. Riewe [17,18] showed that a Lagrangian involving fractional time derivatives leads to an equation of motion with nonconservative forces such as friction. This is a remarkable result, since frictional and other nonconservative forces are beyond the usual macroscopic variational treatment [19] and, consequently, beyond the most advanced methods of classical mechanics. Riewe generalized the usual calculus of variations to Lagrangians depending on fractional derivatives [17,18] in order to deal with linear nonconservative forces. Recently, several approaches have been developed to generalize the least action principle and the Euler-Lagrange equations to include fractional derivatives [20][21][22][23][24][25][26].
Although the Riewe approach has been successfully applied to study open and/or nonconservative linear systems, it cannot be directly applied to nonlinear open systems. The limitation follows from the fact that, in order to obtain a final equation of motion containing only integer-order derivatives, the Lagrangian should contain only quadratic terms depending on fractional derivatives. In the present work we formulate a generalization of the Riewe fractional action principle by taking advantage of a so-called practical limitation of fractional derivatives, namely the absence of simple chain and Leibniz rules.
As examples, we apply our generalized fractional variational principle to some nonlinear chaotic third-order dynamical systems, so-called jerk dynamical systems, because the time derivative of the acceleration is referred to as the jerk [27]. These systems are important because they are among the simplest one-dimensional autonomous ordinary differential equations displaying dynamical behaviors that include chaotic solutions [28][29][30][31][32][33][34][35]. It is important to mention that jerk dynamical systems describe several phenomena in physics, engineering, and biology, such as electrical circuits, mechanical oscillators, laser physics, and biological systems [28][29][30][31][32][33][34][35].
The Riemann-Liouville and Caputo Fractional Calculus
The fractional calculus of derivatives and integrals of noninteger order started more than three centuries ago with l'Hôpital and Leibniz, when the derivative of order 1/2 was suggested [1]. This subject was also considered by several mathematicians such as Euler, Laplace, Liouville, Grunwald, Letnikov, Riemann and others up to nowadays. Although fractional calculus is almost as old as the usual integer-order calculus, only in the last three decades has it gained more attention due to its applications in various fields of science, engineering, economics, biomechanics, and so forth (see [2,3,5,6] for a review). There are actually several definitions of fractional-order derivatives, including the Riemann-Liouville, Caputo, Riesz, Weyl, and Grunwald-Letnikov ones (see [1][2][3][4][5][6][7] for a review). In this section we review some definitions and properties of the Riemann-Liouville and Caputo fractional calculi. Despite there being many different approaches to fractional calculus, several known formulations are somehow connected with the analytical continuation of the Cauchy formula for n-fold integration (1), where Γ is the Euler gamma function. The proof of the Cauchy formula can be found in several textbooks (e.g., [1]). The analytical continuation of (1) gives us a definition for an integration of noninteger (or fractional) order. This fractional-order integration is the building block of the Riemann-Liouville and Caputo calculi, the two most popular formulations of fractional calculus, as well as of several other approaches [1][2][3][4][5][6][7]. The fractional integrations obtained from (1), with a < b and a, b ∈ ℝ, are called the left and right fractional Riemann-Liouville integrals (2) of order α ∈ ℝ, respectively.
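For completeness, the Cauchy formula for n-fold integration (1) and the left and right Riemann-Liouville fractional integrals (2) referred to above are quoted here in their standard form from the fractional calculus literature:

\[
{}_aI_x^{\,n} f(x) \;=\; \frac{1}{\Gamma(n)} \int_a^x (x-t)^{\,n-1} f(t)\, dt ,
\tag{1}
\]

\[
{}_aI_x^{\,\alpha} f(x) \;=\; \frac{1}{\Gamma(\alpha)} \int_a^x (x-t)^{\,\alpha-1} f(t)\, dt ,
\qquad
{}_xI_b^{\,\alpha} f(x) \;=\; \frac{1}{\Gamma(\alpha)} \int_x^b (t-x)^{\,\alpha-1} f(t)\, dt ,
\qquad \alpha > 0 .
\tag{2}
\]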
For integer n, the fractional Riemann-Liouville integrals (2) coincide with the usual integer-order n-fold integration (1). Moreover, from definitions (2) it is easy to see that the Riemann-Liouville fractional integrals converge for any integrable function if α > 1. Furthermore, it is possible to prove the convergence of (2) for f ∈ L₁[a, b] even when 0 < α < 1 [4].
The integration operators in (2) play a fundamental role in the definition of the fractional Riemann-Liouville and Caputo calculi. In order to define the Riemann-Liouville derivatives, we recall that for positive integers n > m the m-th ordinary derivative of a function can be written as the n-th ordinary derivative of its (n − m)-fold integral, where dⁿ/dxⁿ is an ordinary derivative of integer order n.
The Riemann-Liouville derivatives (3) and (4) are then obtained by replacing the integer n − m with n − α in this identity, where dⁿ/dxⁿ stands for ordinary derivatives of integer order n.
On the other hand, the Caputo fractional derivatives are defined by inverting the order between derivatives and integrations.
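The Riemann-Liouville and Caputo derivatives referred to in (3)-(6) have the following standard forms in the literature (quoted here for completeness, for n − 1 < α < n with n ∈ ℕ): the Riemann-Liouville derivatives apply the integer-order derivative after the fractional integral,

\[
{}_aD_x^{\,\alpha} f(x) = \frac{d^n}{dx^n}\, {}_aI_x^{\,n-\alpha} f(x),
\qquad
{}_xD_b^{\,\alpha} f(x) = \left(-\frac{d}{dx}\right)^{\!n} {}_xI_b^{\,n-\alpha} f(x),
\tag{3, 4}
\]

whereas the Caputo derivatives invert the order of the two operations,

\[
{}^{C}_{a}D_x^{\,\alpha} f(x) = {}_aI_x^{\,n-\alpha}\, \frac{d^n f(x)}{dx^n},
\qquad
{}^{C}_{x}D_b^{\,\alpha} f(x) = {}_xI_b^{\,n-\alpha} \left(-\frac{d}{dx}\right)^{\!n} f(x).
\tag{5, 6}
\]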
An important consequence of definitions (3)-(6) is that the Riemann-Liouville and Caputo fractional derivatives are nonlocal operators. The left (right) differintegration operators (3) and (5) ((4) and (6)) depend on the values of the function to the left (right) of x, that is, a ≤ t ≤ x (x ≤ t ≤ b). It is important to note that when α is an integer, the Riemann-Liouville fractional derivatives (3) and (4) reduce to ordinary derivatives of order α. In that case, the Caputo derivatives (5) and (6) differ from the integer-order ones by a polynomial of order n − 1 [3,4].
It is important to remark, for the purposes of this work, that the fractional derivatives (3)-(6) do not satisfy a simple generalization of the chain and Leibniz rules of classical derivatives [1][2][3][4][5][6][7]; in other words, the fractional derivative of a product or of a composite function cannot, in general, be written in the simple classical form. The absence of simple chain and Leibniz rules is commonly considered a practical limitation of the fractional derivatives (3)-(6). However, in the present work we take advantage of this limitation in order to formulate generalized Lagrangians for nonlinear systems.
In addition to the definitions (3)-(6), we make use of the following property in order to obtain a fractional generalization of the Euler-Lagrange condition.
It is important to notice that the formulas of integration by parts (8) relate Caputo left (right) derivatives to Riemann-Liouville right (left) derivatives.
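One commonly used form of these integration-by-parts formulas, for 0 < α < 1 and sufficiently smooth f and g, is quoted below; the sign convention of the boundary terms varies between references, and the version given here is the one usually found in the fractional variational literature:

\[
\int_a^b g(x)\; {}^{C}_{a}D_x^{\,\alpha} f(x)\, dx
= \int_a^b f(x)\; {}_xD_b^{\,\alpha} g(x)\, dx
+ \Big[\, f(x)\; {}_xI_b^{\,1-\alpha} g(x) \,\Big]_{x=a}^{x=b},
\]

\[
\int_a^b g(x)\; {}^{C}_{x}D_b^{\,\alpha} f(x)\, dx
= \int_a^b f(x)\; {}_aD_x^{\,\alpha} g(x)\, dx
- \Big[\, f(x)\; {}_aI_x^{\,1-\alpha} g(x) \,\Big]_{x=a}^{x=b}.
\]

When the variation vanishes at the endpoints, as in the proof below, the boundary terms drop out, which is exactly what allows the derivatives to be moved from the variation onto the derivatives of the Lagrangian.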
Finally, in order to obtain the equation of motion for our examples, we are going to use the two relations (9) and (10). The proof of (9) can be found in [36], and (10) follows from the general semigroup property IᵅIᵝ = Iᵅ⁺ᵝ of the fractional integrals (see, e.g., [4,7]).
A Generalized Fractional Lagrangian
In the classical calculus of variations it is of no conceptual or practical importance to deal with Lagrangian functions depending on derivatives of nonlinear functions of the unknown function q. This is due to the fact that in those cases we can always rewrite the Lagrangian as a usual Lagrangian L̃ by applying the chain rule. For example, for a differentiable function f we can rewrite L(t, q, (d/dt) f(q)) = L(t, q, f′(q) q̇) = L̃(t, q, q̇), where q̇ = dq/dt. However, this simplification is not possible in the fractional calculus of variations due to the absence of a simple chain rule for fractional derivatives. It is just this apparent limitation of fractional derivatives that opens the very interesting possibility of investigating new kinds of Lagrangians suitable for studying nonlinear systems. In the present work we investigate these kinds of Lagrangians for the first time, and we apply them to construct Lagrangians for some nonlinear jerk dynamical systems.
Our main result is the following theorem.
Proof. In order to develop the necessary conditions for an extremum of the action (11), we define a family of functions (weak variations) q(t) = q*(t) + εη(t), where q* is the desired real function that realizes the extremum of (11), ε ∈ ℝ is a constant, and the function η defined on [a, b] satisfies the boundary conditions (14). The condition for the extremum is obtained when the first Gâteaux variation is zero, as expressed in (15).
Since the function η satisfies both η(a) = η(b) = 0 and η̇(a) = η̇(b) = 0 as boundary conditions (14), we can use the fractional integration by parts (8) in (15); after an additional usual integration by parts in the terms containing η̇, and by using the fundamental lemma of the calculus of variations, we obtain the fractional Euler-Lagrange equations (12).
It is important to notice that our theorem can easily be extended to Lagrangians depending on left Caputo derivatives and on Riemann-Liouville fractional derivatives. It is also easy to generalize it further to include other nonlinear functions of the dynamical variable. Finally, it is important to mention that our theorem generalizes [17,18] and the more general formulation proposed in [20], as well as the Lagrangian formulation for higher-order linear open systems [37] (for a review of recent advances in the calculus of variations with fractional derivatives see [26]).
Lagrangian for Nonlinear Chaotic Jerk Systems
As an example of application of our generalized Euler-Lagrange equation (12), in this section we obtain Lagrangians for some jerk systems. The first example is the simplest one-dimensional family of jerk systems that displays chaotic solutions (17) [27][28][29][30][31][32][33][34][35], where A is a system parameter and the nonlinear function contains one nonlinearity, one system parameter and a constant term. A Lagrangian for this jerk system is given by (18). In order to show that (18) gives us (17), we insert (18) into our generalized Euler-Lagrange equation (12), obtaining (19), and we follow the procedure introduced in [17,18] by taking the limit in which the interval endpoints coincide. Taking this limit in (19) and using (9) and (10), we get (17). The Lagrangian (18) is equivalent to the one introduced by us in [37]. However, it is important to stress that (17) is the only chaotic jerk system whose nonlinearity depends only on x [27][28][29][30][31][32][33][34][35]. For jerk systems with more complex nonlinearities, as for example terms involving ẋ (such as ẋ²), it is not possible to formulate a simple Lagrangian, depending only on x and its derivatives, by using the classical calculus of variations or previous formulations including fractional derivatives. Using our Euler-Lagrange equation (12) we can formulate, for the first time, a Lagrangian for these jerk systems [27][28][29][30][31][32][33][34][35]
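To give a feel for the dynamics of such jerk systems, the following Python sketch integrates a jerk equation written in the commonly cited form d³x/dt³ = −A·d²x/dt² − dx/dt + G(x), with a nonlinearity G that depends only on x. The particular choice G(x) = |x| − 2 with A = 0.6 is only an illustrative parameter set of the kind tabulated in the jerk-system literature, not the specific system (17) or the parameters of this paper, and the values may need tuning to land in the chaotic regime.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 0.6                           # damping-like parameter (illustrative)

def G(x):
    """Nonlinearity depending only on x (illustrative choice)."""
    return abs(x) - 2.0

def jerk(t, y):
    """State y = (x, dx/dt, d2x/dt2); returns the third-order jerk dynamics."""
    x, v, a = y
    return [v, a, -A * a - v + G(x)]

sol = solve_ivp(jerk, (0.0, 500.0), [0.1, 0.0, 0.0], max_step=0.01)
x = sol.y[0]
# A bounded but aperiodic x(t) is the qualitative signature of chaotic behaviour;
# here we only report the range reached by the trajectory.
print(x.min(), x.max())
```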
Conclusions
In the present work we obtained an Euler-Lagrange equation for Lagrangians depending on fractional derivatives of nonlinear functions of the unknown function q. Our formulation enables us to obtain Lagrangians for nonlinear open and dissipative systems and, consequently, to use the most advanced methods of classical mechanics to study these systems. As examples of application, we obtained Lagrangians for some chaotic jerk dynamical systems.
"Physics",
"Mathematics"
] |
3D basin and petroleum system modelling of the NW German North Sea (Entenschnabel)
: 3D basin and petroleum system modelling covering the NW German North Sea (Entenschnabel) was performed to reconstruct the thermal history, maturity and petroleum generation of three potential source rocks, namely the Namurian–Visean coals, the Lower Jurassic Posidonia Shale and the Upper Jurassic Hot Shale. Modelling results indicate that the NW study area did not experience the Late Jurassic heat flow peak of rifting as in the Central Graben. Therefore, two distinct heat flow histories are needed since the Late Jurassic to achieve a match between measured and calculated vitrinite reflectance data. The Namurian–Visean source rocks entered the early oil window during the Late Carboniferous, and reached an overmature state in the Central Graben during the Late Jurassic. The oil-prone Posidonia Shale entered the main oil window in the Central Graben during the Late Jurassic. The deepest part of the Posidonia Shale reached the gas window in the Early Cretaceous, showing maximum transformation ratios of 97% at the present day. The Hot Shale source rock exhibits transformation ratios of up to 78% within the NW Entenschnabel and up to 20% within the Central Graben area. The existing gas field (A6-A) and oil shows in Chalk sediments of the Central Graben can be explained by our model.
The German North Sea (Fig. 1) covers an area of around 35 000 km 2, and about 80 exploration wells have been drilled there. The NW part of offshore Germany, referred to as the Entenschnabel (Duck's beak), has a size of approximately 4000 km 2 and contains about 29 exploration wells (Fig. 1a). Despite numerous petroleum discoveries in the neighbouring offshore areas, only two commercial petroleum fields have been discovered in offshore Germany: the Mittelplate oil field and the A6-A gas field. The latter is the only commercial natural gas field discovered in the Entenschnabel area so far (Fig. 1a), despite the fact that the geological structures are continuous from the Dutch to the Danish offshore sectors. Using the results from a recent detailed mapping campaign in the Entenschnabel area (Arfai et al. 2014) that was based on high-quality 3D reflection seismic data, we studied the petroleum generation and migration from three potential source rock formations. We constructed a 3D basin model that covers the basin's structural development from the Devonian until the present day to reconstruct key elements and processes important for evaluating the petroleum systems in the study area. These key factors include: (a) the basal heat flow history; (b) the maturation history of potential source rocks; and (c) the timing of hydrocarbon generation.
Geological setting
In Figure 2, the main tectonic events of the study area are summarized in relation to the stratigraphic succession of the Entenschnabel (after Kombrink et al. 2012); the last column of the figure shows the main tectonic phases recognized within the 3D basin modelling study.
The Entenschnabel area is tectonically subdivided by several grabens and structural highs. The three major structural elements are the Schillgrund High, the Central Graben and the Step Graben System. These major structural elements are internally characterized by a number of minor structural features such as the John Graben and Clemens Basin belonging to the Central Graben, and the Mads Graben, Outer Rough High and Outer Rough Basin of the Step Graben System (Fig. 1b) (Arfai et al. 2014). Important phases of the geodynamic history of the study area include the Saalian phase of uplift and erosion, Early Triassic extension and subsidence, Mid- and Late Cimmerian erosion and rifting, and the Sub-Hercynian inversion phases (Ziegler 1990; Evans et al. 2003; Doornenbal & Stevenson 2010). During the Early Carboniferous, NW Europe, including the North Sea region, was located in the foreland basin of the Variscan Orogen (Ziegler 1990; Doornenbal & Stevenson 2010). As a result of the Carboniferous–Early Permian collapse of the Variscan Orogen, regional thermal uplift and concomitant erosion affected parts of the Upper Carboniferous and Lower Permian deposits (Saalian unconformity; the ages of these tectonic events are discussed below) (Krull 2005; Ziegler 2005; Kombrink et al. 2012). Regional uplift was followed by late Early Permian thermal relaxation and subsidence. During this transtensional phase, voluminous volcanics were emplaced, as well as continental siliciclastic red-bed series of the Rotliegend strata, which were subsequently buried by carbonates and evaporites of the Zechstein Group (Ziegler 2005; Kley & Voigt 2008; Stollhofen et al. 2008; Ten Veen et al. 2012). In the early Mesozoic, rifting initiated and the Central Graben developed as a half-graben from the Early Triassic (Sclater & Christie 1980; Frederiksen et al. 2001; Arfai et al. 2014). Salt tectonics have been active since the Late Triassic. As a result of thermal uplift related to the North Sea Dome (Mid-Cimmerian erosional phase: Underhill & Partington 1993; Graversen 2002, 2006), regional erosion affected Lower Triassic–Middle Jurassic sediments, predominantly on structural highs in the Entenschnabel area (Arfai et al. 2014). During the Late Jurassic–Early Cretaceous, a major extensional phase (Late Cimmerian: Ziegler 1990) took place, in combination with extensive reactive diapirism of Zechstein salt and deposition of clastic sediments. A change in the European stress pattern from an extensional to a compressional tectonic regime (Gemmer et al. 2002, 2003) in the Late Cretaceous resulted in inversion, accompanied by deep erosion of locally uplifted sedimentary deposits. Erosion affected mainly Cenomanian–Santonian sediments in the central and NW part of the Entenschnabel. Simultaneously, syn-inversion deposition of chalk and subsidence continued in the southern German Central Graben. Seafloor spreading has taken place in the North Atlantic from the Eocene onwards, and the North Sea Basin became tectonically quiescent (Ziegler 1992). Subsidence started due to thermal relaxation and caused a regional transgression. The basin was progressively filled by clastic deposits from the surrounding landmasses (e.g. Vejbaek & Andersen 1987; Rasmussen 2009). During the Mid-Miocene, a transgression formed the Mid-Miocene Unconformity (MMU), on top of which more than 1 km of sediments have been deposited since the late Mid-Miocene (Thöle et al. 2014).
Exploration history and hydrocarbon plays
South of the central German North Sea, within the Dutch North Sea, numerous wells have been drilled and discovered economic natural gas reservoirs. Further, to the NW in the UK offshore, two oil fields exist within reservoirs in Upper Jurassic sandstones. In the Danish offshore to the north, numerous oil fields are exploited which have their reservoirs in Upper Cretaceous -Palaeogene Chalk sediments within the Tail End Graben. Despite the long period of exploration in the Entenschnabel since 1968, and the oil and gas fields in the neighbouring countries, only one natural gas field has been discovered (A6-A in 1974) accompanied by some minor oil shows within the Central Graben area. The gas field (A6-A) has a complex reservoir situation with accumulations in Rotliegend volcanics, Zechstein carbonates and Upper Jurassic sands, and is sourced by Carboniferous coals.
Investigations within the Danish Central Graben found a regional distribution of Lower Carboniferous deposits comprising coal seams and/or thick sections of coaly shale (Petersen & Nytoft 2007). These could be a gas source for deep plays in the Danish Central Graben and may extend into the Entenschnabel. Wells B-11-2, A-9-1 and B-10-1 (Figs 1a & 3) penetrate Lower Carboniferous (Namurian–Visean) sediments containing coal seams (EBN 2015). Dinantian and Visean coals are also proven to the south of the study area and may have contributed to gas charge in the northern Dutch North Sea (De Jager & Geluk 2007). In the study area, the Early Carboniferous facies is characterized as deltaic to fluvial-lacustrine (Kombrink 2008). The coals which developed in this environment might also have contributed to gas accumulations within the Entenschnabel. Thus, we assume a coaly source rock facies at the boundary between the Visean and the Namurian. The reservoirs might be Rotliegend sandstones, similar to those in the Dutch North Sea, which are sealed by Zechstein evaporites.
Three relevant hydrocarbon plays in the adjacent realms of the Entenschnabel area are known. The most important play within the Dutch North Sea is the Rotliegend play (De Jager & Geluk 2007), a sandstone reservoir with good properties, sealed by Zechstein salt and sourced by Carboniferous coals. Most traps occur at the edges of horst blocks. It has been speculated that the Rotliegend play might also be sourced by Namurian shales with kerogen type II, which are expected to generate gas under deep burial conditions (Abdul Fattah et al. 2012a, b). However, the main source rock is the Westphalian B coal-bearing layer (Kombrink 2008). These gas-prone Westphalian sediments, deposited in a lower delta-plain environment (Kombrink 2008), are characterized by kerogen type III and reach a maximum total organic carbon (TOC) content of 70% in the Dutch North Sea (Verweij et al. 2003). Uplift and erosion during the Late Carboniferous–early Rotliegend in the Entenschnabel area are crucial in this regard, as much, if not all, of the Upper Carboniferous source rocks may have been removed.
The second important hydrocarbon play is represented by the Upper Jurassic and Lower Cretaceous sandstone reservoirs which are sealed mainly by Lower Cretaceous marls. Traps occur as anticlines, formed in response to the Late Cretaceous basin inversion. Accumulated gas, which is preserved in Upper Jurassic and Lower Cretaceous plays, is partly fed from the Westphalian coal, similar to that of the Rotliegend play. Shales of the Lower Jurassic (Posidonia Shale and Aalburg Formation: De Jager & Geluk 2007) and/or from Upper Jurassic coal-bearing sequences additionally contribute to the reservoirs.
The oil plays are restricted to the Jurassic Central Graben system, with the Dutch Central Graben and the Danish Tail End Graben. The main source rock responsible for oil accumulations in the Dutch North Sea is the Toarcian Posidonia Shale. Its thickness ranges between 15 and 35 m, with an average TOC content of approximately 10% and a hydrogen index (HI) of up to 800 mg HC g⁻¹ TOC (Verweij et al. 2003; De Jager & Geluk 2007). Within the Dutch Central Graben, the Lower Jurassic Posidonia Shale is the main source rock for the Upper Cretaceous Chalk play (De Jager & Geluk 2007) and may also extend into the Entenschnabel.
The Upper Cretaceous Chalk Group is the reservoir for several major oil fields in the Danish sector of the North Sea (e.g. Dan, Gorm, Skjold). The traps are mainly faulted anticlines formed during the Late Cretaceous basin inversion. Within the Danish North Sea, the Upper Cretaceous play is sourced by excellent Upper Jurassic shales. The oil source rock is the Bo Member (Hot Shale) of the Farsund Formation in the Danish offshore, which is equivalent to the Kimmeridge Clay Formation in the British offshore sector and the Clay Deep Member in the Dutch North Sea (Ineson et al. 2003). A few wells encountered the Hot Shale layer in the Entenschnabel: for example, well DUC-B-1 (Fig. 1a), located south of the Outer Rough Basin, where the Hot Shale is 85 m thick. Similar deposits within the Danish Central Graben are good to very good source rocks. This mudstone-dominated succession is typically 15-30 m thick, shows high HI values of between 200 and 600 mg HC g⁻¹ TOC, and has a TOC content of 3-8%, although locally exceeding 15% (Ineson et al. 2003).
As described above, source rocks, as well as reservoirs, are present all around the German Entenschnabel (e.g. Lokhorst 1998;Ineson et al. 2003;De Jager & Geluk 2007;Kombrink 2008). This motivated the study of the petroleum potential of the Entenschnabel area using 3D basin and petroleum system modelling.
Basin model: methods and database, input and boundary conditions
Methods and database
3D petroleum system modelling was performed with the software PetroMod V. 14. The software calculates the evolution of a sedimentary basin from the oldest to the youngest event (forward modelling), as well as the processes of petroleum generation and migration (Hantschel & Kauerauf 2009). For the calculation of vitrinite reflectance from temperature histories, the EASY%Ro algorithm of Sweeney & Burnham (1990) is used. This calculation method follows a kinetic reaction scheme and is valid for calculated reflectance values between 0.3 and 4.5%. To depict the burial, thermal and maturity history of the study area, we used representative 1D extractions of the 3D model at well locations in the NW and central parts of the Entenschnabel, as well as within the Central Graben area. Additionally, maps of calculated maturity and transformation ratios are presented.
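The EASY%Ro method referred to above represents vitrinite maturation as a set of parallel first-order Arrhenius reactions whose combined transformation fraction F is mapped to reflectance via %Ro = exp(−1.6 + 3.7·F). The Python sketch below reproduces only that structure; the frequency factor is the commonly quoted value, while the activation-energy weights and the temperature history are illustrative placeholders rather than the published Sweeney & Burnham (1990) parameters or any input used in this study.

```python
import numpy as np

R_KCAL = 1.987e-3   # gas constant, kcal/(mol*K)
A_FREQ = 1.0e13     # frequency factor (1/s) commonly quoted for EASY%Ro

# EASY%Ro uses ~20 parallel reactions with activation energies between 34 and
# 72 kcal/mol; the UNIFORM weights below are illustrative placeholders, NOT the
# stoichiometric factors published by Sweeney & Burnham (1990).
E_KCAL = np.arange(34.0, 73.0, 2.0)
WEIGHTS = np.full(E_KCAL.size, 1.0 / E_KCAL.size)

def reflectance_estimate(times_s, temps_k):
    """Integrate the parallel first-order reactions over a temperature history
    and map the overall transformation fraction F to %Ro = exp(-1.6 + 3.7*F)."""
    rates = A_FREQ * np.exp(-E_KCAL[None, :] / (R_KCAL * temps_k[:, None]))
    dt = np.diff(times_s)
    k_mid = 0.5 * (rates[1:, :] + rates[:-1, :])      # trapezoidal rule
    integral = np.sum(k_mid * dt[:, None], axis=0)    # integral of k_i dt
    converted = 1.0 - np.exp(-integral)               # extent of each reaction
    f_total = float(np.sum(WEIGHTS * converted))      # overall transformation
    return np.exp(-1.6 + 3.7 * f_total)

# Placeholder burial history: linear heating from 20 to 150 deg C over 100 Myr.
SECONDS_PER_MYR = 3.15576e13
t = np.linspace(0.0, 100.0 * SECONDS_PER_MYR, 2000)
T = 293.15 + (150.0 - 20.0) * (t / t[-1])
print(reflectance_estimate(t, T))
```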
Detailed mapping in the Entenschnabel (Arfai et al. 2014) provided the present-day stratigraphic and structural framework of the model from the base Zechstein to the Present. Fourteen depth grids (250 × 250 m cell size) and thickness maps of prominent seismic formations were taken from this study. This was complemented with pre-Zechstein formations (358–260.5 Ma), including the Carboniferous–Rotliegend sedimentary successions adopted from the literature (Krull 2005; Geluk 2007; Doornenbal & Stevenson 2010). For modelling purposes, information from 29 confidential wells covering the study area was used. Results of three wells with geological information and calibration data (vitrinite reflectance and/or temperature data) are shown in anonymized form, representing two structural elements: the Central Graben and the Step Graben System, respectively. The well-penetration chart (Fig. 3) illustrates that most of the wells in the study area terminate either in the Mesozoic or within the Zechstein level. Consequently, temperature data from deeper stratigraphic units are restricted or even not available for the Central Graben area. Eight wells were drilled to the pre-Zechstein level, all located in the Step Graben System. Well B-11-2, drilled on a basement high, reached Namurian–Visean sedimentary sequences and is located in the immediate vicinity of the Central Graben area (Figs 1 & 3).
Geological model
The input model consists of 27 stratigraphic layers covering a time interval from the Early Carboniferous to the Present.
A sedimentary basement of 2000 m thickness for the Dinantian and Devonian is added to extend the 3D model below the Upper Carboniferous. The latter is separated into the Stephanian, Westphalian and Namurian, whereby present-day and palaeothickness values are based on Krull (2005). The Namurian succession is assumed to have a constant thickness of 500 m in large parts of the study area. Locally, thicknesses of up to 920 m were assigned for the Namurian based on well data.
The Lower Rotliegend distribution within the Entenschnabel area is taken from Geluk (2007), and a present-day thickness of 100 m is assigned to the Lower Rotliegend layer. Upper Carboniferous sediments have not been encountered in wells and their distribution is not indicated in the maps of Doornenbal & Stevenson (2010); we therefore assume that the Saalian erosion event removed previously deposited Stephanian and Westphalian sediments throughout the study area. The palaeothicknesses of the Westphalian and Stephanian sediments are each 500 m.
The domal uplift during the Mid–Late Cimmerian phase resulted in widespread erosion in large parts of the study area. The intensity and amount of erosion varied across the area, depending on the structural element (basin, high or platform). The erosion event began during the Bathonian (165 Ma) and ended in the Callovian (156 Ma). Layers that were affected by this erosional phase include the Lower and Middle Buntsandstein, Upper Buntsandstein, Muschelkalk, Keuper, Lower Jurassic and Middle Jurassic.
A major Late Cretaceous inversion phase in the North Sea Basin resulted in uplift and erosion of the sedimentary fill in several pulses. Upper Jurassic, Lower Cretaceous and Upper Cretaceous sediments were partly eroded in the central and NW parts of the study area. Here, erosion during the Late Cretaceous was active between 98 and 89 Ma. The Upper Cretaceous succession was therefore subdivided into three units (Cenomanian–Turonian, Coniacian–Santonian and Campanian–Danian) to account for a local erosional phase on the Step Graben System in the NW during the Coniacian–Santonian. During this phase, sedimentation continued in the Central Graben.
The final erosion, during the Mid-Miocene and with a duration of 3 Ma, is included with an eroded thickness of 30 m. Periods of sedimentation, erosion and non-deposition are summarized in Table 1. The lithology assigned to each of the layers is based on generalized well descriptions within the study area (Table 1).
Three layers have been defined as source rocks in the model. The Namurian–Visean coals are included as a gas-prone source rock. This source rock was assigned a thickness of 10 m, a TOC content of 30% and an HI value of 150 mg HC g⁻¹ TOC.
Two Jurassic layers are included as oil-prone source rocks, namely the Posidonia Shale and the Upper Jurassic Kimmeridge Clay (Hot Shale). A thickness of 15 m is assigned to the upper section of the Lower Jurassic formation; this thickness is described by De Jager & Geluk (2007) for the Posidonia Shale in the Dutch North Sea. The Posidonia Shale is characterized in our model by an average TOC content of 8% and an HI value of up to 400 mg HC g⁻¹ TOC. The thickness of the Bo Member (Hot Shale) of the Upper Jurassic varies strongly in the Danish Central Graben, from less than 10 m in the southern salt dome province to over 100 m in the western part of the Danish Central Graben, and it extends into the Entenschnabel with about 85 m at the location of well DUC-B-1 (Ineson et al. 2003). In our basin model, the Upper Jurassic layer, with a maximum thickness of approximately 2200 m, is subdivided into two layers based on well analyses and literature. The upper layer has a thickness of 25 m for the Hot Shale source rock, and the lower layer has a variable thickness for the underlying Kimmeridgian and Oxfordian strata. The Hot Shale source rock is defined by a TOC content of 8% and an HI value of 430 mg HC g⁻¹ TOC.
Hydrocarbon generation for the Posidonia Shale and Hot Shale was calculated using the kinetic dataset TII North Sea of Vandenbroucke et al. (1999). Hydrocarbon generation for the Lower Carboniferous gas source rock (Namurian -Visean coal) was calculated with the TIII kinetic after Burnham (1989). Additionally, kinetics based on Pepper & Corvi (1995) type TII, B and Di Primio & Horsfield (2006) for Kimmeridge Clay (BH263) were used to study the influence of kinetic datasets on hydrocarbon generation. Figure 5 shows a NW-SE-trending 2D cross-section through the 3D model with assigned source rock intervals, reservoirs and possible seals.
Boundary conditions
Palaeowater depth (PWD). The palaeowater depth (PWD) curve used in the model was constructed based on PWD trends of adjacent areas in the southern Dutch Central Graben (Verweij et al. 2009; Abdul Fattah et al. 2012a, b). The PWDs were allowed to vary in time but were kept constant over the entire area at any given time. The PWDs range between 0 and 200 m, with peaks during …
Sediment–water interface temperature (SWIT). The palaeosurface temperature at the sediment–water interface was calculated with an integrated software tool that takes into account the PWD and the palaeolatitude of the study area (Wygrala 1989) (Fig. 6b).
Basal heat flow. One heat flow trend was assigned to the Step Graben System and a different one to the Central Graben. The thermal and maturity history of the Central Graben area includes a heat flow peak of 80 mW m⁻² during the Early Permian, attributed to rifting and also manifested in volcanic activity in the Central European Basin, including the North Sea region (Fig. 6c). The value used corresponds to that given by Kearey et al. (2009) for a wide rift mode. During the Early–Late Triassic (246 Ma), a second peak was assigned (Fig. 6c). The major extensional phase during the Late Jurassic formed the present-day Central Graben geometry. Subsequent Cretaceous and Cenozoic subsidence was largely controlled by a phase of post-rift thermal subsidence. The compressional stress regime resulted in several phases of basin inversion during the Late Cretaceous; however, this event had only a minor impact on the heat flow history, and we assigned a heat flow value of 65 mW m⁻² for this time period. The present-day heat flow was calibrated based on temperature and vitrinite reflectance data. The heat flow trend for the Step Graben System is the same as that for the Central Graben until the Middle Jurassic. The Late Jurassic rifting of the Central Graben is omitted in the Step Graben heat flow trend (Fig. 6c, dotted line), and the values decrease constantly from the Middle Jurassic to the present-day value of 52 mW m⁻². Figure 7 shows the fit between measured and modelled vitrinite reflectance values for three wells with vitrinite reflectance values measured over a wide depth range.
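The basin simulator solves the full transient heat equation with layer-dependent conductivities, but the first-order link between an assigned basal heat flow and subsurface temperature can be illustrated with a steady-state, single-conductivity geotherm, T(z) = T_surface + (q/k)·z. The conductivity, surface temperature and depth in the Python sketch below are placeholders, not model inputs from this study.

```python
def steady_state_temperature(depth_m, heat_flow_mw_m2, conductivity_w_mk=2.5,
                             surface_temp_c=8.0):
    """Steady-state temperature (deg C) at a given depth for a basal heat flow
    (mW/m^2), assuming one constant thermal conductivity (W/m/K) and no
    radiogenic heat production -- a strong simplification of the basin model."""
    q_w_m2 = heat_flow_mw_m2 * 1.0e-3            # mW/m^2 -> W/m^2
    return surface_temp_c + (q_w_m2 / conductivity_w_mk) * depth_m

# Present-day Step Graben heat flow (~52 mW/m^2) evaluated at 3 km depth:
print(steady_state_temperature(3000.0, 52.0))
```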
Burial history
The present-day German Central Graben area is dominated by major subsidence and sedimentation events during the Late Carboniferous, Late Permian–Early Triassic and the Late Jurassic. This is visualized by a 1D extraction of the burial history at the location of well C-16-1 (Fig. 8a). The presence of Zechstein evaporites greatly influenced the post-Permian structural and sedimentary development of the area (Fig. 5). The initial depositional thickness of the Zechstein Group in the 3D model is 700 m, while its present-day thickness varies from approximately 3000 m to only a few metres on structural highs. Significant subsidence occurred during the Late Jurassic (Figs 5 & 8). An additional, Late Cretaceous phase of rapid subsidence and sedimentation is distinct in the Central Graben area (Fig. 8a). The central and NW parts of the Entenschnabel have undergone less burial than the rest of the area (Fig. 8b, c). The current burial of the source rock units shows a decreasing trend from the Central Graben area towards the NW. Thus, the present-day burial depth of the Lower Carboniferous source rocks (Namurian–Visean) within the Central Graben differs by approximately 3000 m from that of those preserved within the NW part of the Entenschnabel (Fig. 8c). A phase of uplift and erosion affected the entire study area during the Late Carboniferous (Fig. 8). Significant uplift occurred during the Late Jurassic, visible in the NW part of the Entenschnabel at the location of well A-9-1 (Fig. 8c). As a result of significant pulses of inversion tectonics, a second important phase of tectonic uplift occurred during the Mid–Late Cretaceous (e.g. Fig. 8b).
Thermal and maturity history

Thermal history. The Namurian-Visean unit reached a maximum temperature of up to 250 °C during the Late Jurassic (Fig. 8). This temperature is related to the burial depth of the formation and the assumed heat flow value of between 80 and 85 mW m⁻² during the Late Jurassic-Early Cretaceous. The present-day temperature reaches up to 220 °C within the Central Graben area (Fig. 8a). The overlying reservoir clastics of the Upper Rotliegend reached temperatures of around 220 °C during the Late Jurassic-Early Cretaceous, while the present-day temperature field ranges between 200 and 210 °C. During the Late Cretaceous, the temperatures of the formations decreased as a result of the Sub-Hercynian basin inversion phase. A second temperature peak is reached at the present day in the entire study area due to the maximum burial of the sediments (Fig. 8a-c). The present-day temperatures of the Posidonia Shale and the Hot Shale layer in the entire study area vary between 117 and 80 °C (Fig. 8a-c).
Maturity history. Thermal calibration of the 3D model is based on present-day temperatures of six wells mainly located within the Central Graben area and vitrinite reflectance measurements of 16 wells covering different structures in the study area. Figure 7 shows a comparison of measured (black dots) and calculated (line) vitrinite reflectance and temperature data with depth. Three of the confidential wells are shown in Figure 7. Two of them are located in the Central Graben (Fig. 7a, b) and one is from the Step Graben System (Fig. 7e). A very good match is achieved between measured and calculated vitrinite reflectance values for the wells in the Central Graben area and on the Outer Rough High (Fig. 7a, b, e). Good agreement is achieved between the measured temperature data and simulated results at the location of two wells covering the Central Graben (Fig. 7c, d). The maturity evolution (vitrinite reflectance) and the hydrocarbon zones are illustrated in Figure 9 at three well locations. In addition, the hydrocarbon zones and transformation ratios of the Carboniferous and the Lower and Upper Jurassic source rocks are depicted in Figures 10-12, respectively, for four time steps in map view. The maturity history indicates that the deepest parts of the Namurian-Visean source rock unit had already entered the oil window during the Late Carboniferous (Fig. 9a). Maturity increased during the Triassic and the source rock reached the gas window during the Middle Triassic within the Central Graben (Fig. 9a). At the present day, the Lower Carboniferous Namurian-Visean layer is overmature in the John Graben (Figs 9 & 10). Within the central and northern parts of the Entenschnabel area, at the locations of wells A-9-1 and B-11-2, the Namurian-Visean source rock layer only enters the main oil-window range (Fig. 9b, c). The latter is also reached on the structural highs (Outer Rough High, Mads High and Schillgrund High), whereas in the grabens and basins in the north (Mads Graben and Outer Rough Basin) the gas window is reached (Fig. 10). During the Late Jurassic, the Posidonia Shale is mostly immature, with the exception of the John Graben where the early oil window is reached (Fig. 11a). Maturity increased during the Early Cretaceous and the wet-gas window was reached in the deepest parts of the John Graben (Fig. 11b); however, large parts of the Posidonia Shale in the Central Graben are still immature. From the Eocene onwards, most of the Posidonia Shale is in the oil window (Fig. 11c). Maximum maturity of the source rock in the whole study area is reached at the present day (Fig. 11d). The Upper Jurassic Hot Shale layer is in the oil window over the whole of the Entenschnabel at the present day, except for locally uplifted areas in the Central Graben where it is still immature (Fig. 12d). This source rock entered the oil window during the Late Cretaceous (Fig. 12a) and later, during the Oligocene (27 Ma), at the location of well C-16-1 (Fig. 9a). The maximum maturity (0.9% Ro) is reached in the Outer Rough Basin at the present day.
On the Outer Rough High (Fig. 9c, well A-9-1), the Posidonia Shale was removed by Mid-Cimmerian erosion and only the Hot Shale is preserved there, first reaching the oil window during the Late Miocene.
Hydrocarbon generation history
The transformation ratio is an indicator of hydrocarbon generation of the source rocks. The transformation ratios are calculated for the source rock units according to the assigned reaction kinetics (Vandenbroucke et al. 1999, TII North Sea; Burnham 1989, TIII), and are therefore more specific for source rocks than the general classification using oil and gas windows based on vitrinite reflectance values. Figures 10-12 show the transformation ratios (TR%) of the top of the three source rock layers for four time steps.
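To make the kinetics-based calculation concrete, the sketch below implements a generic parallel-reaction, first-order Arrhenius scheme of the kind used in basin modelling to integrate a transformation ratio along a burial temperature history. The frequency factor, activation-energy distribution and temperature path are illustrative placeholders only, not the published TII/TIII parameters used in this study.

```python
import numpy as np

# Generic parallel-reaction, first-order Arrhenius kinetics (illustrative only).
R = 8.314e-3                              # gas constant, kJ/(mol*K)
A = 1.0e14                                # frequency factor, 1/s (placeholder)
E = np.arange(48.0, 60.0, 2.0) * 4.184    # activation energies, kJ/mol (placeholder)
f = np.full_like(E, 1.0 / len(E))         # initial potential of each reaction

def transformation_ratio(temps_C, times_Ma):
    """Integrate first-order kinetics along a burial temperature history."""
    x = np.ones_like(E)                   # remaining generative potential per reaction
    for (T0, T1), (t0, t1) in zip(zip(temps_C[:-1], temps_C[1:]),
                                  zip(times_Ma[:-1], times_Ma[1:])):
        T = 0.5 * (T0 + T1) + 273.15      # mean absolute temperature of the step, K
        dt = abs(t1 - t0) * 3.15e13       # Ma -> seconds
        k = A * np.exp(-E / (R * T))      # rate constant of each reaction, 1/s
        x *= np.exp(-k * dt)              # first-order decay of remaining potential
    return 1.0 - np.sum(f * x)            # fraction of the potential converted

# Example: steady heating from 60 to 160 degC between 160 Ma and the present day.
times = np.linspace(160.0, 0.0, 17)
temps = np.linspace(60.0, 160.0, 17)
print(f"TR = {transformation_ratio(temps, times):.2f}")
```

Different published kinetic parameter sets plugged into such a scheme yield different transformation ratios for the same burial history, which is the sensitivity explored later with the TII, TIIB and BH263 kinetics.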
A remarkable difference is observed in the transformation ratio of the Namurian-Visean layer between the northern part (Outer Rough High), the Central Graben area and the southern part on the Schillgrund High (Fig. 10e-h). The model indicates that total transformation of organic material into hydrocarbons had already been reached within the Central Graben (John Graben) during the Late Jurassic (Fig. 10g). A transformation ratio of 80% is reached at the present day within the Outer Rough Basin at the border with the Danish North Sea in the NE (Fig. 10h).
Up to 96% of the organic matter of the Posidonia Shale layer was transformed during the Late Jurassic-Early Cretaceous within the deepest parts of the John Graben (Fig. 11e, f). The maximum of 97% is reached at the present day (Fig. 11h).
The Hot Shale source rock shows only low transformation ratios of less than 20% within the Central Graben area at the present day (Fig. 12e-h). There is a remarkable difference with the present-day transformation ratio of the Hot Shale layer in the Outer Rough Basin, where the transformation ratio reaches up to 78% (Fig. 12h). A significant increase in the transformation ratio is observed between the Oligocene (44%) and the present day (Fig. 12f, g).
Discussion
The aim of this study was to reconstruct the maturity evolution and petroleum generation of three potential source rocks in the NW German North Sea. Uncertainties in the model are introduced from various sources but can be studied with scenario calculations. We addressed the uncertainty of the source rock distribution of the Posidonia Shale, the influence of different reaction kinetics on the marginally mature Hot Shale and the influence of initial salt thicknesses.
In the simulations, we assumed that the Namurian -Visean source rock is present across the whole study area (Fig. 10). We also assigned source rock properties for the entire Hot Shale layer, which is partly eroded in the Step Graben System (Fig. 12). The distribution of the Posidonia Shale was greatly reduced by erosion during the Mid-Cimmerian tectonic phase. In the Step Graben System, only erosional remnants of the Posidonia Shale occur within the Mads Graben. In the Central Graben, the Posidonia Shale is present to a large extent but was eroded towards the Schillgrund High (Fig. 11). In the base model, we only assigned source rock properties to the Posidonia Shale where the Middle Jurassic is also present, thus reducing the possible kitchen area.
Burial history
1D burial histories indicate that the deepest burial occurs at the present day and is associated with the maximum maturity of the source rocks in the NW and SE parts of the study area (Figs 10-12). In the north, the current burial depth of the Lower Carboniferous source rocks is approximately the same as during the Jurassic (Fig. 9b, c).
An initial thickness of the Zechstein Group of about 700 m was calculated for the Dutch North Sea, and we used this value in our base model. From the Dutch North Sea in the north into the Entenschnabel, and approximately along the German-Danish border, the salt basin margin is approached, which leads to reduced Zechstein salt thicknesses. Therefore, we calculated models using different initial thicknesses of 500 and 900 m and the present-day thicknesses. The influence of different initial thicknesses on the present-day maturity of the three source rocks is negligible for all three models. Nevertheless, the salt thickness has an influence on migration, especially in the case of a thin salt layer, which can be more easily eroded or mobilized.
Thermal and maturity history
The modelled present-day heat flow is calibrated using measured vitrinite reflectance and temperature data in wells covering the study area. The matches between the measured and modelled calibration data suggest that the combination of the present-day heat flow and the thermal conductivity of the major lithologies is acceptable (Fig. 7a, b, e). The derived present-day heat flow of 52 mW m⁻² (Fig. 6c) in the main model scenario is similar to values used in publications on surrounding areas of the German Central Graben: for example, Beha et al. (2008: 52 mW m⁻²). Based on these data, the thermal history of the Central Graben in the SE portion of the Entenschnabel area is found to be distinctly different to that in the central and NW parts of the Entenschnabel area (Fig. 7e). Therefore, for reconstruction of the heat flow history from the beginning of basin formation until the present, two different scenarios for the Step Graben System and the Central Graben have been applied (Fig. 6c). The main difference is a Late Jurassic heat flow peak of 85 mW m⁻² for the Central Graben, whereas the heat flow for the Step Graben System does not exceed 57 mW m⁻² (Fig. 6c). Thus, using a heat flow that decreases during the Late Jurassic (Fig. 6c, dotted line) to a present-day value of 52 mW m⁻² within the Step Graben System, the modelled results for the Namurian-Visean source rock interval (Fig. 7e, dashed line) show a better fit to the measured vitrinite reflectance data (Fig. 7e, solid line). This shows that the Central Graben area experienced higher heat flow values during rifting in the Late Jurassic than the central and NW parts of the Entenschnabel.
Hydrocarbon generation (Central Graben)
The Namurian-Visean source rock in the Central Graben entered the early oil window in the Late Carboniferous and reached an overmature state by the Late Jurassic within the John Graben (Fig. 10). This implies that, if there are gas accumulations from Namurian-Visean source rocks, the conditions for preservation of gas generated 150 myr ago must have been favourable for a very long time, and that the reservoirs were not destroyed by diapirism and inversion tectonics in the Late Cretaceous. Maturity and hydrocarbon generation models of the southern part of the Dutch Central Graben for the Carboniferous source rocks (Westphalian) describe a major phase of hydrocarbon generation during Late Jurassic and Early Cretaceous times (Verweij et al. 2009). On the graben shoulder (Schillgrund High), a transformation ratio of around 60% is calculated at the present day. Similar transformation ratios were calculated for the Carboniferous source rocks on the Schillgrund High, already reaching 50% during the Middle Permian (Heim et al. 2013). It is possible that gas accumulations exist below the Zechstein salt layer in Rotliegend sediments and/or volcanics at depths of generally more than 5500 m. These accumulations might have been affected by salt diapirism (>10 salt diapirs in the Central Graben) and Late Cretaceous inversion. This could lead to leakage and dismigration of gas through salt windows and faults, or to restructuring of reservoirs.
The calculated transformation ratios indicate that within the John Graben the Posidonia Shale started generating hydrocarbons during Late Jurassic-Early Cretaceous times (Fig. 11a, b). During this time interval, almost maximum transformation ratios were reached just before Late Cretaceous inversion tectonics and the associated tectonic uplift. This is also in agreement with the hydrocarbon generation results of Verweij et al. (2009) for the southern Dutch Central Graben. Hydrocarbon generation in the study area resumed during the Paleocene because of continuous burial and has continued until the present.
For the base model, we assigned source rock properties for the Posidonia Shale only where the Middle Jurassic sediments are present, to be on the conservative side regarding the extent of the kitchen area (Fig. 11h). It is possible that the Posidonia Shale is also present elsewhere, implying a larger kitchen area. The results of this optimistic model are shown in Figure 11h (inset).
Locally, within the Central Graben, where Triassic-Jurassic fault systems were reactivated and accompanied by salt tectonics during the Late Cretaceous inversion phase, the Hot Shale was not buried deeply enough to reach a mature state for hydrocarbon generation (Fig. 12d, blue coloured areas). Within the Central Graben area, the Hot Shale Formation generally shows only low transformation ratios (<20%; Fig. 12e-h). High transformation ratios of up to 60% are calculated only locally in rim synclines around salt diapirs in the Central Graben, reflecting the high thermal conductivity of the Zechstein salt (Fig. 12e-h). Thus, only local expulsion from the Upper Jurassic Hot Shale source rock can be expected, which might be enough to explain local oil shows in wells. The transformation ratio in the base model was calculated according to the TII North Sea kinetics of Vandenbroucke et al. (1999). To assess the influence of the reaction kinetics on the transformation ratio, we calculated two additional models using the TIIB (Pepper & Corvi 1995) and BH263 (Kimmeridge Clay: Di Primio & Horsfield 2006) kinetics. All three kinetics give transformation ratios of between 3 and 8% up until the Miocene. From the Miocene to the present, a significant difference is calculated, with the TIIB kinetics of Pepper & Corvi (1995) reaching 20% at the present day, and the TII North Sea kinetics of Vandenbroucke et al. (1999) and BH263 of Di Primio & Horsfield (2006) reaching 10 and 12%, respectively, within the John Graben (Fig. 13). This demonstrates the impact of kinetic models on hydrocarbon generation and source rock transformation ratios. In order to reach the main oil window, and accordingly higher transformation ratios of about 50%, within the Central Graben, the Hot Shale layer would have to have been buried at least 500 m deeper, as indicated by the simulation results.
Maturity history and hydrocarbon generation (Step Graben System, central and NW Entenschnabel)
In the NW part of the study area, the Mesozoic formations are thinner, and the degree of uplift and erosion of Mesozoic strata was more pronounced compared with the Central Graben area (Fig. 8a-c). This resulted in maturity variations over the area, mainly attributed to differences in the burial history of the source rocks (Fig. 9a-c). In the NW part of the Entenschnabel, the Namurian-Visean source rock shows a present-day maturity in the range of 0.85-3% Ro (Fig. 10c). The calculated transformation ratios between wells A-9-1 and B-11-2 range between 65 and 75% at the present day. Values of about 88% are reached in the Mads Graben and 80% in the Outer Rough Basin (Fig. 10h). We assigned the Namurian-Visean source rock as a fluvial and deltaic kerogen type III facies with coal-bearing sediments.
Lower Jurassic sediments, including the Posidonia Shale, are only preserved within the Mads Graben and are elsewhere widely eroded within the Step Graben System as a result of the Mid-Cimmerian erosional events. Only in the southern Mads Graben does a small part show transformation ratios above 20%.
The maturity of the Hot Shale source rock increases from the Central Graben area towards the NW. Within the Outer Rough Basin, the Hot Shale is buried deeper than in the Central Graben and therefore reaches the highest maturities there. This is in contrast to the other two source rocks (Namurian -Visean and Posidonia Shale) which reach highest maturities in the Central Graben. Together with the distribution of the Bo Member interpreted by Ineson et al. (2003), which extends into the Entenschnabel area and is verified by well DUC-B-1 (Fig. 14, dashed red line), a part of the Outer Rough Basin can be considered as a hydrocarbon kitchen. So far this part of the Outer Rough Basin has not been targeted by wells.
Migration of hydrocarbons
Fig. 13. Transformation ratio evolution of the Hot Shale layer in the John Graben extracted from the 3D basin model for three different reaction kinetics after Pepper & Corvi (1995), Vandenbroucke et al. (1999) and Di Primio & Horsfield (2006). From the Miocene to the present, a difference is calculated with the TIIB kinetics (dashed line) reaching 20% at the present day, and the kinetics BH263 (Kimmeridge Clay, dotted line) and TII North Sea (solid line) reaching 12 and 10%, respectively.

Detailed petroleum migration and volumetric analyses were beyond the scope of this paper because a more detailed model would be needed that includes faults and more petrophysical data of carrier and reservoir layers, which were not available. Nevertheless, the results of the migration model give important information about where to focus more detailed studies. The simulation results indicate that the expulsion of gas from the potential Visean-Namurian source rock initiated in the Late Carboniferous. Peak expulsion of gas occurred before the Late Cretaceous inversion. The Lower Carboniferous source rock extends over the whole study area, and petroleum generation led to the formation of numerous gas accumulations in the Upper Rotliegend below the sealing Zechstein salt. The Upper Rotliegend is assigned a typical sandstone lithology, thus forming an excellent reservoir layer. By increasing the porosity of the Zechstein carbonate layer, the model produces gas accumulations at the location of the present-day A6-A gas field. This suggests that the present-day reservoir at the location of the A6-A gas field, which is located on the Mads High, was probably charged by the Namurian-Visean source rock. The sealing Zechstein layer prevents any migration of Carboniferous gas from the Rotliegend into Triassic sediments in our model.
The Posidonia Shale expelled about 90% of the generated petroleum into the overlying sediments in the Central Graben. Expulsion initiated during the Late Jurassic and has increased until the present.
Expulsion of hydrocarbons from the Hot Shale source rock started during the Late Cretaceous. A more detailed study that includes faults and their properties, as well as a complete migration simulation, is required to simulate migration and trapping.
Conclusions
Structural data from the Entenschnabel, which is the NW part of offshore Germany, are used for a 3D reconstruction of the burial and temperature history, source rock maturity, and timing of hydrocarbon generation. The study focused on three potential source rock intervals: the Lower Carboniferous (Namurian-Visean) coal-bearing source rocks; the marine Lower Jurassic Posidonia Shale; and the Upper Jurassic Hot Shale:
† The basin modelling, calibrated with vitrinite reflectance data from 16 wells and temperature data from six wells, resulted in a present-day basal heat flow of 52 mW m⁻² for the whole model area. The thermal history of the Central Graben in the SE portion of the Entenschnabel area is distinctly different to that in the central and NW parts. Therefore, two different heat flow scenarios for the Step Graben System and the Central Graben, respectively, have been applied. The main difference is a Late Jurassic heat flow peak of 85 mW m⁻² for the Central Graben, whereas the corresponding value during the Late Jurassic for the Step Graben System does not exceed 57 mW m⁻².
† The Namurian-Visean source rock had already entered the hydrocarbon generation zones during Late Carboniferous times throughout the area. Within the John Graben of the Central Graben, the overmature state was reached in the Late Jurassic. The Carboniferous source rock charged the A6-A gas field in the northern Entenschnabel, and the gas accumulation could …
| 9,742.6 | 2017-03-17T00:00:00.000 | ["Geology", "Environmental Science"] |
Fiber-Based High-Power Supercontinuum and Frequency Comb Generation
Ultrafast optics has been a rich research field, and picosecond/femtosecond pulsed laser sources find many applications in both fundamental research and industry. Much attention has been paid to fiber lasers in recent decades, as they offer various advantages over their solid-state counterparts: compact size, low cost, and great stability owing to the inherent stability and safety of the waveguide structure, as well as high electrical-to-optical conversion efficiency. Fiber-based sources of ultrashort pulses with high peak and average power have become extremely important for high-precision laser processing, while sources whose carrier-envelope offset and repetition rate are stabilized can serve as laser combs with applications covering many research areas, such as precision spectroscopy, optical clocks, and optical frequency metrology. For frequency-comb applications, four parts must be considered: the fiber laser, broadband supercontinuum generation, nonlinear power amplification, and repetition rate stabilization. This chapter gives a brief introduction to these four technologies and the corresponding experimental setups, with emphasis on recently developed techniques such as divided-pulse amplification (DPA). Detailed descriptions of the experimental configurations as well as theoretical analyses of the observed phenomena are also included.
Introduction
Ultrafast laser sources and their applications, such as high-power supercontinuum and frequency comb generation, have gained much attention in recent decades [1][2][3][4][5][6][7]. High-power fiber lasers have spurred a rapid growth of industrial applications including laser cutting, laser marking, and so on [8]. Moreover, the supercontinuum and the frequency comb are considered breakthroughs in the laser field, with applications covering precision spectroscopy, astronomical observations, and optical frequency metrology [9,10]. This chapter describes, from an experimental point of view, ultrashort-pulse laser oscillators, high-power nonlinear fiber amplifiers, supercontinuum generation, and frequency combs. Section 2 shows the performance of two types of mode-locked lasers. The first one, consisting of bulk and fiber optical components, is mode-locked via the nonlinear polarization rotation (NPR) mechanism at 1.03 μm. The other one, operating at 1.55 μm, is mode-locked by a nonlinear amplified loop mirror (NALM) with polarization-maintaining (PM) fiber components in order to overcome environmental perturbations and thus maintain long-term operation. Section 3 introduces a practical method (spectral tailoring), which facilitates supercontinuum generation in a single-mode fiber amplifier at 1.03 μm with few-picosecond laser pulses. The second part of this section introduces broadband supercontinuum generation (from 950 to 2200 nm) by injecting pulses with 72-fs temporal duration, 150-mW average power, and 60-MHz repetition rate at 1560 nm into a 20-cm-long PM-HNLF. Section 4 gives a brief introduction to divided-pulse amplification (DPA). To generate transform-limited pulses at 1.55 μm, DPA with polarization-based pulse division was employed to overcome the gain-narrowing effect and control the nonlinear spectral broadening in an anomalous-dispersion Er-fiber amplifier. An average power as high as 500 mW at 1560 nm is achieved with ×8 replicas. Moreover, the highest frequency-doubling conversion efficiency reached 56.3% by using a periodically poled lithium niobate (PPLN) crystal at room temperature. Section 5 discusses an all-optical control method via resonantly enhanced optical nonlinearity (or pump-induced refractive index change, RIC) for high-precision repetition rate stabilization. The standard deviation (SD) of the repetition rate can be reduced to a record level of <100 μHz by using the RIC method in a PM figure-eight laser cavity.
Fiber laser
Fiber lasers offer several practical advantages, such as excellent spatial-mode quality, effective heat dissipation, and a flexible optical path, and have recently become attractive laser sources in both scientific research and industrial applications. In particular, mode-locked fiber lasers with ultrashort pulse duration and high repetition rate have attracted a lot of attention for their applications in optical sensing, optical communication, optical metrology, and biomedical imaging and processing [11,12]. Therefore, various femtosecond/picosecond mode-locked lasers have been constructed and developed. As mode-locked lasers are often affected by environmental perturbations (mechanical vibration and temperature fluctuation), robust and stable oscillators with compact designs are urgently needed. In this section, we present a compact femtosecond fiber laser at 1.03 μm built with integrated fiber optical components. The shortest dechirped pulse duration reaches 81 fs for a net cavity dispersion value close to −0.001 ps². The other part of this section describes a self-started Er-doped laser oscillator, which is mode-locked by a NALM with a PM-fiber configuration. By optimizing the net dispersion, the buildup time can be reduced from 8 min to the millisecond level.
Operation regime of mode-locked lasers
As is well known, the main features of a mode-locked fiber laser depend on the pulse evolution process, which is governed by the group-velocity dispersion (GVD) and the nonlinearity in optical fibers. According to the net intra-cavity dispersion, the pulse-shaping process can be roughly distinguished into four different regimes: the soliton regime, stretched-pulse regime, parabolic-pulse regime, and giant-chirp pulse regime, corresponding to all-anomalous dispersion, normal-anomalous dispersion, all-normal dispersion, and large normal dispersion, respectively. Due to the equilibrium between Kerr nonlinearity and GVD, pulses that propagate in an all-anomalous-dispersion laser cavity remain unchanged in the form of a fundamental soliton [13,14]. In dispersion-managed laser cavities, the negative dispersion is compensated by positive dispersion and thus a stretched pulse forms. When the net cavity dispersion is optimized to zero, significant variations in pulse duration can be observed [15,16]. When the pulse operates in the all-normal-dispersion regime, where laser gain, self-phase modulation, and dispersion act together, spectral/temporal filtering effects force a linear chirp on the pulse, so that a similariton forms [17][18][19]. In ultra-long laser cavities, giant-chirped oscillation can be realized with an ultralow repetition rate but high pulse energy [20][21][22][23]. In this section, we designed a compact ultrafast Yb-doped fiber laser with integrated optical components. By integrating the wavelength division multiplexer and optical isolator with collimators, the fiber loop was simplified. Self-started mode-locking could be realized by setting appropriate polarization angles of the four intra-cavity wave plates. Due to the normal dispersion of fiber at 1.0 μm, a transmission grating pair with 1250 l/mm was used to provide adjustable anomalous dispersion. As a result, pulses of 81-fs temporal duration with 65-MHz repetition rate and 0.5-nJ pulse energy were produced.
The mode-locking procedure can be explained with Figure 1. Two polarization controllers and a polarization-sensitive isolator (PSI) are used as the key elements for mode-locking. This combination acts as a virtual saturable absorber, which absorbs the low-intensity tails of the pulse and transmits the high-intensity part such that the pulse is shortened. The pulse with linear polarization changes to elliptical polarization by twisting the polarization controller. As mentioned, self-phase modulation (SPM) or cross-phase modulation (XPM) can induce energy coupling between two orthogonal polarizations. Moreover, strong nonlinear polarization rotation is produced by the high gain in the active fiber. Finally, another polarization controller is used to modify the polarization state so that the central part of the pulse passes through the PSI [24].
In our experiment, a Yb-doped fiber laser shown in Figure 2(a) was first constructed without dispersion compensation elements. Three intra-cavity wave plates, including two quarter-wave plates, QWP1 and QWP2, and one half-wave plate, HWP, were set at appropriate polarization angles to realize self-started mode-locking. The pigtails of the fiber components are Hi1060 fiber with a GVD of ~26 fs²/mm and a TOD of ~41 fs³/mm, while the GVD of the active fiber is 39 fs²/mm. The repetition rate and pulse duration were measured to be 70 MHz and 13 ps, respectively. By external-cavity dechirping, the pulse can be compressed to 170-fs duration, but with an obvious pedestal. The autocorrelation of the pulse before and after dechirping is shown in Figure 3. Secondly, a transmission grating pair was used to manage the intra-cavity dispersion of the Yb-fiber laser, as shown in Figure 2(b). The quarter-wave plate, QWP3, was used to impose a 90° polarization rotation on the laser pulses double-passing the grating pair. The soliton, stretched-pulse, and all-normal dispersion regimes can be reached by optimizing the distance between the gratings. Figure 3(d) compares the various spectral shapes obtained with different grating separations. As shown in Figure 3(c), the shortest pulse duration was measured to be 81 fs. The black curve in Figure 3(d) represents the broadest spectrum, with a 10-dB bandwidth of 100 nm. The uncompensated phase was mainly caused by the accumulated high-order dispersion in the fibers as well as in the intra- and extra-cavity grating pairs.
The standard repetition rate of commercially available fiber lasers is typically 80 MHz, with optional designs from 20 to 250 MHz. In order to combine both high pulse energy and high average power, a 10-MHz repetition rate is the best choice for applications. When the repetition rate is lower than 10 MHz, a pulse picker has to be used between the laser oscillator and the succeeding amplifier.
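As a quick sanity check on this trade-off, the sketch below computes the pulse energy at a fixed average power for a few repetition rates; the 1-W average power is an illustrative assumption, not a result from this chapter.

```python
# Pulse energy vs. repetition rate at fixed average power (illustrative numbers).
P_avg = 1.0                     # average power, W (assumed for illustration)
for f_rep in (80e6, 20e6, 10e6):
    E_pulse = P_avg / f_rep     # pulse energy, J
    print(f"f_rep = {f_rep/1e6:.0f} MHz -> pulse energy = {E_pulse*1e9:.1f} nJ")
# At 10 MHz the pulse energy is 8x that at 80 MHz for the same average power.
```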
In this section, we introduce a PM figure-eight laser cavity, which is the best option for an oscillator operated at a 10-MHz repetition rate. Figure 4(a) shows the experimental setup of the figure-eight laser cavity. The linear loop comprises a 980/1550 nm wavelength division multiplexer, a segment of Er-doped fiber (PM-ESF-7/125, Nufern), an isolator, a 2-nm bandpass filter at 1550 nm, and an output coupler CP2 with a splitting ratio of 20:80. The active gain in the linear loop compensates the cavity losses and facilitates self-started mode-locking. The bandpass filter is used to block longer wavelengths (Raman self-frequency shift) and reduce the temporal width of the pulse so that it is self-consistent. Pulses from the linear loop are coupled into the NALM via CP1 with a splitting ratio of 45:55.
Over-pumping with three LDs was applied to provide enough power for self-started mode-locking. Interestingly, the buildup time of mode-locking was found to be closely related to the net cavity dispersion. When the net dispersion was set to −0.115 ps², a buildup time as long as 8 min was observed. After optimizing the net dispersion to about −0.062 ps², the buildup time dramatically decreased to the millisecond level. Furthermore, we recorded the mode-locked pulse trains triggered by a square wave with 5-Hz modulation frequency, which was simultaneously used to drive LD3. Figure 4(c) shows two adjacent periods with 50-ms pump duration; the corresponding buildup times were measured to be 53 and 6 ms, respectively, exhibiting a certain randomness. From the experimental results, the mode-locking buildup time is a random value within a certain range, which is related to the net cavity dispersion.
Interestingly, multiple-pulse operation was observed when mode-locking was established, as shown in Figure 4(b). The peak-power clamping effect originating from the Sagnac mechanism resulted in the formation of pulse bunching [29]. Stable single-pulse operation could be obtained by decreasing the pump power. In the single-pulse regime, the 5-min power stability was measured to be 0.26%, as shown in the inset of Figure 4(d).
Broadband supercontinuum
In recent years, supercontinuum (SC) generation has attracted much attention for its applications in optical coherence tomography, stimulated emission depletion microscopy, dense wavelength-division-multiplexing (DWDM) optical networks, and frequency comb generation [30][31][32][33]. In this section, several nonlinear optical effects that facilitate SC generation, such as SPM, XPM, four-wave mixing (FWM), and stimulated Raman scattering (SRS), are first discussed. Secondly, a spectral filtering method is demonstrated to be an effective way for broadband supercontinuum generation in the picosecond regime [34]. By spectrally filtering a linearly chirped picosecond pulse with a 1-nm bandwidth filter installed between two Yb-doped single-mode preamplifiers, pulse shortening and high peak power are achieved, so that an octave-spanning SC with a bandwidth of 650 nm (from 750 to 1400 nm) and 10-dB peak-to-peak flatness was obtained at an output average power of 190 mW. Thirdly, an SC covering 950 to 2200 nm is generated in a 20-cm-long PM HNLF by injecting 72-fs pulses with 150-mW average power and 60-MHz repetition rate at 1.56 μm. Furthermore, an inline f-2f interferometer, including a PPLN for frequency doubling and a PM-fiber delay line, is used to generate the carrier-envelope offset signal (f_ceo).
Nonlinear effects in optical fibers
Most nonlinear effects in optical fibers are attributed to nonlinear refraction, which refers to the intensity dependence of the refractive index. In particular, the lowest-order nonlinear effects in optical fibers originate from the third-order susceptibility χ⁽³⁾, which governs four-wave mixing, the Raman effect, third-harmonic generation, and polarization properties [24].
This section does not focus on theoretical issues in depth. In brief, the refractive index of the optical fiber can be described by the following equation:

n = n₀ + n₂|E(t)|²  (1)

where n₀ is the linear part and n₂|E(t)|² is the nonlinear part.
An interesting manifestation of the intensity dependence of the refractive index in optical fibers occurs through SPM. When the input pulse is of low intensity, the refractive index remains a constant n₀. As the input intensity increases, the refractive index acquires a nonlinear change with the intensity I. Hence, an additional phase shift is produced:

φ_NL(t) = (2π/λ) n₂ I(t) L  (2)

where L is the propagation length. This can be understood as an instantaneous optical frequency change from the central frequency:

δω(t) = −∂φ_NL(t)/∂t  (3)

Therefore, new spectral components are generated and a time-dependent frequency chirp is produced.
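As a minimal numerical sketch of Eqs. (2)-(3), the snippet below evaluates the SPM phase and the resulting instantaneous frequency shift for a Gaussian pulse; the pulse, fiber length, and mode-area values are illustrative assumptions, not parameters taken from this chapter.

```python
import numpy as np

# SPM-induced phase (Eq. (2)) and frequency chirp (Eq. (3)) for a Gaussian pulse.
lam = 1.03e-6                  # wavelength, m
n2 = 2.6e-20                   # nonlinear-index coefficient of silica, m^2/W
L = 2.0                        # fiber length, m (assumed)
P0 = 1e3                       # peak power, W (assumed)
Aeff = 30e-12                  # effective mode area, m^2 (assumed)

t = np.linspace(-20e-12, 20e-12, 4001)     # time grid, s
T0 = 5e-12                                 # 1/e half-width of the pulse, s
I = (P0 / Aeff) * np.exp(-(t / T0) ** 2)   # intensity profile, W/m^2

phi_nl = (2 * np.pi / lam) * n2 * I * L    # Eq. (2): nonlinear phase, rad
delta_omega = -np.gradient(phi_nl, t)      # Eq. (3): instantaneous frequency shift, rad/s

print(f"peak nonlinear phase: {phi_nl.max():.2f} rad")
print(f"max frequency shift: {delta_omega.max()/(2*np.pi)/1e9:.1f} GHz")
```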
Another widely studied nonlinear effect is XPM, which leads to asymmetric spectral and temporal changes for two co-propagating optical fields with different wavelengths or orthogonal polarizations. The contribution of the nonlinear phase shift induced by XPM is twice that of SPM. Therefore, the nonlinear part Δn_j induced by the third-order nonlinear effects is given by (j = 1, 2):

Δn_j = n₂ (|E_j|² + 2|E_{3−j}|²)  (4)

Eq. (4) shows that the refractive index of the optical medium seen by an optical field inside a single-mode fiber depends not only on the intensity of that field but also on the intensity of the other co-propagating field [35]. As the optical fields propagate inside the fiber, an intensity-dependent nonlinear phase shift builds up:

φ_j^NL = (2πL/λ) n₂ (|E_j|² + 2|E_{3−j}|²)  (5)

The first term is related to SPM while the second term is related to XPM.
Stimulated Raman scattering (SRS) is an important nonlinear process that can produce red-shifted spectral components. Once the spectrum of the input pulse is broad enough, the Raman gain can amplify the long-wavelength components of the pulse, with the short-wavelength components acting as pumps, and the energy appears red-shifted. The longer the propagation fiber, the more red-shifted spectral components can be generated. The red-shifted components are called the Stokes wave. The initial growth of the Stokes wave can be described by

dI_s/dz = g_R I_p I_s  (6)

where I_s is the Stokes-wave intensity, I_p is the pump-wave intensity, and g_R is the Raman-gain coefficient, which is related to the cross section of spontaneous Raman scattering.
The Raman-gain coefficient g_R(Ω) is the most important factor describing SRS. Here Ω represents the frequency difference between the pump wave ω_p and the Stokes wave ω_s. In the case of silica fibers, the Raman-gain spectrum is found to be very broad, extending up to approximately 40 THz. For a pump wavelength of 1.5 μm, the peak gain is g_R ≈ 6 × 10⁻¹⁴ m/W and the corresponding frequency downshift is about 13.2 THz.
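As a quick check, the sketch below converts the 13.2-THz downshift quoted above into the corresponding Stokes wavelength for a 1.5-μm pump; the numbers come from the text, the arithmetic is ours.

```python
# Stokes wavelength implied by the ~13.2-THz Raman downshift for a 1.5-um pump.
c = 2.998e8                    # speed of light, m/s
lam_pump = 1.5e-6              # pump wavelength, m
raman_shift = 13.2e12          # frequency downshift, Hz

nu_pump = c / lam_pump                       # pump frequency, Hz
lam_stokes = c / (nu_pump - raman_shift)     # Stokes wavelength, m
print(f"Stokes wavelength ~ {lam_stokes*1e9:.0f} nm")   # roughly 1606 nm
```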
When a supercontinuum is generated in an optical fiber, SPM, XPM, and SRS are always accompanied by FWM. In optical fibers, FWM transfers energy from the pump wave (ω_p) to two other waves in the frequency domain, a blue-shifted wave (anti-Stokes, ω_as) and a red-shifted wave (Stokes, ω_s). Once the phase-matching condition Δk = 2k(ω_p) − k(ω_s) − k(ω_as) = 0 is satisfied, the Stokes and anti-Stokes waves can be amplified from noise or from an incident signal at ω_s or ω_as, respectively [36,37]. Therefore, the FWM process is used to produce spectral sidebands for supercontinuum generation.
Supercontinuum generation
SC is a powerful laser source for many applications, such as nonlinear microscopy, optical coherence tomography, and frequency metrology [38][39][40]. Nowadays, more than one octave of SC can easily be generated with a length of PCF, and the average power can reach tens of watts [41,42]. When ultrashort optical pulses propagate through a PCF, the combination of SPM, XPM, SRS, and FWM is responsible for the spectral broadening. Generally speaking, the features of the SC depend on whether the incident laser wavelength λ is located below, close to, or above the zero-dispersion wavelength λ_D of the PCF. In the anomalous-dispersion regime (λ > λ_D), where β₂ < 0, soliton effects dominate. If λ nearly coincides with λ_D, β₂ ≈ 0, β₃ dominates, and the phase-matching condition of FWM is approximately satisfied. In the normal-dispersion regime, β₂ > 0, GVD and SPM effects dominate the SC generation. From the time-domain point of view, SPM and soliton effects dominate SC generation for femtosecond (typically <1 ps) pump pulses, while FWM and SRS contribute to the spectral broadening for pulses of tens of picoseconds.
Spectrally filtered seed for broadband supercontinuum generation in single-mode fiber amplifiers
There are several methods to extend the SC spectrum. Considerable spectral broadening can be observed with a high-power incident laser; high average/peak powers facilitate CW and pulsed SC generation [43][44][45]. Besides, the SC bandwidth can also be increased by tapering PCFs. A flat (3 dB) spectrum from 395 to 850 nm was achieved in a tapered fiber with a continuously decreasing ZDW [46]. In this section, we demonstrate an effective method for broadband SC generation, which is valid in normal-dispersion fiber amplifiers. By spectrally filtering the up-chirped pulse at 1028 nm with a 1-nm bandpass filter, a bandwidth as broad as 650 nm (from 750 to 1400 nm) within 10-dB peak-to-peak flatness is obtained at an output power of 190 mW.
The experimental setup is shown in Figure 5(a). The SC laser source consists of a picosecond mode-locked laser oscillator, a spectral filter, two-stage single-mode amplifiers, and a 2-m-long PCF with ZDW at 1.02 μm. The laser oscillator operated in the all-normal-dispersion regime with a repetition rate of 20 MHz. With 100-mW pump power, 25-mW average output power is delivered from the 30% port of the coupler. The pulse duration of the highly up-chirped pulse was measured to be 10 ps. A bandpass spectral filter with 1-nm bandwidth at 1036 nm is installed between the two single-mode fiber amplifiers. The transmission wavelength of the filter could be tuned from 1024 to 1036 nm by varying the incidence angle. For the large up-chirp, with 10-nm spectral width (see Figure 5(b)) and 10-ps temporal duration, corresponding to a time-bandwidth product of 28.3, the pulse can be greatly shortened by the filter. The shortest pulse duration of 2.9 ps was obtained with the filtering window at 1028 nm. After the second-stage amplifier, the laser pulses could be amplified to an average power of up to 190 mW with 400-mW pump power.
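The time-bandwidth product quoted above can be reproduced directly from the stated spectral width and duration; the short sketch below does so (the formula is the standard Δν = cΔλ/λ² conversion, the numbers come from the text).

```python
# Time-bandwidth product of the 10-nm, 10-ps up-chirped pulse around 1028 nm.
c = 2.998e8                       # speed of light, m/s
lam0 = 1.028e-6                   # central wavelength, m
d_lam = 10e-9                     # spectral width, m
dt = 10e-12                       # pulse duration, s

d_nu = c * d_lam / lam0**2        # spectral width in frequency, Hz
print(f"TBP = {d_nu * dt:.1f}")   # ~28, far above the ~0.44 limit of a Gaussian
```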
A 2-m length of silica-based PCF with ZDW at 1024 nm is directly spliced to the fiber end of SMFA2. Figure 5(c) shows the output spectra with different filtering windows. The spectra remain unchanged when the filtering window is at the shoulder of the spectrum, shown as the red curve (1024.9 nm) and the blue curve (1033.4 nm) in Figure 5(c). When the filtering window is located at the central wavelength of 1028 nm, the 10-dB bandwidth of the SC is extended to 650 nm (from 750 to 1400 nm), shown as the pink curve in Figure 5(c). Filtering windows above or below the central wavelength produce a less broad SC.
One octave supercontinuum for frequency comb generation
Broadband supercontinuum with a bandwidth of up to 1250 nm can also be provided by HNLFs pumped with spectrally tailored femtosecond pulses produced by erbium-doped power amplifiers. The schematic diagram of the experiment is shown in Figure 6(a). The laser system consists of an erbium-doped mode-locked fiber oscillator, a single-mode fiber amplifier (SMFA), and a 20-cm-long PM-HNLF. To improve the mode-locking stability, an electric polarization controller (EPC) is utilized to replace the conventional mechanical polarization controller, such that automatic and active control of mode-locking is accessible. By applying voltages on the three axes (x, y, and z) of the EPC, accurate control of the temporal duration, spectral shape, f_rep, and f_ceo can be achieved [47,48]. With the help of a PBS and an FRM, a dual-pass single-mode fiber amplifier with a bidirectional pump configuration was used to boost the average power to more than 150 mW and to reduce the environmental disturbance on the SMFA. The pulse duration at the output port was measured to be 2.84 ps. Additional PM-1550 fiber was used to dechirp the preamplified pulse to 72 fs (shown in Figure 6(b)). Therefore, considering a repetition rate of 60 MHz, the pulse peak power reached as high as 34.7 kW. Three types of HNLFs, namely NL 1550-ZERO, PM-HNLF, and zero-slope HNLF, were used to generate the supercontinuum by splicing the HNLFs directly to the dechirping fiber.
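The quoted peak power follows directly from the average power, repetition rate, and pulse duration; the sketch below repeats that estimate (it implicitly assumes a flat-top pulse shape, so it is an upper-bound-style estimate rather than an exact value).

```python
# Peak-power estimate from the values quoted in the text.
P_avg = 0.150            # average power, W
f_rep = 60e6             # repetition rate, Hz
tau = 72e-15             # dechirped pulse duration, s

E_pulse = P_avg / f_rep              # pulse energy, J
P_peak = E_pulse / tau               # peak power, W (flat-top assumption)
print(f"pulse energy = {E_pulse*1e9:.2f} nJ, peak power = {P_peak/1e3:.1f} kW")  # ~2.5 nJ, ~34.7 kW
```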
As shown in Figure 6(c), the 20-cm-long PM-HNLF with a nonlinearity of 10.5 W⁻¹ km⁻¹ achieved the broadest spectrum, covering 950 to 2200 nm, which is sufficiently broad to produce the f_ceo signal. The HNLF type should be taken into consideration as it influences the SC generation. As shown in Figure 7, a collinear setup was established for the detection of the f_ceo signal. The SC generated by the 20-cm-long PM-HNLF was coupled into free space via a lens (L1) with adjustable focal length. An inline f-2f interferometer, including a PPLN, several wave plates and lenses, and a PM-fiber delay line, is used to produce temporally overlapped components at 1.0 μm. The long-wavelength component of the SC at 2092 nm was frequency doubled to match the short-wavelength component at 1046 nm. After the PPLN, two lenses, L3 and L4, were used to couple the two components at 1046 nm back into PM-980 fiber. A half-wave plate, HWP2, is used to adjust the energy ratio between the fast and slow axes of the PM-980 fiber. The pulse transmitted along the slow axis experiences a delay relative to the pulse on the fast axis. With an optimized fiber length of 3.4 m, the differential delay between the fast and slow axes could be fully compensated [49]. Subsequently, a half-wave plate, HWP3, and a PBS were used to select the pulses used to generate the f_ceo signal on the APD. Finally, an f_ceo signal with a 28-dB signal-to-noise ratio was generated using this setup.
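For reference, the f-2f detection principle behind this setup can be written out explicitly (this is the standard textbook relation, not a result specific to this experiment): the comb line near 2092 nm at frequency f(n) = n·f_rep + f_ceo is frequency doubled to 2f(n) = 2n·f_rep + 2f_ceo and beaten against the comb line near 1046 nm at f(2n) = 2n·f_rep + f_ceo, so the detected beat note is 2f(n) − f(2n) = f_ceo.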
Nonlinear fiber amplifier
With increasing applications in frequency metrology, THz generation, and cataract surgery, the development of high-energy transform-limited pulse generation around 1.55 μm remains a fascinating area [50][51][52][53]. Owing to the limited output power available from laser oscillators, erbium-doped fiber amplifiers (EDFAs) are commonly used. Nevertheless, high-power amplification in an EDFA is inevitably accompanied by several unwanted effects, such as SRS and amplified spontaneous emission (ASE), which significantly degrade the temporal and spectral quality of the pulse [54]. Chirped-pulse amplification (CPA) provides an effective way to decrease the pulse peak power and avoid the nonlinearity in optical fibers [55][56][57][58]. In CPA, strong stretching and compression are used to extract more energy and avoid nonlinear distortion as well as damage. However, CPA is inevitably accompanied by the gain-narrowing effect and therefore hardly produces pulses with temporal durations of less than 400 fs [59].
Even though CPA has many advantages over other techniques for amplifying pulses around 1.55 μm, ~100-fs pulse duration with more than 10-nJ pulse energy is still a challenge because of the spectral gain-narrowing effect and nonlinear phase accumulation. Moreover, due to the involvement of bulk media, CPA is not suited for applications that require a compact and alignment-free laser source. A recently developed technique, divided-pulse amplification (DPA), opens up a new way for high-power laser pulse amplification [60][61][62]. In the DPA configuration, the initial pulse is divided into a sequence of lower-intensity replicas with orthogonal polarizations between successive replicas; the low-energy replicas are then amplified and recombined to create a high-energy pulse [61,63].
In this section, we mainly focus on DPA at 1.56 μm, where pulse amplification and compression can be carried out simultaneously so that a separate compressor is no longer necessary. The schematic diagram of the DPA is shown in Figure 8(a). The experimental setup is composed of a mode-locked fiber laser, a fiber stretcher, a single-mode fiber amplifier for preamplification, and a pulse divider as well as a double-clad fiber amplifier for main amplification. The Er-doped fiber laser with 80-MHz repetition rate shares the same configuration as in Figure 6(a), which takes advantage of the EPC to actively control the mode-locking. A photodiode and an electronic loop were applied to monitor and feedback-control the EPC for long-term stable operation. The fiber oscillator consisted of 1.74-m SMF28-e fiber with a dispersion parameter of 19 ps/nm/km and 0.82-m Er-doped fiber with a dispersion parameter of −51 ps/nm/km. The laser operated in the stretched-pulse regime and produced positively chirped pulses. As a result, with 200-mW pump power at 976 nm, the laser oscillator produces 5-mW average output power with 1.5-ps pulse duration and 28-nm spectral bandwidth, corresponding to a time-bandwidth product of 5.2.
A fiber stretcher is spliced to the output of the fiber oscillator to stretch the laser pulse and control the amount of frequency up-chirp. However, an over-long fiber inevitably introduces high-order dispersion that can hardly be compensated by the pulse-compression stage. In the current configuration, a double-pass fiber stretcher is used to reduce environmental perturbations; it consists of a fiber-coupled PBS with PM fiber at the input/output ports and non-PM fiber at the common port, a segment of non-PM dispersion compensation fiber with 6.0-μm mode-field diameter and −38 ps/nm/km dispersion at 1550 nm, and an FRM.
In our experiment, 6 m of dispersion compensation fiber was applied to stretch the pulses from the fiber oscillator. A dual-pass bidirectionally pumped single-mode fiber preamplifier was used to boost the average power to more than 100 mW to ensure efficient operation of the subsequent amplifiers. An FRM reflected the incident pulse to suppress ASE noise and rotated the polarization of the pulses by 90° to cancel all birefringence effects in the dual-pass amplifier. A fiber-based polarization beam splitter (F-PBS) was used to couple the seed laser into the preamplifier and to direct the preamplified pulses to the subsequent components. The output characteristics of the preamplifier are shown as the blue curves in Figure 9(a) and (b). The FWHM temporal duration and spectral bandwidth of the preamplified pulses are 4 ps and 15 nm, respectively, giving a time-bandwidth product of 7.4. A dramatic decrease in spectral bandwidth was observed due to the limited transmission bandwidth of the WDM and FRM as well as the spectral-narrowing effect in the fiber amplifier.
Then, the concept of DPA was employed to boost the laser to Watt-level average power. The preamplified laser is coupled into free space by collimator C1 and rotated to horizontal polarization to reach maximum transmission through the PBS. The pulse division and combination were achieved by applying cascaded YVO4-based and PBS-based dividers, with the help of an FRM to reflect the replicas back through the same dividers in the opposite direction. Each divider (YVO4-based or PBS-based) can divide a single pulse into two cross-polarized replicas; hence, a single seed pulse can be temporally divided into 2^N replicas (where N is the number of divider stages). Ideally, each replica has identical pulse energy after division. As depicted in Figure 8(b), three YVO4 crystals with lengths of 10, 20, and 40 mm divided the initial pulse into 2³ = 8 replicas. A half-wave plate (HWP) was used to produce the desired polarization of the input pulses. The first (2¹) and third (2³) YVO4 crystals had their crystal optical axes (OA) oriented in the horizontal plane, while the OA of the second (2²) YVO4 crystal was oriented at a 45° angle to the horizontal plane. The polarization-mode delay between ordinary and extraordinary waves in YVO4 is 0.7 ps/mm at 1560 nm. The shortest crystal length for our system was chosen to split the input pulse into replicas with 7-ps separation, about twice the seed pulse duration.
To mitigate the nonlinearity in the main amplifier, the string of pulses (eight replicas) was further divided by three PBS-based dividers, resulting in a final pulse number of 64. In a PBS-based divider, each incoming pulse is divided into an s-polarized beam and a p-polarized beam. All p-polarized components are directly transmitted through the PBS, while the s-polarized components are reflected into the folded delay line. For simplicity, the second PBS-based divider (2⁵) had its p-polarized direction oriented at 45° to the horizontal plane, while the first (2⁴) and third (2⁶) PBS-based dividers were oriented in the horizontal plane, so that separate half-wave plates were no longer necessary.
Owing to the delay lengths of 10, 20, 40, 26.8, 53.6, and 107.2 mm, the 2¹, 2², 2³, 2⁴, 2⁵, and 2⁶ stages approximately provided time delays of 7, 14, 28, 130, 260, and 520 ps, respectively. Figure 8(c) shows the measured autocorrelation trace of the pulse string, which matches the designed time delays well. The 7-ps interval between adjacent peaks in the same envelope is consistent with the expected time delay from the 10-mm increment length of YVO4, and the ~140-ps spacing between two adjacent envelopes is consistent with the expected time delay introduced by the PBS-based dividers.
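As a quick consistency check, the sketch below recomputes the birefringent delays of the three YVO4 stages from the 0.7 ps/mm polarization-mode delay quoted above; the PBS-based delay-line values are not recomputed here because they depend on the exact folded-path geometry.

```python
# Birefringent delays of the YVO4 divider stages (0.7 ps/mm at 1560 nm).
pmd_per_m = 0.7e-12 * 1e3          # polarization-mode delay, s per metre
lengths_mm = [10, 20, 40]          # YVO4 crystal lengths, mm

for L_mm, stage in zip(lengths_mm, ("2^1", "2^2", "2^3")):
    delay = pmd_per_m * (L_mm * 1e-3)
    print(f"stage {stage}: {delay*1e12:.0f} ps")   # 7, 14, 28 ps, as in the text
```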
Intuitively, for simultaneous pulse amplification and compression in EDFAs, a positively pre-chirped seed pulse is desired. Numerical simulations show that there exists an equilibrium point that not only restricts excessive nonlinear effects, ensuring high temporal quality, but also produces sufficient optical nonlinearity to broaden the spectrum around the wavelength of 1.55 μm. The generalized nonlinear Schrödinger equation (7), solved with the split-step Fourier method, was used to carry out the simulation [24]:

∂A/∂z = (α/2)A + Σ_{n≥2} (i^{n+1}/n!) β_n (∂ⁿA/∂tⁿ) + iγ|A|²A  (7)
where A = A(z, t) is the complex envelope amplitude of the pulse, α is the laser gain coefficient, β_n is the dispersion parameter at ω₀ (1560 nm), and γ (3 W⁻¹ km⁻¹) is the nonlinear coefficient. The right-hand side of Eq. (7) models laser gain, dispersion, and nonlinearity. A pulse of 2.5-ps temporal duration and 19.9-nm spectral width (corresponding to 0.16 ps² of pre-chirp on a 180-fs transform-limited pulse) and a pulse energy of 0.05 nJ was used in the simulation. The interplay of SPM and group-velocity dispersion (GVD) as well as laser gain can lead to qualitatively different behavior compared with that expected from each of them alone. SPM broadened the spectrum as the pulse energy increased, and simultaneously the anomalous dispersion of the fiber compressed the new spectral components, resulting in temporal shortening. Figure 10(a) compares the simulation results for different α but a fixed β₂ (−22 fs²/mm). It is clear that the pulse compression operates in the linear regime when the laser gain is low, and enters the nonlinear regime as the laser gain gradually increases. The shortest transform-limited pulse duration decreased from 180 fs at 7.0 m (α = 0 dB/m) to 60 fs at 4.3 m (α = 3 dB/m). Figure 10(b) compares the pulse compression for different β₂ but a fixed α (3 dB/m). The maximum pulse energies at fiber lengths of 4.65, 5.07, and 5.53 m reached 1.24, 1.66, and 2.28 nJ, respectively. Therefore, higher α and smaller |β₂| are beneficial for overcoming the spectral bandwidth limitation in high-energy pulse amplification. For reference, the blue curves in Figure 10(a) and (b) present the pulse evolution with the same parameters.
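A minimal split-step Fourier sketch of Eq. (7) is given below, keeping only gain, second-order dispersion, and Kerr nonlinearity (β₃ and higher-order terms are neglected). The parameters follow the values quoted in the text, but this is an illustrative toy model under those simplifying assumptions, not the authors' full simulation.

```python
import numpy as np

# Split-step Fourier integration of a reduced Eq. (7): gain + GVD + SPM only.
beta2 = -22e-27                 # GVD, s^2/m (i.e. -22 fs^2/mm)
gamma = 3e-3                    # nonlinear coefficient, 1/(W*m) (3 W^-1 km^-1)
gain_dB_per_m = 3.0
g = gain_dB_per_m * np.log(10.0) / 10.0     # power-gain coefficient, 1/m
L_fib, nz = 5.0, 4000                       # fiber length (m), number of steps
dz = L_fib / nz

nt = 2 ** 13
t = np.linspace(-20e-12, 20e-12, nt, endpoint=False)
dt = t[1] - t[0]
omega = 2.0 * np.pi * np.fft.fftfreq(nt, dt)

# Seed: 180-fs (FWHM) transform-limited Gaussian, pre-chirped by +0.16 ps^2 of GDD
# and scaled to 0.05-nJ pulse energy, as in the text.
T0 = 180e-15 / (2.0 * np.sqrt(np.log(2.0)))
A = np.exp(-(t / T0) ** 2 / 2.0)
A = np.fft.ifft(np.fft.fft(A) * np.exp(0.5j * 0.16e-24 * omega ** 2))   # pre-chirp
A *= np.sqrt(0.05e-9 / (np.sum(np.abs(A) ** 2) * dt))                   # set energy

lin = np.exp((g / 2.0 + 0.5j * beta2 * omega ** 2) * dz)   # gain + GVD per step
for _ in range(nz):
    A = np.fft.ifft(lin * np.fft.fft(A))                   # linear part of the step
    A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz)          # nonlinear (SPM) part

P = np.abs(A) ** 2
fwhm = dt * np.count_nonzero(P > 0.5 * P.max())
print(f"output energy ~ {np.sum(P) * dt * 1e9:.2f} nJ, FWHM ~ {fwhm * 1e15:.0f} fs")
```

With 3 dB/m of gain over 5 m the output energy grows to roughly 1.6 nJ, in line with the energies quoted for Figure 10(b), while the pulse is shortened by the interplay of SPM and anomalous dispersion.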
Next, we focus on pulse amplification and compression in a fixed fiber length, in order to guide the subsequent experiment. A fiber of about 5.0-m length with β₂ = −22 fs²/mm was used to simulate the output pulse duration and the time-bandwidth product (TBP) at different positions along the fiber. As shown in Figure 10(c), when the total gain is smaller than 16 dB, the output pulse duration decreases linearly owing to the GVD and insufficient nonlinearity. When the total gain is greater than 24 dB, the output pulse duration decreases dramatically owing to strong nonlinear compression. Theoretically, pulses as short as 80-fs duration can be achieved with a total gain of nearly 28 dB. Although the pulse duration could be further decreased to 20 fs with 32-dB gain, a considerable pedestal as well as wave breaking appears due to excessive nonlinearity. Meanwhile, the TBP of the pulse along the fiber gradually decreases from 4.1 at the input port to 0.5 at the output port. Furthermore, a PPLN with 20.9-μm poling period and 0.3-mm length was used for frequency doubling the amplified laser and checking the available peak power at 1560 nm. A pair of lenses was used to focus the input beam onto the PPLN and to collimate the output beam.
The output average power at 1560 nm and the corresponding SHG power are shown in Figure 12. The highest SHG conversion efficiency of 56.3% was obtained with 302-mW incident power at 1560 nm. Further increasing the power at 1560 nm led to a decrease in conversion efficiency, shown as the black squares in Figure 12. To extract more energy from the double-pass amplifier, we will increase the number of replicas from 8 to 16 and 32. The results for ×16 and ×32 replicas are still under investigation.
In conclusion, a divided-pulse fiber amplifier delivering 500-mW average power at 1560 nm was demonstrated through the interplay between divided pre-chirped pulse amplification and nonlinear pulse compression. A small-core double-clad erbium-doped fiber with anomalous dispersion carries out the pulse amplification and simultaneously compresses the laser pulses, such that a separate compressor is no longer necessary. A numerical simulation reveals the existence of an optimum fiber length for producing a transform-limited pulse. Furthermore, frequency doubling to 780 nm with 170-mW average power is realized by using a PPLN at room temperature.
Repetition rate stabilization
The fiber-based frequency comb is recognized as a key breakthrough in the field of optics because it provides high accuracy in the frequency domain as well as low jitter in the time domain [49,[64][65][66][67]. In principle, for a frequency comb, two RF frequencies, f_ceo and f_rep, must be stabilized to external references. The optical frequencies can then be written as ν = m × f_rep + f_ceo, where m is a large integer of order 10⁶ that indexes the comb line. Nevertheless, recent developments in adaptive dual-comb spectroscopy have successfully employed free-running mode-locked lasers, where the f_ceo instabilities can be compensated by data acquisition and electronic signal processing [68,69]. Therefore, high-accuracy f_rep stabilization of passively mode-locked lasers is of great importance.
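A tiny numerical illustration of the comb relation ν = m × f_rep + f_ceo is given below; the repetition rate, offset frequency, and line index are round example numbers, not measured values from this work.

```python
# Numerical illustration of nu = m * f_rep + f_ceo (example numbers only).
f_rep = 60e6          # repetition rate, Hz
f_ceo = 20e6          # carrier-envelope offset frequency, Hz
m = 3_205_000         # comb-line index, of order 10^6

nu = m * f_rep + f_ceo
print(f"comb line m={m}: nu = {nu/1e12:.3f} THz "
      f"(lambda ~ {2.998e8/nu*1e9:.1f} nm)")   # ~192 THz, i.e. near 1560 nm
```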
The relatively mature method for f_rep locking is to use a piezoelectric ceramic transducer (PZT) to control the geometrical length L of the laser cavity; the best locking accuracy reported is in the range of ±0.5 mHz, with a corresponding SD of 220 μHz [70]. However, PZT-based stabilization suffers from several limitations, such as significant positioning errors, hysteresis, a bulky design, and the need for time-consuming alignment.
In this section, we focus on f_rep stabilization using an optical pumping scheme, which can be achieved via resonantly enhanced optical nonlinearity, the so-called pump-induced refractive index change (RIC), in doped fibers. In the optical pumping scheme, f_rep is stabilized by modulating the refractive index n while keeping the geometrical cavity length L fixed. In the past, this method was successfully applied to fiber switching, where a low pump power and a short length of doped fiber are sufficient for the switching [71]. Moreover, the validity of this concept has also been demonstrated in coherent combining and adaptive interferometry [72]. In 2013, Rieger et al. reported an all-optically stabilized repetition rate using the RIC-based method.
With the help of a thermo-electric element, over 12 h of long-term stabilization was achieved in an NPE mode-locked Er-doped fiber laser, with the SD of the repetition-rate drift measured to be 22 mHz. A recent experiment extended this concept to a Yb-fiber laser and achieved a 1.39-mHz SD of residual fluctuation over a one-hour measurement [73].
As reported in Ref. [74], a commercially available pump current supply can provide a minimum pump-power resolution of 1.5 μW and thus achieve a control accuracy of 0.05 Hz, which is more than two orders of magnitude better than the PZT-based method. Therefore, an experiment worth performing is to use the RIC method to achieve high-precision f_rep stabilization. So far, the RIC method has been investigated in NPR mode-locked lasers, which employ non-PM fibers and components [73-75], and the locking accuracy has been limited to the mHz level. Considering the environmental perturbations on non-PM fiber, a straightforward idea is to implement the RIC method in a PM fiber laser. Therefore, the following part discusses high-precision repetition-rate stabilization using the RIC method in a PM figure-eight laser cavity.
The laser setup shown in Figure 13 is the same as that in Figure 4(a), except for the net dispersion of the laser cavity. In the current experimental setup for all-optical repetition-rate stabilization, a 56-cm-long Er³⁺-doped fiber (EDF2) is spliced asymmetrically into the NALM to act as a frequency controller, while LD3, controlled by the error signal from the frequency mixer, provides the feedback by modulating the pump power delivered to EDF2 via WDM3. In addition, a segment of DCF38 is used to compensate the anomalous dispersion of the PM1550 fiber. The dispersions of the linear loop and the NALM were estimated numerically to be −0.208 and 0.025 ps², respectively, producing a net dispersion of −0.183 ps² for the whole cavity. Self-started mode locking in the multiple-pulse regime can be achieved by over-pumping, and stable single-pulse operation can be obtained by decreasing the pump power of LD1 and LD2. At the fundamental repetition rate of 11.9 MHz, the figure-eight laser cavity delivers 1.5 mW of average power via CP2.
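The net cavity dispersion quoted above is simply the sum of the group-delay dispersion contributed by each sub-loop; the short sketch below only re-tallies the two quoted values and is not a dispersion calculation from individual fiber lengths.

```python
# Net cavity dispersion as the sum of the two sub-loop contributions quoted in the text.
linear_loop_ps2 = -0.208      # linear loop (PM1550 + DCF38), from the text
nalm_ps2 = 0.025              # NALM containing EDF2, from the text
net_ps2 = linear_loop_ps2 + nalm_ps2
print(f"net cavity dispersion: {net_ps2:.3f} ps^2")   # -0.183 ps^2, as stated
```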
The repetition rate was detected by PD3 and compared with a standard reference (Rb clock) in a frequency mixer to produce the error signal. Subsequently, the error signal was filtered and amplified by a low-noise voltage preamplifier with a frequency cutoff at 1 MHz and a maximum voltage gain of 5 × 10⁴, and then further processed by a proportional-integral-derivative (PID) controller.
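The digital equivalent of this feedback chain can be sketched as follows; the `read_error_signal` and `set_pump_power` stubs stand in for the mixer output and the LD3 current command, and the PID gains are arbitrary placeholders rather than the values used in the experiment.

```python
import random

def read_error_signal():
    """Stand-in for the filtered mixer output (PD3 beat against the Rb clock); simulated here."""
    return random.gauss(0.0, 1e-3)

def set_pump_power(milliwatts):
    """Stand-in for the LD3 pump-current command that modulates the EDF2 index via WDM3."""
    pass

def pid_lock(steps=10_000, kp=0.5, ki=0.05, kd=0.0, p0=100.0, dt=0.01):
    """Discrete PID loop trimming the pump power so that f_rep stays on the reference."""
    integral, prev_err = 0.0, 0.0
    for _ in range(steps):
        err = read_error_signal()
        integral += err * dt
        derivative = (err - prev_err) / dt
        set_pump_power(p0 + kp * err + ki * integral + kd * derivative)
        prev_err = err

pid_lock()
```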
The long-term stabilization is depicted in Figure 14. An accuracy as low as 27 μHz is achieved within a 16-h measurement. The inset of Figure 14 magnifies the measured data from 30,000 to 31,000 s and shows a fluctuation range within ±0.1 mHz. Typically, thermal effects, the Kerr nonlinear effect, the pump-induced nonlinear effect, and random acoustic perturbations contribute to the precision of f_rep stabilization. For our experiment, a temperature-controlled incubator with a ripple of 0.2 °C was used to house the laser cavity and isolate it from environmental perturbations. As for the Kerr nonlinearity, the RIC is proportional to the traveling power of the resonant laser. Assuming a 5-mW traveling power in the NALM, the Kerr-induced RIC is estimated as 1.2 × 10⁻⁷/mW, of the same order of magnitude as the pump-induced RIC (2.1 × 10⁻⁷/mW). However, when the pump power of LD3 was increased from 30 to 205 mW, only a 1.6% change in output power was observed, which implies little change in the dynamics of pulse evolution in the NALM. Thus, the Kerr-induced RIC amounts to only ~1% of the pump-induced RIC. Therefore, we postulate that the RIC of the fibers is dominated by the pump-induced nonlinear effect and the thermal effect rather than by the Kerr effect.
Conclusion
In this chapter, we first presented several types of mode-locked fiber lasers, as well as their derivatives for SC generation. Second, an effective method named DPA was applied in an Er-doped fiber laser system, allowing simultaneous pulse amplification and compression so that an additional pulse compressor is no longer needed. With ×8 replicas in DPA, an average power as high as 500 mW was achieved, and the highest SHG conversion efficiency was measured to be 56.3%. Third, an all-optical method based on pump-induced nonlinearity was applied to stabilize the repetition rate of a figure-eight Er-doped fiber laser, achieving an accuracy as low as 27 μHz within a 16-h measurement.
Research into Antibacterial Activity of Novel Disinfectants Derived from Polyhexamethylene Guanidine Hydrochloride
It is common knowledge that microorganisms cause biological damage to structures and facilities within various buildings and constructions. One of the most effective ways to increase the biological resistance of construction and industrial materials is the introduction of biocides into their composition. This article presents the results of research on the inactivation of various types of microorganisms with new disinfectants of the Teflex group derived from polyhexamethylene guanidine hydrochloride. In the course of the research, it was revealed that the specimens have biocidal (bactericidal), fungicidal and sporicidal activity when tested on bacterial suspensions and contaminated surfaces.
Introduction
Considerable attention is currently paid to the problem of increasing the durability of products and structures in buildings and constructions. It is known that more than 50% of the total volume of damage registered in the world is attributed to the activity of microorganisms. Almost all materials are subject to biological damage, including cement mortars and concretes, composite materials with binders, wood, etc. This is especially true when these materials are used in conditions conducive to the growth of microorganisms: in meat and dairy plants, vegetable storehouses, livestock buildings, etc. [1-5]. Mold damage is also common on the interior walls of residential buildings, hospitals, religious buildings, architectural monuments and works of art.
Bacteria, filamentous fungi and actinomycetes constantly and ubiquitously inhabit the human environment, using organic and inorganic compounds as nutrient substrates. In recent years, there has been an increase in the diversity and number of microorganisms that cause biological damage to materials and structures, and the aggressiveness of known species has increased. Bio-contamination of buildings and structures disturbs the ecological situation. The combination of extreme environmental changes, manifested in various processes of infection and biodegradation of building materials and structures, poses a serious threat to measures aimed at protecting human life and health. To extend the durability of building structures and improve the environmental situation, it is necessary to take measures that reduce or eliminate aggressive biological effects.
The development of methods for improving the bio-resistance of building materials is addressed in [6-9]. An increase in biological resistance can be achieved through the use of special binders, biocidal additives, etc. [10-14].
The current sanitary and epidemiological situation in the modern world requires a constant search for new, more effective, scientifically proven methods of disinfection, as well as the development of highly active, cost-effective, environmentally friendly disinfectants with a wide range of antibacterial (antimicrobial) activity.
The relevance of this work is determined by the need to develop biocidal agents that do not pollute the environment; are effective against microorganisms of various systematic groups (bacteria, mold fungi, etc.); have a long-lasting protective effect; and are readily available and inexpensive.
Of particular interest in this regard are polymer derivatives containing guanidine, which is part of the amino acids arginine and creatine and of vitamin B. The guanidine molecule contains three active nitrogen atoms, which makes it possible to introduce almost any substituent and to obtain the positive charge necessary for biocidal activity. The presence of a double bond expands the spectrum of action of this group of preparations [15].
Among such disinfectants are the «Teflex» group agents recently developed by CJSC «Soft Protector». These preparations are water-soluble polymer biocides, derivatives of polyhexamethylene guanidine hydrochloride, with transition metal salts and other functional additives introduced into their composition. The products are environmentally friendly disinfectants of the latest generation: they are safe for humans and animals (hazard class 4), have low operating concentrations of the active substance, are pH-neutral, colorless and odorless, of low toxicity, corrosion-resistant, and have high storage stability.
The objective of this research was to assess the disinfection activity of the Teflex group preparations using a selected preparation, «MultiDez» Teflex. The disinfection properties of «MultiDez» were tested on bacterial suspensions and assessed as follows.
Materials and methods
Bacterial suspensions at a concentration of 5×10⁸ CFU/ml were mixed in equal volumes with «MultiDez». In the control sample, sterile distilled water was added to the bacterial suspension instead of the disinfectant. Contact of the microorganisms with the disinfectant was carried out under constant stirring at room conditions (temperature 20 ± 2 °C, relative humidity 50-60%). The contact time was 1 hour. After incubation, the action of the disinfectant was stopped with a neutralizer solution (30 g/l of polysorbate 80 + 3 g/l of lecithin). The number of viable microorganisms in the suspension was determined by double dilution followed by inoculation on solid Hottinger's nutrient agar. The disinfecting activity of the agent was evaluated after 24 hours by comparing the number of viable cells with the control sample.
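The disinfecting activity in such suspension tests is commonly expressed as a log10 reduction of viable counts relative to the control; the sketch below illustrates that calculation with made-up CFU values, not the measured data reported in Table 1.

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """Orders-of-magnitude drop in viable count of the treated suspension versus the control."""
    return math.log10(cfu_control / cfu_treated)

# Illustrative values only: control at 5e8 CFU/ml, treated sample at 2e2 CFU/ml
print(round(log_reduction(5e8, 2e2), 1))   # 6.4, i.e. a reduction of more than 6 orders of magnitude
```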
The evaluation of sporicidal properties by the radial diffusion technique was performed with respect to the lysed zone of microbial growth after applying drops of the tested solutions at various concentrations to a B. cereus lawn. To obtain a lawn culture, B. cereus suspension at a concentration of 1×10⁶ CFU/ml was applied to plates with Hottinger agar in a volume of 0.3 ml. The suspension was evenly distributed with a spatula over the agar surface, the lawn was dried slightly, and then disinfectant solutions with the following concentrations of the active substance were applied dropwise to the lawn surface: 2%, 1.5%, 1.3%, 1%, 0.8%, 0.5%, 0.4%, 0.2%, 0.1% and 0.05%, in an amount of 20 µl. Sterile distilled water applied in the same volume to the lawn was used as a control. The degree of disinfecting activity of the tested solutions was evaluated from the lysis zones after 24 hours.
Sporicidal properties were also tested and assessed through the treatment of contaminated coupons. Contamination of the test surfaces (sterile glass coupons with an area of 2.5 cm²) was carried out as follows: a spore suspension of B. cereus at a concentration of 5×10⁸ CFU/ml in a volume of 0.02 ml was applied to the coupon, evenly distributing the biological material over the entire area. The estimated contamination of the coupons was ~1×10⁶ CFU/cm². The coupons were dried at room conditions (temperature 20 ± 2 °C, relative humidity 50-60%) until the spore suspension had visibly dried out completely. They were then treated with disinfectant solutions applied to the contaminated surfaces in an amount of 0.2 ml per coupon. The contact time of the spores with the disinfectants was 0.5 hours, 1 hour, 24 hours, or 48 hours.
As a control, we used contaminated coupons treated in a similar way with sterile distilled water. After the exposure was complete, the number of viable spores was determined from washings of the coupons using the double dilution technique followed by inoculation on Hottinger's solid nutrient agar. The efficiency of the sporicidal exposure was evaluated after 24 hours by comparing the number of viable spores on the control samples (which were not exposed to disinfectants) and on the experimental coupons.
Sporicidal properties were assessed by electron microscopy in the following way. 1 ml of spore suspension (5×10⁸ spores/ml) was mixed with 1 ml of «MultiDez». In the control sample, 1 ml of sterile distilled water was added to the spore suspension instead of the disinfectant. Contact of the spores with the disinfectant was carried out under constant stirring. The contact time was 0.5 hours, 1 hour, 24 hours, or 48 hours. After the contact time, the disinfecting effect on Bacillus cereus spores was stopped by washing in a 10-fold volume of sterile distilled water, followed by centrifugation. The resulting spore precipitate was used to prepare specimens for electron microscopy.
Samples of B. cereus spores were placed for 10-12 hours in a 2.5% glutaraldehyde solution in phosphate buffer (0.15 M, pH 7.2), washed in the same buffer, and then placed in a 1% aqueous OsO4 solution. After 4 hours of exposure, the samples were washed with water, dehydrated in ethanol and propylene oxide, and embedded in Araldite. Ultrathin sections were obtained using a diamond knife on an LKB ultratome and then contrasted with uranyl acetate and lead citrate. The finished preparations were examined using a JEM-100 C electron microscope. All experiments were repeated three times.
Research results: Table 1 shows the results of the disinfection treatment of bacterial suspensions of the test cultures. The treatment time was 1 hour. The results show that the preparation has a wide spectrum of antimicrobial action. For vegetative forms, the number of viable microorganisms after 1 hour of treatment was reduced by more than 6 orders of magnitude. The sporicidal and fungicidal activity of the preparation was satisfactory, although somewhat lower: the reduction did not exceed 5 orders of magnitude. The greatest resistance was observed for the Aspergillus niger culture.
When the B. cereus lawn was treated with «MultiDez», it was found that the preparation suppressed the growth of the test culture over a wide range of concentrations (see Table 2, Figure 1). It was also found that the preparation exhibits disinfection activity against B. cereus spores on contaminated glass coupons. Thus, 24 hours after exposure to a 1% solution of «MultiDez», the number of viable spores decreased by 3 orders of magnitude compared to the control group; that is, the level of contamination of the coupons was reduced by 99.9%. After 48 hours of treatment with a 1% solution of «MultiDez», no viable spores were found on the contaminated coupons.
The figure below displays changes in the morphological structure of B. cereus spores exposed to a 1% solution of «MultiDez». In ultrathin sections of spores exposed to MultiDez, the capsule was gradually loosened, with subsequent thinning and delamination in the form of concentric formations, as well as partial damage to the membranes and exosporium. Upon longer exposures (24 and 48 hours), these processes expanded and led to significant destructive changes in the cell walls and other membrane structures of the spore: fragmentation of membranes and destruction of the cortex and core of most of the spores, which eventually led to the final destruction of the bacterial cells.
As of today, guanidine derivatives combining good biocidal properties with relatively low toxicity are one of the most promising groups of disinfectants (GB 821113, 1959; SU 1184296, 1983; P. A. Gembitsky, Synthesis of metacid, Chem. Ind. 1984, No. 2, pp. 18-19; SU 1687261, 1991). Their popularity largely rests on the fact that they are much more effective and safer than quaternary ammonium compounds, surfactants, phenol derivatives and chlorine-containing disinfectants. Among the derivatives of guanidine, the best known (along with chlorhexidine and polyhexamethylene biguanidine) are the polyhexamethylene guanidine (PGMG) salts, in particular the hydrochloride (PGMG-CH), proposed for the control of bacterial contamination, as well as compositions derived from it. However, existing technologies of PGMG-CH production lead to the formation of a rather toxic mixture due to the presence of a significant amount of impurities in its composition.
At the same time, the use of PGMG-CH-derived compositions to combat the most resistant forms of microorganisms (mold fungi, spore forms, etc.) requires rather high concentrations of the active compound (up to 7% by weight). The developed technology for obtaining PGMG-CH, which is the basis for the Teflex group preparations, has significantly increased the biocidal activity of the disinfectants and decreased the environmental burden by reducing the concentration of the active substance in the preparation. The improved disinfecting properties of the Teflex preparations are achieved by creating a novel PGMG polymer composition and by introducing transition metal salts: nickel chlorite, manganese or iron sulfate, and others.
The above-mentioned technological process ensures the creation of a new coordination compound of transition metals with polyhexamethylene guanidine chloride of linear structure, able to interact with surface molecular complexes of microorganisms' cells and to form a polymer layer on their surface. In this case, some of the amino groups of PGMG-CH interact with the cell surface by forming hydrogen bonds or by electrostatic interaction, through which PGMG-CH is attached to the surface. The other part of the amino groups is initially bound to metal ions. Therefore, the cell surface and the sorbed PGMG-CH form a supramolecular complex in terms of coordination chemistry. It should be noted that such supramolecular complexes differ in principle from chemically modified complexes obtained by forming covalent bonds. Functional groups of supramolecular surface ensembles are not rigidly fixed on the surface and retain their mobility, including via the mechanism of lateral diffusion. The intermediate layer between the cell surface and the polymer molecules' functional groups, through which the compound is fixed onto the cell surface, prevents its rigid interaction with the microorganism's surface, which allows the reagents to retain their coordinating ability and their bactericidal properties.
It is known that membranes are the target for polycationic polymers of the PGMG-CH type. As a result of membrane deformation caused by the excessive concentration of positive charges accumulated on the external surface of the microorganism (an elementary electrical breakdown or a loss of elastic properties may occur), the internal content of the cells leaks out. The metals then come into action, attacking the cell along several biochemical routes simultaneously, including autocatalytic formation of reactive oxygen/nitrogen species, oxidation of thiol-containing proteins, etc., disrupting the normal biological functions of the cells and causing their death. Therefore, the components of the developed preparations produce a synergistic effect, that is, a greater effect than the sum of the effects produced by each component separately. This allows the bactericidal properties of the disinfectant to be significantly enhanced without increasing the concentration of the active components of the preparation. Differences in the mechanisms of biocidal action between the metals and the polycationic polymer prevent the formation of resistant forms of microorganisms when these disinfectants are used.
Conclusion
The presented technology for obtaining PGMG-CH made it possible to create agents with a wide spectrum of antimicrobial action and to decrease the environmental burden owing to a lower concentration of the active substance in the preparation. The research findings confirmed the biocidal, fungicidal and sporicidal activity of the «Teflex» group preparations. The "MultiDez" preparation showed its efficiency when tested on bacterial suspensions and in the disinfection of contaminated surfaces. The positive results obtained allow us to consider the "Teflex" group preparations among the most viable preparations for disinfection measures.
Poroacoustic Traveling Waves under the Rubin–Rosenau–Gottlieb Theory of Generalized Continua
: We investigate linear and nonlinear poroacoustic waveforms under the Rubin–Rosenau– Gottlieb (RRG) theory of generalized continua. Working in the context of the Cauchy problem, on both the real line and the case with periodic boundary conditions, exact and asymptotic expressions are obtained. Numerical simulations are also presented, von Neumann–Richtmyer “artificial” viscosity is used to derive an exact kink-type solution to the poroacoustic piston problem, and possible experimental tests of our findings are noted. The presentation concludes with a discussion of possible follow-on investigations.
Introduction
What is known today as the "RRG theory" was put forth by Rubin et al. [1] in 1995. This phenomenologically based theory of generalized continua is thought capable of modeling dispersive effects caused by the introduction of a medium's characteristic length, which Rubin et al. denote as α. Under RRG theory, α is regarded as an inherent material property. From the modeling standpoint, this theory exhibits a number of appealing features, the two most important of which are the following: (i) it is only the pressure stress (i.e., isotropic) part of the Cauchy (i.e., total) stress tensor and the specific Helmholtz free energy that are modified, but these modifications are achieved by adding perturbative terms, which must satisfy certain constraint equations, to the constitutive relations of the former and latter; and (ii) no additional boundary or initial conditions, beyond those required to solve classically formulated problems, are needed ([1], p. 4063).
To date, RRG theory has only been applied to single-phase media; see, e.g., Ref. [2] and those cited therein. Hence, there is an obvious need to investigate the nature of the solutions, e.g., those of the traveling wave type, predicted by this theory in the case of multi-phase media.
Accordingly, the aim of this communication is to carry out a preliminary investigation of RRG theory in the context of acoustic problems involving propagation in dual-phase (specifically, fluid + solid) media, dual-phase media being, of course, the simplest case of multi-phase media. Employing both analytical and numerical methodologies, we consider linear and finite-amplitude poroacoustic propagation under the RRG-based generalization of what some refer to as the Brinkman poroacoustic model (BPM) (although he does not refer to it as such, the general, multi-D version of the BPM follows on setting C = 0 in Burmeister [3]). Here, it should be noted that the original version of the drag law on which the BPM is based reads (see, e.g., Refs. [4,5])

∇P = μ̃χ∇²u − (μχ/K)u.    (1)

In System (3), ℘(> 0) is the thermodynamic pressure; ϱ(> 0) is the mass density of the gas; n is the specific entropy of the gas; the parameter γ denotes the ratio of specific heats, where γ ∈ (1, 5/3] in the case of perfect gases; we take α(> 0), which carries the unit of length, to be a constant (that is, we have assumed the simplest version of RRG theory; see (Ref. [1], Equation (20))); the problem geometry dictates that, here and henceforth, u = (u(x, t), 0, 0), ℘ = ℘(x, t), and ϱ = ϱ(x, t); and a zero ("0") subscript attached to a dependent variable denotes the (constant) equilibrium state value of that variable, where we note that u_0 = (0, 0, 0).
Here, we observe that since the flow has been assumed homentropic, our RRG-based poroacoustic model is obtained by perturbing only the pressure tensor term in the BPM. Also, we record for later reference that c_0 = √(γ℘_0/ϱ_0) is the (constant) equilibrium state value of the sound speed, i.e., the speed of sound in the undisturbed gas; see, e.g., (Ref. [7], §4.3).
Finite-Amplitude Equation of Motion: The Case µ := const.
Hence, on invoking the finite-amplitude approximation, and introducing the following dimensionless variables, where the positive constants L and U_p respectively denote a macro-length scale characteristic of the propagation domain and the magnitude of the peak particle velocity in the gas, it is not difficult to establish Equation (5) (see, e.g., the derivation performed in (Ref. [8], §2), and note that (Ref. [8], Equation (10)) is the σ, δ := 0 special case of Equation (5)), where here and henceforth all diamond (◊) superscripts have been suppressed for convenience. In Equation (5), which we note reduces to the corresponding EoM for the (1D) BPM on setting a_0 := 0, ε = U_p/c_0 is the Mach number, where ε ≪ 1 is assumed; δ = νχL/(c_0 K) is the dimensionless Darcy coefficient, where ν = µ/ϱ_0 is the kinematic viscosity of the gas; a_0, the dimensionless version of α, is given by a_0 = α√2/L; we have set σ := χ/Re_B, where Re_B = c_0 L/ν̃ is a Reynolds number, and where ν̃ = μ̃/ϱ_0; and β(> 1) denotes the coefficient of nonlinearity [9], which in the case of a perfect gas is given by β = (1 + γ)/2. In deriving Equation (5) we have assumed that δ, σ, a_0, |s| ∼ O(ε) and, in accordance with the finite-amplitude approximation, only nonlinear terms O(ε²) have been neglected. Although derived under the finite-amplitude approximation, Equation (5) is still too complicated for treatment by analytical means. Fortunately, however, the nature of the problems to be considered below is such that we may employ the uni-directional approximation to reduce the order of Equation (5) by one and confine its nonlinearity to a single (quadratic) term. Omitting the details, we find that under, say, the right-running case of this approximation (see, e.g., Crighton's ([10], p. 16) derivation of the acoustic version of Burgers' equation), which in the present setting reads φ_x ≈ −φ_t, our EoM becomes, after making use of the relation u(x, t) = φ_x(x, t) and simplifying, Equation (7), which on switching to the variables X = x − t and T = t is further reduced to Equation (8). If we once again make use of the right-running approximation, which now takes the form u_T ≈ −u_X, to re-express only the third-order term in Equation (8), which is justified since a_0 ∼ O(ε) (i.e., (a_0²/2)u_TXX is a "small" term), then Equation (8) assumes its final form, Equation (9), a PDE which we term the damped Burgers-KdV (dBKdV) equation.
In closing this sub-section we stress that Equations (7)-(9) apply only to right-running waveforms; i.e., to problems wherein reflection (to the left) is not possible.
Comparison of Linearized EoMs: The Cauchy Problem
In this section we compare the BPM with its RRG-based counterpart under the linear approximation, which at the EoM level corresponds to setting ε := 0. We do so in the context of what is perhaps the best known problem from classical PDE theory.
To this end, we consider the linearized version of Equation (9) in the setting of the following initial value problem (IVP), i.e., in the setting of the classical Cauchy problem, IVP (10). Here, we take f(X), our initial condition (IC), to be defined on the real line and such that its Fourier transform exists.
On applying the Fourier transform to both Equation (10a) and the IC, and then solving the resulting (first-order) ODE, it is readily shown that Equation (11) holds, where k is the Fourier transform parameter and a hat over a quantity denotes the Fourier transform of that quantity. In turn, applying F⁻¹(·), the inverse Fourier transform, to Equation (11) gives Equation (12).

3.1. The RRG Case: a_0 > 0

Using the convolution theorem, and letting Ai(·) denote the Airy function of the first kind, the RRG (i.e., a_0 > 0) case of Equation (12) can be recast in the more explicit form of Equation (13). For obvious reasons, two special cases of f(X) are of particular interest. Here, d(·) denotes the Dirac delta function and b(> 0) is a (dimensionless) constant.
3.2. The BPM Case: a_0 := 0

If we assume instead the BPM, then the solution of IVP (10) is readily obtained on setting a_0 := 0 in Equation (12); for the two aforementioned cases of f(X), the corresponding solutions follow directly.
Remarks: RRG vs. BPM
With regard to the Gaussian IC, the primary difference between the linearized RRG and BPM cases is that the pulse profile corresponding to the former instantly becomes oscillatory about the X-axis, due to the Airy function in its integrand, while that of the latter maintains, for all T > 0, the shape and strict positivity of the initiating Gaussian. The clearly contrasting behaviors exhibited by these two models should, therefore, allow researchers to experimentally determine which of the two best describes propagation in a given poroacoustic system.

Before examining Equation (9) in its most general form, and for the benefit of those readers who are not well acquainted with the intricacies of nonlinear evolution equations, it is instructive to first review selected special cases of Equation (9). The right-running models discussed in the next three sub-sections, all of which, it should be noted, have applications beyond poroacoustics, will each have a role to play in the analysis performed in Section 4.4.
Since we have assumed δ ≪ 1, applying the Kryloff-Bogoliubov asymptotic expansion method to the dKdV equation yields, as Ott and Sudan [11] have shown, the large-T expression given in Equation (17), where we have taken N(0) = 1 (see Ref. [11]); see also Ref. [12], as well as Ref. [13] (Ref. [13] contains a number of recently identified typographical errors; see Appendix A below) and those cited therein. Equation (17) represents a damped, and decelerating, solitary waveform, and as such it cannot be a soliton in the classical sense [14]. Note, however, that the acoustic version of the classic soliton solution of the KdV equation (see Ref. [8]) is recovered in the limiting case δ := 0.
Case (II): Damped Burgers' Equation
This case, which corresponds to setting a_0 := 0, reads as Equation (20), which is the right-running EoM stemming from the BPM; in this context it has recently been investigated by Rossmanith and Puri [15].
As shown by Nimmo and Crighton [16], this generalization of Burgers' equation does not admit a linearizing (i.e., Cole-Hopf type) transform. As shown by Malfliet [17], however, its TWS, which assumes the form of a damped kink, is readily approximated. To the order expressed explicitly in Ref. [17], the TWS of Equation (20) is given by Equation (21). Here, λ_B = 4σ/(εβ) is the shock thickness exhibited by the TWS given below in Equation (25); in Ref. [17], the parameter c, which herein has the value c = 2/λ_B, is defined so that Equation (21) yields, in the appropriate limit, Equation (25). Equation (25) and λ_B are, respectively, the TWS, which we note takes the form of a Taylor shock, and the corresponding shock thickness, determined using Equation (2), admitted by the classic Burgers equation.
Numerical Results
Inspired by, and closely following, Zabusky and Kruskal's [14] analysis of the classic KdV equation, in this subsection we perform numerical experiments on Equation (9), and its special cases listed above as Cases (I) and (II), in the setting of the following initial-boundary value problem (IBVP) with periodic boundary conditions: u(X, 0) = cos(2πX), |X| < 1.
In (Ref. [14], Figure 1), snapshots of the evolution of the KdV's solution profile were displayed in units of (dimensionless) time T B , where Zabusky and Kruskal used T B to denote the "breakdown time" (i.e., the time at which finite-time gradient catastrophe occurs) of the solution to the Cauchy problem involving the classic (i.e., undamped) Riemann equation. In our analysis of IBVP (30), T * B shall play the role of T B .
The graphs presented in Figures 1-3 were computed and plotted using MATHEMATICA (ver. 11.2). Except for the value of β(= 1.2), which corresponds to diatomic gases (e.g., air) [9], all other parameter values were selected based on our desire to produce clear, illustrative, graphs and the need to satisfy the assumptions under which Equation (9) was derived.
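For readers who wish to reproduce qualitatively similar profiles, a Fourier pseudospectral time-stepper on the periodic domain of IBVP (30) is sketched below. Because Equation (9) is not reproduced above, the placement of the nonlinear, Darcy, viscous, and dispersive terms in the model equation, as well as every coefficient value, is an illustrative assumption rather than the dBKdV equation and parameter set actually used for Figures 1-3.

```python
import numpy as np

# Assumed generic damped Burgers-KdV form:  u_T + c1*u*u_X + c2*u = c3*u_XX + c4*u_XXX
N, Lx = 256, 2.0
X = -1.0 + Lx * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)        # wavenumbers of the periodic domain |X| < 1
c1, c2, c3, c4 = 1.2, 0.05, 1e-3, 1e-4             # illustrative coefficients only

def rhs(u):
    uh = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * uh))          # u_X
    uxx = np.real(np.fft.ifft(-(k**2) * uh))        # u_XX
    uxxx = np.real(np.fft.ifft(-1j * k**3 * uh))    # u_XXX
    return -c1 * u * ux - c2 * u + c3 * uxx + c4 * uxxx

u = np.cos(2 * np.pi * X)                           # the IC of IBVP (30)
dt, steps = 1e-4, 10_000                            # integrate to T = 1 with classical RK4
for _ in range(steps):
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Spatial derivatives are evaluated spectrally, so the scheme preserves the periodicity of the IC; the damping and viscous coefficients keep the profile from breaking, mirroring the qualitative behavior described for Figures 2 and 3.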
From Figure 1 it is easy to see that, except for attenuation of the profile (caused by the Darcy term) and a slight phase shift, the dKdV profiles are qualitatively similar to those of the classic KdV equation in the setting of IBVP (30). And, as is also true in the case of the latter, reducing (resp. increasing) a 0 increases (resp. decreases) the number of pulses seen in Figure 1b.
In contrast, the plots shown in Figures 2 and 3 highlight the fact that, like that of the damped Burgers equation, the dBKdV profile suffers attenuation, again due to the Darcy term, and it also develops a "dull sawtooth" appearance as it shocks up (to the right), but never breaks since σ > 0. More interesting, however, is the fact that for large T, both the damped Burgers equation and dBKdV profiles are seen to re-assume the periodic form of the IC. As Figures 2c and 3c illustrate, both profiles evolve to become damped, and in the dBKdV case slightly phase-shifted (to the left), versions of the IC given in Equation (30c). This suggests that for sufficiently large values of T, one may employ the approximations u(X, T) ≈ u_{1,2}(X, T), defined in Equation (31), whose amplitude functions are required to be positive. Comparing the broken blue curve in Figure 2c with its counterpart in Figure 3c supports this observation; here, for simplicity, we have assumed the amplitude functions and the phase functions ψ_{1,2}(T) in Equation (31) to be linear functions of T. In the setting of IBVP (30), then, the presence of the third-order (i.e., RRG) term in the dBKdV equation gives rise to both a phase shift and slightly less attenuation vis-à-vis the damped Burgers equation.
While their usefulness may be limited to certain "windows" of T-values, the amplitude functions and the phase functions ψ_{1,2}(T) should be constructible based on Equation (31) and on numerically generated large-T data sets, using one of the many data-fitting methodologies found in the literature.
The RRG Case with "Artificial" µ
In 1950, von Neumann and Richtmyer (vNR) [23] introduced their artificial viscosity coefficient. In this section, we make use of this celebrated artifice not to regularize numerical schemes used to calculate shock profiles, as was vNR's aim, but rather to obtain an analytical solution to the poroacoustic version of the piston problem (Unlike Ref. [23], wherein Lagrangian coordinates were used, in this communication we employ the Eulerian description; see, e.g., (Ref. [24], §V-D-1) wherein vNR's system is recast under the latter.).
To this end, we return to the RRG case of System (3) and assume that µ is now given by vNR's artificial viscosity expression, but continue to regard μ̃ as a positive constant. Here, we have expressed the length-scale factor in vNR's artificial viscosity coefficient as α, instead of some grid spacing ∆x.
For simplicity, we now assume that the porous solid in question consists of packed beds of rigid solid spheres, all of radius r(> 0), which are fixed in place. For such a configuration, the permeability is given by the well-known Kozeny-Carman relation [4]. As these spheres are scatterers of acoustic waves, we take α to be proportional to the characteristic length now associated with our dual-phase medium; i.e., we take α = b_1 r, where b_1(> 0) is an "adjustable" (dimensionless) constant ([24], p. 233).
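Since the displayed relation is not reproduced above, the snippet below uses a commonly cited form of the Kozeny-Carman permeability for a packed bed of uniform spheres, K = φ³d²/[180(1 − φ)²] with d = 2r and porosity φ; the 180 prefactor and the example numbers are assumptions for illustration, not values taken from Ref. [4].

```python
def kozeny_carman_permeability(r, porosity):
    """Permeability (m^2) of a packed bed of rigid spheres of radius r under a common
    form of the Kozeny-Carman relation; the prefactor 180 is an assumed convention."""
    d = 2.0 * r
    return porosity**3 * d**2 / (180.0 * (1.0 - porosity)**2)

# Illustrative numbers: 1-mm-diameter spheres at 40% porosity
print(kozeny_carman_permeability(r=0.5e-3, porosity=0.4))   # ~9.9e-10 m^2
```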
If, moreover, we limit our focus to kink-type waveforms, as physical intuition suggests, and have the piston located at x = −∞ and moving to the right along the x-axis, then u_x < 0 and Equation (32) simplifies accordingly, where we have used the relation u = φ_x. Returning to our dimensionless variables, and once again applying the finite-amplitude approximation, it is readily established that, under the aforementioned assumptions, the following (simpler) weakly nonlinear PDE, Equation (35), replaces Equation (5) as our bi-directional EoM. In Equation (35), which we observe applies only to the RRG case, we have set a_1 := b_1 r√2/L, where we require that a_1 ∼ O(ε), and where the requirement δ_1 ∈ (0, 1) implies that b_1 must satisfy the inequality given in Equation (37). Assuming the gas at x = +∞ is in its equilibrium state, and thus motionless, and observing that in the present context U_p is the dimensional speed of the piston, we let φ(x, t) = G(η), where η = x − v_1 t and the (dimensionless) shock speed v_1 is taken to be a positive constant, and then substitute into Equation (35). On integrating once with respect to η and then imposing/enforcing the asymptotic conditions g → 1, 0 as η → ∓∞, respectively, Equation (35) is reduced to the ODE given in Equation (38), where we note that the resulting constant of integration is zero. In Equation (38), g(η) = G′(η), where a prime denotes d/dη; we have also defined the auxiliary quantities appearing therein (see Equations (39) and (40)), recalling that β is the coefficient of nonlinearity (see Equation (6)), the latter quantity being the positive root of the associated algebraic relation. To apply the solution methodology employed in (Ref. [2], §2) to Equation (38), the condition stated in Equation (42) must be satisfied. In (Ref. [2], §2), satisfying Equation (42) required that the value of the Mach number be fixed, a constraint which clearly limits the usefulness of the TWSs presented in that article. Here, however, we shall use this restriction to our advantage; specifically, in the following sense: since the value of μ̃ for a given poroacoustic flow is, in general, not known, and we are seeking a kink-type TWS, the only possible solution of Equation (42) in the present context follows, wherein we observe that v̄_1 = v_1 c_0 is the dimensional shock speed and, moreover, that v_1 > 1 implies v̄_1 > c_0. On imposing the usual wave-front condition g(0) = 1/2, but otherwise referring the reader to (Ref. [2], §2) for details regarding its derivation, the TWS we seek is that given in Equation (44), where K = tanh⁻¹(−1 + √2). Letting λ_1 = ℓ_1/L denote the dimensionless shock thickness (recall Equation (2)) admitted by Equation (44), it is easily established that Equation (45) holds, where ℓ_1 is the corresponding dimensional shock thickness. Also, with regard to computing λ_1, it should be noted that g′(η*) = 0, where η*(< 0) is given in closed form, and where it should also be noted that g(η*) = 5/9. The usefulness of Equation (45) might be ascertained as follows. Assume that v_1 and ℓ_1 can both be determined, either directly or indirectly, from experimental measurements and, moreover, that both are (at most) slowly varying functions of time. With v_1 known, b_1 can, of course, be computed using Equations (36), (39) and (40). If this (inferred) value of b_1 satisfies the inequality in Equation (37), a_1 ∼ O(ε) is also satisfied, and the measured value of ℓ_1 is in agreement with that computed from Equation (45) over, say, some span of time t ∈ T, then we can expect Equation (45) to prove useful as an approximation within the transition region of our kink-type traveling wave profile for t ∈ T.
Discussion: Possible Follow-On Studies
In addition to gaining a better understanding of how the solution of IBVP (30) behaves for large T, in particular determining to what extent (if any) the recurrence behavior seen in Figures 2c and 3c is related to the functional form of a given IC, future work on poroacoustic RRG theory could include the use of homogenization methods in problems wherein K and/or χ vary with position. Other possible extensions include the poroacoustic generalization of the study carried out in Ref. [25], wherein α was taken to be a function of (u_x)², and also the case in which μ̃ is a power-law function of the shear-rate tensor. Follow-on work might also include the study of poroacoustic signaling problems involving sinusoidal and/or shock input signals, as well as problems in which changes in entropy and temperature are taken into account.
Wavelet analysis of gamma-ray spectra
Since September 11, 2001, there has been increasing interest in providing first responders with radiation detectors for use in the search for and isotope identification of potentially-smuggled special nuclear material (SNM) or radiological dispersal devices (RDDs). These devices are typically comprised of low-resolution detectors such as NaI, thus limiting their identification abilities. We present a new technique of wavelet analysis of low-resolution spectra for the use in isotope identification. Wavelet analysis has the benefit of excellent feature localization while, unlike with Fourier analysis, maintaining the signal frequency and time characteristics. We will demonstrate this technique with a series of gamma-ray spectra obtained from typical hand-held isotope identifiers, illustrate figures-of-merit to be applied to these results, and discuss future algorithm optimization.
I. Introduction
It has been reported by several sources that the handheld isotope identifiers currently being used by first responders have problems with their identification abilities [1], [2]. It can be difficult to implement complicated peak localization and identification algorithms due to limitations of the on-board computer memory, thus limiting their effectiveness in the field of nuclear emergency response. These algorithms are further hindered by problems such as gain drift, which is very common when the devices are moved between temperature extremes, such as from an air-conditioned vehicle to the warm outdoors. Furthermore, these algorithms become more complicated as the detector resolution increases, making the identification of multiple-line sources difficult. A new algorithm is needed that requires minimal computer memory and can be used to locate and identify peaks within a medium- or low-resolution gamma-ray spectrometer.
Wavelet analysis was introduced in the mid-1980s to solve problems such as image compression, gamma-ray burst analysis, the study of earthquakes, and numerous other problems in signal analysis where the signal is aperiodic, noisy, transient, and so on. More recently, wavelets have been applied to problems such as feature detection and localization in both one- and two-dimensional signals. The concept of wavelet analysis has been applied by other researchers to the analysis of high-resolution gamma-ray spectra, where the initial spectrum was denoised using the discrete wavelet transform (DWT) and the peaks were then located by subtracting a fit of the continuum from the signal [3], [4]. We present a new approach where the features of a spectrum are identified directly with a wavelet transform using the modulus maxima technique [5], [6], [7]. This eliminates the requirement of continuum fitting for the detection of peaks and takes full advantage of the capabilities of wavelet analysis without losing any data due to smoothing.
II. Wavelet Transforms and the Modulus Maxima
The technique of wavelet analysis involves observing a signal at a particular time (or, in this case, energy) simultaneously over a broad range of scales (the wavelet pseudo-equivalent of frequency). Like Fourier analysis, the signal is broken down into frequency components. However, standard Fourier analysis suffers from the fact that it does not maintain information about where in the signal a given frequency occurs. The windowed Fourier transform, or Gabor transform, provides a basic frequency analysis as a function of position. However, the Gabor transform has a fixed time-frequency resolution. In the case of gamma-ray spectroscopy, analyzing a signal over a fixed number of channels across the entire spectrum would be prohibitive since, for most systems, the width of a peak in channels increases with increasing channel number.
Wavelet analysis permits the user to analyze the signal in the time-frequency (or energy-scale) domain simultaneously at different scales, which is referred to as "multiresolution analysis" (MRA). The wavelet transform of a signal x is defined in terms of E, the position (in our case, energy), s, the scale, and ψ_E,s(t), the mother wavelet. By adjusting both the scale and the location of the mother wavelet during the convolution, the transform coefficient T can be obtained, which indicates the degree to which the mother wavelet matches the original signal. The result of the wavelet transform is then a three-dimensional array of position, scale, and wavelet coefficient, most effectively shown as an image called a scalogram, where the shading indicates the magnitude of the wavelet coefficients, T(E, s). The exact details of the wavelet transform have been covered extensively by authors elsewhere [5], [6]. There are numerous choices of mother wavelet, and the selection of the proper wavelet depends on the application. To be considered a wavelet, a function ψ ∈ L² must have finite energy. Also, if ψ̂(f) is the Fourier transform of ψ(t), then the admissibility condition must hold, which implies that the wavelet must have zero mean. ψ is then scaled by s and translated by E, and the wavelet transform can be rewritten in terms of this scaled and translated wavelet. Several example wavelets are shown in Figure 1, and a sample scalogram of an input signal using the Mexican hat wavelet is illustrated in Figure 2.
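As a concrete illustration of the transform described above, the following Python sketch builds a scalogram of a synthetic spectrum by direct convolution with a Mexican-hat (second-derivative-of-Gaussian) wavelet; the wavelet normalization, the synthetic continuum-plus-photopeak spectrum, and the scale range are all assumptions for illustration rather than the Wavelab/MATLAB implementation used in this work.

```python
import numpy as np

def mexican_hat(n, s):
    """Second derivative of a Gaussian (Mexican hat) sampled on n points at scale s
    (amplitude normalization omitted for simplicity)."""
    t = np.arange(n) - (n - 1) / 2.0
    return (1.0 - (t / s)**2) * np.exp(-0.5 * (t / s)**2)

def cwt(signal, scales):
    """Continuous wavelet transform by direct convolution: rows are scales, columns are position E."""
    T = np.empty((len(scales), signal.size))
    for i, s in enumerate(scales):
        w = mexican_hat(min(10 * int(s), signal.size), s)
        T[i] = np.convolve(signal, w, mode="same") / np.sqrt(s)
    return T

# Synthetic "spectrum": falling continuum plus a Gaussian photopeak near channel 662
ch = np.arange(1024)
model = 200.0 * np.exp(-ch / 400.0) + 500.0 * np.exp(-0.5 * ((ch - 662) / 15.0)**2)
spectrum = np.random.poisson(model).astype(float)      # add counting statistics
scalogram = cwt(spectrum, scales=np.arange(2, 64))     # T(E, s), plottable as an image
```

Plotting the magnitude of `scalogram` as an image reproduces the cone-like structure converging on the photopeak channel that is discussed next.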
As can be observed in Figure 2, the wavelet coefficients in the scalogram create a cone-like appearance whose boundaries converge at small scales on the location of an abrupt change such as a singularity or peak. These boundaries define the cone of influence, expressed in terms of the feature location E_0 and a constant C. In order to determine the precise location of E_0, we employ the wavelet transform modulus maxima (WTMM) technique. The WTMM describes any point in the scalogram at which the wavelet coefficients are a local maximum. A modulus maxima line is any line that connects neighboring maxima points in the scalogram. Singularities or peaks are detected by finding the abscissa where the modulus maxima lines converge at fine scales. Therefore, the wavelet transform can focus on a localized singularity or similar feature with a zooming procedure that gradually reduces the scale.
However, to actually detect a singularity, it is not sufficient to simply follow the modulus maxima lines, since small fluctuations such as noise in the signal can create a local maximum, especially at small scales. The singularity itself is characterized by the decay of the wavelet coefficients along maxima lines. This decay quantifies how regular the function is, either over all space or at a particular location. One measure of the regularity of a signal at position E_0 is the value of the Lipschitz exponent (also referred to as the Hölder exponent) at that position. If ψ is continuously differentiable with compact support and real values, then x is uniformly Lipschitz α if and only if there exists a constant A such that the magnitude of the wavelet coefficients is bounded by A times a power of the scale, where α is the Lipschitz exponent. This is a condition on the asymptotic decay of the wavelet coefficients as the scale approaches zero. If x has a singularity at E_0, then the Lipschitz exponent characterizes the singular behavior. One must determine α for each line, which is generally done by fitting this power-law decay to the magnitude of the wavelet coefficient along a modulus maxima line. If the slope of this fit is evaluated at E_0, it determines the Lipschitz regularity. This is shown in Figure 3 for the cusp in the test signal of Figure 2; in this case, the singularity illustrated was Lipschitz 0.5. Note that large values of α indicate a more regular function, whereas smaller values indicate a more singular function. An actual singularity is found when 0 < α < 1.
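A minimal sketch of this slope estimate is given below: the decay of the coefficient magnitudes along one maxima line is fitted on log-log axes, and the fitted slope plays the role of the Lipschitz exponent used later as the α > 0.65 filter. The synthetic decay and the exact proportionality between slope and α (which depends on the wavelet normalization convention) are assumptions for illustration.

```python
import numpy as np

def lipschitz_slope(scales, coeffs):
    """Slope of log2|T| versus log2(s) along one modulus-maxima line."""
    slope, _ = np.polyfit(np.log2(scales), np.log2(np.abs(coeffs)), 1)
    return slope

# Illustrative check: coefficients decaying as s**0.5 give a slope of ~0.5
s = np.arange(2.0, 64.0)
alpha = lipschitz_slope(s, 3.0 * s**0.5)
print(round(alpha, 2))        # 0.5
keep_line = alpha > 0.65      # the peak filter applied to the CdZnTe spectra below
```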
III. Results on Gamma-Ray Spectra
This paper presents a proof-of-principle experiment using the WTMM and the Lipschitz exponent to locate peaks within a spectrum. The data were taken either with a 15 × 15 × 7.5 mm³ CdZnTe hand-held isotope identifier or with a 3-in-diameter × 3-in-high NaI scintillator. The computations were performed using MATLAB (version 6) with the MATLAB Wavelet Toolbox (version 2) and the Wavelab Toolbox (version 802) [8], [9]. The first step was to determine the wavelet transform of the spectrum using a continuous, real wavelet transform. It may seem counterintuitive to analyze discretely sampled data with a continuous filter. However, to be clear, what is meant by "continuous" in this case refers to the set of scales and positions on which the transform operates. The continuous wavelet transform (CWT) can operate at every scale, from that of the original signal up to a scale of the user's choosing. It is also continuous in the sense of shifting the mother wavelet smoothly over the entire signal. The cost of having a very large number of scales or shifted positions is a significant increase in the time required to complete the transform.
In order to calculate the CWT, it was first necessary to select a mother wavelet for the transform. It was observed that the choice of mother wavelet had a significant impact on the WTMM process in terms of the number and locations of modulus maxima lines detected. This was not unexpected, since different wavelets have different frequency responses and thus different capabilities in the localization of peaks within a signal. Previous work used the Haar wavelet for analysis [3]. This particular wavelet is good at identifying rapid (high-frequency) changes in the signal, such as those from a high-resolution peak in an HPGe spectrum. However, our goal was to locate peaks within medium- to low-resolution spectra. Therefore, we chose to experiment with two different wavelets that more closely resembled the features we were looking for. We decided to use wavelets that resemble the second derivative of a Gaussian function. This decision was made to maintain similarity with other isotope identification algorithms that are based on Mariscotti's technique, which analyzes the second derivative of the signal within a small window or kernel [10]. Unlike Mariscotti's technique, the wavelet approach benefits from the use of MRA, thus permitting the analysis of peaks at different scales instead of being restricted to a constant kernel size that may not be optimal for all peaks. The wavelets chosen for this work both resemble the "Mexican Hat Wavelet" of Figure 1, which is the second derivative of a Gaussian: we used the Wavelab "Sombrero" wavelet and the "DerGauss" wavelet, the latter being the second derivative of a Gaussian. Once the CWT of the spectrum was calculated, the modulus maxima lines were found. This algorithm was first performed on a 137 Cs spectrum taken with a CdZnTe detector. The results are shown in Figure 4. As can be seen from the scalograms, the two different wavelets result in very different wavelet coefficients. This does not matter for the subsequent determination of the modulus maxima positions, since the algorithm, based on Equation 7, looks only for the places where the first derivative is zero. In reality, this could be either a local maximum or a minimum; no further test is applied to determine which it is. What is important is that there are modulus maxima lines converging on the location of the 662 keV peak. The number of lines converging on 662 keV differs depending on the selection of the wavelet.
There are several other features to notice in these figures. First, in both cases there is a maxima line that crosses the entire scalogram. This is due to edge effects in the scalogram and can be reduced or eliminated by considering a more appropriate range of scales within the scalogram. These particular figures were plotted with a large number of scales to demonstrate the behavior of the CWT at very large scale. Additionally, several maxima lines are observable, particularly at smaller scales, that are not associated with the photopeak. These are due to the noise within the signal and other features such as the Compton edge and backscatter peak within the spectrum.
While the scalogram and plots of the modulus maxima lines yield interesting results on the potential peak locations within the spectrum, there are clearly places where these lines do not represent data of interest, such as the aforementioned noise and other spectral features. As such, it would be desirable to eliminate some of the modulus maxima lines that do not correspond to a true peak. To create such a filter, the Lipschitz exponent of the various spectral features was calculated for this spectrum. Table I shows the fitted slope (i.e., the Lipschitz exponent) of the wavelet-coefficient decay along maxima lines associated with different types of spectral features. As is evident from the table, maxima lines associated with true peaks had significantly larger Lipschitz exponents than those associated with the Compton continuum or noise. Based on this observation, it was possible to establish a filter for the modulus maxima lines based on the Lipschitz exponent, whereby peaks within the spectrum were identified if they met the requirement α > 0.65. With this threshold, it is clear that in the cases of both wavelets the maxima lines corresponding to the photopeak are preserved. In the case of the Sombrero wavelet, some of the edge-effect lines will also be preserved, as will the backscatter peak. However, this is not true for the DerGauss wavelet, which maintains only the two photopeak maxima lines and the edge-effect line.
This filtering technique was then applied to a more complicated spectrum, which was a mixture of 137 Cs and 133 Ba. These results are shown in Figures 5 and 6 with the remaining modulus maxima lines indicating which lines had met the Lipschitz slope filtering requirement. The results of these figures show variations in the sensitivity of the two wavelets. For example, the Sombrero wavelet is capable of detecting the 302, 356, and 383 keV photopeaks of 133 Ba, but it also is more sensitive to the pseudo-peak where the spectrum begins around channel number 25. The DerGauss wavelet, on the other hand, is not as sensitive to this portion, but it fails to detect the 302 keV peak. It also detects the Compton edge from the 662 keV 137 Cs photopeak. This suggests that more work is required on the proper wavelet selection for this application.
In addition to the calculations performed on the CdZnTe spectra, this technique was also applied to NaI spectra to determine its abilities with low-resolution detectors. Due to the lower resolution of these systems, it was expected that α should be larger, indicating that the signal approaches a more regular function. Table II shows how α varied as a function of spectral feature. Based on these data, a slope filter of α > 1.4 was imposed. Figures 7 and 8 show the results of analyzing a NaI spectrum of 137 Cs using this filter. As is evident from the table and figures, the edge effects were detected because they had a significant slope. It is worth considering whether an upper limit on the slope should be imposed. The impact of this additional filter will be explored in future work.
IV. Summary and Future Work
We have demonstrated a promising new technique for the localization of peaks within a gamma-ray spectrum of low to medium resolution using the concepts of the wavelet transform together with the modulus maxima technique. The modulus maxima lines can be used as a first-pass indication of the potential location of a peak. It is clear from the results that the Lipschitz exponent provides useful information on the nature of the modulus maxima lines for determining whether or not they actually correspond to a peak. It was observed that there was a significant difference between the values of the Lipschitz exponent for different spectral features. It was shown that a filter could be established by requiring a minimum value of the slope. Using this filter, we were able to discriminate between photopeaks, Compton edges, and noise within the spectrum. In some cases, we were also able to distinguish between a photopeak and a backscatter peak, although more work is required in this area.
There are still important areas to consider with this technique. First, while the wavelets chosen for this analysis were reasonably mathematically similar (both approximated the second derivative of a Gaussian), their results were rather different. This suggests that a detailed comparison of the results using different wavelets, and of which parameters (detector resolution, photopeak shape, etc.) are important for optimal wavelet selection, is necessary. Second, in keeping with the eventual goal, a fitting algorithm needs to be employed to determine precisely where the modulus maxima lines corresponding to a peak intersect the abscissa. This will provide the means to identify the energy of the photopeak for comparison with a library to be used in isotope identification. Finally, it is desirable to use wavelets or some other technique to fit the continuum under each peak to provide estimates of the net area for each peak, which could be used to obtain crude isotopic ratio information.
"Physics"
] |
Endogenous retroviral insertions drive non-canonical imprinting in extra-embryonic tissues
Background Genomic imprinting is an epigenetic phenomenon that allows a subset of genes to be expressed mono-allelically based on the parent of origin and is typically regulated by differential DNA methylation inherited from gametes. Imprinting is pervasive in murine extra-embryonic lineages, and uniquely, the imprinting of several genes has been found to be conferred non-canonically through maternally inherited repressive histone modification H3K27me3. However, the underlying regulatory mechanisms of non-canonical imprinting in post-implantation development remain unexplored. Results We identify imprinted regions in post-implantation epiblast and extra-embryonic ectoderm (ExE) by assaying allelic histone modifications (H3K4me3, H3K36me3, H3K27me3), gene expression, and DNA methylation in reciprocal C57BL/6 and CAST hybrid embryos. We distinguish loci with DNA methylation-dependent (canonical) and independent (non-canonical) imprinting by assaying hybrid embryos with ablated maternally inherited DNA methylation. We find that non-canonical imprints are localized to endogenous retrovirus-K (ERVK) long terminal repeats (LTRs), which act as imprinted promoters specifically in extra-embryonic lineages. Transcribed ERVK LTRs are CpG-rich and located in close proximity to gene promoters, and imprinting status is determined by their epigenetic patterning in the oocyte. Finally, we show that oocyte-derived H3K27me3 associated with non-canonical imprints is not maintained beyond pre-implantation development at these elements and is replaced by secondary imprinted DNA methylation on the maternal allele in post-implantation ExE, while being completely silenced by bi-allelic DNA methylation in the epiblast. Conclusions This study reveals distinct epigenetic mechanisms regulating non-canonical imprinted gene expression between embryonic and extra-embryonic development and identifies an integral role for ERVK LTR repetitive elements.
Background
The genetic contributions from both the sperm and oocyte are essential for successful development in mammals. Thirty-five years ago, seminal embryo manipulation experiments in mice showed that embryos with either two maternal or two paternal genomes die early in gestation [1,2], and it was postulated that the parental genomes were somehow differentially imprinted during gametogenesis. Shortly thereafter, three genes, Igf2r, H19, and Igf2, were identified to be expressed mono-allelically based on the parent of origin, revealing the first examples of "genomic imprinting" [3][4][5]. Importantly, the regulation of imprinted mono-allelic expression was found to be due to the asymmetric deposition of an epigenetic mark, DNA methylation, in gametes [6,7].
The study of imprinted genes has been integral to our understanding of epigenetic regulation of gene expression and has revealed the capacity for intergenerational transmission of epigenetic instructions from gametes to a newly formed embryo. Imprinting classically depends on locusspecific differences in DNA methylation established in the gametes [8,9], with the vast majority of germ line differentially methylated regions (gDMRs) being established on maternal alleles during oogenesis [10]. Upon fertilization, despite the widespread epigenetic reprogramming, which includes the erasure of DNA methylation, reallocation of histone modification patterns, and dynamic chromatin remodeling [11], imprinted gDMRs are protected from these reprogramming events. In the post-implantation embryo, as there is re-acquisition of genomic DNA methylation, gDMRs maintain their inherited mono-allelic status through the protection of the unmethylated allele [12].
Imprinted genes are essential for the regulation of mammalian development, placentation, and fetal growth. It has been proposed that imprinting arose as a consequence of the conflict between the paternal and maternal genomes within the conceptus in placental mammals to increase or restrict demand for maternal resources, respectively [13]. The barrier between the mother and fetus, the extra-embryonic tissues, perhaps unsurprisingly, has more expressed imprinted genes than most other tissues [14,15]. Furthermore, several observations suggest that imprinted gene regulation in extra-embryonic tissues may be dependent on a unique combination of multiple epigenetic layers, utilizing differential DNA methylation together with or in addition to histone modifications and long non-coding RNAs (lncRNAs) [16,17].
Histone modifications H3K27me3 and H3K9me2/3 have been associated with placental-specific imprinting of distal genes in the Kcnq1/Kcnq1ot1 and Igf2r/Airn clusters; however, this distal mono-allelic silencing is mediated by a non-coding RNA that is regulated by a canonical gDMR [16,18]. Intriguingly, a number of isolated placental-specific imprinted genes (e.g., Sfmbt2, Zfp64, Phf17, Smoc1, Pde10a) appear to have no associated gDMRs, suggesting they may be solely regulated by histone modifications [15,19,20]. Indeed, a recent study found that maternally deposited H3K27me3 can confer imprinted gene expression. However, this "non-canonical" imprinting appears to be predominantly transient in the early embryo, and the key mechanisms that maintain this form of imprinting are still unknown [21]. Notably, for the few genes with persistent mono-allelic expression in later development, mono-allelic expression becomes restricted to extra-embryonic lineages, suggesting that extra-embryonic tissues may be uniquely permissive for this additional form of imprinted gene regulation. Importantly, it remains to be shown whether (1) histone modifications have a unique allelic patterning in extraembryonic tissues conferring imprinted gene expression and (2) non-canonical imprinting is truly independent of maternally inherited gDMRs.
Study design
To evaluate the allelic regulation of gene expression in the embryo, we assessed the epigenetic modifications in C57BL6/Babr and CAST/EiJ reciprocal hybrid (denoted as B6/CAST and CAST/B6, in which by convention, the maternal strain is indicated first) embryonic day (E) 6.5 epiblast and extra-embryonic ectoderm (ExE). We assayed H3K4me3, H3K36me3, and H3K27me3 using ultra low-input ChIP-seq and DNA methylation using post-bisulphite adaptor tagging (PBAT), as previously described [22], from a pool of~2500 cells of either epiblast or ExE (Fig. 1a, b; Additional file 1: Figure S1 and S2). We additionally profiled these epigenetic marks in E6.5 hybrid embryos derived from B6 females with a double conditional knockout for Dnmt3a and Dnmt3b in oocytes, driven by Zp3-cre, crossed to CAST males (denoted matDKO/ CAST) (Additional file 1: Figure S1 and S2). Consequently, these matDKO/CAST embryos will inherit no maternal DNA methylation [9] but are able to sufficiently establish DNA methylation post-fertilization [23]. Allelic gene expression was evaluated in E7.5 epiblast and ExE of all hybrid crosses (Fig. 1a, b; Additional file 1: Figure S3). Details of biological replicates and datasets generated for this study are summarized in Additional file 2: Table S1.
Imprinted H3K4me3 is associated with imprinted gene expression
To identify imprinted domains in E6.5 embryos, we called H3K4me3 peaks on autosomes in the epiblast (N = 33,329) and ExE (N = 40,468) of B6/CAST and CAST/B6 embryos. H3K4me3 peaks with a minimum of 20 strain-specific SNP-spanning reads in at least 1 replicate of epiblast (N = 15,407) and ExE (N = 15,976) were evaluated for allelic bias using EdgeR (p < 0.05, corrected for multiple comparisons). A consensus set of allelic H3K4me3 peaks was identified for epiblast (N = 329) and ExE (N = 913), as those that were significant in both B6/ CAST and CAST/B6 crosses ( Fig. 2a; Additional file 1: Figure S4). The vast majority of allelic H3K4me3 peaks demonstrated strain-specific inheritance patterns (92%), with the remaining 8% of peaks showing parent-of-origin (imprinted) inheritance. In total, we identified 69 imprinted H3K4me3 peaks in ExE and 29 in the epiblast ( Fig. 2a; Additional file 1: Figure S4; Additional file 2: Tables S2 and S3). The majority (72.4%) of imprinted H3K4me3 peaks were located at an annotated gene promoter(s), and the remaining were assigned to the nearest gene within 10 kb, where applicable (Additional file 2: Tables S2 and S3). When compared to a list of known (and putative) imprinted genes (Additional file 2: Table S4), known imprinted genes comprised 77.8% and 96.2% of genes associated with an imprinted H3K4me3 peak in ExE and epiblast, respectively.
Non-canonical vs. canonical imprinted gene regulation
To determine which imprinted loci are dependent on maternally inherited gDMRs, we evaluated allelic H3K4me3 in post-implantation matDKO/CAST embryos. Using the EdgeR statistical approach described for the reciprocal hybrids, we identified H3K4me3 peaks that lost allelic bias in the matDKO/CAST (canonical maternal imprints) and those that remained imprinted (non-canonical imprints and canonical paternal imprints) (Fig. 3a; Additional file 1: Figure S5). In epiblast, there were only 5 imprinted H3K4me3 peaks present in the matDKO/CAST (H19, IG-DMR, Meg3, Slc38a4, and Gab1); the former 3 are regulated by paternal gDMRs, thus leaving 2 that could be classified as non-canonical (Additional file 2: Table S3). In ExE, we identified 3 H3K4me3 peaks associated with known paternal gDMRs (H19, Igf2, and Meg3), 17 that we classified as non-canonical (including all 4 previously reported non-canonical imprinted genes [21]), with a remaining 49 canonical maternally regulated imprinted H3K4me3 peaks that were lost in matDKO/CAST (Fig. 3a; Additional file 2: Table S2). These data support previous reports that non-canonical imprinting is largely restricted to the extra-embryonic lineage [21].

Fig. 1 Experimental design and data evaluation. a Schematic of experimental design demonstrating the collection of reciprocal hybrid post-implantation embryos for ultra low-input ChIP-seq, bisulphite-seq, and RNA-seq. Two replicates of H3K4me3, H3K27me3, and H3K36me3 ChIP-seq were each done using a pool of either E6.5 epiblasts (N = 4) or ExE (N = 8), approximating an input of ~2500 cells. Two 10% inputs were taken from each pool of embryos, one for a ChIP-seq input control and the other for low-coverage bisulphite-seq. RNA-seq was done on matched single E7.5 epiblast (N = 3) and ExE (N = 3). b Screenshot of E7.5 gene expression; E6.5 H3K4me3, H3K36me3, and H3K27me3; and E6.5 DNA methylation for B6/CAST epiblast and ExE. H3K4me3 is enriched at gene promoters, H3K36me3 along gene bodies of expressed genes, and H3K27me3 at transcriptionally silent promoters. The epiblast is highly methylated with exception of promoters, while ExE shows the expected lower global levels of DNA methylation. The box highlights the Sfmbt2 gene, which shows tissue-specific expression in ExE. ChIP-seq enrichment (RPKM) is shown for 1-kb running windows, with a 100-bp step (scales in square brackets), while gene expression and DNA methylation are shown using 2-kb running windows, with a 500-bp step.

Two non-canonical imprinted H3K4me3 peaks identified in ExE were on the maternal alleles (Pde10a and Cd81) and were localized at the large Igf2r/Airn and Kcnq1/Kcnq1ot1 imprinted clusters, and thus have distinct regulatory mechanisms to non-canonical H3K4me3 peaks on paternal alleles [15]. Thus, all subsequent analyses have been done on the 15 non-canonical imprinted paternal H3K4me3 peaks identified in ExE.
Non-canonical imprinted H3K4me3 peaks localize to endogenous retroviral LTRs
We evaluated whether canonical and non-canonical imprints in ExE are enriched for similar genomic features. While canonical imprinted H3K4me3 peaks were strongly enriched for CGIs (88%), non-canonical imprinted H3K4me3 peaks were enriched for regulatory sequences of repetitive elements, the most significant of which were the long terminal repeats (LTRs) of endogenous retroviruses (ERVs) (93%) (Fig. 3b), specifically endogenous retrovirus-K (ERVK) elements (Additional file 1: Figure S5).
Genomic and epigenetic features associated with non-canonically imprinted ERVK LTRs
We then sought to determine (1) whether sequence or genomic features underlie the tissue specificity of extra-embryonic ERVK LTR promoters and (2) why a subset is non-canonically imprinted. To evaluate these questions, we identified all ERVK LTRs that fell within ExE H3K4me3 peaks that were active promoters in extra-embryonic tissues (N = 40), which included the 8 non-canonical imprinted ERVK LTRs and 32 ERVK LTRs without imprinted expression (Additional file 2: Table S6). Using these 40 extra-embryonic ERVK LTR promoters, we assessed the sequence composition, sequence motifs, proximity to genes and promoters, ERVK LTR classes, and LTR length.

Fig. 2 a Peaks with allelically biased H3K4me3 were identified using EdgeR statistic (p < 0.05, corrected for multiple comparisons). Significant peaks were then classified into strain-specific allelic H3K4me3 if their allelic enrichment switched in the reciprocal cross, denoted as B6-specific (green) and CAST-specific (turquoise). Significant peaks were identified as imprinted if the allelic enrichment was consistent between reciprocal crosses, denoted as paternal (blue) or maternal (red). Enrichment is quantitated as read count normalized to library size, correcting for peak length. b Heatmap showing allelic bias (log 2 (pat/mat)) for E6.5 ExE H3K4me3 at H3K4me3 peaks identified in E6.5 ExE. Allelic bias (log 2 (pat/mat)) for E6.5 ExE H3K36me3, E7.5 ExE gene expression, and E12.5 placenta (P) gene expression is shown for associated nearby genes (Additional file 2: Table S2). Reciprocal hybrids are denoted as B/C (B6/CAST), C/B (CAST/B6), F/C (FvB/CAST), and C/F (CAST/FvB). White boxes indicate where there was insufficient data (ChIP-seq < 20 SNP-spanning reads in all replicates, RNA-seq < 5 SNP-spanning reads in all replicates). ChIP-seq data was quantitated as in a, RNA-seq data was quantitated as read count over exons. H3K4me3 peaks were excluded if there was no gene within 10 kb or the associated gene was uninformative in all datasets. H3K4me3 peaks overlapping more than one gene promoter are duplicated in the H3K4me3 column. Novel imprinted genes are marked with an asterisk. c Screenshot of allelic enrichment for H3K4me3 and H3K36me3 in E6.5 ExE and gene expression in E7.5 ExE for B6/CAST and CAST/B6 at the known imprinted gene Peg3. Box indicates the location of the maternal gDMR. ChIP-seq data is quantitated using enrichment normalized RPKM for autosomal 1-kb running windows with a 100-bp step (scales in square brackets); paternal (blue) and maternal (red) enrichments are shown on mirrored axes. Gene expression is quantitated as log 2 (RPKM) for 500-bp running windows with a 50-bp step.
In contrast to ERVK LTRs genome-wide, we found that extra-embryonic ERVK LTR promoters had relatively high CpG content (Fig. 4a) and were more likely to be in close proximity to, and on the same strand as, an annotated transcription start site (TSS) (Fig. 4b). Similar to the majority of ERVK LTRs in the genome, extra-embryonic ERVK LTR promoters were mostly solo LTR elements (417 ± 19 bp) (Additional file 1: Figure S6), which had lost their associated retroviral genes [24]. Solo LTRs that act as enhancers in extra-embryonic tissues have been reported to be enriched for the transcription factor motifs ELF5, EOMES, and CDX2 [25]; however, we did not identify motifs that were enriched among extra-embryonic ERVK LTR promoters using an unbiased approach. Furthermore, we did not find significant enrichment specifically for ELF5, EOMES, or CDX2 motifs.
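As a purely illustrative sketch (not the study's pipeline), the following Python snippet computes the two features discussed here for a hypothetical LTR: a CpG observed/expected ratio for its sequence, and the distance and strand agreement to the nearest annotated TSS. The sequence, coordinates, and TSS list are invented for the example.

```python
def cpg_obs_exp(seq):
    """CpG observed/expected ratio and GC fraction of a DNA sequence."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")
    expected = (c * g) / n if n else 0.0
    return (cpg / expected if expected else 0.0), (c + g) / n if n else 0.0

def nearest_tss(ltr_start, ltr_end, ltr_strand, tss_list):
    """Distance and strand agreement to the nearest TSS.
    tss_list: iterable of (position, strand) tuples on the same chromosome (hypothetical annotation)."""
    best = min(tss_list, key=lambda t: min(abs(t[0] - ltr_start), abs(t[0] - ltr_end)))
    dist = min(abs(best[0] - ltr_start), abs(best[0] - ltr_end))
    return dist, best[1] == ltr_strand

# Toy usage with made-up coordinates and sequence
ratio, gc = cpg_obs_exp("ACGCGTACGGCGCGATCGCGTACG")
dist, same_strand = nearest_tss(10_000, 10_417, "+", [(9_200, "-"), (10_900, "+")])
print(f"CpG obs/exp={ratio:.2f}, GC={gc:.2f}, nearest TSS {dist} bp away, same strand: {same_strand}")
```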
As non-canonical imprinting has been associated with maternal H3K27me3 inherited from the oocyte [21], we evaluated whether epigenetic marks (H3K4me3, H3K27me3, and DNA methylation) in the maternal oocyte were associated with the transcriptional status of ERVK LTR promoters in extra-embryonic tissues. Non-canonical imprinted ERVK LTR promoters were indeed significantly associated with oocyte H3K27me3 (p < 0.001) (Fig. 4c). Remarkably, H3K4me3 in the oocyte significantly differentiated ERVK LTRs that were transcriptionally active in extra-embryonic tissues from those that were inactive (p < 0.001) (Fig. 4c).
Fig. 3 Non-canonical imprinted H3K4me3 in ExE demarcates imprinted ERVK LTR elements with extra-embryonic-specific imprinted expression. a Allelic ratio for H3K4me3 at canonical maternally regulated imprinted H3K4me3 peaks (N = 49), canonical paternally regulated imprinted H3K4me3 peaks (N = 3), and non-canonical imprinted H3K4me3 peaks (N = 17) in B6/CAST, CAST/B6, and matDKO/CAST E6.5 ExE. Informative H3K4me3 peaks were quantitated using read counts corrected for library size, and relative allelic ratios were calculated (allelic ratio = mat/(mat + pat)). b The percentage of non-canonical imprinted H3K4me3 peaks with paternal allelic bias (N = 15) and canonical imprinted H3K4me3 peaks (N = 52) that were overlapping each category of genomic feature, including CpG islands (CGIs) and classes of repetitive elements. Each pair-wise comparison was done using chi-square statistic, with a significance threshold adjusted for multiple comparisons using Bonferroni correction. c Allelic expression of transcribed ERVK LTRs within a non-canonical imprinted paternal H3K4me3 peak (N = 8, Additional file 2: Table S5) is shown in extra-embryonic tissues at E12.5 (placenta and visceral endoderm (VE)). Reciprocal hybrids are denoted as F/C (FvB/CAST) and C/F (CAST/FvB). White boxes indicate where there was insufficient data (< 5 SNP-spanning reads in all replicates). d Heatmap showing expression levels across extra-embryonic and embryonic tissues of transcriptionally active ERVK LTR elements within a non-canonical imprinted paternal H3K4me3 peak (N = 8). The nearest gene is denoted in brackets next to the ERVK LTR identifier.

Non-canonically imprinted ERVK LTR promoters mediate imprinting of nearby protein-coding genes

We found examples of non-canonically imprinted ERVK LTR promoters driving transcription of non-coding RNAs, but also mediating imprinting of protein-coding genes. One such example is the non-canonically imprinted ERVK LTR (RLTR15) located in intron 1 of the Gab1 gene. Gab1 shows imprinted paternal expression in E7.5 ExE; yet, the promoter of Gab1 has bi-allelic enrichment for H3K4me3 (Fig. 5a). Rather, the intronic RLTR15 is demarcated by imprinted paternal H3K4me3 (Fig. 5a) and is non-canonically imprinted (Fig. 5b) with enrichment for H3K27me3 in the oocyte (Fig. 5c). We find that RLTR15 acts as an alternative promoter for the Gab1 gene on the paternal allele specifically in the placenta (Fig. 5d, e), with intron-spanning reads demonstrating that the ERVK LTR is spliced onto exon 2 (Additional file 1: Figure S7).
Together, these analyses suggest that ERVK LTR elements can directly mediate imprinted gene expression in extra-embryonic lineages. To demonstrate this using a genetic approach, we designed CRISPR/Cas9 sgRNAs to excise RLTR15 in vivo (Additional file 1: Figure S7). We targeted B6/CAST hybrid zygotes which were implanted in foster mothers, and we subsequently collected E12.5 extra-embryonic tissues (placenta and yolk sac) and whole embryos (N = 42 embryos). We were able to obtain one embryo (F4E5) that was targeted on the paternal CAST allele, although genotyping revealed that the deletion was mosaic (Additional file 1: Figure S7). Nevertheless, allelic RNA-seq analysis of F4E5 compared with three controls (Additional file 1: Figure S7) demonstrated that Gab1 specifically showed a partial loss of imprinting in E12.5 placenta and yolk sac ( Fig. 5f; Additional file 1: Figure S7). Another intriguing example is imprinted gene Slc38a4. Slc38a4 has a maternal gDMR at its promoter [26] but paradoxically was recently reported to have non-canonical imprinted gene expression in extra-embryonic lineages [21]. Furthermore, we identified a non-canonical imprinted H3K4me3 peak overlying the Slc38a4 gDMR promoter (Additional file 2: Tables S2 and S3), raising questions as to whether the Slc38a4 promoter is regulated canonically or non-canonically. To investigate this further, we assessed allelic RNA-seq patterns in the epiblast and ExE from B6/ CAST, CAST/B6, and matDKO/CAST embryos in detail. B6/CAST and CAST/B6 epiblast and ExE showed the expected imprinted paternal expression (Additional file 1: Figure S8). In the matDKO/CAST epiblast, loss of the maternal DNA methylation at the gDMR resulted in bi-allelic expression (Additional file 1: Figure S8), consistent with canonical imprinting. However, in the matDKO/CAST ExE, while there was an increase in the expression of the maternal allele, there was still a twofold paternal bias in the expression (Additional file 1: Figure S8), an observation consistent with non-canonical imprinting.
In ExE, in addition to the non-canonical H3K4me3 peak at the annotated Slc38a4 promoter, there were four upstream non-canonical H3K4me3 peaks, all of which were located over ERV LTR element insertions (Additional file 1: Figure S8). In particular, one ERVK LTR ~75 kb upstream (MLTR31F_Mm) was highly expressed in ExE and showed non-canonical imprinted expression of a spliced transcript from the paternal allele (Additional file 1: Figure S8). However, we found no evidence that this upstream ERVK LTR was acting as an alternative promoter for Slc38a4, as there were no intron-spanning reads extending to the first or second exon of Slc38a4 in E7.5 ExE or E12.5 placenta. Together, these data suggest that the annotated Slc38a4 promoter is predominantly canonically imprinted by DNA methylation in embryonic lineages, while in extra-embryonic lineages, it appears that the non-canonically imprinted upstream ERVK LTRs may modulate the activity of the paternal allele of the Slc38a4 promoter, resulting in non-canonical imprinted gene expression.
We evaluated publically available gene expression [27], DNA methylation [28], and H3K4me3 and H3K27me3 histone modifications [22] in GV oocytes to determine whether the germ line pattern of maternal epigenetic modifications across the Slc38a4 locus is consistent with this finding. Indeed, the annotated promoter is fully methylated in GV oocytes, spanned by an oocyte-specific transcript emanating from multiple mammalian apparent LTR retrotransposon (MaLR) elements upstream (Additional file 1: Figure S8), as has been previously reported [27,29]. In contrast, the upstream non-canonical imprinted H3K4me3 peaks are enriched for H3K27me3 in GV oocytes (Additional file 1: Figure S8). Thus, it appears that independent ERV LTR insertions upstream of the Slc38a4 locus, one specifically active in oocytes and the other specifically active in extra-embryonic tissues, may have enabled genomic imprinting to have evolved twice at this locus, using both canonical and non-canonical mechanisms. While this finding needs to be confirmed genetically, it would represent, to our knowledge, the first such example of recurrent evolution of imprinting mechanisms reported to date.
Epigenetic regulation of non-canonical imprints in post-implantation embryos
Fig. 5 A non-canonically imprinted ERVK LTR drives imprinted expression of Gab1 in placenta. a Screenshot of allelic gene expression, H3K4me3, H3K36me3, and H3K27me3 in B6/CAST and CAST/B6 ExE. ChIP-seq data is quantitated using enrichment normalized RPKM for 1-kb running windows with a 100-bp step (scales in square brackets); paternal (blue) and maternal (red) enrichments are shown on mirrored axes. RNA-seq data is quantitated as RPKM for 1-kb running windows with a 100-bp step. The box denotes the location of the non-canonical imprinted H3K4me3 peak associated with the known non-canonical imprinted gene Gab1. b Screenshot of allelic gene expression, H3K4me3, H3K36me3, and H3K27me3 in matDKO/CAST ExE, quantitated as in a. c Screenshot of H3K4me3, H3K27me3, and DNA methylation in GV oocytes. One-kilobase running windows with a 100-bp step were used; ChIP-seq data was quantitated as RPKM (scales in square brackets). d Screenshot showing allelic gene expression in F/CAST and CAST/F E12.5 placenta across the Gab1 locus. The box depicts the non-canonical imprinted paternal H3K4me3 peak containing an imprinted transcriptionally active ERVK LTR element (RLTR15). RNA-seq data is quantitated as log 2 RPKM for 1000-bp running windows with a 100-bp step. e Read count for maternal (red) and paternal (blue) transcription is shown for the non-canonically imprinted RLTR15 and exon 1 of the Gab1 gene in E12.5 and E16.5 embryonic (Li, liver; He, heart; Br, brain) and extra-embryonic (Pl, placenta; VE, visceral endoderm) tissues. Only intron-spanning reads were used, and two-tailed t test was used to statistically compare the allelic expression (***p < 0.0005). f Barplot shows the allelic gene expression (allelic ratio = mat/(mat + pat)) for the Gab1 gene in B6/CAST E12.5 yolk sac, placenta, and whole embryos. F4E5 carried CRISPR-targeted deletion of non-canonically imprinted RLTR15 on the paternal allele and was compared to wild-type (WT) controls (N = 3). Two-tailed single sample t test was used to compare the F4E5 value to the WT mean (*p < 0.05). Error bars show standard deviation.

It has been shown that non-canonical imprinting in the early embryo is mediated by the inheritance of maternal H3K27me3 from the oocyte [21]. Therefore, we were surprised to find that non-canonical imprinted ERVK LTRs did not show enrichment for maternal H3K27me3 in E6.5 ExE (Figs. 5a and 6a). Although there was a subtle bias of H3K27me3 towards the maternal allele in ExE at non-canonical imprinted H3K4me3 peaks (p = 0.02), when we identified regions with imprinted H3K27me3 in ExE using the EdgeR statistical approach (Additional file 1: Figure S9), only one non-canonically imprinted H3K4me3 peak was associated with imprinted H3K27me3. Furthermore, we found that the vast majority of imprinted H3K27me3 in post-implantation ExE was localized to two large imprinting clusters (Kcnq1/Kcnq1ot1 and Igf2r/Airn) and entirely dependent on maternal gDMRs (Additional file 1: Figure S9 and S10).
To determine whether another repressive epigenetic mark replaced maternal H3K27me3 in the post-implantation embryo, we assessed the allelic DNA methylation. We generated high-coverage bisulphite sequencing data from ExE and epiblast of E7.5 reciprocal B6 x CAST hybrid embryos, enabling us to obtain sufficient read depth over ERVK LTRs. These data revealed that non-canonical imprinted ERVK LTR promoters become DMRs in ExE, with the maternal allele becoming methylated (Fig. 6a), whereas both alleles were methylated in the epiblast (Additional file 1: Figure S11). Using publicly available bisulphite and RNA sequencing data from C57BL/6 germ cells and early embryos [30], we demonstrate that these regions are definitively tissue-specific secondary imprints acquired in the post-implantation de novo DNA methylation wave specifically in ExE (Fig. 6b). The acquisition of bi-allelic DNA methylation in the post-implantation epiblast corresponds to the silencing of these ERVK LTR promoters (Fig. 6c).
Conversely, using publicly available ChIP-seq data [31], we observed the loss of maternal enrichment for H3K27me3 at non-canonical imprints during pre-implantation development (Fig. 6d; Additional file 1: Figure S11). Thus, non-canonical imprints do not maintain allelic H3K27me3 beyond early pre-implantation embryonic development, supporting the conclusion that the regulation of allele-specific expression of non-canonical imprinted genes is superseded by DNA methylation in post-implantation development.
Discussion
In this study, we evaluated allelic histone modifications, DNA methylation, and gene expression to investigate the epigenetic regulation of imprinted genes in the post-implantation embryonic and extra-embryonic lineages. We identified non-canonical imprints that are definitively independent of maternally inherited DNA methylation in ExE and found that these are located preferentially at active ERVK LTR insertions. Furthermore, we find that while non-canonical imprinted genes inherit allelic H3K27me3 from the oocyte, this allelic enrichment is transient and their epigenetic regulation is superseded by secondary imprinted DMRs specifically acquired in extra-embryonic lineages (Fig. 7). Our findings not only reveal that non-canonical imprinting can be mediated by ERVK LTR insertions, but also uncover the epigenetic mechanisms responsible for their persistence in extra-embryonic tissues.
The majority of the non-canonical imprinted H3K4me3 peaks we identified overlaid mono-allelically expressed ERVK LTR promoters, which mediated transcription of non-coding RNAs (e.g., Platr20, upstream of Slc38a4) or acted as alternative promoters to form chimeric mRNAs with nearby genes (e.g., Gab1, Smoc1). At the Gab1 locus, we demonstrated that spliced transcripts from the ERVK LTR are exclusively expressed from the paternal allele, while the upstream canonical exon 1 is transcribed bi-allelically. Furthermore, when we genetically targeted the Gab1 ERVK LTR promoter, despite only obtaining a mosaic deletion, we were able to disrupt the imprinted gene expression of Gab1. Together, these findings demonstrate that ERVK LTRs are a key genomic feature mediating non-canonical imprinting in murine extra-embryonic development.
ERVs have also been reported to function as enhancers specifically in the placenta through the acquisition of binding sites for developmental transcription factors, and it is thought that the uniquely hypomethylated state of the extra-embryonic tissues may enable transcriptional regulation by repetitive elements [25,32]. We did not find any evidence for shared transcription factor binding motifs among active extra-embryonic ERVK LTR promoters; however, we found that they were predominantly CpG-rich solo LTRs. There are several epigenetic modifiers containing CxxC domains that bind unmethylated CpGs, such as H3K4 methyltransferases [33]; thus, high CpG content may be key to their role in transcriptional regulation. Solo LTRs, in particular, may be co-opted as transcriptional regulators in development because their lack of viral genes may enable them to escape KRAB-ZFP silencing [24]. Notably, ERVs genome-wide are under-represented within 5 kb of promoters and specifically in the sense orientation [34]. We find that extra-embryonic ERVK LTR promoters are not only in close proximity to TSSs, but also predominantly in the sense orientation. Together, these findings suggest that promoter activity of ERVK LTRs in extra-embryonic tissues may be attributable to sequence composition and opportunistic positioning in the genome.
Notably, ERV-derived placental enhancers and oocyte promoters were found to be species-specific [25,29], and thus, we may expect that non-canonical imprinted regions, similarly co-opting ERV insertions, may also be species-specific. Indeed, preliminary studies in human embryos found five paternally expressed genes that may be regulated by maternal H3K27me3 [35], none of which have been reported to be imprinted in mice. These findings are reminiscent of placental-specific imprinted gDMRs in humans, which were also found to be species-specific [36,37]. Furthermore, a recent study demonstrated that species-specific gDMRs were a consequence of unique ERV insertions, which initiated gDMR-spanning transcription in oocytes [29]. Together, these findings support the idea that ERV activity in the placenta and in the oocyte may be a key driver in the recent evolution of non-canonical and canonical imprinting in extra-embryonic tissues.

Non-canonical imprinting is mediated by inheritance of H3K27me3 from the oocyte, and this mechanism was suggested to maintain a few non-canonical imprints into extra-embryonic development [21]. In our study, we identified all four previously reported non-canonical imprinted genes [21], in addition to several novel domains. Furthermore, we demonstrated conclusively that non-canonical imprinted genes are mono-allelically expressed independent of inherited maternal DNA methylation. However, we find that maternal enrichment for H3K27me3 does not persist beyond pre-implantation development at non-canonically imprinted loci but rather is replaced by maternal DNA methylation in post-implantation ExE. Conversely, non-canonical imprinted ERVK LTRs become silenced in embryonic lineages by the acquisition of bi-allelic DNA methylation. The mechanisms underlying the transition in repressive epigenetic states on the maternal allele are unclear, and why the allelic specificity would persist in ExE, but not in the epiblast, remains to be explored.
Despite the lack of allelic H3K27me3 at non-canonical imprinted loci in post-implantation ExE, we find a role for imprinted H3K27me3 at other genomic regions. We identified four silenced imprinted genes (Plagl1, Slc22a3, Pde10a, and Magel2) where the active allele was demarked by bivalent chromatin in E6.5 ExE, which subsequently resolved to imprinted gene expression in E12.5 placentae. Thus, bivalent chromatin may enable the temporal regulation of imprinted gene expression in extra-embryonic development, similar to that which has been observed in embryonic lineages [38,39]. We also find large allelic H3K27me3 domains at the Igf2r/Airn and Kcnq1/Kcnq1ot1 loci in extra-embryonic tissues in vivo and identify a number of novel imprinted genes distal of the Igf2r/Airn cluster. Furthermore, we show that the maternal gDMRs at these loci are required to prevent bi-allelic acquisition of H3K27me3. These findings support the observations from trophoblast stem cells in vitro that have shown lncRNAs regulated by the Igf2r/Airn and Kcnq1/Kcnq1ot1 maternal gDMRs mediate the recruitment of PRC2 and spreading of H3K27me3 in cis, in an X chromosome inactivation-like mechanism of silencing [40][41][42].
Conclusions
Our study of imprinted genes in in vivo post-implantation extra-embryonic development has provided novel insights into non-canonical imprinted gene regulation, which are otherwise masked in bulk genomic data from inbred strains and are difficult to assess in human populations due to the sparsity of genetic polymorphisms. We reveal that the majority of non-canonical imprints are localized to solo ERVK LTR repeats, which act as imprinted transcription initiation sites for non-coding RNAs and chimeric mRNAs in extra-embryonic tissues. Importantly, we find that the regulation of non-canonical imprinted regions transitions from inherited maternal H3K27me3 to secondary imprinted DMRs specifically in extra-embryonic lineages. These findings highlight the unique mechanisms regulating imprinted gene expression in the placenta and the potential importance of the still unexplored role of these non-canonical imprints in regulating placentation and fetal growth.
Sample collection
Reciprocal natural timed matings were set up between C57BL6/Babr and CAST/EiJ animals (denoted as B6/CAST and CAST/B6), and embryos were collected on embryonic days 6.5 (E6.5) and 7.5 (E7.5). Natural timed matings were set up between Dnmt3a floxed/floxed, Dnmt3b floxed/floxed, Zp3+ve B6/129 females (resulting in ablation of DNA methylation in the oocyte) [9] and CAST males (denoted as matDKO/CAST). The epiblast (Epi) and extra-embryonic ectoderm (ExE) for each embryo were manually separated. E6.5 epiblast (N = 4) and ExE (N = 8) samples were pooled (an estimated ~2500 cells), washed in PBS, and then flash-frozen in 10 μL of nuclear lysis buffer (Sigma-Aldrich). Single E7.5 epiblast and ExE samples were individually frozen in 10 μL of buffer RLT Plus (Qiagen).

In vivo CRISPR targeting

C57BL6/J females were superovulated and crossed with CAST/EiJ males. Zygotes were recovered the next day and electroporated with two sgRNAs for the Gab1 RLTR15 ERVK element (200 ng/μL each) (Additional file 1: Figure S7) and CAS9 protein (500 ng/μL). Embryos were then implanted into NMRI pseudo-pregnant females. The embryos were dissected on E12.5, and the following tissues were collected: (1) visceral yolk sac/amnion, (2) placental disc with as much decidua removed as possible, (3) embryo, and (4) tail clip for genotyping. Tissue samples were washed in cold PBS and flash-frozen in 50 μL of RLT+ buffer (Qiagen). Tissues were collected from a total of 42 E12.5 embryos from 6 females across 2 independent experiments.

Fig. 7 Summary of epigenetic regulation of non-canonical imprinting in embryonic development. Schematic diagram showing the allelic epigenetic regulation of a non-canonically imprinted gene by an ERVK LTR element (top) and the dynamic regulation of non-canonically imprinted ERVK LTRs across pre- and post-implantation development (bottom). In the pre-implantation embryo, inherited H3K27me3 from the oocyte silences the maternal allele. In the post-implantation embryo, maternal H3K27me3 transitions to imprinted maternal DNA methylation in extra-embryonic lineages, thereby retaining the imprinted paternal expression of the ERVK LTR. Alternatively, in the embryonic lineages, both the maternal and paternal alleles acquire DNA methylation, consequently silencing the ERVK LTR transcription. The maternal allele is shown in red, and paternal allele is shown in blue. In the allelic enrichment plot, the solid line is the level of H3K27me3, and the dashed line is the level of DNA methylation. Embryonic day (E) is shown on the x-axis for each respective stage of embryogenesis.
DNA from tail clippings was genotyped using MyTaq master mix (Bioline) with primers: F-AGCCCAATCTCACAACAGTT, R-CGGACCAGGTGAACATGTTG. Bands corresponding to the wild-type (847 bp) and knockout (320 bp) alleles were gel extracted and sent for Sanger sequencing to identify the targeted allele (Additional file 1: Figure S7). One effectively targeted sample (F4E5) and three wild-type controls (F4E1, F4E3, and F5E6) were selected for RNA sequencing.
Low-input mRNA sequencing library preparation
Stranded mRNA-seq libraries were generated for E7.5 embryos: B6/CAST Epi (N = 3) and ExE (N = 3), CAST/ B6 Epi (N = 3) and ExE (N = 3), and matDKO/CAST Epi (N = 3) and ExE (N = 3). Total RNA was extracted using a TRIzol extraction method, as previously described [27]. In brief, samples were homogenized in 400 μL of TRIzol (Invitrogen) and phase-separated by adding 80 μL of chloroform:isoamyl alcohol (Sigma-Aldrich), mixed, and centrifuged at 4°C for 15 min. The aqueous phase was transferred to a new tube, 1 μL GlycoBlue and 300 μL of ice-cold isopropanol were added and mixed. Samples were incubated for 10 min and then centrifuged for 10 min at 4°C. The pellet was washed once with 75% ethanol, air-dried, and then resuspended in 5 μL of RNase-free water. Twenty microliters of lysis/binding buffer was immediately added to each RNA sample, and oligo (dT) 25 capture of mRNA was done using Dynabeads mRNA DIRECT kit (Life Technologies). The protocol was implemented as per manufacturer's instructions including the additional steps for elimination of rRNA contamination. Maxymum Recovery tubes (Axygen) were used, and volumes were adapted for the low amount of starting material: 5 μL of Dynabeads Oligo (dT) 25 were used for each sample, mRNA capture was done in a total volume of 50 μL of lysis/binding buffer, washes were done using 100 μL Washing Buffer A or 50 μL of Washing Buffer B, and a final elution volume of 5 μL 10 mM Tris-HCl. The total volume of mRNA was then immediately advanced into the library preparation protocol, using the SMARTer Stranded RNA-seq kit (Clontech), which is optimized for as little as 100 pg of RNA. The protocol was completed as per the manufacturer's instructions, and 14 amplification cycles were used for all samples. The quantification of all libraries was done using the High DNA Sensitivity Bioanalyzer 2500 (Agilent) and Illumina library quantification kit (KAPA). Libraries were sequenced using 50 bp singleend on the Illumina HiSeq 2500 RapidRun, multiplexing 12 samples per lane. Libraries were evaluated for quality in SeqMonk using RNA-seq QC and duplication plots, resulting in 1 replicate of B6/CAST ExE being excluded due to high duplication.
Following alignments, all sequencing data was then split allele-specifically using SNPsplit (v0.3.3) [46]. In brief, sequencing reads were mapped to a Mus musculus (GRCm38)-derived genome, where SNPs between hybrid strains (C57BL6 and CAST/EiJ, or FvB and CAST/EiJ, or C57BL6 and PWK) had been masked by the ambiguity nucleobase N (N-masked genome). Aligned reads were then sorted into one of three BAM files: C57BL6 (genome 1), CAST/EiJ (genome 2), or unassigned. The females carrying conditional Dnmt3a/Dnmt3b double knockout (matDKO) were predominantly C57BL6; however, there was approximately 15% of 129 alleles remaining in the strain. Therefore, these data were run through a unique pipeline to allow for a complete mapping of the maternal allele. All datasets generated from matDKO/CAST embryos were first aligned to a B6/CAST N-masked genome, as above, but with the difference that all SNPs which were in common between 129 and CAST (~2 million) had been excluded. The data was then split against C57BL6 and CAST/EiJ, as above. FastQ reads of the unassigned fraction of reads were then recovered from the original FastQ files, and in the second step, these reads were then aligned to a 129S1/CAST genome (generated with SNPsplit_genome_preparation). Alignments were then SNPplit between 129 and CAST/EiJ, and the 129-specific reads were then combined with the C57BL6-specific reads (from step 1) to comprise a complete maternal allelic set. Raw sequencing reads and allelically mapped BAM files have been deposited into the Gene Expression Omnibus database (GSE124216).
ChIP-seq peak calling
Peak calling was done for B6/CAST and CAST/B6 epiblast and ExE H3K4me3, H3K27me3, and H3K36me3 using chromstaR, a multivariate peak-calling approach based on a multivariate hidden Markov model, using the default parameters [47].
Allelic histone enrichment and gene expression analyses
Read counts for maternal and paternal H3K4me3 or H3K27me3 were quantitated over B6/CAST and CAST/ B6 H3K4me3 peaks for either epiblast or ExE. H3K4me3 or H3K27me3 peaks were combined and de-duplicated between the B6/CAST and CAST/B6 epiblast or ExE, to generate a complete list of peaks from both hybrid crosses for each tissue. Peaks were then filtered for those with a minimum read count of 20 in at least 1 allelically mapped biological replicate. Peaks with allelically biased H3K4me3 or H3K27me3 were then identified using EdgeR statistic (p < 0.05, corrected for multiple comparisons). Significant peaks were then classified into strain-specific allelic H3K4me3 or H3K27me3 if their allelic enrichment switched in the reciprocal cross. Significant peaks were identified as imprinted if the allelic enrichment for a parental allele was consistent between reciprocal crosses.
Read counts for maternal and paternal H3K36me3 or RNA-seq were quantitated over autosomal genes. Genes were then filtered for those with a minimum read count of 20 in at least 1 allelically mapped biological replicate of B6/CAST and CAST/B6 H3K36me3 or a minimum read count of 5 in at least 1 allelically mapped biological replicate of B6/CAST and CAST/B6 (or F/CAST and CAST/F for E12.5 placenta) RNA-seq. Genes with allelically biased expression were then identified using EdgeR statistic (p < 0.05, corrected for multiple comparisons). Significant genes were filtered for those that were associated with an imprinted H3K4me3 peak.
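The significance testing itself was performed with EdgeR in R; purely to illustrate the downstream classification logic described here (and not the study's code), the minimal Python sketch below sorts a single peak into imprinted versus strain-specific categories from hypothetical maternal/paternal read counts in the two reciprocal crosses, taking the corrected p-values as given inputs.

```python
def classify_peak(b6cast, castb6, p_b6cast, p_castb6, alpha=0.05, min_reads=20):
    """Classify one peak from maternal/paternal read counts in the two reciprocal crosses.

    b6cast, castb6: (maternal_reads, paternal_reads) tuples for B6/CAST and CAST/B6.
    p_b6cast, p_castb6: multiple-testing-corrected p-values for allelic bias
    (in the study these come from EdgeR; here they are assumed inputs).
    """
    if max(sum(b6cast), sum(castb6)) < min_reads:
        return "insufficient data"
    if p_b6cast >= alpha or p_castb6 >= alpha:
        return "not significant in both crosses"
    mat_bias_1 = b6cast[0] > b6cast[1]   # maternal allele higher in B6/CAST?
    mat_bias_2 = castb6[0] > castb6[1]   # maternal allele higher in CAST/B6?
    if mat_bias_1 == mat_bias_2:
        # Parental bias is consistent between reciprocal crosses
        return "imprinted (maternal)" if mat_bias_1 else "imprinted (paternal)"
    # Bias follows the strain rather than the parent of origin
    return "strain-specific (B6)" if mat_bias_1 else "strain-specific (CAST)"

# Hypothetical examples
print(classify_peak((180, 20), (160, 15), 1e-6, 1e-5))  # consistent maternal bias -> imprinted (maternal)
print(classify_peak((180, 20), (20, 170), 1e-6, 1e-5))  # bias switches with strain -> strain-specific (B6)
```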
ChIP-seq quantitation
For the quantitative display of allelic ChIP-seq data for a single histone mark and allelic ratios, read counts per running window or peak (where applicable) were averaged between biological replicates. Read counts were normalized to library size, excluding X, Y, and mitochondrial chromosomes, using size-factor normalization in SeqMonk. However, for the comparison of allelic enrichment for H3K4me3 and H3K27me3 across preimplantation development (Additional file 1: Figure S11B and C), raw read counts were used. For this comparison, correcting for library size is not appropriate, as there is a known discrepant abundance of H3K27me3 between the maternal and paternal alleles [31]. When ChIP-seq data of multiple histone marks are displayed together in a screenshot, enrichment normalized reads per kilobase per million (RPKM) was used. Enrichment normalization performs an initial additive translation of the data based on a low data percentile (40th percentile) representing non-zero but unambiguously unenriched points, followed by a multiplicative expansion of the data to a second high percentile (99th percentile) representing the most highly enriched regions. Enrichment is therefore scaled between these two points, but following the relative enrichment levels seen in the untransformed data.
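A minimal sketch of this enrichment normalization idea is shown below; it is a re-implementation of the described two-step procedure, not SeqMonk's code, and the choice of mapping the high percentile to 1 is an assumption made only for illustration.

```python
import numpy as np

def enrichment_normalize(values, low_pct=40, high_pct=99):
    """Additive shift so the low percentile sits at zero, then multiplicative
    expansion so the high percentile sits at one, mirroring the described
    low-percentile / high-percentile enrichment normalization."""
    values = np.asarray(values, dtype=float)
    low = np.percentile(values, low_pct)          # non-zero but unenriched level
    shifted = values - low
    high = np.percentile(shifted, high_pct)       # highly enriched level
    return shifted / high if high > 0 else shifted

# Toy usage on simulated per-window read densities
windows = np.random.gamma(shape=1.5, scale=3.0, size=10_000)
norm = enrichment_normalize(windows)
print(round(float(norm.min()), 3), round(float(np.percentile(norm, 99)), 3))
```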
Transcript analysis of non-canonical H3K4me3 associated ERVK LTRs
Coordinates for repetitive elements for the mouse GRCm38 genome build were generated using Repeat-Masker. Active ERVK LTRs were identified as those that fell within an H3K4me3 peak in E6.5 ExE, with ≥ 5 reads on the same strand at the LTR repeat in at least 2 replicates of RNA-seq data from E7.5 ExE, E12.5 visceral endoderm [15], and/or E12.5 placenta [15] with at least 1 intron-spanning read indicative of spliced RNA. These were then filtered for those that were sites of transcription initiation with no apparent upstream intron-spanning reads (N = 40), of which 8 were within non-canonically imprinted paternal H3K4me3 peaks (Additional file 2: Table S5) and 32 were classified as extra-embryonic active ERVK LTRs (Additional file 2: Table S6). The presence and directionality of reads spanning annotated introns, potential introns between annotated and upstream novel exons, and between upstream novel exons were analyzed to determine those ERVK LTRs that were initiating mRNA chimeras or non-coding RNAs (Additional file 2: Tables S5 and S6).
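As an illustration of these filtering criteria only (not the actual analysis code), the short pandas sketch below applies the ≥5 same-strand reads in ≥2 replicates, ≥1 intron-spanning read, and no-upstream-spanning-reads rules to a hypothetical per-LTR summary table; all identifiers and counts are invented.

```python
import pandas as pd

# Hypothetical per-LTR summary: same-strand read counts in three RNA-seq
# replicates, plus intron-spanning read counts at and upstream of the LTR.
ltrs = pd.DataFrame({
    "ltr_id":            ["RLTR15_example", "MLTR31F_example", "ERVK_example"],
    "reads_rep1":        [12, 7, 2],
    "reads_rep2":        [9, 6, 1],
    "reads_rep3":        [0, 8, 4],
    "intron_spanning":   [14, 3, 0],
    "upstream_spanning": [0, 0, 5],
})

rep_cols = ["reads_rep1", "reads_rep2", "reads_rep3"]
active = ltrs[
    ((ltrs[rep_cols] >= 5).sum(axis=1) >= 2)   # >= 5 same-strand reads in >= 2 replicates
    & (ltrs["intron_spanning"] >= 1)           # evidence of spliced RNA
    & (ltrs["upstream_spanning"] == 0)         # transcription initiates at the LTR
]
print(active["ltr_id"].tolist())
```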
Sequence motif analysis of extra-embryonic ERVK LTR promoters

DREME (version 5.0.5) was used to identify transcription factor motifs among ERVK LTRs that are transcriptionally active in extra-embryonic tissues (N = 40) compared to a background set with comparable length and class. AME (version 5.0.5) was used to evaluate enrichment for transcription factor binding sites for CDX2, EOMES, and ELF5.
"Biology"
] |
Using Singular Value to Set Output Disturbance Limits to Feedback ILC Control
Iterative Learning Control (ILC) is an effective way of controlling errors that act directly on a repetitive system, and the stability of the system is the main design objective. The Small Gain Theorem is used in the design process of state feedback ILC, and combining a feedback controller with Iterative Learning Control helps produce a system with minimal error. The past error and current error feedback iterative control systems are studied with reference to the region of disturbance at the output, and this paper focuses on comparing that region for the two schemes. The past error feed forward and current error feedback systems are analyzed using singular values; hence, we use singular values to set an output disturbance limit for the past error and current error feedback ILC systems. We find that past error feed forward performs better than the current error feedback system, implying a greater region of disturbance suppression for past error feed forward.
Introduction
Learning control is an effective tool in the field of control. The combination of learning control with artificial intelligence has provided many new advances in robotics, manufacturing, and transportation [1] [2]. Among the variety of learning control techniques, Iterative Learning Control (ILC) executes the same control task repeatedly over a finite time duration [3]. A feedback control system is "a system whose output is controlled using its measurement as a feedback signal" [4]. The feedback signal is compared to the reference signal to generate an error signal, which is filtered by a controller and in turn produces the system's controlled input.
Feedback ILC is a well-known control method for enhancing the performance of a system operating in repetitive mode. The idea of ILC is to build up a sequence of control inputs u_k such that the error e_k decays over repeated iterations, or at least to an acceptable error tolerance [5]. A trial in ILC represents one complete execution of the task over a predefined time duration. A reference r(t) is assumed to be defined over 0 ≤ t ≤ T < ∞, in which T represents the length of the trial.
Once a trial is completed, the recorded data are available to compute the control input for the following trial. Robot manipulators perform repetitive pick-and-place operations of finite duration: a gantry robot collects an object from a fixed location and transfers it to another location within a predefined period [6], then returns to its starting position to perform the specified task again. The aim is to perform a predefined task repeatedly as many times as possible, without the need for resetting. Similar operations are performed in other applications such as microelectronics manufacturing, chemical batch processes, and petrochemical processes. The integer k ≥ 0 denotes the trial number and y_k(t) the output on trial k. Here we focus on single-input single-output systems, although the results generalize to multi-input multi-output (MIMO) systems. Furthermore, the error on trial k is e_k(t) = r(t) − y_k(t). Since previous-trial information is available, the current trial input can be formed using information that is non-causal in the trial time.
The early development of ILC was reported in [5], where a derivative-type ILC law of the form u_{k+1}(t) = u_k(t) + ϒ ė_k(t) is introduced, in which ϒ is the learning gain. Since then, extensive effort has gone into further developments of ILC; see [7] for example.
There are two broad approaches to ILC design: one is based on the system dynamics matrix, while the other develops the control input law without the dynamics matrix, such as phase-lead ILC [8]. The latter suffers from limited control performance, so the former is adopted here.
This paper completes the disturbance scenario for current error feedback state ILC, building on [9] and [10]. That introductory work considers a repeated disturbance acting on the system input, and the work in [11] presents a modified design that includes past and current error feedback ILC.
Uncertainty, like disturbance, is a common issue to investigate in control, and several reported works address these issues, such as [12] [13]. For example, [13] discusses load disturbance for state feedback ILC with past error feed forward, and [14] extended the ILC design for current error feedback and past error feed forward by adding external instability conditions on the load. This paper investigates the output disturbance condition for past and current error feedback. Several conditions are derived to ensure system stability and performance enhancement, extending the load disturbance treatment of [14].
We first revisit [11], and then new conditions are obtained for past error and current error feedback. Finally, a conclusion is given and possible future work is outlined.
Background
Initially, we recall the ILC design initiated in [11]. A key feature of ILC is that the design processes a single trial over a defined time, after which the system returns to its initial state before the next trial starts. The system dynamics over a single trial of pre-determined length can be written in discrete time as

x_k(i + 1) = A x_k(i) + B u_k(i), y_k(i) = C x_k(i), (1)

where 0 ≤ i ≤ N − 1 and N is the number of samples in a trial. Because of the resetting condition used, it is well appreciated to take the initial state as x_k(0) = 0. Equation (1) can be treated in two different settings: one, noted earlier, is the original continuous- or discrete-time formulation of ILC; the other, which forms the essential basis of the present ILC analysis because of the way it organizes the data, recasts the discrete description in trial-indexed (lifted) notation, see [7]. Hence, the modified formulation begins by collecting the input and output samples of trial k into super-vectors u_k and y_k. System stability is a keynote criterion for ILC design, so a trial-to-trial relationship is established to analyze the iterative process. The overall dynamics over one trial can then be expressed as

y_k = S u_k,

where S denotes a lower triangular Toeplitz matrix whose lower-triangular entries are the Markov parameters CB, CAB, CA^2 B, ..., CA^{N−1} B. To keep the vector form in discrete space, the reference r(t) is likewise collected into a super-vector over the trial. In measuring the inaccuracy, the ILC system uses the error as a forcing function that is added to the previous trial's input to produce the input for the next trial. The design then follows the reference trajectory increasingly precisely as the trial index moves towards infinity.
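To make the lifted formulation concrete, the following minimal Python sketch builds the lower triangular Toeplitz matrix of Markov parameters for an illustrative plant and runs a simple trial-to-trial update u_{k+1} = u_k + γ e_k. The plant matrices, reference, and learning gain are arbitrary demonstration choices, and this is not the state feedback design analyzed in the paper.

```python
import numpy as np

# Illustrative discrete-time plant x(i+1) = A x(i) + B u(i), y(i) = C x(i)
A = np.array([[0.3, 0.1], [0.0, 0.2]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])

N = 50  # samples per trial

def lifted_matrix(A, B, C, N):
    """Lower triangular Toeplitz matrix of Markov parameters CB, CAB, CA^2 B, ..."""
    markov = [(C @ np.linalg.matrix_power(A, j) @ B).item() for j in range(N)]
    S = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            S[i, j] = markov[i - j]
    return S

S = lifted_matrix(A, B, C, N)
r = np.sin(np.linspace(0, np.pi, N))   # illustrative reference over one trial
u = np.zeros(N)                        # zero initial input; x_k(0) = 0 each trial
gain = 0.5                             # illustrative learning gain

for k in range(30):
    y = S @ u                          # trial output from the lifted dynamics
    e = r - y                          # tracking error e_k = r - y_k
    u = u + gain * e                   # simple trial-to-trial ILC update
print(f"error norm after 30 trials: {np.linalg.norm(r - S @ u):.6f}")
```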
[15] illustrated a periodic signal of length N, described in discrete time through an N × N matrix F_w. The control issue in the ILC state feedback model can be elaborated further: we need to identify a robust controller K(z) such that the full closed-loop system is stable without additional conditions, and the tracking error e_k = r − y_k converges to zero along the trial domain, so that the two update rules are firmly stable.
To create an ILC controller, [11] extended the design reported in [9] [10] through several design schemes. The first uses state feedback.
The second is through output injection. The two design schemes have different stability conditions, depending on whether current error feedback or past error feed forward is used; the stability condition for the past error feed forward design is referred to below as (5), and that of the current error feedback model as (6), and both apply to the feed forward and feedback types of model. The design implemented in [11] did not consider the scenario in which a disturbance acts on the system load, while the design implemented in [13] included that scenario with past error feed forward only. Here we compare the current error feedback case with [13], which treats past error feed forward.
Output Disturbance Limitation in Singular Values for State-Feedback ILC
Initially, [13] considered the system illustrated in (1) together with its measured output, which is affected by a disturbance d_k and a noise term n_k. The necessary presumptions made about d_k and n_k are as follows: 1) they are zero-mean, weakly stationary random variables with bounded variance; 2) they are in phase with one another; and 3) they are collinear in-between trials.
Before analyzing output disturbance limitation conditions, we take into account the disturbance limitation needed to guarantee system performance. The stability condition (5) for the state feedback design with past error feed forward, together with the output (7), leads through singular values to the condition obtained in [14] for a highly confining region. This condition is the guide for forming the new output disturbance condition that assures system stability in the presence of disturbances acting on the system output. Consider the equation which led to (9); adding the measurement part to that equation, the resulting expression makes clear that the maximum singular value of the output disturbance acting on the current iteration has to be smaller than the maximum singular value of the sum of all previous iteration outputs, minus the sum of past iterative load changes, the first input feedback, and the minimum singular value of the concluding iterative control response. Hence, the region in which the output disturbance acting on any trial k is admissible is highly confined, with only a small deviation with respect to its maximum singular value.
For current-error feedback, the output disturbance restriction problem takes a similar form, starting from the stability condition (6). The load disturbance may occur at any point in trial k and, like the output disturbance, it is non-repetitive; its directional weighting must therefore be included so that its effect can be examined and contained. Under a singular-value analysis, the maximum singular value describing the disturbance must then be confined to a stable region. Carrying out this analysis leads to a concluding expression, which can be rewritten as (12). Expression (12) states that the largest singular value of the output disturbance should always exceed the sum of the singular values of the complete iterative output; that it should not involve the smallest singular value of the sum of the previous concluding signals, previous disturbances and the first output, nor the largest singular value of the previous disturbance and the first output; and that the overall expression should not exceed 1. As pointed out in [14], it is hard to reach the desired result here, since the past-error feedforward scheme already provides a large attainable region of disturbance rejection.
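Since the exact operators entering (11)-(12) are not reproduced here, the following sketch only illustrates, with placeholder matrices, how a singular-value condition of this kind can be checked numerically; the matrices A, B and D are stand-ins for the lifted quantities in the actual bound.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
# Placeholder lifted operators; in the paper these would be built from S, K and
# the accumulated previous-trial signals appearing in (11)-(12).
A = rng.standard_normal((N, N))          # term entering through its maximum singular value
B = rng.standard_normal((N, N))          # term entering through its minimum singular value
D = 0.05 * rng.standard_normal((N, N))   # output disturbance acting on the current trial

sigma_max = lambda M: np.linalg.svd(M, compute_uv=False)[0]
sigma_min = lambda M: np.linalg.svd(M, compute_uv=False)[-1]

bound = sigma_max(A) - sigma_min(B)      # illustrative form of the right-hand side
print("condition satisfied:", sigma_max(D) < bound)
```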
Expression (12) confirms that the advantage of the past-error feedforward scheme stems from its compact structure, feasible stability conditions, and output disturbance limitation conditions.
Conclusion
Past-error feedforward and current-error feedback ILC schemes have been revisited, and an output disturbance condition has been introduced for both cases. The results verify the superiority of past-error feedforward over current-error feedback, owing to the larger region of disturbance suppression it obtains: as shown, the past-error feedforward case provides greater disturbance suppression than current-error feedback. In future work, a simulation model will be designed, and the developed disturbance condition, uncertainty condition, and control-law development could be combined in a single report to present a complete design. | 2,753.8 | 2022-01-01T00:00:00.000 | [
"Mathematics"
] |
Autoimmune Diabetes Is Suppressed by Treatment with Recombinant Human Tissue Kallikrein-1
The kallikrein-kinin system (KKS) comprises a cascade of proteolytic enzymes and biogenic peptides that regulate several physiological processes. Over-expression of tissue kallikrein-1 and modulation of the KKS shows beneficial effects on insulin sensitivity and other parameters relevant to type 2 diabetes mellitus. However, much less is known about the role of kallikreins, in particular tissue kallikrein-1, in type 1 diabetes mellitus (T1D). We report that chronic administration of recombinant human tissue kallikrein-1 protein (DM199) to non-obese diabetic mice delayed the onset of T1D, attenuated the degree of insulitis, and improved pancreatic beta cell mass in a dose- and treatment frequency-dependent manner. Suppression of the autoimmune reaction against pancreatic beta cells was evidenced by a reduction in the relative numbers of infiltrating cytotoxic lymphocytes and an increase in the relative numbers of regulatory T cells in the pancreas and pancreatic lymph nodes. These effects may be due in part to a DM199 treatment-dependent increase in active TGF-beta1. Treatment with DM199 also resulted in elevated C-peptide levels, elevated glucagon like peptide-1 levels and a reduction in dipeptidyl peptidase-4 activity. Overall, the data suggest that DM199 may have a beneficial effect on T1D by attenuating the autoimmune reaction and improving beta cell health.
Introduction
The two major forms of diabetes mellitus, type 1 and type 2 (T1D and T2D respectively), affect more than 380 million people worldwide [1]. Approximately 5-10% of diabetic patients are afflicted with T1D [2]. Recent epidemiological studies indicate that the world-wide incidence of T1D has been increasing by 2-5% annually [3].
T1D is an autoimmune disease for which there are few therapeutic options other than life-long insulin injections [4]. Insulin administration, however, does not prevent T1D patients from eventually developing co-morbidities such as retinopathy, nephropathy and cardiovascular disease [2]. Novel therapies to address the underlying autoimmune cause of T1D are an urgent unmet need.
The serine protease tissue kallikrein-1 (KLK-1) and its cleavage products lys-bradykinin and bradykinin, are critical components of the kallikrein-kinin system (KKS) [5,6]. The KKS exerts physiological effects through binding of kinin peptides to the bradykinin 1 and bradykinin 2 receptors [7,8]. In addition to the blood pressure lowering effects to balance the renin-angiotensin system [9], the KKS is proposed to improve insulin sensitivity [10,11].
KKS activity has been associated with both positive [12] and negative effects [13] in certain autoimmune diseases. Although there is evidence that administration of porcine and rat tissue kallikrein-1 possess beneficial immune-modulating properties [14], no report to date has investigated the effect of administration of human tissue kallikrein-1 protein in an autoimmune T1D model.
The current exploratory study was designed to evaluate the effects of recombinant human KLK-1 (DM199) protein on the autoimmune progression of T1D in the non-obese diabetic (NOD) mouse. The NOD mouse has been used extensively in T1D studies with a specific focus on the role of T cell-mediated autoimmunity [15]. Two populations of T cells are particularly relevant to T1D pathogenesis. CD8 + cytotoxic T cells (CTLs) are primarily responsible for the killing of insulin-producing beta cells [16], whereas the CD4 + CD25 + Foxp3 + T regulatory cells (Tregs) suppress the activity of CTLs and attenuate the autoimmune attack [17]. Therapies designed to decrease CTL activity and/or increase activity of Tregs, may be effective in treating T1D [18,19]. Here, we show that chronic treatment of NOD mice with DM199 delays the onset of T1D and attenuates the autoimmune response as evidenced by modulation of the relative populations of CTLs and Tregs in the pancreas and pancreatic lymph nodes. The resulting protection of insulin-producing beta cells was associated with DM199 dose-specific improvements in whole-body glucose disposal, serum C-peptide and glucagon like peptide-1 (GLP-1) levels, and inhibition of serum dipeptidyl peptidase-4 (DPP-4) activity.
Reagents
All chemicals were purchased from Fisher Scientific (Suwanee, GA), unless stated otherwise.
Recombinant DM199 preparation
DM199 was produced from Chinese hamster ovary (CHO) cells expressing a gene encoding the full-length pre-pro-protein for human tissue kallikrein-1 (NP_002248.1). Following harvest and clarification, the supernatant containing secreted pro-KLK-1 was treated with recombinant trypsin (Roche Diagnostics, Germany) to generate active KLK-1. The active KLK-1 protein (DM199) was purified under aseptic conditions through multiple column chromatography and filtration steps as previously described [20]. Briefly, trypsin-digested KLK-1 was purified through an Octyl Sepharose 4 FF column, followed by affinity purification on a Benzamidine Sepharose FF column. Following buffer exchange, the eluate was purified through a DEAE Sepharose column (all columns were from GE Healthcare, Piscataway, NJ). The DEAE Sepharose eluate was concentrated, buffer exchanged into PBS (pH 7.4) and sterile-filtered. DM199 was >95% pure by densitometric analysis of SDS-polyacrylamide gels, with <0.5 EU/ml endotoxin levels by the LAL endotoxin test.
The specific activity of DM199 was measured in vitro by cleavage of the substrate D-Val-Leu-Arg-7 amido-4-trifluoromethylcoumarin (D-VLR-AFC, FW 597.6) (Sigma-Aldrich, St. Louis, MO or AnaSpec Inc., Freemont, CA). When D-VLR-AFC was hydrolyzed, the free AFC produced in the reaction was quantified by fluorometric detection (excitation 360 nm, emission 460 nm). DM199 activity was determined by comparing the relative activity of a DM199 sample to the porcine kininogenase standard acquired from the National Institute for Biological Standards and Control (NIBSC Product No. 78/543). For this standard, the assigned potency is 22.5 international units (IU) per 20 mg ampoule of porcine pancreatic kininogenase.
Animal Research & Ethics Statement
This study was carried out in strict accordance with the recommendations of the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All animal procedures were approved by the Sanford Research Institutional Animal Care and Use Committee (Protocol #22-12-10-13B). Animals were housed in the Sanford Research Laboratory Animal Facility with food and water provided ad libitum. All efforts were made to minimize animal suffering during the experiments. At the end of each experiment, animals were euthanized by CO2 asphyxiation.
Animals and experimental animal procedures
Female NOD/ShiLtJ (NOD) mice were purchased from the Jackson Laboratory (Bar Harbor, ME) at 4 weeks of age. Treatment was initiated in NOD mice at 6 weeks of age. Animals were screened for urinary glucose using Bayer Diastix strips (Tarrytown, NY) twice-weekly for the first six weeks of treatment; thereafter the frequency of testing was increased to daily. In animals with a positive urine glucose test, non-fasting blood glucose was measured using a Bayer Ascensia Elite 1 one-touch blood glucose monitor (Tarrytown, NY). The onset of diabetes was defined as non-fasting blood glucose concentrations greater than 250 mg/dL for two consecutive days.
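As a simple illustration of this onset criterion, the following sketch flags the first day of the first pair of consecutive readings above the threshold; the glucose values are made-up numbers, not study data.

```python
def diabetes_onset_day(glucose_mg_dl, threshold=250):
    """Return the index of the first day of two consecutive readings above threshold, or None."""
    for day in range(1, len(glucose_mg_dl)):
        if glucose_mg_dl[day - 1] > threshold and glucose_mg_dl[day] > threshold:
            return day - 1
    return None

# Hypothetical daily non-fasting blood glucose readings (mg/dL) for one mouse
readings = [140, 155, 180, 230, 262, 310, 345]
print(diabetes_onset_day(readings))   # -> 4 (first day of the qualifying pair)
```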
Effects of DM199 were studied in three consecutive experiments. In the first experiment (Cohort 1), 6 groups were treated as per Table 1 for up to 18 weeks, or until the onset of diabetes. In this experiment a subset of animals from groups 1-4 were sacrificed after 4 weeks of treatment and evaluated for the formation of insulitis. Newly diabetic animals were sacrificed immediately, while all remaining animals were sacrificed at the end of the 18-week treatment period. Upon sacrifice, spleens, pancreatic lymph nodes and pancreata were removed and processed for analyses.
In the second experiment (Cohort 2), 6 groups of animals were treated (Table 1) for up to 10 weeks, or until the onset of diabetes. All non-diabetic animals still remaining in the study after 9 weeks of treatment received i.p. injections of the synthetic nucleotide EdU for 5 consecutive days for post-mortem assessment of β cell proliferation. At the end of 10 weeks all animals were sacrificed, and spleens, pancreatic lymph nodes and pancreata were removed and processed for analyses.
In the third experiment (Cohort 3), 2 groups of animals were injected daily with either DM199 at 100 U/kg or PBS for 9 weeks, for measurement of active GLP-1 levels and DPP-4 activity in sera.

Safety markers - blood pressure and toxicology

Blood pressure measurements were performed on randomly selected mice from each group in Cohort 1 using a NIBP Multi Channel Blood Pressure System (IITC Life Science Inc., Woodland Hills, CA). Animals were acclimated to the restrainers for several days prior to data collection. Animals were placed into the analyzer restrainer immediately after treatment, tail cuffs with pulse wave detection sensors were placed on restrained animals, and the blood pressure parameters were measured.
All animals were monitored weekly for overall health, behavior, and bodyweights for potential toxicological effects. Post mortems on Cohort 1 were conducted for visual signs of organ pathology and confirmed by histopathology assessment of liver, heart, kidney and brains after 18 weeks of treatment.
Intraperitoneal glucose tolerance tests (IPGTT)
Subgroups of NOD mice from cohort 1 were fasted overnight, injected with 2.5 g/kg of glucose in PBS intraperitoneally (i.p.), and blood glucose was measured at 0, 20, 40, 60, 90, and 120 minutes after glucose challenge. Blood was collected via tail vein into heparinized capillary tubes and blood glucose measured with a Bayer Ascensia Elite 1 one-touch blood glucose monitor.
ELISA measurements of GLP-1, C-peptide, insulin, total and active TGF-β1 levels

During the in-life portion of each experiment, sera were collected from 100-200 μl of blood withdrawn from the retroorbital venous sinus of each non-diabetic animal. At the end of the in-life portion of each experiment, sera were collected from blood withdrawn from the hearts of animals immediately after sacrifice. For evaluation of active GLP-1, blood samples were collected in the presence of a DPP-4 inhibitor (Millipore, St. Charles, MO). All samples were stored at −80 °C and then analyzed by ELISA-based kits for the determination of mouse insulin (Millipore), active GLP-1 (ALPCO, Salem, NH), C-peptide (ALPCO), total TGF-β1 (R&D Systems, Minneapolis, MN), and active TGF-β1 (BioLegend), according to the manufacturer's instructions.
Assessment of DPP-4 activity in sera
Serum samples were collected from Cohort 3 animals after nine weeks of treatment and were stored at −80 °C. Serum DPP-4 activity was determined by monitoring the proteolysis kinetics of the synthetic substrate (H-Gly-Pro-AMC, R&D Systems) that generates a fluorogenic peptide. Serum samples (10 μl) were diluted in 25 mM Tris (pH 8.0) and the substrate (20 mM) was added to the reaction mixture to a final volume of 100 μl. Reactions were read at excitation and emission wavelengths of 380 nm and 460 nm, respectively, in kinetic mode for 60 minutes using a SpectraMax M5 microplate reader (Molecular Devices, Sunnyvale, CA).
Immunohistochemical studies
Pancreata were weighed, embedded into OCT compound (Fisher Scientific), flash-frozen, and cryo-sectioned into 5 μm-thick sections for placement on Superfrost glass slides. Slides were fixed with 100% acetone at −20 °C for 5 min, air-dried and stored at −80 °C until staining. For insulin immunostaining with Hematoxylin and Eosin (H&E) counter-stain, sectioned samples were first stained with rabbit polyclonal anti-insulin antibody (ProteinTech, Chicago, IL) and insulin was detected using the avidin-biotin-peroxidase immunohistochemistry kit LSAB 2 (DAKO, Carpinteria, CA) according to the manufacturer's protocol for the 3-amino-9-ethylcarbazole substrate-chromogen solution. Sections were then counter-stained with H&E and visualized under a Nikon 90i microscope. For immunofluorescent staining, tissue sections were incubated with 1% BSA in PBS overnight at 4 °C with 100 μl/section of guinea pig anti-mouse insulin (ab7842; Abcam, Cambridge, MA), followed by Alexa Fluor 594-labeled donkey anti-guinea pig Fc secondary antibody (Jackson Immunoresearch, West Grove, PA). Slides were mounted with DAPI-containing Vectashield (Vector Laboratories, Burlingame, CA) and examined under a Nikon 90i fluorescent microscope (Nikon Instruments, Melville, NY). Images were captured with a photometric Cool-SNAP HQ2 camera, using the Nis-Elements AR software. β cell mass was calculated by measuring the total insulin-positive stained area of each islet on the section, which was then divided by the total pancreas area of the section, and the resultant value multiplied by the total pancreas weight. Three slides per mouse on average were analyzed in a blinded fashion, with the sections being a minimum of 150 microns apart.

Figure 1. DM199 treatment reduces type 1 diabetes incidence. The fraction of non-diabetic NOD mice remaining over an 18-week treatment period (Cohort 1) is shown. Animals with non-fasting blood glucose >250 mg/dL for two consecutive days were considered diabetic and were removed from the study. None of the mice in any treatment group developed diabetes in the first 6 weeks of the study. Groups of NOD mice were injected with vehicle or DM199 i.p. as indicated in the legend; n, initial number in each group. *P<0.05 in comparison to control using a log-rank test. doi:10.1371/journal.pone.0107213.g001
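A minimal sketch of the β cell mass computation described in the staining procedure above is given below; the per-section areas and pancreas weight are hypothetical numbers used only to show the arithmetic.

```python
# Hypothetical per-section measurements for one mouse
insulin_positive_area_mm2 = [0.42, 0.35, 0.51]   # summed insulin-stained islet area per section
total_pancreas_area_mm2 = [95.0, 88.0, 102.0]    # total tissue area per section
pancreas_weight_mg = 180.0

# Beta cell mass = (insulin-positive area / total section area) * pancreas weight,
# averaged over the analyzed sections
fractions = [a / t for a, t in zip(insulin_positive_area_mm2, total_pancreas_area_mm2)]
beta_cell_mass_mg = sum(fractions) / len(fractions) * pancreas_weight_mg
print(f"beta cell mass ~ {beta_cell_mass_mg:.2f} mg")
```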
Analysis and grading of insulitis
The severity of insulitis was determined on the pancreatic sections fluorescently stained for insulin, as described above (with a minimum of 9 islets/mouse), and graded in a blinded fashion as: Grade 0 - intact islets, no infiltrating cells; Grade 1 - peri-insulitis, infiltrating leukocytes are located around the islet mass and do not penetrate the islet "capsule"; Grade 2 - insulitis, leukocytes clearly penetrating into islets, reducing β cell mass by about 25%; Grade 3 - heavy insulitis, infiltrating leukocytes reduce β cell mass by about 50%; Grade 4 - destructive "end-stage" insulitis, where virtually no β cells are left within the islet infiltrate. The overall insulitis score for each group was determined as the average insulitis grade, weighted by the percentage of islets with the observed grade.
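The group insulitis score described above is a weighted average over grades; a small sketch with made-up grade percentages follows.

```python
# Hypothetical distribution of islet grades (fractions summing to 1) for one treatment group
grade_fractions = {0: 0.30, 1: 0.25, 2: 0.20, 3: 0.15, 4: 0.10}

# Overall score = sum(grade * fraction of islets observed with that grade)
insulitis_score = sum(grade * frac for grade, frac in grade_fractions.items())
print(f"average insulitis score = {insulitis_score:.2f}")   # -> 1.50
```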
Assessment of beta cell proliferation
Pancreata from animals in treatment Cohort 2 that received EdU injections were assessed for β cell proliferation via detection of EdU incorporation into the DNA of replicating cells, as previously described [21]. Slides containing pancreatic sections were prepared as described under 'Immunohistochemical studies', washed in PBS, and incubated for 20 min at room temperature with staining mixture (10 mM Cy5-labeled azide, 1 mM CuSO4, 0.1 M ascorbic acid in 100 mM Tris-HCl pH 8.5). After staining, slides were rinsed in TBS with 0.5% Triton X-100, washed, then blocked with 1% BSA, and processed for insulin staining. Beta cell proliferation was determined by counting the EdU and insulin co-positive cells using Nikon NIS Elements and ImageJ. The proliferation rates were calculated on an average of three slides per mouse, with sections being a minimum of 150 microns apart. A minimum of fifty islets/mouse were analyzed and all slides were analyzed in a blinded fashion.
Analyses of islet infiltrate composition
Assessment of islet infiltrate composition was performed by immunofluorescent staining of pancreatic sections for CD4 and CD8 T cell surface markers (using an average 15 sections/mouse). Slides were first stained for CD4 using primary rat anti-mouse CD4 (BioLegend, Clone GK1.5) antibody followed by a secondary donkey anti-rat Cy5-labeled detection antibody (Jackson Immunoresearch). After extensive washing, the slides were incubated for 1 hr with FITC-labeled rat anti-mouse CD8 antibody (BioLegend, Clone 53-6.7), washed and mounted in DAPI-containing Vectashield. As absolute numbers of islet-infiltrating T cells can vary widely between individual islets depending on the size of infiltrate, attempts to normalize this variance involved calculating the CD4 + /CD8 + T cells ratios for each islet.
Immune cell isolations and FACS analyses
Pancreatic lymph nodes (PLNs) and spleens were mashed in PBS to release single cells into suspension. Single-cell suspensions of splenocytes were exposed to ACK lysis buffer (Fisher Scientific) to eliminate red blood cells, centrifuged and re-suspended in PBS. Cells in all samples were counted with a hemocytometer and diluted with FACS buffer (2% FBS; 0.1% sodium azide in PBS) to a concentration of 1 million cells per 100 μL. The Fc receptors were blocked for 20 minutes on ice with anti-CD16/CD32 block solution (eBioscience) prior to staining of cells with antibodies.
For Treg analyses, single cell suspensions were stained with anti-Foxp3, anti-CD4, anti-CD127, and anti-CD25 fluoro-labeled antibodies. Specifically, cells were incubated on ice with FITC-

For analysis of cellular DPP-4/CD26 levels, isolated spleens were repeatedly mashed in PBS to release cells. Single-cell suspensions of splenocytes were exposed to ACK lysis buffer (Fisher Scientific) to eliminate red blood cells, centrifuged and re-suspended.

Figure 2 (caption fragment): ... each treatment group. (B) Average insulitis scores ± SEM for the two study cohorts. (C) Histomorphometric quantification of pancreatic beta cell mass. *P<0.05 vs. control using a one-way ANOVA analysis with Tukey post-hoc multiple comparisons test. doi:10.1371/journal.pone.0107213.g002
Western Blot Analysis
Splenocytes (5×10^5) isolated from mouse spleens were incubated for 2 h at 37 °C in 100 μl RPMI serum-free medium supplemented with or without DM199 (0.1 U). Cells were then washed with PBS and lysed in SDS sample buffer. The cell lysates were subjected to electrophoresis through a 4-12% NuPAGE Bis-Tris polyacrylamide gel (Life Technologies, Grand Island, NY) followed by transfer onto a nitrocellulose membrane (Millipore). The membranes were then blocked in 1% Casein in PBST for 1 h followed by overnight incubation with rat monoclonal anti-DPP-4/CD26 antibody (1:500; R&D Systems). Membranes were washed and treated with HRP-conjugated anti-rat (1:5000, Jackson Immunoresearch) secondary antibody followed by chemiluminescent substrate (Super Signal West Dura Extended Duration Substrate, Thermo Scientific, Rockford, IL) for detection according to the manufacturer's instructions. The membrane was scanned on a UVP Biospectrum 500 imaging system (UVP, Upland, CA).
Analysis of Indoleamine 2,3-dioxygenase (IDO) mRNA levels
Dendritic cells were isolated from splenic cell suspension using CD11c Microbeads (Miltenyi Biotec Inc., San Diego, CA) following the manufacturer's suggested protocol. Total RNA was isolated directly from cells using the Direct-zol RNA MiniPrep kit (Zymo Research, Irvine, CA) as described in the manufacturer's protocol. Residual DNA was removed by on-column DNase treatment. RNA integrity was confirmed on a 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA). 250-1000 ng of RNA was reverse-transcribed using the GoScript Reverse Transcription System (Promega Corp., Madison, WI) as described in the manufacturer's protocol.
Statistical analysis
Group statistics were computed using GraphPad Prism (GraphPad Software, La Jolla, CA). Differences in means between groups were analyzed by one-way ANOVA followed by Tukey's post-hoc tests for significance. Diabetes incidence was plotted as Kaplan-Meier survival curves and significance determined by a log-rank test. Data are presented as mean ± SEM.
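A minimal sketch of the two statistical procedures named above (one-way ANOVA with Tukey post-hoc comparisons, and a log-rank test on incidence curves) is shown below using SciPy and the lifelines package; the data arrays are invented and the package choice is an assumption, not the authors' software (the study used GraphPad Prism).

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test   # assumed available

# Invented group measurements (e.g. beta cell mass) for control and two DM199 doses
control = np.array([1.1, 0.9, 1.0, 1.2, 0.8])
dose_40 = np.array([1.6, 1.4, 1.7, 1.5, 1.8])
dose_100 = np.array([1.9, 2.1, 1.8, 2.2, 2.0])

f, p = stats.f_oneway(control, dose_40, dose_100)     # one-way ANOVA
tukey = stats.tukey_hsd(control, dose_40, dose_100)   # Tukey post-hoc comparisons
print(f"ANOVA p = {p:.4f}")
print(tukey)

# Invented diabetes-free survival data: time to onset (weeks) and event flag (1 = diabetic)
t_ctrl = [12, 14, 15, 16, 18, 18]; e_ctrl = [1, 1, 1, 1, 0, 0]
t_trt = [16, 18, 18, 18, 18, 18]; e_trt = [1, 0, 0, 0, 0, 0]
lr = logrank_test(t_ctrl, t_trt, event_observed_A=e_ctrl, event_observed_B=e_trt)
print(f"log-rank p = {lr.p_value:.4f}")
```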
Treatment-dependent effects on type 1 diabetes incidence
The time course of spontaneous T1D development in NOD female mice is reasonably predictable, with the first animals presenting diabetes at approximately 12 weeks of age, and the incidence reaching 60 to 80% by 20 weeks of age [22][23][24]. To investigate the effects of DM199 on the development of T1D, six groups of six-week-old NOD female mice were exposed to a dose-ranging and treatment-frequency-varying DM199 treatment regimen (as per Table 1) for up to 18 weeks. Control animals developed diabetes at the expected rate with incidence reaching 75% by 24 weeks of age. In contrast, DM199 reduced the incidence of disease in a dose- and treatment-frequency-dependent manner. The 2 U/kg/day and 100 U/kg once-weekly doses had no significant effect on the incidence rate. The 40 and 100 U/kg/day and 100 U/kg tri-weekly doses significantly reduced the incidence of diabetes at the end of 18 weeks of treatment compared to the control group (Fig. 1).
DM199 treatment was generally well tolerated. No significant signs of altered general appearance or distress were noted in any mice at any point during any of the studies. Hypoglycemia was not detected in any animal. At 4 and 18 weeks of DM199 treatment, there was no change in either systolic or diastolic blood pressure compared to placebo-treated mice (Fig. S1A). A small decrease in net weight gain was observed in the animals treated with 100 U/kg/day of DM199 compared to control (Fig. S1B). Postmortem examination after 18 weeks of treatment showed no visual signs of internal organ pathology. Histopathological assessment did not reveal any changes in liver, heart, kidney and brain in any of the experimental groups (data not shown).

DM199 treatment effects on insulitis and β cell mass

Development of T1D in NOD mice is characterized by autoimmune destruction of the beta cells as evidenced by insulitis and loss of beta cell mass. To elucidate the anti-diabetic effect of DM199, pancreatic islet insulitis was evaluated in subgroups of mice after 4, 10 and 18 weeks of treatment. The degree of infiltration in pancreatic sections was graded, and the average insulitis scores for each group were calculated as previously described [24]. Compared to control, DM199 reduced the proportions of islets with destructive insulitis (grades 2, 3 and 4) for each treatment regimen studied at the indicated time points (Fig. 2A). At all time points, the highest percentage of islets with low insulitis grade (0 and 1) was observed with the 100 U/kg/day dose of DM199. After 4 and 10 weeks, mice dosed with 40 and 100 U/kg/day of DM199 showed a significantly lower average insulitis score compared to control mice (Fig. 2B). After 18 weeks, all DM199 treatment arms except 100 U/kg once-weekly showed significantly lower average insulitis scores compared to the control (Fig. 2B).
The effects of DM199 treatment on β cell mass and function were assessed on all non-diabetic animals at the end of 18 weeks (Cohort 1) and 10 weeks (Cohort 2) of treatment. Compared to controls, a significantly greater β cell mass was detected in mice receiving the 100 U/kg/day DM199 dose after 10 weeks of treatment, and in mice from all treatment regimens with the exception of the 2 U/kg/day DM199 dose group after 18 weeks of treatment (Fig. 2C).
Physiological response and disease markers
Having observed a preservation of beta cell mass, we studied other markers associated with glucose control. The physiological response to a glucose load was assessed via an intraperitoneal glucose tolerance test (IPGTT) at 14 and 18 weeks of treatment (Fig. 3A). At 14 weeks the 100 U/kg/day and 100 U/kg once-weekly doses significantly reduced the glucose area under the curve (AUC) compared to control. No dose showed any significant improvement at 18 weeks of age and the glucose AUC for all groups appeared to increase by 18 weeks.
Fasting C-peptide and insulin serum levels were measured as indicators of β cell function [25]. C-peptide levels remained unchanged in all treatment groups during the first 5-6 weeks of treatment. A dramatic increase in fasting C-peptide was observed between weeks 6 and 9 in the 40 and 100 U/kg/day DM199 dose groups. In contrast, only a gradual increase in fasting C-peptide was observed between weeks 6 and 18 for the 2 U/kg/day and 100 U/kg tri-weekly dose groups. At the end of week 18, C-peptide levels in animals treated with 40 and 100 U/kg/day of DM199 were approximately 12-fold higher compared to control animals (Fig. 3B). Fasting serum insulin levels, on the other hand, gradually declined in controls and all treatment groups over the 18-week study period with no difference observed across treatment groups compared to control (data not shown).
To assess if the observed improvement of beta cell mass (Fig. 2C) can be at least partially attributed to DM199-driven β cell proliferation, we examined incorporation of the synthetic EdU nucleotide into the DNA of insulin-positive islet cells in Cohort 2 experimental animals after 10 weeks of DM199 treatment (Fig. 3C). The averaged numbers of insulin and EdU double-positive cells were between 1.5 and 3 per islet in animals from all experimental groups except mice treated with DM199 at 100 U/kg daily, which displayed a mild but significant increase in β cell proliferation.
The incretin hormone GLP-1 and its inactivating enzyme DPP-4 are two well-documented targets for intervention in both T1D and T2D. Elevated GLP-1 levels have been reported to be associated with increased β cell mass in a variety of experimental systems [26,27], and GLP-1 is rapidly inactivated by DPP-4 [28]. We therefore measured serum levels of active GLP-1 and serum DPP-4 activity in mice treated with 100 U/kg/day of DM199. At 3, 6, and 9 weeks, active GLP-1 levels were significantly higher in the treated mice compared to controls (Fig. 3D), while serum DPP-4 activity was significantly reduced after 9 weeks in treated mice compared to controls (Fig. 3E). We hypothesized that elevated GLP-1 levels could be due in part to DM199-mediated proteolytic destabilization of DPP-4. We analyzed the in vitro effect of DM199 treatment of splenocytes expressing DPP-4 and showed that DM199 substantially reduced both cell-surface and total cell-associated levels of the membrane-tethered isoform of DPP-4, also known as CD26 (Fig. S2).
Modulation of CD8 + and Treg lymphocyte populations
The characteristic features of untreated diabetes progression in NOD mice are the increased infiltration of CD8 + cytotoxic T cells into the islets and pancreatic lymph nodes (PLNs) together with a concomitant decrease in the frequency and activity of CD4 + CD25 + Foxp3 + Tregs in the same compartments [17]. We investigated the effect of DM199 treatment for 10 and 18 weeks on lymphocyte populations in the islets, PLNs and spleens.
Immunohistochemical staining followed by morphometric analysis of pancreatic sections was used to evaluate treatment-dependent changes in CD4+ and CD8+ T cell infiltrate populations. Figure 4A shows representative images of stained pancreatic islets, indicating a treatment-dependent reduction in the relative number of CD8+ T cells and a concomitant increase in the relative number of CD4+ T cells. At 18 weeks, treatment with all doses of DM199 except for the 2 U/kg/day dose significantly increased the CD4+/CD8+ ratios compared to control (Fig. 4B). In Cohort 2 after 10 weeks, significant increases in the CD4+/CD8+ ratios were observed with the 40, 100 U/kg/day and 100 U/kg tri-weekly DM199 doses compared to control (Fig. 4B). Flow cytometric analysis revealed that the proportion of activated CD44+ CD8+ CTLs in PLNs was significantly decreased at 10 and 18 weeks with 40 and 100 U/kg/day of DM199 treatment compared to control. Significant decreases in the proportion of activated CTLs were also observed at 18 weeks with the 2 U/kg/day and 100 U/kg tri-weekly treatments (Fig. 4C). In parallel we observed significant increases in splenic populations of naïve CD44− CD8+ T cells (Fig. S3A) and of CD4+ CD25+ Foxp3+ Tregs (Fig. S3B) after 10 and 18 weeks of DM199 treatment. We also observed significantly elevated expression of indoleamine 2,3-dioxygenase (IDO) mRNA in splenic dendritic cells (Fig. S3C).
To assess whether DM199 administration affected the distribution and frequency of Tregs, we evaluated relative numbers of CD4+ CD25+ Foxp3+ CD127− Tregs in the PLNs. At 18 weeks, Treg levels were significantly higher in the 40 and 100 U/kg/day and 100 U/kg tri-weekly DM199 dose groups compared to the control (Fig. 5A). In Cohort 2, after 10 weeks, only treatment with 100 U/kg/day of DM199 showed a statistically significant increase in the relative PLN Treg population. Representative staining of an islet infiltrate showing CD4+ Foxp3+ T cells after 10 weeks of DM199 treatment is shown in Figure 5B. Analysis of the proportion of Tregs in the islet infiltrates revealed that treatment with the 100 U/kg/day dose of DM199 produced a significant increase in the proportion of Tregs amid the total islet-infiltrating CD4+ population (Fig. 5C).
Effect on TGF-β1 levels
Transgenic NOD mice with tissue-specific over-expression of TGF-β1 show a phenotype of increased Tregs and decreased activated CTLs within pancreatic infiltrates and in PLNs [29,30]. Treatment with DM199 resulted in a similar modulation of T cell subpopulations. Therefore, we examined levels of total TGF-β1 in the circulation of Cohort 1. No significant differences in total TGF-β1 levels in peripheral blood were detected in any group of experimental animals throughout DM199 treatment (Fig. 6A). In a subsequent study (Cohort 2) we examined the effect of DM199 on total and active TGF-β1 levels in sera. Animals treated with 100 U/kg/day of DM199 displayed significant elevation in active TGF-β1 at 4, 6 and 10 weeks compared to controls (Fig. 6B).
Discussion
Human tissue kallikrein-1 (KLK-1) is a ubiquitous serine protease that cleaves kininogen to generate the kinin peptide Lys-bradykinin, which is further processed to bradykinin. Kinin peptides exert effects on several physiological systems including blood pressure regulation, glucose homeostasis [31], cardiac function [32], renal function [33], as well as pain and inflammation [34]. Several reports however, indicate that KLK-1 can mediate these physiologic effects directly and independent of its kininogenase activity [35][36][37].
In autoimmune diseases and T1D, KLK-1 activity has been associated with both beneficial and detrimental effects. Nagy et al. reported that KLK-1 suppressed delayed type hypersensitivity in a dermatitis model [14], while Cassim et al. suggested that KLK-1 aggravated inflammatory rheumatoid arthritis [13]. In streptozotocin (STZ)-induced diabetic rats, adenoviral KLK-1 gene therapy improved blood glucose levels [38]. The current study examined the preventive effects of chronic administration of recombinant human tissue kallikrein-1 (DM199) in the development of T1D in the NOD mouse model. NOD mice spontaneously develop T1D through autoimmune destruction of β cells, analogous to the mechanism of pathogenesis in humans.
Overall, chronic administration of various doses of DM199 over an 18-week period was well tolerated in animals with no evidence of toxicity, histopathological changes or abnormal physiological responses such as hypotension or hypoglycemia. Tissue kallikrein is best characterized by its vasodilatory effects, and delivery of the kallikrein gene in rat models of hypertension elicits long-term reductions in blood pressure [39]. However, in normotensive, myocardial/reperfusion injury rat models, kallikrein gene delivery shows no effect on blood pressure, yet significantly ameliorates other markers of disease progression [40,41]. We speculate that the lack of a blood pressure-lowering effect (Fig. S1A) may be a model-dependent phenomenon and/or a consequence of the method of KLK-1 delivery. The NOD mice in our study are normotensive at 4 weeks and only mildly hypertensive at 18 weeks of DM199 treatment. To the best of our knowledge, only two studies have investigated blood pressure changes following administration of KLK-1 protein. Uehara et al. infused subdepressor doses of rat urinary KLK-1 for 4 weeks in hypertensive rats [42], and Bledsoe et al. infused high doses of KLK-1 for 2 weeks in rats with renal injury [43]. In both studies, KLK-1 ameliorated several markers of disease progression, yet no significant effect on blood pressure was observed.
A DM199 dose and treatment frequency-dependent delay in the onset of T1D associated with attenuation of the aggressiveness of insulitis was observed. The attenuation of insulitis corresponded with preservation of beta cell mass with no corresponding increase in insulin and no hypoglycemia. There was an approximate doubling in the IPGTT glucose AUC across all groups between weeks 14 and 18 of the study, which likely reflects a progression of T1D.
The rise in C-peptide levels over the 18-week period observed in all DM199 treatment groups is not clearly understood. C-peptide is a cleavage byproduct of pro-insulin and is secreted in a 1:1 ratio with mature insulin. C-peptide has a longer serum half-life than insulin and is therefore used as a surrogate marker of insulin levels. We observed a DM199 dose-dependent increase in fasting C-peptide throughout the 18-week treatment period without a concomitant increase in fasting insulin. Although C-peptide and insulin are secreted in a 1:1 ratio, the disparity between the fasting C-peptide and insulin levels is not surprising. It has been well documented that female NOD mice develop insulin autoantibodies starting at approximately 5 weeks of age [44]. We hypothesize that these auto-antibodies mask circulating insulin from full quantitative detection in our assay. The elevated C-peptide levels correlate with higher beta cell mass and proliferation, and correspond with the delay in the onset of T1D.
GLP-1 and DPP-4 are two well-documented targets for intervention in both T1D and T2D and are potential substrates for DM199 serine proteolytic activity. At the 9-week time point, control animals are beginning to show evidence of hyperglycemia. We noted a significant increase in active GLP-1 levels at 3, 6 and 9-weeks. Therefore we evaluated DPP-4 activity in the high daily dose group at the 9-week time point. We noted a significant decrease in the serum activity levels of DPP-4 that correlated with the observed GLP-1 activity. Further studies on the significance of these effects on DPP-4 and GLP-1 are warranted.
Progression of T1D is characterized by increased pancreatic islet and lymph node infiltration of CTLs and the loss of Treg cell activity [15,45]. CTLs are effector cells responsible for beta cell destruction leading to loss of glucose control and are an attractive target for treating T1D [18]. Inhibition of CTL migration has been reported to be effective in preventing and treating T1D in the NOD mouse model [46]. Thus we examined the effects of DM199 on CTLs in PLNs and islets.
During T1D pathogenesis, PLNs, and to a certain extent, the spleen, are the major sites of antigenic priming of diabetogenic CTLs. Lower numbers of activated CTLs found in PLNs of animals treated with medium and high daily doses of DM199 may correspond to the less aggressive diabetogenic process and may constitute part of the protective mechanism of DM199 treatment. This suggestion is supported by the demonstration that DM199 treatment reduced the aggressiveness of insulitis. The decrease in the CTLs in PLNs and islets, combined with the increase in naïve CD8 + T cells and Tregs in the spleens, suggest that DM199 may be affecting CTL priming, survival, and/or trafficking to the compartments important for the pathogenesis of T1D. More comprehensive studies addressing the direct and indirect effects of DM199 on CTL function are warranted, to better understand the basis for CTL redistribution.
Tregs represent another target for therapeutic intervention [19]. Normalization of the PLN-specific Treg population in diabetic NOD mice was shown to correlate with the recovery of euglycemia, and suggested a potential therapeutic approach for T1D [45]. We observed a marked increase in the relative percentage of Tregs amid the total islet-infiltrating CD4+ population and augmented Treg populations in the PLNs of DM199-treated mice. Interestingly, DM199-treated mice showed increased frequencies of Tregs in the spleen and elevated expression of indoleamine 2,3-dioxygenase (IDO) mRNA in splenic dendritic cells. IDO activity has been reported to be associated with maintenance of the Treg suppressor state [47], and has also been shown to prevent effective CTL priming [48]. However, the exact relationship between DM199 treatment, splenic Treg populations and IDO activity is at present unclear. Our results suggest that DM199 may be protective by redistributing Tregs between the primary and secondary lymphoid organs.
TGF-β1 is known to play an important role in T1D pathogenesis in NOD mice. Elevated TGF-β1 levels stimulate the expansion of Tregs and enhance their activity [29,30]. Biologically active TGF-β1 is generated by proteolytic cleavage of a latency-associated peptide from inactive, latent TGF-β1. A recent report suggests that the proteolytic activity of KLK-1 might contribute to activation of latent TGF-β1 in vitro [49]. We observed an increase in active TGF-β1 with the highest dose of DM199 (100 U/kg/day) over a 10-week treatment period, and concomitant increases in Treg populations in the spleen and PLNs. We postulate that enhanced levels of the active TGF-β1 cytokine and of CD4+ CD25+ Foxp3+ T suppressor cells could account for the protective effects of DM199 treatment on the progression of insulitis. Future studies should focus on measuring active TGF-β1 levels over a longer treatment period, more precise characterization of Treg sub-populations, and the proteolytic effects of DM199 on cell surface markers on T cell subpopulations within the spleen, islets and PLNs.
In summary, chronic treatment with DM199 in NOD mice was well tolerated, and delayed the onset and reduced the incidence of type 1 diabetes. DM199 appears to have a prophylactic effect on reducing the CTL cell number in the PLNs and islets, while also increasing the relative numbers of Tregs in the PLN and splenic compartments. While a detailed analysis of the changes in immune cell function is beyond the scope of this paper, more extensive in vitro and in vivo studies are needed to delineate the direct and/or indirect immunomodulatory effects of DM199. Analysis of key cytokines such as IL-2, IFNγ, IL-10, IL-17 and TGF-β1, and correlation of their levels to localization of various immune cell sub-populations, could provide a clearer picture of the mechanistic basis of the immunomodulatory effects of DM199. Future studies to address the translatability of DM199 as a potential T1D therapeutic could also include administration of DM199 at the onset or later stages of the disease. | 8,306.2 | 2014-09-26T00:00:00.000 | [
"Biology",
"Medicine"
] |
Unified properties of supermassive black hole winds in radio-quiet and radio-loud AGN
Powerful supermassive black hole (SMBH) winds in the form of ultra-fast outflows (UFOs) are detected in the X-ray spectra of several active galactic nuclei (AGN) seemingly independently of their radio classification between radio quiet (RQ) and radio loud (RL). In this work we explore the physical parameters of SMBH winds through a uniform analysis of a sample of X-ray bright RQ and RL AGN. We explored several correlations between different wind parameters and with respect to the AGN bolometric and Eddington luminosities. Our analysis shows that SMBH winds are not only a common trait of both AGN classes but also that they are most likely produced by the same physical mechanism. Consequently, we find that SMBH winds do not follow the radio-loudness dichotomy seen in jets. On average, a comparable amount of material accreted by the SMBH is ejected through such winds. The average wind power corresponds to about 3 per cent of the Eddington luminosity, confirming that they can drive AGN feedback. Moreover, the most energetic outflows are found in the most luminous sources. We find a possible positive correlation of the wind energetics, renormalized to the Eddington limit, with respect to λ Edd , consistent with the correlation found with bolometric luminosity. We also observe a possible positive correlation between the energetics of the outflow and the X-ray radio-loudness parameter. In general, these results suggest an underlying relation between the acceleration mechanisms of accretion disc winds and jets.
INTRODUCTION
Active Galactic Nuclei (AGNs) are extremely luminous astrophysical objects (i.e. L_bol up to 10^48 erg/s) located at the centers of some galaxies, powered by the accretion of matter onto supermassive black holes (SMBH, i.e. M_BH > 10^6 M_⊙). The term AGN comprises a wide range of objects that were historically separated by their observational features regarding a specific electromagnetic wavelength (see review by Padovani et al. (2017)). As more AGNs with intermediate characteristics are still being discovered, these classes are constantly being revised. Currently, this "zoology" is partly explained in the framework of the Unified Model (Antonucci (1993), Urry & Padovani (1995)), where the accretion rate and the orientation of the disc with respect to the line of sight are the variables that lead to different spectral features (Netzer (2015) and references therein). A still poorly understood aspect is their duality in the radio band. AGNs are classified as Radio-Loud (RL) when the ratio of radio to optical emission (R), defined as the flux density at 5 GHz over the one at 2500 Å, is R ≥ 10, whereas they are Radio-Quiet (RQ) when 0.1 ≤ R ≤ 1 (Kellermann et al. (1989)). In RL AGNs the bulk of the emission is due to synchrotron radiation produced by a collimated relativistic jet, and present evidence suggests that its physical origin lies in the proximity of the central SMBH (see Blandford et al. (2019)). Furthermore, it appears that the BH spin and the kinetic power of the jet are closely linked (Chen et al. (2021)). On the other hand, RQ AGN spectra are dominated by the accretion disc emission. However, there is increasing evidence of a gradual distribution of AGN radio power, instead of a sharp division between the two classes. In terms of their overall prevalence, RL AGNs are less common than RQ, accounting for approximately 10–15% of the AGN population. This distribution seems to be related to the SMBH mass function (Graham et al. (2007)), as a significant correlation is observed between the radio luminosity (L_R) and M_BH (L_R ∝ M_BH^2.5; Franceschini et al. (1998), McLure & Jarvis (2004), Best et al. (2005)), suggesting that strong radio emission is connected to AGNs with a greater M_BH. In support of these studies, Laor (2000) found that nearly all PG AGNs with M_BH ≥ 10^9 M_⊙ are RL, while those with M_BH ≤ 3 × 10^8 M_⊙ are RQ.
Following the first detections of fast ionized outflows (Chartas et al. (2002), Pounds et al. (2003) and Reeves et al. (2003)), a growing amount of work has gone into looking for AGN outflows in various interstellar medium (ISM) phases in the last decade. At this time, winds are indeed detected at various distances from the AGN innermost regions and with distinct ionization states: (i) at sub-pc scales through the detection of blueshifted highly ionized Fe K-shell transitions with velocities of ~0.1 c or even higher, i.e. Ultra-Fast Outflows (UFOs) (see King & Pounds (2015), Tombesi et al. (2010)); (ii) at pc scales via warm absorbers (WA) and broad absorption lines (BAL) (King & Pounds (2015), Tombesi et al. (2013), Serafinelli et al. (2019), He et al. (2019), Vietri et al. (2022)) and the blueshift of the C IV emission line (Gaskell (1982), Vietri et al. (2018)); (iii) at kpc scales through different gas phases such as ionized gas (i.e. [OIII] emission lines, Harrison et al. (2012), Cresci et al. (2015)) and molecular gas (e.g. Feruglio et al. (2010), Bischetti et al. (2019)). An exhaustive review of the topic can be found in Laha et al. (2021). Theoretical models of AGN-driven outflows (e.g. Faucher-Giguère & Quataert (2012)) suggest that, in an energy-conservation scenario, the kinetic energy of the nuclear fast outflow is transferred to the ISM and drives the kpc-scale flows. Outflows are considered one of the fundamental mechanisms by which the central SMBH interacts with its host galaxy, providing an efficient tool to regulate star formation and cooling flows, and to drive correlations between M_BH and the host properties (Voit et al. (2015); Gaspari et al. (2019)). Direct observations of the interaction between the ISM and AGN winds have been collected so far (Cano-Díaz et al. (2012) and references therein), revealing a spatial anti-correlation between outflows and actively star-forming regions. Moreover, Fiore et al. (2017) found strong correlations between L_bol and the cold and ionized wind mass outflow rate and kinetic power, showing that in galaxies hosting powerful AGN-driven winds, the depletion timescale and the molecular gas fraction are 3-10 times shorter and smaller than those of main-sequence galaxies with similar star-formation rate, stellar mass and redshift.
Systematic studies of the X-ray spectra for a sample of z ≤ 0.1 AGN revealed that about 40% of them have highly ionized UFOs with average velocities between 0.1 and 0.3 c (e.g. Tombesi et al. (2010), Gofford et al. (2013)). Radiation and magnetic driving, or more likely a combination of the two, are the mechanisms suggested to explain the acceleration and launching of the X-ray absorbing material to mildly relativistic velocities (e.g. Fukumura et al. (2010), Fukumura et al. (2014)). In the first scenario, the gas initially rises upward from the disc and is then pushed outward radially, accelerated by radiation pressure (i.e. Compton scattering and/or UV line absorption). However, this acceleration mode requires luminous AGN (L_bol > 0.1 L_Edd) and the upper limit on v_out is ~0.2 c. In magnetohydrodynamically driven models, an outflow can be released from the disc depending on the magnetic field configuration, i.e. when the angle between the poloidal component and the disc reaches a certain threshold. A fraction of the accreting plasma can then be launched with a quasi-Keplerian velocity profile and accelerated along the magnetic field lines (in the so-called 'magnetic tower' effect). This mode requires an accretion disc that is strongly magnetized, which is similar to the relativistic jet's initial condition. More generally, we refer to such outflows as 'micro winds', as they can also be generated without the presence of a magnetized disc, in different scenarios (e.g., via radiative feedback). These outflows and radiation pressure can prevent further accretion onto the SMBH, disrupting the inflowing material and leading to a self-regulating mechanism. Moreover, as the possible trigger of multi-scale outflows (Faucher-Giguère & Quataert (2012)), UFOs are the starting point to understand the feedback and feeding processes that characterize the AGN-host galaxy interaction (Gaspari et al. (2020)). A state-of-the-art theoretical scenario is that SMBH feeding and feedback are recursively shaped by Chaotic Cold Accretion (CCA; e.g., Gaspari et al. 2013; Gaspari & Sądowski 2017; Maccagni et al. 2021; McKinley et al. 2022; Olivares et al. 2022). In CCA, the cold gas condenses out of the galactic hot halo and recurrently rains onto the micro-scale AGN, triggering ultrafast outflows. Being self-similar, CCA is expected to occur regardless of radio activity or AGN classifications.
The main objectives of this work are to: (a) analyze the presence of UFOs in RL and RQ AGNs to better understand the underlying differences between the two classes in the same physical phenomenon, and (b) characterize the physical parameters of UFOs and their correlation with the AGN bolometric luminosity, L_bol. A systematic search for X-ray UFOs has currently been reported only for a sample of local (z ≤ 0.2) RL AGNs (Tombesi et al. (2014)). In order to perform a comparative statistical study with respect to RQ AGNs, here we need to consider only UFOs detected in local sources. For statistical studies of X-ray UFOs focused only on high-z RQ AGNs, we refer the reader to other recent works (e.g., Chartas et al. (2021a); Matzeu et al. (2023)). The model and assumptions made to infer the physical parameters of the disc winds are explained in Section 2, along with the description of the AGN sample and additional considerations on the possible sources of uncertainty. In Section 3 we provide a discussion of the results, and in Section 4 we summarize our conclusions. Throughout this paper, we assume a flat ΛCDM cosmology with (Ω_M, Ω_Λ) = (0.3, 0.7) and a Hubble constant of 70 km s⁻¹ Mpc⁻¹.
DATA ANALYSIS
In this work, the physical parameters of the UFOs detected in Tombesi et al. (2014), i.e. a sample of RL AGN observed with XMM-Newton and Suzaku, are thoroughly examined. Moreover, the same parameters are derived for the RQ AGN samples of Tombesi et al. (2011), Tombesi et al. (2012) and Gofford et al. (2015) using the same methods. These works in the literature are based on a large and well-selected sample of sources and allow us to homogeneously compare the disc wind properties of the two types of AGN and study possible correlations. Information on M_BH, the AGN unabsorbed luminosity in the X-ray band (L_x), defined over the range E = 2–10 keV, and the outflow equivalent hydrogen column density (N_H), ionization parameter (log ξ) and velocity (v_out) was used to obtain other crucial physical parameters, as explained below.
Outflow Parameters
The AGN ionizing luminosity L_ion, defined in the energy range E = 13.6 eV – 13.6 keV, was estimated for the RL sample of Tombesi et al. (2014) using L_x and assuming a typical power-law continuum emission with a photon index Γ = 1.8, which is a value commonly measured in the X-ray spectra of RL AGNs (see Nandra & Pounds (1994) and references thereafter). Instead, the L_ion of the AGNs in the RQ AGN samples of Tombesi et al. (2011), Tombesi et al. (2012) and Gofford et al. (2015) is directly gathered from the respective works. As X-ray outflows are relatively compact, an upper limit on the line of sight (LOS) projected location can be derived from the definition of the ionization parameter, ξ = L_ion / (n r²) (Tarter et al. (1969)), by requiring that the thickness of the absorber does not exceed its distance from the BH, N_H ≈ n Δr < n r (e.g. Crenshaw & Kraemer (2012)): r_max = L_ion / (ξ N_H) (1). We specify that in those cases where the tabulated wind properties have only lower limits, which is generally true for ξ and N_H, the conservative limit was adopted in the computation of the outflow parameters. An estimate of the lower limit for the UFO distance from the BH can be derived considering the escape radius corresponding to the outflow velocity. The escape velocity at a distance r is v_esc = sqrt(2 G M_BH / r). In the approximation v_out = v_esc, i.e. the outflow velocity measured along the LOS is equal to the escape velocity at that distance, then: r_min = 2 G M_BH / v_out² (2). These equations allow us to estimate the lower and upper limits of the outflow location, although with relatively large uncertainties. Better constraints are currently limited by the quality of the data and available models, and thus this is a conservative approach to estimate the wind location (e.g., Crenshaw & Kraemer (2012); Tombesi et al. (2012) and Tombesi et al. (2013); Gofford et al. (2015); Laha et al. (2021); Chartas et al. (2021b)).
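For concreteness, a minimal sketch of how these two limits can be evaluated is given below; the input values (L_ion, log ξ, N_H, v_out, M_BH) are illustrative placeholders, not values from the sample tables.

```python
G = 6.674e-8          # cm^3 g^-1 s^-2
c = 2.998e10          # cm s^-1
M_sun = 1.989e33      # g

# Illustrative wind/source parameters (placeholders, cgs units)
L_ion = 1e44          # erg s^-1
log_xi = 4.0          # ionization parameter, erg cm s^-1
N_H = 1e23            # cm^-2
v_out = 0.15 * c      # outflow velocity
M_BH = 1e8 * M_sun    # black hole mass

r_max = L_ion / (10**log_xi * N_H)   # Eq. (1): absorber thickness <= distance
r_min = 2 * G * M_BH / v_out**2      # Eq. (2): escape-radius argument

r_S = 2 * G * M_BH / c**2            # Schwarzschild radius
print(f"r_min = {r_min / r_S:.1f} r_S, r_max = {r_max / r_S:.1f} r_S")
```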
The mass outflow rate, Ṁ_out, is defined as the mass flux carried by the outflow and is a critical parameter to understand the energetics of these phenomena. Ṁ_out is related to the geometry of the system and, as such, requires modeling of the outflow structure that can only be approximated. The standard formula adopted in a thin spherically symmetric scenario is (Gofford et al. (2015)): Ṁ_out ≃ 4π r² C_g n m_p v_out (3), where the product C_g = Ω b ≤ 1 is called the "global filling factor" and accounts for both the fraction of the solid angle occupied by the outflow (Ω) and how much of the volume is filled by the gas (b); m_p is the proton mass; n and v_out are the outflow density and velocity, which can be assumed constant for a thin shell. Due to its dependence on gas ionization and clumpiness, the estimate of b is complex. At low-intermediate ionization states, the flow is likely to be clumpy or filamentary. This is supported by CCA models (e.g. Gaspari et al. (2013)), which produce an intrinsic chaotic clumpiness due to top-down multiphase condensation rain. In high ionization states, as in the case of UFOs, b can be considered to be largely smooth. We adopt b ≈ 1 and a mean value for Ω of Ω ≈ 0.4, as can be estimated from the fraction of Fe K outflows observed in sample studies (see Tombesi et al. (2010) and Gofford et al. (2013)). Thus, the global filling factor is C_g ≈ 40%.
Using the estimate of the lower and upper limits for the outflow distance, the maximum mass outflow rate is given by: Ṁ_out^max = 4π C_g m_p L_ion ξ⁻¹ v_out (4). Assuming that the outflow has reached a steady terminal velocity by the point at which it is observed, the instantaneous mechanical power can be estimated as L_out = (1/2) Ṁ_out v_out² (see also Gaspari & Sądowski (2017)). Therefore, substituting the previous equation, the corresponding relations can be simply obtained; in particular, L_out^max = 2π C_g m_p L_ion ξ⁻¹ v_out³. The rate at which the outflow transports momentum into the host galaxy environment is given by ṗ_out = Ṁ_out v_out. This physical quantity can also be regarded as the force that the outflow exerts on the interstellar medium, or the force required to accelerate the outflow to its current state: ṗ_out^max = 4π C_g m_p L_ion ξ⁻¹ v_out².
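A short continuation of the same sketch, evaluating the maximum mass outflow rate, kinetic power and momentum rate under the stated C_g ≈ 0.4 assumption; again the numerical inputs are only placeholders.

```python
import numpy as np

c = 2.998e10
m_p = 1.673e-24       # g
C_g = 0.4             # global filling factor (Omega ~ 0.4, b ~ 1)

L_ion, log_xi, v_out = 1e44, 4.0, 0.15 * c   # same placeholder values as above

Mdot_max = 4 * np.pi * C_g * m_p * L_ion * v_out / 10**log_xi   # Eq. (4), g s^-1
L_out_max = 0.5 * Mdot_max * v_out**2                           # kinetic power, erg s^-1
pdot_max = Mdot_max * v_out                                     # momentum rate, g cm s^-2

M_sun_per_yr = 1.989e33 / 3.156e7
print(f"Mdot_max = {Mdot_max / M_sun_per_yr:.2f} M_sun/yr, "
      f"L_out_max = {L_out_max:.2e} erg/s")
```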
Other parameters
In addition to the characterization of the outflow physical properties, other intrinsic features of the AGNs were also derived. The values of the bolometric luminosity for each source are taken from the literature or, where absent, determined as L_bol = k_bol × L_x, where k_bol is the bolometric correction factor. The latter is defined for each source as: k_bol = a [1 + (log(L_x/L_⊙)/b)^c], with a = 15.33 ± 0.06, b = 11.48 ± 0.01 and c = 16.20 ± 0.16 (Duras et al. (2020)). The reference list for each source is provided in Table 3. The momentum rate of the AGN radiation is then obtained as ṗ_rad = L_bol/c, and the mass accretion rate onto the SMBH as Ṁ_acc = L_bol/(η c²), where a standard accretion efficiency of η = 0.1 was considered. Lastly, the Eddington ratio is simply computed as λ = L_bol/L_Edd. In order to obtain further insights into the correlation between different wavelengths and the properties of disc winds, we also compute the X-ray radio loudness R_x of each source as the ratio between the radio luminosity at 5 GHz and L_x, i.e. R_x = L_R/L_x. We collected from the literature the available radio fluxes at 1.4 GHz and derived the flux at 5 GHz as S_5 = S_1.4 × 10^(0.7 log10(5/1.4)). The radio luminosities and the respective references for the radio flux are reported in Table 2 of Appendix B. The disc wind parameters are first derived in physical units and then normalized to the individual M_BH as explained below:
- The distances are converted into units of the Schwarzschild radius, r_S = 2 G M_BH/c²;
- The mechanical power is normalized to the Eddington luminosity L_Edd, where L_Edd ≈ 1.3 × 10^38 (M_BH/M_⊙) erg s⁻¹;
- The mass outflow rate is normalized to Ṁ_acc;
- The momentum rate of the outflow is normalized to ṗ_rad.
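The derived quantities and normalizations described above can be computed along the following lines; the functional form of k_bol follows Duras et al. (2020) as quoted in the text, and all numerical inputs are again placeholder values rather than sample data.

```python
import numpy as np

c = 2.998e10
eta = 0.1                      # accretion efficiency
L_sun = 3.828e33               # erg s^-1
M_sun = 1.989e33               # g

L_x = 5e43                     # 2-10 keV luminosity, erg s^-1 (placeholder)
L_R = 1e41                     # 5 GHz radio luminosity, erg s^-1 (placeholder)
M_BH = 1e8 * M_sun

a, b_, c_ = 15.33, 11.48, 16.20
k_bol = a * (1 + (np.log10(L_x / L_sun) / b_) ** c_)   # Duras et al. (2020) correction
L_bol = k_bol * L_x

L_Edd = 1.3e38 * (M_BH / M_sun)    # erg s^-1
lam_Edd = L_bol / L_Edd            # Eddington ratio
Mdot_acc = L_bol / (eta * c**2)    # mass accretion rate, g s^-1
pdot_rad = L_bol / c               # radiation momentum rate
R_x = L_R / L_x                    # X-ray radio loudness

# Mdot_out / Mdot_acc, L_out / L_Edd and pdot_out / pdot_rad would then give the
# scale-invariant wind parameters used in the correlations.
print(f"k_bol = {k_bol:.1f}, lambda_Edd = {lam_Edd:.3f}, R_x = {R_x:.1e}")
```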
Indeed, although the observed sizes of AGNs, which are largely set by M_BH, vary considerably, AGNs appear to share a similar physical structure, as suggested by the Unified Model (Urry & Padovani (1995)). In particular, a higher L_bol often implies a higher M_BH, and this extreme radiation field is responsible for both the sweeping of the accreting gas, due to its radiation pressure, and dust sublimation. Therefore, it is intuitive to assume that, depending on the bolometric luminosity, the same gaseous structure describing an AGN may be found at different distances from the center. Deriving scale-invariant physical parameters that can describe the system independently of L_bol is essential to understand the phenomenology of the outflows and their relation to the AGN types. In our study, we manage this task by performing the aforementioned normalization of the outflow parameters.
Description of the dataset
The initial sample consists of the RQ and RL AGNs described in Tombesi et al. (2012), Tombesi et al. (2014), and Gofford et al. (2015). Among these observations, for each source, only X-ray outflows with velocities higher than ≈1% of the speed of light were selected. This threshold allows us to identify the outflows most likely to be launched from the accretion disc (e.g., Tombesi et al. (2010) and Gaspari & Sądowski (2017)) and not from larger distances. In Appendix A, the estimated physical parameters of the UFOs are reported: Tables A.1 to A.5 list the values for the Tombesi et al. (2014) sample, Tables A.6 to A.10 those for Gofford et al. (2015), and Tables A.11 to A.15 those for Tombesi et al. (2012). Subsequently, the final values of the UFO parameters for each AGN were derived as an average between different observations of the same source. The results reported in Appendix B show the mean between the upper and lower limits of the UFO parameters, where the uncertainty is the range given by the previous formulae. The classification of sources into RQ and RL AGN was also highlighted. Table 1 shows the mean values of the UFO parameters for the whole RL and RQ AGN sample investigated in this work.
Possible uncertainties and biases
We underline again that, in the following analysis, the confidence range for each parameter is given as the interval between its maximum and minimum values. Moreover, the formulae shown in Sec. 2.1 consist mostly of physical limits and approximations. One of the main issues when considering uncertainties is that log ξ, N_H and v_out may be subject to possible systematics and sample-selection effects. For instance, different assumptions on the velocity broadening of the lines and on the gas elemental abundances can generate variations in the estimated N_H (see Tombesi et al. (2013), par. 4.3). Therefore, gathering results from analyses reported in many different papers is not recommended, as the diverse analysis methods and assumptions employed could increase the scatter in the obtained values. A physical factor to take into account is the possibility of intrinsic inhomogeneities and variability in the absorbers that are not described by the models mentioned above. Moreover, we also note that the radio flux at 1.4 GHz, used to calculate R_x, is subject to uncertainties which depend on the area adopted for the source measurement.
Additional parameters that could contribute to intrinsic uncertainties are the angle with respect to the line of sight and the opening angle, although statistical studies mitigate this problem, showing a UFO detection rate of ≈40% for both RQ and RL AGNs (e.g., Tombesi et al. (2010), Tombesi et al. (2014)). Finally, the reported RL AGNs with disc winds are classified as FRII radio sources, known to be X-ray brighter than FRIs (e.g., Hardcastle et al. (2009)). This is a selection effect due to the need for a higher signal-to-noise ratio to detect spectral features, which allows us to probe only the X-ray brighter population of RL AGNs and which in turn affects the R_x distribution of our sample. Moreover, FRII are in general more active sources with greater accretion rates than FRI; therefore, UFOs are expected to be observed more frequently in the former (Best & Heckman (2012)).
Given the previously explained premises, we are still confident that our results are indicative of the main physical conditions of the population of sources here investigated.
DISCUSSION
In this work, we present a systematic analysis of the physical parameters of UFOs detected in a large sample of AGN consisting of 27 sources, 7 of which are RL and 20 RQ. In the following, we develop our study assuming that the 7 RL AGNs that host UFOs are representative of the key characteristics of their class. The conclusions reached are collected in the tables in Appendix A and are discussed in the next paragraph. The analysis then concentrates on potential correlations between the average values of the outflow parameters given in the tables in Appendix B.
Common origin for disc winds in X-ray bright RQ and RL AGNs
We start with a comparison of the average parameters for every source in both AGN classes (see Tab. 1). The confidence range for each parameter is derived as the standard deviation of the measures.
A statistical comparison between the parameter distributions of the two classes is carried out using the two-sample Kolmogorov-Smirnov (KS) test (Smirnov (1939)). The resulting p-value tests the null hypothesis that the two samples are drawn from the same underlying distribution: it gives the probability, under the null hypothesis, of obtaining a test statistic at least as extreme as the one observed, so a small p-value indicates that the null hypothesis can be rejected in favor of the alternative with high confidence. A standard confidence level for rejecting the null hypothesis is 95%, i.e., a p-value less than 0.05. It should be noted that the p-value is suitable for confirming or rejecting the null hypothesis but offers no further information on the distributions in the latter case. The KS-test p-values for each outflow parameter are collected in Tab. 1. A partial overlap can be observed by comparing the normalized distributions of M_BH, L_bol and λ_Edd for the two AGN classes (see Fig. 1). In particular, RL sources are systematically clustered towards the higher end of the parameter space, in line with the known trends due to a slightly larger M_BH. The KS-test rejects the null hypothesis only for the L_bol distributions, with a p-value ≈ 0.01. Indeed, even if the means of the two distributions are consistent, i.e. log(L_bol)_RL ≈ 45.4 ± 0.5 and log(L_bol)_RQ ≈ 44.7 ± 0.8, the RL distribution is more skewed and asymmetric. Although for M_BH the null hypothesis cannot be rejected, the confidence level still lies at ≈84%. These results underscore that there are differences in the intrinsic properties of the two AGN populations. This point is crucial in the following analysis, where we compare the physical properties of the outflows. The distributions of the UFO parameters, i.e. v_out, log ξ and N_H, are shown in Fig. 2, while the normalized r_out, Ṁ_out and ṗ_out are shown in Fig. 3.
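For reference, a two-sample KS comparison of this kind can be reproduced with standard tools. The snippet below uses scipy on synthetic stand-in values (the real measurements are those collected in Appendix B, not reproduced here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic log(L_bol) values standing in for the RQ (20 sources) and
# RL (7 sources) sub-samples; means and scatters are only illustrative.
log_Lbol_rq = rng.normal(loc=44.7, scale=0.8, size=20)
log_Lbol_rl = rng.normal(loc=45.4, scale=0.5, size=7)

stat, p_value = stats.ks_2samp(log_Lbol_rq, log_Lbol_rl)
print(f"KS statistic = {stat:.2f}, p-value = {p_value:.3f}")

# Null hypothesis: both samples come from the same parent distribution.
if p_value < 0.05:
    print("Reject the null hypothesis at the 95% confidence level.")
else:
    print("Cannot reject the null hypothesis.")
```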
A clear superposition is observed in the parameter space of the two classes, and indeed the KS-test cannot reject the null hypothesis. Of particular interest is the pronounced agreement between the v_out distributions, for which the confidence level in rejecting the null hypothesis is only 1%. These results strongly suggest a common underlying origin for the disc winds, most likely caused by the same physical mechanism(s) in both X-ray bright RQ and RL AGNs. In particular, if we did not know the radio jet properties of these sources, the two classes would be virtually indistinguishable from the point of view of accretion and wind properties alone. This indicates that the accretion disc properties of luminous RQ and RL AGN are rather similar, both being radiatively efficient and capable of producing powerful winds, as expected in a self-similar CCA scenario (the multiphase feeding rain does not distinguish between jetted and non-jetted feedback; Gaspari & Sądowski 2017). The dichotomy in the radio jet properties, instead, may be dominated by other parameters, such as the SMBH spin parameter. It is interesting to note that the similarities, and the lack of a clear dichotomy, between RQ and RL AGN in our study are supported by the fact that the BLRGs in our sample are optically classified as high-excitation galaxies (HEG). This sub-class of RL AGN is most likely associated with the presence of cold accreting material, similarly to the Seyfert case for RQ AGN. Instead, radio galaxies classified as low-excitation galaxies (LEG) would be powered by hot gas and typically exhibit lower Eddington ratios than HEG. The high temperature of the accreting gas in LEG would account for the lack of "cold" structures, i.e. the molecular torus and broad line region, for the reduced radiative output of the accretion disc, and for the lower gas excitation (e.g., Best et al. (2005); Buttiglione et al. (2010)). This distinction is thus based more on the feeding of the SMBH than on the morphological classification derived from the large-scale radio jet and its feedback. Fig. 4 shows the correlation between the wind kinetic power and the bolometric luminosity for both RQ and RL AGNs. We adopt a hierarchical Bayesian model for linear regression, linmix (Kelly 2007; Gaspari et al. 2019), to fit the data, taking into account the relative confidence range for each point. The form of the regression is log(y) = α + β log(x) + ε, where α and β are the intercept and slope coefficients and ε is the intrinsic random scatter of the regression. In this approach, the intrinsic scatter is treated as a free parameter and ε is assumed to be normally distributed with zero mean and dispersion σ. The regression model is then fit using Markov Chain Monte Carlo (MCMC) sampling, where the estimated parameters are obtained as mean values over the chain. The algorithm also yields an estimate of the intrinsic correlation between the variables, "x-y corr", which comes from the knowledge of the posterior distribution of each physical parameter. This output takes values in the interval [−1, 1], where positive values indicate a positive correlation and vice versa. The derived values are reported in Tab. 2.
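A minimal sketch of this regression step is given below. It assumes the publicly available Python port of the Kelly (2007) method (the linmix package) with the interface shown; the data arrays are hypothetical placeholders and the exact API should be checked against the package documentation.

```python
import numpy as np
# Assumed interface of the python port of Kelly (2007); verify against the
# linmix package documentation before use.
import linmix

# Hypothetical data: log(L_bol) vs log(L_out) with illustrative symmetric
# uncertainties (not the values tabulated in this work).
log_Lbol = np.array([44.1, 44.6, 45.0, 45.3, 45.8])
log_Lout = np.array([42.8, 43.5, 44.1, 44.0, 45.2])
xerr = np.full_like(log_Lbol, 0.3)
yerr = np.full_like(log_Lout, 0.5)

lm = linmix.LinMix(log_Lbol, log_Lout, xsig=xerr, ysig=yerr, K=2)
lm.run_mcmc(silent=True)

# Posterior means of intercept, slope, intrinsic scatter and correlation,
# i.e. the alpha, beta, sigma and "x-y corr" quantities quoted in the tables
# (field names assumed from the linmix chain structure).
alpha = lm.chain['alpha'].mean()
beta = lm.chain['beta'].mean()
sigma = np.sqrt(lm.chain['sigsqr']).mean()
corr = lm.chain['corr'].mean()
print(f"alpha={alpha:.2f}, beta={beta:.2f}, sigma={sigma:.2f}, x-y corr={corr:.2f}")
```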
We observe an equivalent linear correlation in log-log space between L_out and L_bol for both AGN types, which is to say that the kinetic power of the outflows is directly proportional to the luminosity for both classes. The two linear regression models are consistent; however, the constraints on the RL AGN sample are weaker than those on the RQ sample, due to the currently limited number of objects, so a significant relation cannot yet be established for the former.
Our analysis up to this point suggests that accretion disc winds are not only a common trait of both types of AGNs, but are also most likely produced by the same physical mechanisms and conditions. Consequently, the radio-loudness dichotomy seems not to be a good tracer for the presence or lack of outflows, nor does it contain information on the main parameters of the wind.
3.2 Disc wind characteristics in the entire AGN sample
In the previous section, we deduced that the overall physical parameters of the outflows are consistent between RQ and RL AGNs. Therefore, hereafter, we will consider them both as a single population. In Tab. 3 the mean parameters for the UFOs in the entire sample are shown.
The distributions of log N_H and log ξ span ranges of 21 < log(N_H) < 24 and 1 < log(ξ) < 6, respectively. Their mean values are reported in Tab. 3 and are consistent with the idea that disc winds are characterized by a substantial column density and that the gas is highly ionized. Moreover, it can also be observed that the average outflow velocity, v_out, is mildly relativistic, with an average value of ≈15% of the speed of light.
The Eddington ratio can be considered a useful proxy for the state of an AGN accretion disc. The bolometric luminosity in our sample is lower than the Eddington luminosity, with a mean value of log λ_Edd ≈ −1.0 ± 0.5, suggesting that the accretion and the radiation emission, the latter being a possible mechanism to accelerate winds, have found an equilibrium. While the possibility of a highly luminous and transient phase, which would eventually sweep away all the matter in the proximity of the BH and mark the end of the AGN phase, cannot be ruled out, it should also be considered that the Eddington limit is an approximation for spherical emission and, as such, could underestimate the critical value for a beamed emission.
The distribution of the radial distance from the BH expressed in physical units (cm) spans six orders of magnitude, due to the wide interval of observed M_BH and v_out. However, this distribution becomes narrower when normalized to the respective r_s, with a mean value of log(r_out/r_s) ≈ 3.0 ± 0.7, i.e. approximately 0.0003–0.03 pc. This suggests that the Fe K absorbers are swirling closer to the BH than the traditional soft X-ray warm absorbers, which are frequently observed at pc scales and beyond (e.g. Crenshaw et al. (2003), Blustin et al. (2005) and Kaastra et al. (2012)). Our results agree with those found in the literature, in which the general consensus is that the launch of highly ionized outflows is triggered in the inner regions of accretion discs (e.g., Proga & Kallman (2004), Schurch et al. (2009), Sim et al. (2008) and King (2010)).
The mean mass outflow rate ranges from 0.01 to 1 M_⊙/yr. However, when normalized to Ṁ_Edd, the interval is ≈ 0.1–1 Ṁ_Edd. Most of the sources in our sample present Ṁ_out < Ṁ_Edd, demonstrating the consistency of using the Eddington limit as an upper limit or, at the very least, as an extreme value with which to compare our results. The observed disc winds are able to sweep up and transport large amounts of gas, corresponding to an average mass outflow rate of ≈25% of the Eddington accretion rate. The mass outflow rate is also examined as a function of the mass accretion rate, Ṁ_acc: the average of the ratio Ṁ_out/Ṁ_acc is 0.4 ± 0.7, as shown in Tab. 3. The data are mostly consistent with unity, so that Ṁ_out and Ṁ_acc have the same order of magnitude. This means that what flows in at the micro scale is roughly comparable to what is re-ejected back in outflows. In other words, feeding and feedback are tightly and efficiently self-regulated. This accretion and ejection cycle is a fundamental element in AGN research, as also evidenced by the growing body of literature (Fiore et al. (2017); Gaspari et al. (2020) and references therein) describing how this process most likely controls the SMBH-host galaxy system. This interaction may also be one of the explanations for the relation between M_BH and the velocity dispersion of the galaxy's bulge (e.g. see Pounds (2014)). Therefore, our results are consistent with the present theory of a connection between the activity of the AGN and the host-galaxy medium. In particular, our results are well consistent with the predictions of a CCA self-regulated duty cycle, which has been shown to drive ultrafast outflows with velocities of the order of ≈0.1c and mass outflow rates comparable to the disc inflow rates (Gaspari & Sądowski 2017).
The kinetic power of the outflows is quite high, with values between ≈10^42 and 10^46 erg s⁻¹, and a normalized mean value of log(L_out/L_Edd) ≈ −1.7 ± 0.7. The latter corresponds to ≈3% of the Eddington luminosity. According to models of AGN feedback, a wind power of ≈0.1–1% L_Edd must be converted into mechanical power in order to drive significant effects on the co-evolution of the SMBH and the host galaxy (see Di Matteo et al. (2005); Hopkins & Elvis (2010)). Our results show that the most energetic disc winds, in both RQ and RL AGNs, may indeed have a high impact on AGN feedback.
The average value of the normalized momentum rate (or force) of the outflows, i.e. ṗ_out ≈ 3 ṗ_rad, is provided in Tab. 3. The two physical parameters have the same order of magnitude, suggesting that radiation pressure may be an essential component in the acceleration of these winds. One candidate for the acceleration of such highly ionized outflows is Compton scattering of the UV and X-ray continua. However, due to the relatively small cross-section of highly ionized gas, this process is not very efficient and requires both a high luminosity and a high column density. Moreover, the existence of outflows with greater ratios points to the possibility that another acceleration mechanism besides radiation pressure may be present, with MHD effects as a potential origin (Fukumura et al. (2010)).
In light of our results, given that ṗ_out = Ṁ_out v_out and ṗ_rad = L_bol/c = η c Ṁ_acc, and recalling that on average Ṁ_out ≈ Ṁ_acc and ṗ_out ≈ ṗ_rad, it follows that β = v_out/c ≈ η. Therefore, a positive correlation seems to hold between the velocity of the outflow and the mass-energy conversion efficiency, where the latter is a spin- and accretion-rate-dependent parameter (Thorne (1974)). This relation is apparently consistent with the observations, as the mean velocity observed for UFOs is v_out ≈ 0.1c (Tombesi et al. (2010)), while η is usually estimated to be ≈0.1. However, the magnetic driving mechanism, suggested to partially explain the acceleration of disc winds, is tied to the BH spin, while the latter was assumed to be zero in adopting η = 0.1, leading to an inconsistency. In Madau et al. (2004) the radiative efficiency as a function of the BH normalized accretion rate and spin is obtained from numerical integration of the relativistic slim disc equations, showing that high values of η are attained for rapidly spinning BHs with low Ṁ_acc. Still, to further constrain the correlation between v_out and η, high-quality data and progress in the models adopted to parametrize both UFOs and the conversion efficiency in AGNs are required.
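Using only the relations quoted above, the chain of approximations behind this argument can be written compactly as:

\[
\dot p_{\rm out} \approx \dot p_{\rm rad}
\;\Rightarrow\;
\dot M_{\rm out}\, v_{\rm out} \approx \frac{L_{\rm bol}}{c} = \eta\, c\, \dot M_{\rm acc}
\;\Rightarrow\;
\beta \equiv \frac{v_{\rm out}}{c} \approx \eta\, \frac{\dot M_{\rm acc}}{\dot M_{\rm out}} \approx \eta .
\]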
Correlations with bolometric luminosity
In order to further understand the physics behind our results, the correlation between the outflow physical and scale-invariant properties and the AGN bolometric luminosity was also analyzed. A similar attempt was previously performed by Gofford et al. (2015). We use the same linear regression model adopted in Sec. 3.1 to fit the data. Our results are gathered in Tab. 4.
We first point out that the regression slope is generally considered a good proxy of the correlation between the independent variable, in our study L_bol, and the dependent one. However, in our regression algorithm the linear correlation coefficient, x-y corr, is directly estimated through MCMC sampling. This parameter yields the true correlation between the variables and is more reliable than the regression slope. In fact, in complex distributions a weak linear fit, indicated by a positive β, may correspond to no correlation, indicated by an x-y corr consistent with 0. While we adopted a linear regression model to fit our data, some distributions may differ from a simple linear trend. This information is partially contained in the σ coefficient, which quantifies the intrinsic scatter of the data with respect to the linear regression model. As different σ values may affect the uncertainties of the best-fit parameters, we also investigated possible scaling relations between the two. We found no statistical evidence of a correlation between σ and the uncertainties, i.e. a higher σ does not lead to a higher uncertainty in the best-fit parameters.
Although the intrinsic scatter on M_BH is relatively high, i.e. 0.52 ± 0.08, a positive correlation with L_bol can be clearly observed, with x-y corr = 0.83 ± 0.08, highlighting the fact that more massive AGNs possess greater luminosities, hence stronger feeding/CCA rates. A strong positive correlation is found between L_bol and λ_Edd, with x-y corr = 0.92 ± 0.09. This trend suggests that more luminous AGNs are more efficient in accelerating UFOs. The correlation coefficients of N_H and ξ are consistent with zero, pointing to both parameters being independent of L_bol. The intrinsic scatter in both fits is ≥0.6, which implies that linear regression is not optimal to model these data distributions.
Of particular interest is that v_out and L_bol seem to be only weakly dependent, with x-y corr = 0.19 ± 0.22. As radiation pressure is considered to play a relevant role in the acceleration of the outflows, one may expect to observe a positive correlation, as in Gofford et al. (2015), where β = 0.4 (+0.3, −0.2). Nonetheless, it should be noted that the selection criteria used for the sample in this work are different from those used in the previously cited work. In particular, we are considering only outflows with velocities greater than 1% of the speed of light, to focus the analysis on the most powerful accretion disc components (i.e. UFOs). Moreover, a possible explanation for the lack of, or weakness of, such a relation is that the instantaneous outflow velocity is simply not a good proxy of its acceleration, as it does not account for the mass flux. Instead, parameters that better represent the wind acceleration are the mass outflow rate and the momentum rate, as explained below.
A statistically significant correlation with L_bol (see Fig. 5 and Tab. 4) is observed for the outflow radial distance (x-y corr = 0.84 ± 0.17), mass outflow rate (0.92 ± 0.10), momentum rate (0.92 ± 0.12), and instantaneous wind power (0.94 ± 0.08), respectively. These results are in line with the ones reported in Gofford et al. (2015), Tombesi et al. (2012), and Tombesi et al. (2013). The strong correlation between the parameters which take into account the energetics of the UFOs and the AGN L_bol once again underlines that radiation is a relevant element in the description of the acceleration and evolution of the outflows. In particular, ṗ_out represents the most relevant tracer of the correlation, with a low intrinsic scatter, σ = 0.28 ± 0.16. The most energetic outflows, and therefore the ones most likely to disrupt the SMBH accretion and interact with the host galaxies' bulges, are found in the most luminous AGNs. Our results are consistent with Gofford et al. (2015); however, the correlation between L_out and L_bol obtained by those authors is slightly steeper (β = 1.5 (+1.0, −0.8)). Once again, this difference may be explained by the different selection criteria used here, as the winds with the lowest power are not considered in our study.
An opposite behavior is observed for the normalized outflow parameters (see Fig. 6 and Tab. 4). These scale-invariant properties are either uncorrelated or weakly anti-correlated with L_bol. Specifically, the correlation coefficient of the normalized momentum rate ṗ_out/ṗ_rad is consistent with 0, while the normalized outflow distance r_out/r_s and mass outflow rate Ṁ_out/Ṁ_acc are anti-correlated. The intrinsic scatter for each quantity is higher than or similar to that obtained in the non-rescaled analysis. This evidence suggests that outflows are not related to the bolometric luminosity of the AGNs once the dependence on the scale of the system, i.e. the BH mass, is removed. These findings are still consistent with the analysis of Gofford et al. (2015); however, an anti-correlated trend is observed for most of our parameters. Our results support the general idea that UFOs may take on various scales and energetics, primarily determined by the intrinsic sizes of their AGN hosts, but are produced by the same underlying mechanism. This picture can be better understood in the CCA framework (Sec. 1), in which multiphase clouds condense out of the diffuse macro-scale halos of galaxies/groups of galaxies and trigger self-regulated AGN outflow feedback, as found here. Such raining clouds not only drive the observed intrinsic clumpiness of the meso-scale AGN (Sec. 2), but also allow for significant disc accretion rates comparable to the outflow rates (Sec. 3.2 and Table 3), which would be unattainable in a hot mode of accretion (e.g., Bondi/ADAF). CCA models predict ultrafast outflows triggered regardless of the radio loudness of the systems, with typical velocities of 0.1c and mechanical efficiencies of a few percent (Gaspari & Sądowski 2017), consistently with our findings (Sec. 3.1). Such combined evidence suggests that self-similar CCA is a key mechanism shaping the evolution of the observed systems.
Correlations with Eddington Ratio and Radio Loudness
In this subsection we investigate the correlation between the wind properties and the Eddington ratio and radio loudness, using once again a hierarchical Bayesian model for linear regression. We point out that the sources for which we computed R_x are only 17 out of the 27 in our sample, as a radio flux for the source is required. Our results are collected in Table 5. Considering the normalized distance from the SMBH, the mass outflow rate and the mechanical power normalized to the Eddington limit, we find a possible proportionality to λ_Edd, with correlation coefficients equal to 0.61 ± 0.31, 0.57 ± 0.40 and 0.55 ± 0.38, respectively. As λ_Edd is the ratio between L_bol and L_Edd, these trends show once again the relationship between the outflow parameters and the bolometric luminosity discussed in the previous section. Thus, positive correlations are expected, as previously discussed. Instead, the mass outflow rate and momentum rate normalized to the radiation pressure are anti-correlated with λ_Edd, with coefficients −0.64 ± 0.33 and −0.63 ± 0.37. These findings suggest that the mass driven outwards by the outflows decreases as the accretion rate of the SMBH increases. We point out that the values obtained by this analysis may be just lower limits, as this anti-correlation is surely affected by our assumption of a constant accretion efficiency η = 0.1. Although an accurate examination of these processes is beyond the scope of this paper, it may be interesting in the future to compare values of the accretion rate obtained directly through accretion disc fitting models. We now briefly discuss the correlations found between the outflow parameters and the X-ray radio loudness. We observe a weakly positive correlation between the outflow parameters and R_x, although characterized by a large intrinsic scatter. Nevertheless, these results may suggest that the radio luminosity, normally attributed to the relativistic jet emission, and the outflow power are connected. The normalized outflow parameters show correlation coefficients that are largely consistent with zero. An exception is the normalized distance r_out/r_s, which seems to be anti-correlated with R_x, with a correlation coefficient of −0.55 ± 0.33. Understanding the nature of these relationships may be difficult, as our AGN sample spans too limited a range in both λ_Edd and R_x to effectively capture meaningful correlations. More studies with larger samples are required to understand how the accretion efficiency of the SMBH and the relativistic jet influence the disc and its dynamics.
CONCLUSIONS
In this work, we explored the physical parameters of UFOs through a uniform analysis of a sample of local X-ray bright RQ and RL AGNs. In our statistical analysis we investigated several correlations between different outflow parameters, and with respect to the AGN bolometric and Eddington luminosities. Our results indicate that accretion disc winds are not only a common trait of both classes of AGNs, but that they are also most likely produced by the same physical mechanisms and conditions. Consequently, the radio-loudness dichotomy seems not to be a good tracer for the presence or lack of outflows, nor to be informative of the main parameters of the winds.
On average, approximately the same amount of material accreted by the SMBH is ejected through disc winds. This evidence is in agreement with the self-regulation of accretion and ejection in AGN, in particular as related to a CCA scenario. The average wind power corresponds to ≈3% of the Eddington luminosity, indicating that disc winds can indeed provide a significant feedback effect in both RQ and RL AGNs.
The outflow parameters related to the energetics are strongly correlated with the bolometric luminosity, highlighting that the most powerful winds are found in the most luminous AGNs, which, most likely, are also highly accreting. Surprisingly, a statistically significant correlation is not found between the outflow velocity and the AGN luminosity, suggesting that velocity alone is not the most relevant wind parameter and that the total wind power needs to be considered instead. Moreover, the lack of statistically significant correlations between the normalized outflow parameters and the bolometric luminosity may imply that the underlying UFO acceleration mechanism(s) are the same across a variety of systems.
In the future, it will be interesting to extend a similar study to disc winds detected in both local and high-redshift quasars, to extend the exploration to winds driven by stellar-mass black holes, and to compare the results with detailed numerical simulations. Moreover, future missions focused on high-resolution X-ray spectroscopy, like XRISM, will greatly improve the current statistics of UFO observations in both RL and RQ AGNs, allowing more stringent constraints on the wind parameters.
Table A3. Estimated disc wind parameters for the sample of Tombesi et al. (2014). Notes: (1) Observation number.
(4)-(5) Logarithm of the minimum (maximum) distance of the wind normalized to the Schwarzschild radius. (6)-(7) Logarithm of the minimum (maximum) wind kinetic power normalized to the Eddington luminosity.
Table A15. Normalized disc wind mass and momentum rates for the sample of Tombesi et al. (2012, 2013). Notes: (1) Observation number. (2)-(3) Logarithm of the ratio between the minimum (maximum) mass outflow rate and the accretion rate. (4)-(5) Logarithm of the ratio between the minimum (maximum) momentum rate of the outflow and the momentum rate of the radiation.
Figure 1. Distributions of the logarithm of the SMBH mass in solar mass units (upper panel), bolometric luminosity (central panel), and Eddington ratios (lower panel) for the combined sample of RL (blue) and RQ (red) AGNs with detected outflows.
Figure 2. Distributions of the logarithm of the velocity in units of the speed of light (upper panel), ionization parameter (central panel), and column density (lower panel) for the outflows detected in the combined sample of RL (blue) and RQ (red) sources.
Figure 3. Distributions of the logarithm of the wind position in units of the Schwarzschild radius (upper panel), of the mass outflow rate normalized to the Eddington accretion rate (central panel), and of the wind momentum rate normalized to the momentum rate of the radiation (lower panel) for the combined sample of RL (blue) and RQ (red) sources with detected outflows.
Figure 4. Scatter plot showing the disc wind kinetic power versus the bolometric luminosity for the samples of RQ (red crosses) and RL (blue circles) AGNs. Using the same color-coding, the best fits obtained with the linear regression algorithm are also shown, as a dashed blue line for RL and a dotted red line for RQ AGNs.
Figure 5. Linear regression fits for the outflow physical parameters as a function of L_bol. The outflow radial distance (upper), mass outflow rate (central), and instantaneous wind power (lower) are shown in physical units on a logarithmic plane. The best-fit linear regression is also highlighted in each panel as a dash-dotted red line.
Figure 6. Linear regression fits for the normalized outflow parameters as a function of L_bol. The outflow radial distance (upper), mass outflow rate (central), and instantaneous wind power (lower) are shown on a logarithmic plane. The best-fit linear regression is also highlighted in each panel as a dash-dotted red line.
Table 1. Mean values and error on the mean of the AGN intrinsic properties and UFO parameters for RL and RQ AGN, and p-value of the KS-test between the two distributions.
Table 2. Correlation analysis of the outflow luminosity with respect to the AGN bolometric luminosity for the RL, RQ and entire samples. The first two columns report the regression coefficients, whereas the third reports the regression intrinsic scatter for the two AGN populations.
Table 3. Mean values of the AGN intrinsic properties and disc wind parameters for the entire sample.
Table 4. Summary of the correlation analysis of the outflow parameters with respect to the AGN bolometric luminosity. The outputs in the table are computed using a hierarchical Bayesian model for linear regression, as explained in Sec. 3.1.
Table 5. Summary of the correlation analysis of the outflow parameters with respect to the Eddington ratio and the radio loudness parameter. The outputs in the table are computed using a hierarchical Bayesian model for linear regression, as explained in Sec. 3.1.
Table A5. Normalized disc wind mass and momentum rates for the sample of Tombesi et al. (2014). Notes: (1) Observation number. (2)-(3) Logarithm of the ratio between the minimum (maximum) mass outflow rate and the accretion rate. (4)-(5) Logarithm of the ratio between the minimum (maximum) momentum rate of the outflow and the momentum rate of the radiation.
Table A10. Normalized disc wind mass and momentum rates for the sample of Gofford et al. (2015). Notes: (1) Observation number. (2)-(3) Logarithm of the ratio between the minimum (maximum) mass outflow rate and the accretion rate. (4)-(5) Logarithm of the ratio between the minimum (maximum) momentum rate of the outflow and the momentum rate of the radiation.
"Physics"
] |
Transitivity Analysis of “ Heroic Mother ” by Hoa Pham
The paper investigates the application of Halliday's theory of transitivity in the construction of personality. The essay aims to identify and explain how the main character's personality is portrayed and represented through the language used in Hoa Pham's "Heroic Mother". The findings aim to show that linguistic choices in transitivity play an important role in building up the main character of the story. The essay is divided into six parts. The first part explains the roles of language and language studies in social life. The second part notes the functions of Halliday's transitivity system in literary studies by reviewing previous studies on transitivity. The next part deals with Halliday's theoretical framework of transitivity as a guideline for this analysis. The fourth part introduces Hoa Pham – an Australian Vietnamese writer and playwright, the author of "Heroic Mother". The analysis of transitivity in "Heroic Mother" is provided in the fifth part. The last section of the essay offers concluding remarks about the interpretation of "Heroic Mother". The discussion of results will show how linguistic analysis, together with observations about the text, enables a better understanding of the main character, known as a "heroic mother".
Introduction
It is widely believed that people who study and use a language are interested in how they can do things with language, how they can make meanings build up and be understood through choices of words and grammatical resources. Bloor and Bloor claim that "when people use language, their language acts produce -construct meaning" (2004, p. 2). Kroger and Wood (2000, p. 4) believe that language is taken to be not simply a tool for description and a medium of communication but as a social practice, a way of doing things. Gee (2005, p. 10) even claims that "language has a magical property: when we speak or write, we design what we have to say to fit the situation in which we are communicating. But at the same time, how we speak or write creates that very situation." In other words, language shapes and reinforces attitudes and beliefs, then, is a medium for cuing identities, activities, values, and ideologies.
The study of language is so important that, as Fairclough (1989, p. 2) states, "using language is the most common form of social behaviour" and we depend on language in our public and private interaction, determining our relationships with other individuals and the social institutions we inhabit. For Halliday (1985, xiv), "a language is interpreted as a system of meanings, accompanied by forms through which the meanings can be realized and answer the question, "how are these meanings expressed?" This puts the forms of a language in a different perspective: as means to an end, rather than as an end in themselves." It is from this point of view of language that systemic functional linguistics was developed by Halliday and his associates during the 1960s.
Fairclough claims that language "is a material form of ideology, and language is invested by ideology" (2001, p. 73). Social language or discourse is not only representational but intervenes in social change because "discourse contributes to the creation and recreation of the relations, subjects…and objects which populate the social world" (p. 73). That is to say, discourses are material effects of ideology which also have a strong impact on shaping our sense of reality. Making the same point, Fowler makes the link between discourse and ideology even clearer when he defines discourse as "socially and institutionally originating ideology, encoded in language" (1986, p. 42). Discourse is a way to mould and manifest ideologies, where "ideology" can be defined as the everyday taken for granted collective set of assumptions and value systems that social groups share (Simpson, 1993).
Moreover, ideologies are the essential and basic social concepts that reflect the aims, significances and values of the social group (Wodak, 2001). Fairclough (2003) also stresses that discourse is a powerful vehicle in the construction of social reality, a vehicle that shapes points of views through dominant ideologies and constructs the realities of living and being. In this sense, discourse is dialectically related to the socio-cultural and institutional contexts. In the words of Fowler, "language provides names of categories, and so helps to set boundaries and relationships and discourse allows these names to be spoken and written frequently, so contributing to the apparent reality and currency of categories" (1986, p. 94). Therefore, language and language study attract a lot of academic researchers from different disciplines to better understand contemporary society.
With this idea in mind, in this paper, I will examine the function of language as powerful social practice in the short story "Heroic Mother" published in 2008 by the Australian Vietnamese writer Hoa Pham in the light of Halliday's theoretical framework on transitivity. The aim is to clarify the main character's personality.
Previous Analyses of Transitivity in Literary Texts
Transitivity analysis has been widely used to understand the language of speakers and writers. It examines the structure of sentences which are represented by processes, the participants involved in these processes, and the circumstances in which processes and participants are involved. Using transitivity analysis, researchers have tried to reveal that language structures can produce certain meanings and ideology which are not always explicit for readers. In other words, the task of functional analysis, particularly transitivity analysis, is to discover the relation between meanings and wordings that accounts for the organization of linguistic features in a text. Therefore, the concept of transitivity has been used by a number of linguists to shed more light on the use of language in a literary text.
As a pioneer and scholar in transitivity analysis, Halliday's study of William Golding's The Inheritors is an influential example. Carter and Stockwell describe it as "one of the groundbreaking analysis in stylistics" (1971, p. 19). In this analysis, Halliday points out how understanding grammar, especially transitivity, can help interpret the meaning in a literary text. According to Halliday's theory, patterns of transitivity, including processes, participants, and the circumstances, occur in the clauses and sentences of a text. He claims that "transitivity is the set of options whereby the speaker encodes his experience and transitivity is really the cornerstone of the semantic organization of experience" (p. 81).
Following the method of transitivity analysis developed by Halliday, Yaghoobi (2009) makes a systemic analysis of news structures in two selected printed media, namely Newsweek and the Kayhan International. By identifying processes and the role of participants involved in those processes, Yaghoobi's study proves that the representation of the same news actors, Hizbullah and Israeli forces, by two different and ideologically opposed printed media was opposite in each case.
These transitivity analyses are just a few among many, but they are fundamental examples of how language patterns, particularly transitivity, can convey the meaning and ideology of a literary text. They also add further dimensions that have proved useful in stylistic analysis. The functional grammar analysis of English helps readers understand human interactions in social contexts and can be used to uncover ideological meanings within them. In the next part, the focus will be on explaining the theory of transitivity.
Theory on Transitivity
The systemic functional linguistics approach to discourse analysis is based on the model of "language as a social semiotic" outlined in the works of Halliday. Language is used functionally, what is said depends on what one needs to accomplish. In Halliday's theory, language expresses three main kinds of meanings simultaneously: ideational, interpersonal, and textual meanings (1985). Among them, the ideational meaning (the clause as representation) serves for the expression of "content" in language, that is, our experience of the real world, including the experience of our inner world. When we use language we often use it to speak of something or someone doing something. That is why the ideational meaning can be referred to as experiential meaning coming from the clause as representation.
The interpersonal meaning helps to establish and maintain social relations; the individual is identified and reinforced in this aspect by enabling him/her to interact with others by expression of their own individuality. Our role relationships with other people and our attitudes towards others are often expressed by interpersonal meaning. This line of meaning in a clause comes from the clause serving as an exchange. We usually use language to facilitate an action or to demand an object and the expectant result is most generally gained verbally or in writing.
The textual meaning creates links between features of the text with elements in the context of situation; it refers to the manner in which a text is organized. In other words, the textual meaning comes from the clause as message. The clause gets its meaning/message from its thematic structure. Halliday and Matthiessen define the theme of a clause as a "starting point of the message: it is what the clause is going to be about" (1976, p. 64). With that, the theme serves to locate and orientate the clause within the context. The other part of the message that extends and elaborates the theme is the rheme. Therefore, a clause consists of both a theme and a rheme, and a theme + rheme combination gives a precise illustration of the text's orientation, its ideas and subject matters.
Halliday also claims that the three types of meanings presented in language are not accidental but are necessarily in place because we need them to perform functions in social life.
In constructing experiential meaning, there is one major system of grammatical choice involved: the system of transitivity or process type. I have chosen transitivity because, of all the grammatical aspects analysed, it produces the most fruitful data on the text. In his An Introduction to Functional Grammar, Halliday identifies transitivity as follows: A fundamental property of language is that it enables human beings to build a mental picture of reality, to make sense of their experience of what goes on around them and inside them. …Our most powerful conception of reality is that it consists of "goings-on": of doing, happening, feeling, being. These goings-on are sorted out in the semantic system of language, and expressed through the grammar of the clause… This… is the system of TRANSITIVITY. Transitivity specifies the different types of processes that are recognised in the language and the structures by which they are expressed (1985, p. 101). The theoretical framework of transitivity was established and developed by Halliday. Transitivity generally refers to how meaning is represented in clauses; transitivity patterns can reveal the certain worldview "framed by the authorial ideology" in a literary text (Fowler, 1986, p. 138). Clauses represent events and processes of various kinds, and transitivity aims to make clear how the action is performed, by whom and on what. Transitivity is an important and powerful semantic concept in Halliday's framework. It is part of the ideational function of language and, therefore, an essential tool in the analysis of representation. Implicitly and crucially, different social structures and values require different patterns of transitivity.
While Kress (1976, p. 169) states that transitivity is representation in language processes, Simpson asserts that transitivity refers generally to how meaning is represented in the clause (1993, p. 88). Hasan claims that transitivity: … is concerned with a coding of the goings on: who does what in relation to whom/what, where, when, how, and why. Thus the analysis is in terms of some process, its participants, and the circumstances pertinent to the process -participant configuration. (1988, p. 63) In other words, transitivity can show how speakers/writers encode in language their mental reflection of the world and how they account for their experience of the world around them.
Halliday's theory that transitivity is measurable will be used to study the clausal structure which is based on the main verb of the sentence. According to this theory, in transitivity different processes are distinguished according to whether they represent actions, speech, states of mind or states of being. Those are identified, classified and known as Material processes, Relational processes, and Mental processes.
Material processes of transitivity are processes of doing, usually physical and tangible actions. Halliday calls them action clauses, expressing the fact that something or someone undertakes some action, or some entity "does" something -which may be done to some other entity. These processes can be probed by asking what did x do? Two essential participants that usually appear in a Material process are the Actor -the doer of the process -and the Goal -the person or entity affected by the process.
Mental processes usually encode mental reactions such as perception, thoughts and feelings. Mental processes give an insight into people's consciousness and how they sense the experience of the reality. These can be probed by asking what do you think/ feel/know about x? Mental processes have two participants: the Senser -the conscious being who is involved in a Mental process -and the Phenomenon -which is felt, thought, or seen by the conscious Senser.
Relational processes construe the relationships of being and having between two participants. There are two different types of Relational processes; one is called Identifying Relational which serves the purpose of defining and the participants involved are Token and Value. Thus the Value serves to define the identity of the Token. The other type of Relational process is the attributive Relational which serves to describe. The participants associated with it are the Carrier and the Attribute and we can say that "the x (realized by Carrier) is a member of the class y (realized by Attribute)".
There are also three subsidiary process types that share the characteristic features of each of the three main processes. Between Material and Mental processes lie Behavioural processes that characterize the outer expression of inner workings and reflect physiological and psychological behaviours such as breathing, laughing, sneezing… Behavioural processes usually have one participant who is typically a conscious one, called the Behaver. Between Mental and Relational processes are Verbal processes, which represent the act of saying and its synonyms. Usually three participants are involved in Verbal processes: the Sayer is responsible for the verbal process; the Receiver is the person at whom the verbal process is directed; and the Verbiage is the nominalised statement of the verbal process. And between Relational and Material processes are Existential processes, which represent states of being, existing, and happening. Existential processes typically employ the verb be or its synonyms such as exist, arise, occur. The only participant in this process is the Existent, which follows the there is/are sequences. There is no priority of one process type over another, so Halliday and Matthiessen portray the interrelationship between transitivity processes as a sphere which enables us to construe and portray our experiential meanings of the world, how we perceive what is going on (1976, p. 172). Transitivity processes are also useful in uncovering the participants involved, how the speaker/writer locates himself in relation to the others, and whether they take an active or passive role in the communication.
After examining transitivity and its processes from Halliday's systemic functional grammar, I will proceed to analyse the data. My focus is not on linguistic or stylistic patterns in themselves, but on how these patterns are used to define characters and meanings.
Introduction to the Author (Note 2)
Hoa Pham's "Heroic Mother" is a short story which was published in 2008. Hoa Pham is an Australian Vietnamese author and playwright. I am taking this biographical information from her website, so I hope it is correct. She was awarded the 2001 Sydney Morning Herald's Young Writer of the Year for her novel Vixen. Currently, she is the editor of Peril, an online journal of arts and culture for Asian Australians. She has already published two novels, namely Quicksilver and Vixen, several children's books including No-one Like Me and 49 Ghosts, short stories "Reality", "Yolk", and "Heroic Mother", and more recently, two plays Silence and I could be you.
Though Hoa Pham has received recognition in the field, there is little academic criticism of her work, partly because she is an up-and-coming writer. I have nevertheless chosen her in order to move away from those Asian Australian writers whose works have been over-studied, and because I wanted to look at a text that had not yet been studied by critics.
Adaptation of Transitivity Analysis in "Heroic Mother"
Hoa Pham's "Heroic Mother" describes the locked-in solitude of the elderly who often talk about their past memories and victories as a way to educate younger generations. This short story also depicts the spiritual and emotional gaps between the elderly and their children who often care more about the outside world or their personal interests than family relations. "Heroic Mother" is a first-person narrative in which the main character acts as the narrator and speaks in such a way as to refer to herself, providing an account of events from "inner" points of view. This gives the advantage of sympathy to the character and helps readers get the character's thoughts and feelings in an intimate way.
The first sentence of the story gives a comment on the main character as "a little crazy" person. She is living with her family in Hanoi, the capital city of Vietnam. She is considered an absent-minded elderly lady with conservative feelings and behaviours. What is more, she seems to be neglected and ignored by her relatives. As one part of her story-telling, the main character retells her past to prove that she is not crazy; she just "acted crazy" to disguise herself in resistance and fighting the war. Several details about the neighbourhood and the companions of the main character are also provided.
Before analysing the text, I should note that transitivity analysis requires that complex sentences in the text be cut into simple clauses involving a process and the participants in the process. The clauses are numbered in order of the story. Though the analysis covers the whole story, to make it easier for analysis I have divided the story into three parts to represent the changes and developments in the main character's thoughts and feelings (see Appendix A). The following is the examination of language which leads to the above interpretations of "Heroic Mother".
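To make the counting procedure used in the following parts concrete, a small sketch of how such clause annotations can be tallied is given below. The clause labels and process assignments are hypothetical stand-ins, not the actual annotation of the story in Appendix A; Python is used only for illustration.

```python
from collections import Counter

# Hypothetical clause-level annotation: each simple clause is numbered and
# labeled with its process type and its main participant.
clauses = [
    {"id": "1a", "process": "Material",    "participant": "We"},
    {"id": "1b", "process": "Relational",  "participant": "I"},
    {"id": "2a", "process": "Verbal",      "participant": "family"},
    {"id": "3a", "process": "Behavioural", "participant": "I"},
    {"id": "3c", "process": "Mental",      "participant": "I"},
    {"id": "5a", "process": "Material",    "participant": "I"},
]

# Tally the process types and the main character's share of participation.
process_counts = Counter(c["process"] for c in clauses)
main_character = [c for c in clauses if c["participant"] == "I"]

print("Process types:", dict(process_counts))
share = 100 * len(main_character) / len(clauses)
print(f"Clauses with the main character as participant: {share:.0f}%")
```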
Part 1
To begin with, we should look into the frequency and the role of the main character as a participant in the processes assigned to her. Of 25 participants in the first part, only 6 refer to the main character herself (clause 1b, 3a, 3b, 3c, 3e, 5a) while 5 refer to the co-operation of the main character with the other family members or friends (clause 1a, 5b, 6b, 7a, 7b). The other 14 participants are the city or the environment such as "the locals", "the kids", "the green of the lake", and "the traffic" (Note 3)... This suggests that the main character does not take a central role as participant even though she is introduced right from the first sentence of the story. It seems that, by focusing on the city and the environment, the writer invites readers to join the main character's society where she is living so that they can have a better understanding of her.
In terms of process type, among the 11 processes with the main character's involvement, 6 are carried out by her alone while 5 are taken by her together with other people. Of those 6 processes, 5 are either Relational processes (2: clauses 1b, 3e), Behavioural processes (2: clauses 3a, 3b), or Mental processes (1: clause 3c), whereas only one is a Material process (clause 5a). The much greater proportion of Relational, Behavioural and Mental processes to Material processes illustrates that the narrator, also the main character, tries to sketch her relationship, behaviour and inner thoughts towards other members of her family or towards life and society, as in the examples of "smile" or "do not understand". With only one Material process assigned, as in "…I do exercise in Hoan Kiem lake…", she becomes an Actor but her action does not affect anyone else, not even her relatives, because the Goal in this process is "exercise". Therefore, it could be argued that, as an elderly lady, she is not supposed to, or perhaps is not strong enough to, get involved in any activities except taking care of herself by "doing [her] exercise" every morning.
Among 25 processes in this part, there is only one Verbal process (clause 2a) in which the family members take the role of Sayer. However, it is not the conversation between the family members and their grandma because the Verbiage is the statement that "[she] is a little crazy". Sadly enough, though she lives with her family, there is no interaction between them. The children usually avoid having talks with her by "zooming off on their nice new mopeds" or they "turn on the TV and watch their cartoon American movies". The children, including her granddaughter, tend to ignore her.
Though thought of as a "little crazy" grandma by the family, the main character still shows great concern for the other members, especially her granddaughter. The next section reveals how she cares for her relatives.
Part 2
The prevalence of Mental and Verbal processes in this part proves that the main character does not just keep her concerns inside her heart and do nothing. Instead, she and other "older generation" members, including parents and friends, try to give advice or guidance to her granddaughter, as seen in clauses 10a, 11a, 15c. She usually "tells about the good and the future" or "keeps telling [her granddaughter] the good schools and university is in Hanoi". The one Material process where she takes the role of Actor involves the Goal of what she and others "have struggled for". This representation of a Material process suggests that the main character also wants to raise the granddaughter's awareness of what is good and what is bad by mentioning the times she and the girl's parents experienced or how they "struggled" for a better future. It is the same with Relational processes, which show that the main character even tries to share several hobbies with her granddaughter by "watching American movies", but it does not help much to make their relationship closer or friendlier.
Of 32 participants in the whole text, 13 refer to the main character's relatives; of these, 11 refer to her granddaughter. This suggests that as a narrator, the main character is more interested in talking about her granddaughter than herself or other things such as "Hanoi", "school", "university" or "the persons". She "hopes that her granddaughter does not see what [she] and her parents have seen". As the memories flow, she continues to flash back to her unforgettable past, which may not be easily felt and sympathized with by the younger generations.
The next section examines the concluding part of the story, which reveals more about the main character's past and present days as a grandmother.
Part 3
It is interesting that in this part the main character is the dominant participant in 38 processes out of 62, or 61%. In the two previous parts, she performs sometimes as a sole participant ("I"), sometimes as a co-participant with her relatives or friends ("We") whereas in this final section, she almost always appears as a sole participant "I" (only in clause 36 as "We") (Note 5). Obviously, she is not only the main character but the narrator of the story. Talking about herself, she tries to explain why she "was not as mad as people said [she] was".
The main character is the participant in 7 Mental processes (clauses 24a, 31a, 31c, 34b, 35b, 37b, 43a), 10 Relational processes (clauses 24e, 24g, 25a, 27b, 32a, 33a, 33b, 46b, 47b, 50), one Verbal process (clause 51b), and up to 15 Material processes. The much greater use of Material processes in this part shows the main character's active involvement in the past, ostensibly a contrast to her position on the sidelines in the present activities. In the past, she "[enacted] out the great epics of the Trung sisters and King Le Loi", or "would go round the American solders", or "[stood] in the rain", or "[carried] documents". But even in the present, she is surely not a weak elderly grandma, for she can "chop meat with a cleaver". She is even smart enough to "choose what [she] can hear and say", which proves that she is "a kindly grandmother" and that she can be herself.
In the Relational processes, the main character is the Carrier of the Attribute "mad", "crazy", or "scared" as seen in clauses 24g, 27b, 32a. By using these she wants to stress the stereotype through which she is seen by her relatives. However, in the other example of relational process (clause 33a) the main character portrays herself as being "a kindly grandmother". That is the way she defends herself against her relatives' misjudgement and their inappropriate attitudes towards her. What is more, by stating her quality of being "kindly" she may believe that it is more important to be herself than to mind the words of others.
Out of the 4 Verbal processes in part 3, the main character's family and daughter take the role of Sayer in 3 of them, as shown in clauses 24f, 42a, 46a. Once more, this is not verbal interaction between the family and the main character but the family's comments on her as being crazy or as having "acted crazy all [her] life". As they cannot understand what goes on in her mind, they either "tell her off impatiently" or "tell her to shut up".
Conclusion
Transitivity analysis gives more detailed and more nuanced support to the reader's responses to "Heroic Mother". It provides linguistic evidence to support the interpretation of the story, so readers, having been shown what/who does what to whom/what in the main character's world, are better equipped to decide on the story's meaning.
The study of transitivity through the analysis of processes and the participants involved in these processes shows that the main character, known as a heroic mother, suffers from loneliness, boredom, and inadequate consideration from her family. The main character in Hoa Pham's "Heroic Mother" is just one example of what is happening to many so-called heroic mothers, who usually live with their sorrows and their victories, which are sometimes ignored by younger generations. Though the concept of the "heroic mother" is a myth, in Barthes' sense of the word, one that supports the old women who lost their children during wartime, the main character in this story herself experienced the difficult days of her youth. She used to be a soldier or fighter. She accepted being stereotyped as crazy to achieve her cause of a better future.
As with many other heroic mothers during the war, she devoted her energy, her youth, and even her life to the country's independence and freedom. Her silent contribution towards national liberation is an eternal sacrifice. However, as time goes by, many heroic mothers have passed away while others who survive are facing their old age. These white-haired survivors usually tell the stories of their lives and of the contribution they made for the country and its people. They consider it a good way to remind younger generations of their past victory as well as to educate them about the sacrifice and patriotism demanded by the national liberation cause. Unfortunately, the younger generation does not always appreciate those educational stories. Some of them even take the present beautiful life for granted and ignore all the sacrifice and devotion of their elders. Therefore, in "Heroic Mother", the eponymous character is usually seen as having been crazy or as having "acted crazy all [her] life", though she herself believes that she "has islands of sanity amongst [her] craziness". Implicitly, as the narrator of the story, the main character wants to tell the readers: "Hey, people don't understand me. I behaved like this just to play my role and to fight the war. I am not crazy at all." In conclusion, linguistically, I hope this study will contribute towards an understanding of how linguistic analysis of a text can be used to interpret meanings in a literary text. On the social level, this study aims to raise people's awareness of the contemporary situation of "heroic mothers". Hopefully, in the future, the concept of the "heroic mother" will be not just the creation of a title but a much more practical system. | 6,685 | 2012-07-25T00:00:00.000 | [
"Linguistics"
] |
Liquid-based materials
Journal: National Science Open Manuscript ID NSO20220045.R1 Manuscript Type: Perspective Date Submitted by the Author: 24-Aug-2022 Complete List of Authors: Zhang, Yunmao; Xiamen University College of Chemistry and Chemical Engineering Hou, Xu; Xiamen University College of Chemistry and Chemical Engineering
Advanced materials are the material basis for social development. Solid materials have the characteristics of stability, durability, and processability, but it is often difficult for them to have a large-scale and rapid dynamic response [1]. Liquid materials are usually smooth, defect-free, and self-healing, with dynamic response and high mass transfer efficiency, but they cannot be self-supporting and are unlikely to be fabricated into fixed shapes themselves [2,3]. Liquid-based materials are emerging to break the limitations of conventional materials. They are composed of solids and liquids, which endows them with the characteristics of both solid and liquid materials and unique advantages in fast dynamic response, soft interfaces, structural plasticity, etc. [2][3][4][5][6]. The solid materials offer frameworks and confinements for stabilizing liquid materials. Based on the structure types, the solid framework in liquid-based materials can be roughly divided into the non-supporting structure, soft supporting structure, and hard supporting structure, although in some conditions coupled structure types exist (Figure 1). Various liquids with different properties, including water-based liquids, organic liquids, ionic liquids, liquid metals, and other responsive liquids, have been widely used [6][7][8][9][10].
In fact, liquid-based materials are ubiquitous in nature. For example, the surface of the peristome of some carnivorous plants is completely covered with a liquid layer, forming a super-smooth surface; the liquid film on our eyes provides a very smooth refractive surface, allows the refractive index of the film to be adjusted, and also isolates dust and bacteria to protect the eyes; and the synovial fluid in the knee gap plays a central role in reducing joint wear during the hundreds of millions of friction cycles in its lifetime [5]. In essence, liquid-based materials can bring new interfacial physical and chemical properties to traditional solid materials [1,5]. Simultaneously, the dynamic interaction between the liquid and the solid is utilized to carry out functional interfacial physicochemical design between the liquid and the solid, providing a broader space and unlimited design possibilities for breaking through problems that traditional materials cannot solve [2].
In recent years, various kinds of liquid-based materials have been developed, such as hydrogels [7], ionic liquid-based materials [6], liquid metal-based materials [9], liquid-infused surfaces [8], and liquid-based membranes [3]. These liquid-based materials have attracted more and more attention in many fields because of their significant advantages in adaptability, anti-fouling, anti-ice, anti-fog, self-healing, defect-free surfaces, and high interface transport efficiency [2][3][4][5]. For the non-supporting structure, the solid part usually consists of particle materials, such as nano/micro particles and porous particles, and this kind of liquid-based material generally still retains fluidity. For example, Dunne et al. [10] realized liquid-in-liquid fluidic channels using an immiscible magnetic liquid-based material containing magnetic particles, which shows near-frictionless, self-healing, anti-fouling, and non-clogging properties. The solid materials used as soft supporting structures generally include polymer chains, nanofibers, nanotubes, graphene, and gelators, which provide soft and deformable skeletons and expand the scope of application. For example, Markvicka et al. [9] developed a soft and highly deformable circuit interconnect material architecture composed of liquid metal droplets suspended in a soft elastomer, which exhibited unprecedented electronic robustness in a self-healing soft robot and a self-repairing digital counter after significant damage. Zhang et al. [11] hybridized a polyelectrolyte hydrogel and an aramid nanofiber membrane to build a three-dimensional gel interface that achieves high-performance osmotic energy conversion. Choi et al. [12] fabricated a hydrogel and liquid metal composite using three-dimensionally printed molds and demonstrated the feasibility of reliably using a hydrogel and liquid metal in self-healing electronics. The hard supporting structure built by the solid part usually provides a three-dimensional structure to hold the liquid, such as porous surfaces, metal-organic frameworks, and organic/inorganic membranes, while multiple structure combinations may also exist in one liquid-based material. Li et al. [13] enhanced the separation efficiency of carbon dioxide and water vapor using a highly water-permeable membrane prepared by treating an alumina hollow fiber-supported metal-organic framework membrane with a hydrophilic ionic liquid. The slippery liquid-infused porous surface constructed by Villegas et al.
[4] and Aizenberg and collaborators [5], inspired by pitcher plants, shows excellent anti-fouling, anti-ice, anti-bacteria, anti-thrombosis, and anti-fog properties. Hou et al., inspired by the alveoli, established a liquid gating system based on membrane science and technology [14] and proposed "liquid gating technology" [2]. This technology expands the fundamental scientific issues of traditional membranes from the solid-liquid and solid-gas interfaces to the solid-liquid-liquid and solid-liquid-gas interfaces, and can be applied with a dynamic physicochemical interface design in multiphase separation, electricity-free visual substance detection, biomedical catheters, responsive switchable gas valves, and other fields [2]. Most recently, a continuous air purification system was developed based on liquid-based materials. In this system, the liquid-gated solid matrix is used to filter the particles and to control the gas-liquid-solid multiphase interaction by adjusting the redox state, while the gating liquid, as a functional material, is also used to absorb the particles in the air. Through the coordination of the two parts of the system, good anti-fouling performance and long-term purification can be achieved [15]. In 2020, liquid gating technology was selected by the International Union of Pure and Applied Chemistry (IUPAC) as one of the top ten emerging technologies in chemistry of the year. IUPAC points out that "Liquid gates can selectively process mixtures of fluids without clogging… they could become extremely useful for large-scale filtration and separation processes… liquid gates could accelerate the progress towards SDG 6, which looks to ensure access to clean water and sanitation for all… since liquid gates require no electricity at all, they ensure huge energy savings… liquid gates will soon be scaled-up and adopted by key players in the chemical enterprise" [16]. Although liquid-based materials have shown great advantages in many fields, how to design and prepare more controllable, stable, and responsive liquid-based materials, and how to break through the preparation theory and technology of liquid-based material systems around the key scientific issue of controlling two-phase or multiphase interfaces and interactions among solid, liquid, and gas, remain great opportunities and challenges for liquid-based materials in the future.
Although the solid and liquid materials in common use have met basic social needs, many new materials are still urgently needed in key fields to meet social development goals in the near future, e.g., the efficient adsorption and catalytic materials needed to achieve carbon neutrality. New concepts and methods are needed to break through the preparation theory and technology and to understand the relationship between the macroscopic properties and microscopic mechanisms of liquid-based materials. This is due to the complexity of liquids and liquid-based materials, and further studies on this aspect are required [17]. The advantages of the liquid in terms of molecular-scale dynamic response, functional structure in a microscopic or confined space, the designability of the structure of the solid matrix, and the mass, momentum, and energy transport and reactions at the interface all need to be considered. Additionally, liquid-based materials can also be combined with artificial intelligence, machine learning, and the materials genome initiative, which have emerged in recent years, to further explore the interaction between solids and liquids, expand the design of the materials, improve their properties, and ensure their stability. This will bring new ideas for smart applications of liquid-based materials, such as substance detection, interface transport, energy conversion and storage, microfluidics, artificial organs, and wearable devices in the areas of ecological environment, manufacturing technology, resources and energy, agricultural science and technology, life and health, and aerospace science and technology.
Figure 1 Design of liquid-based materials and the typical interactions at the solid/liquid/gas interfaces. The middle circles show the design of liquid-based materials by selecting and designing the two essential components, the solid part and the liquid part. The light brown circle shows the typical solid parts, which can be divided into three categories according to the structural role they play. The green circle shows the typical liquid parts. Three-phase characteristics that need to be considered when designing liquid-based materials are located at the three corners of the triangle. These characteristics are not only factors to be considered in the construction of liquid-based materials but also external factors of the environment in which the liquid-based materials are used. The arrows represent the interactions at the solid/liquid, gas/solid, and liquid/gas interfaces inside or outside the liquid-based material. | 1,940.4 | 2022-09-01T00:00:00.000 | [
"Materials Science",
"Chemistry",
"Engineering"
] |
Correction of Apolipoprotein A-I-mediated Lipid Efflux and High Density Lipoprotein Particle Formation in Human Niemann-Pick Type C Disease Fibroblasts*
Impaired cell cholesterol trafficking in Niemann-Pick type C (NPC) disease results in the first known instance of impaired regulation of the ATP-binding cassette transporter A1 (ABCA1), a lipid transporter mediating the rate-limiting step in high density lipoprotein (HDL) formation, as a cause of low plasma HDL-cholesterol in humans. We show here that treatment of human NPC1-/- fibroblasts with the liver X receptor (LXR) agonist TO-901317 increases ABCA1 expression and activity in human NPC1-/- fibroblasts, as indicated by near normalization of efflux of radiolabeled phosphatidylcholine and a marked increase in efflux of cholesterol mass to apoA-I. LXR agonist treatment prior to and during apoA-I incubation resulted in reduction in filipin staining of unesterified cholesterol in late endosomes/lysosomes, as well as cholesterol mass, in NPC1-/- cells. HDL species in human NPC disease plasma showed the same pattern of diminished large, cholesterol-rich α-1 HDL particles as seen in isolated heterozygous ABCA1 deficiency. Incubating NPC1-/- fibroblasts with the LXR agonist normalized the pattern of HDL particle formation by these cells. ABCG1, another LXR target gene involved in cholesterol efflux to HDL, also showed diminished expression in NPC1-/- fibroblasts and increased expression upon LXR agonist treatment. These results suggest that NPC1 mutations can be largely bypassed and that NPC1 protein function is non-essential for the trafficking and removal of cellular cholesterol if the down-stream defects in ABCA1 and ABCG1 regulation in NPC disease cells are corrected using an LXR agonist.
High density lipoprotein cholesterol (HDL-C) 5 levels in plasma correlate strongly with protection against atherosclerotic vascular disease (1), believed to be due mainly to the removal of cholesterol by HDL from cells in the artery wall and other tissues (2). Mobilization of cell phospholipids and cholesterol to lipid-poor HDL apolipoproteins (apos) by the actions of the membrane transporter ATP-binding cassette transporter A1 (ABCA1) is the rate-limiting step in HDL particle formation (3). Delivery of additional cell cholesterol to HDL occurs by ABCA1-independent mechanisms, possibly facilitated by the actions of other membrane transporters including ABCG1 (4,5), ABCG4 (5), or scavenger receptor class B type I (6). Further maturation of HDL occurs through the esterification of cholesterol on the particle surface by lecithin:cholesterol acyltransferase (LCAT) (7), and transfer to the HDL pool of surface components of triglyceride-rich lipoproteins during their hydrolysis (8). The presence of approximately half-normal HDL-C levels in individuals heterozygous for ABCA1 mutations (9,10), however, indicates ABCA1 activity is a critical determinant of HDL-C levels in plasma, and that passive efflux and further steps in HDL maturation do not compensate for an initial decrease in ABCA1 activity to increase HDL-C levels.
Niemann-Pick type C (NPC) disease is a neurovisceral disorder characterized by accumulation of unesterified cholesterol and other lipids in late endosomes and lysosomes and impaired cholesterol trafficking to other cell compartments (11,12). Consistent with impaired regulation of cholesterol synthesis and esterification and low density lipoprotein (LDL) receptor activity in this disorder (13)(14)(15), we previously demonstrated that basal and cholesterol-stimulated expression of ABCA1 is diminished in fibroblasts from a patient with NPC disease, leading to impaired lipidation of apolipoprotein A-I (apoA-I) (16). We also found that 17/21 (81%) of NPC disease patients initially screened had low plasma HDL-C (16). These findings indicate that impaired regulation of ABCA1 leading to low plasma HDL-C is an integral feature of NPC disease. To our knowledge NPC disease represents the first known condition of decreased HDL formation and low HDL-C as a consequence of impaired ABCA1 regulation, rather than mutation.
The relationship of impaired ABCA1 regulation and HDL formation to the neurodegeneration and overaccumulation of cholesterol and other lipids in tissues including the liver in NPC disease is currently unknown. Earlier work by Liscum and Faust (15) had shown that addition of the oxysterol 25-hydroxycholesterol could normalize the rate of cholesterol esterification in human NPC disease fibroblasts, despite concomitant suppression of endogenous cholesterol synthesis in these cells. These results suggested the oxysterol might facilitate the mobilization of stored cholesterol from the late endosome/lysosome compartment to the endoplasmic reticulum. Lange and colleagues (17) subsequently showed that addition of oxysterols including 25-hydroxycholesterol to human NPC disease fibroblasts preferentially reduced lysosomal cholesterol and increased endoplasmic reticulum cholesterol, as measured by the pool of cholesterol available for esterification in whole cell homogenates. These investigators also suggested that reduction of endogenous cholesterol synthesis in the presence of oxysterols might deplete plasma membrane cholesterol, resulting in a shift of lysosomal cholesterol to the plasma membrane even in the presence of the NPC1 mutation (17). Frolov et al. (18) reported impaired oxysterol generation in human NPC1 and NPC2 disease fibroblasts, and decreased accumulation of total cholesterol mass and filipin staining of late endosomes/lysosomes in human NPC disease cells incubated with LDL-containing serum in the presence of 25-or 27-hydroxycholesterol. Together, these results suggest that correction of oxysterol-dependent gene regulation normalizes cholesterol trafficking even in the presence of mutations in the genes encoding NPC1 or NPC2.
A major regulator of ABCA1 expression is oxysterol-dependent activation of the nuclear receptor liver X receptor (LXR), which up-regulates ABCA1 to mobilize excess cell cholesterol by forming HDL (19 -21). In the present studies we tested whether addition of the non-oxysterol agonist of LXR, TO-901317, would correct the regulation of ABCA1 and normalize lipid efflux to apoA-I and HDL particle formation in human NPC disease fibroblasts, in the absence of direct effects of exogenous oxysterols on cholesterol synthesis and LDL receptor expression. Our results demonstrate correction of ABCA1 expression and near normalization of ABCA1-mediated lipid efflux, as well as correction of ABCG1 expression and HDL particle formation, even in the presence of NPC1 mutations, with LXR agonist treatment. These results suggest LXR agonists might greatly improve or possibly normalize the trafficking and overaccumulation of cholesterol and other lipids in NPC disease.
EXPERIMENTAL PROCEDURES
Materials-Cholesterol, phosphatidylcholine, LXR agonist TO-901317, fatty acid-free bovine serum albumin (FAFA), and filipin were purchased from Sigma. Preparation of Lipoproteins and ApoA-I-HDL (d = 1.07-1.21 g/ml) and LDL (d = 1.019-1.063) were obtained from pooled plasma of healthy volunteers by standard ultracentrifugation techniques (22). The whole protein fraction of HDL was obtained by delipidating HDL, and purified apoA-I was obtained using the method of Yokoyama et al. (23) but substituting Q-Sepharose Fast Flow (GE Healthcare) for DEAE-cellulose. Radiolabeling of LDL with [3H]cholesteryl linoleate was performed as described (24), to a specific activity of 16-44 cpm/ng of LDL protein. Plasma for determination of HDL particle species was obtained from a 12-month-old male NPC disease subject homozygous for the I1061T mutation of NPC1, who had hepatosplenomegaly and early neurologic symptoms, and from a 13-month-old male and a 43-year-old female control subject following informed consent.
Labeling of Cellular Cholesterol Pools and Phospholipids-Cells in 16-mm wells were labeled with LDL-derived cholesterol by incubation with DMEM containing 1 mg/ml FAFA (DMEM/FAFA) plus 50 µg/ml [3H]cholesteryl linoleate-labeled LDL protein for 24 h. Following cholesterol loading, cells were rinsed twice with PBS containing 1 mg/ml bovine serum albumin (PBS/bovine serum albumin) at 37°C, and incubated an additional 24 h in DMEM/FAFA to allow hydrolysis of added LDL and equilibration of LDL-derived cholesterol, in the absence or presence of 5 µM TO-901317 added from a 10 mM stock in dimethyl sulfoxide (Me2SO). Cells were rinsed 3 times with PBS/bovine serum albumin prior to analysis or addition of efflux medium. To label phosphatidylcholine, cells in 35-mm dishes were loaded with non-lipoprotein cholesterol for 24 h and then incubated with 5 µCi/ml [3H]choline chloride in DMEM/FAFA ± TO-901317 during the 24-h equilibration period (16). Cells were rinsed 5 times with PBS/FAFA prior to addition of efflux medium.
Cholesterol and Phospholipid Efflux-Following radiolabeling and equilibration steps, cells were incubated for 24 h in DMEM/FAFA containing 10 µg/ml apoA-I in the absence or presence of 5 µM TO-901317. After incubation, efflux media were collected and cells were rinsed twice with ice-cold PBS/bovine serum albumin and twice with ice-cold PBS. Cells were stored at −20°C until lipid extraction. Efflux media were centrifuged at 3,000 × g for 10 min at 4°C to remove cell debris. Radioactivity in medium was measured directly (for cells labeled with [3H]cholesterol) or the medium was extracted for determination of radiolabeled phosphatidylcholine (25). Extracted cellular lipids were separated by thin-layer chromatography and assayed for radioactivity as previously described (26). Protein content of extracted cell layers was determined using bovine serum albumin as standard (27).
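Lipid efflux measured this way is conventionally reported as the radioactivity released into the medium as a percentage of total label (medium plus cell). The short Python sketch below illustrates that calculation; the function name and the example counts are illustrative only and are not the authors' code or data:

```python
def percent_efflux(medium_cpm: float, cell_cpm: float, blank_cpm: float = 0.0) -> float:
    """Efflux of radiolabeled lipid as % of total label, after subtracting a medium blank."""
    released = medium_cpm - blank_cpm
    total = released + cell_cpm
    return 100.0 * released / total

# Illustrative numbers only (cpm of [3H]cholesterol in medium vs. cell extract)
print(round(percent_efflux(medium_cpm=5200, cell_cpm=46800), 1))  # -> 10.0
```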
Cholesterol Mass Assay-Cells grown in 60-mm dishes, loaded with 50 µg/ml unlabeled LDL, and equilibrated for 24 h in the absence or presence of 5 µM TO-901317 were incubated with 10 µg/ml apoA-I ± TO-901317 for 24 h. After incubation, media and cells were collected and cells were homogenized by sonication. Phospholipids from media or cell homogenates were digested by phospholipase C to remove the polar head groups, and total lipids were extracted in the presence of tridecanoin as the internal standard. Samples were derivatized with Sylon BFT (Supelco) and analyzed by gas chromatography (Agilent Technologies 6890 Series equipped with a Zebron capillary column (ZB-5, 15 m × 0.32 mm × 0.25 µm) and connected to a flame ionization detector; Zebron, Palo Alto, CA). The oven temperature was raised from 170 to 290°C at 20°C/min, and then to 340°C at 10°C/min, where it was kept for 24 min. Helium was used as the carrier gas. The gas chromatograph was operated in constant flow mode with a flow rate of 4.5 ml of helium/min. The injector was operated in the split mode and was kept at 325°C, and the detector was kept at 350°C (28). Sterols were identified by comparing their retention times with standards, and calculation of sterol mass in samples was based on the internal standard.
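Quantification against an internal standard, as described above, amounts to scaling the analyte peak area by the known mass of tridecanoin added to the extract. A minimal sketch of that arithmetic follows; the peak areas, the 25 µg standard amount, and the unit response factor are illustrative assumptions, not values from this study:

```python
def sterol_mass(peak_area: float, istd_area: float, istd_mass_ug: float,
                response_factor: float = 1.0) -> float:
    """Mass of a sterol from its GC peak area relative to the internal standard peak."""
    return response_factor * istd_mass_ug * peak_area / istd_area

# Illustrative values: 25 ug tridecanoin internal standard, peak areas in arbitrary units
cholesterol_ug = sterol_mass(peak_area=8.4e5, istd_area=6.0e5, istd_mass_ug=25.0)
print(f"{cholesterol_ug:.1f} ug cholesterol in the extract")  # -> 35.0 ug
```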
Filipin Staining-Cells were grown on coverslips and loaded with 50 µg/ml LDL as described above. The cells were then incubated in the absence or presence of 5 µM TO-901317 during the 24-h equilibration period and during a 24- or 48-h incubation with DMEM/FAFA ± 10 µg/ml apoA-I. Cells were then fixed with 3% paraformaldehyde in PBS for 20 min, washed three times with PBS, and stained with filipin as described (18), with slight modification. Cells were incubated in PBS with 1.5 mg/ml glycine for 10 min, washed three times with PBS, and stained with 50 µg/ml filipin in PBS for 30 min. Coverslips were mounted with ProLong Antifade reagent (Molecular Probes), and filipin fluorescence was detected by fluorescence microscopy on a Leica DM IRE2 microscope equipped with a 4′,6-diamidino-2-phenylindole filter.
Western Blot Analysis of ABCA1-Cells in 100-mm dishes were harvested with 3 ml of PBS and pelleted by centrifugation for 10 min at 3,000 × g. The pellet was resuspended in 500 µl of extraction buffer consisting of 50 mM Tris-HCl buffer, pH 7.4, containing 500 ng/ml aprotinin, 1 µg/ml leupeptin, 1 mM phenylmethylsulfonyl fluoride in ethanol, and 2 mM EGTA. The pellet was then homogenized by sonication for three 5-s periods, and nucleic acids were pelleted twice by centrifugation for 5 s at 1,000 × g. Crude cellular membrane proteins in the supernatant were pelleted by centrifugation for 20 min at 14,000 × g at 4°C and resuspended in 0.45 M urea containing 0.1% Triton X-100 and 0.05% dithiothreitol. Twenty to thirty micrograms of crude membrane proteins were separated by 5% SDS-PAGE under reducing conditions and transferred to a nitrocellulose membrane. Immunoblotting was performed using a polyclonal rabbit anti-human ABCA1 antibody (1:1000 dilution) from Novus Biologicals and a goat anti-rabbit IgG horseradish peroxidase-conjugated secondary antibody (1:10,000, Sigma). Immunoblots were reprobed with rabbit polyclonal anti-protein-disulfide isomerase (Stressgen) and goat polyclonal anti-actin (Santa Cruz) antibodies as loading controls. Chemiluminescence was detected by enhanced chemiluminescence assay (Amersham Biosciences).
Two-dimensional Gel Electrophoresis of HDL Particles-Fasting blood from the NPC subject and normolipidemic control subjects was collected into EDTA tubes and placed immediately on ice. Plasma was obtained by centrifugation at 2,000 × g for 10 min at 4°C. To characterize apoA-I-containing particles generated by normal and NPC1 human skin fibroblasts, fibroblast-conditioned media in 35-mm dishes were centrifuged at 2,000 × g for 5 min at 4°C to pellet cells, and the supernatant was concentrated 10-fold by ultrafiltration (Amicon Ultra-4, MWCO 10000, Millipore). Plasma and media samples were kept on ice and used the same day or frozen at −80°C. Plasma HDL particles were separated according to the method of Asztalos et al. (10) as previously described. HDL particles in equivalent volumes of concentrated apoA-I-conditioned media were separated according to the method of Castro and Fielding (29), except that in the second dimension the voltage was increased from 100 V for 19 h to 125 V for 24 h to increase the separation of the α-migrating HDL species. Briefly, 20-µl samples were separated in the first dimension on a 0.75% agarose gel in 50 mM barbital buffer, pH 8.6, at 200 V for 5.5 h at 5°C. Electrophoresis in the second dimension was performed with a 2-23% polyacrylamide concave gradient gel at 125 V for 24 h at 5°C in 0.025 M Tris, 0.192 M glycine buffer, pH 8.3. High molecular weight protein standards (7.1-17.0 nm, Amersham Biosciences) were run on each gel. Following electrophoresis, samples were electrotransferred (30 V, 24 h, 4°C) onto nitrocellulose membranes (Trans-Blot, Bio-Rad). To locate the standard proteins, the nitrocellulose membranes were stained with Ponceau S and the position of each protein marked. Membranes were blocked by a 1-h incubation in Tris-buffered saline containing 1% Tween 20 (TBST) and 10% nonfat milk at room temperature. ApoA-I-containing particles were detected by blotting the membranes with rabbit polyclonal anti-human apoA-I antibody (Calbiochem) in TBST containing 1% nonfat milk for 1 h at room temperature, and then with 125I-labeled donkey anti-rabbit antibody (Amersham Biosciences). The specific activity of the secondary antibody was 4.8 × 10⁶ cpm/mg. Membranes were incubated for 3 h in 80 ml of TBST containing 1% nonfat milk and 2.6 µg of antibody, followed by three washes of 5 min each in TBST before autoradiography (30).
Statistical Analysis-Results are expressed as mean ± S.D. Significant differences between experimental groups were determined using Student's t test.
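The comparison described here corresponds to an unpaired two-sample t test on replicate measurements. The sketch below shows one way to perform it with SciPy; the triplicate efflux values are invented for illustration and are not data from this study:

```python
from statistics import mean, stdev
from scipy import stats

# Illustrative triplicate efflux values (% of total label), not the authors' data
group_a = [3.1, 3.4, 2.9]   # e.g. apoA-I alone
group_b = [8.7, 9.2, 8.9]   # e.g. apoA-I plus LXR agonist

t, p = stats.ttest_ind(group_a, group_b)  # unpaired Student's t test
print(f"mean ± S.D.: {mean(group_a):.2f} ± {stdev(group_a):.2f} vs "
      f"{mean(group_b):.2f} ± {stdev(group_b):.2f}; p = {p:.4f}")
```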
Diminished ABCA1 Expression in NPC1−/− Human Fibroblasts Is Corrected by LXR Agonists-We previously demonstrated low basal and cholesterol-stimulated levels of ABCA1 mRNA and protein in human NPC1−/− fibroblasts (16), consistent with impaired oxysterol- and LXR-target gene regulation in this disease. To determine whether exogenous LXR ligands can correct ABCA1 expression in NPC disease cells, NPC1−/− fibroblasts were incubated in the presence of the synthetic non-oxysterol LXR agonist TO-901317 (31) or the oxysterols 25- and 27-hydroxycholesterol. As shown previously (16), ABCA1 protein levels were low in NPC1−/− cells grown to confluence in 10% fetal bovine serum or loaded with non-lipoprotein cholesterol, when compared with NPC1+/+ fibroblasts (Fig. 1). Addition of TO-901317 or either of the oxysterols to non-cholesterol-loaded NPC1−/− cells increased ABCA1 protein to levels similar to those seen in cholesterol-loaded or LXR ligand-treated NPC1+/+ cells. Similar changes were seen in ABCA1 mRNA levels assessed by reverse transcriptase-PCR (data not shown), indicating the effect of the agonists on ABCA1 is at the transcriptional rather than post-transcriptional level. Reprobing the same Western blots for the loading controls protein-disulfide isomerase and actin showed variable results between cell lines, for reasons that are unclear but likely related to alteration of other pathways in NPC1−/− cells. These results indicate equivalent loading of lanes for each condition within each cell type, however, and do not alter our conclusion that ABCA1 expression is increased by LXR agonists in NPC1−/− cells.
ApoA-I-mediated Efflux of Phosphatidylcholine and LDL-derived Cholesterol Is Increased in NPC1−/− Fibroblasts Treated with LXR Agonist-We previously reported impaired efflux of LDL-derived, whole cell, plasma membrane, and newly synthesized cholesterol, as well as the phospholipids phosphatidylcholine and sphingomyelin, to apoA-I from human NPC1−/− fibroblasts (16). Addition of the LXR agonist raised efflux of [3H]phosphatidylcholine from NPC1−/− cells to levels higher than from apoA-I-treated NPC1+/+ cells, and to ~90% of levels seen from apoA-I plus LXR agonist-treated NPC1+/+ cells (Fig. 2A). This result suggests ABCA1 function is restored to normal or near normal levels in NPC1−/− cells by addition of the LXR agonist, and that ABCA1 can mobilize phosphatidylcholine to apoA-I even in the presence of NPC1 mutations. Addition of LXR agonist to NPC1−/− cells raised the efflux of LDL-derived [3H]cholesterol to apoA-I to levels similar to those in apoA-I-treated NPC1+/+ cells, but to only 42% of levels in apoA-I plus LXR agonist-treated NPC1+/+ cells (Fig. 2B). Efflux of LDL-derived [3H]cholesterol to apoA-I from NPC1−/− cells would be expected to be lower than from NPC1+/+ cells even in the presence of LXR agonist, due to dilution of exogenously derived [3H]cholesterol by the much larger pool of unlabeled unesterified cholesterol in NPC1−/− compared with NPC1+/+ cells (32).
ApoA-I plus LXR Agonist Treatment Depletes NPC1−/− Fibroblasts of Cholesterol Mass-To determine total cholesterol efflux from both cell types, we measured changes in cholesterol mass in the medium and cellular compartments of LDL-loaded wild-type and NPC disease fibroblasts treated with apoA-I in the absence or presence of LXR agonist. Total cholesterol mass in the medium of apoA-I-treated NPC1−/− cells was slightly lower than in the medium of apoA-I-treated NPC1+/+ cells (Fig. 3A). Cholesterol mass efflux to apoA-I from NPC1−/− cells was significantly lower than from NPC1+/+ cells after subtraction of albumin-dependent cholesterol efflux to the medium (Fig. 4), which was higher from NPC1−/− cells.
Consistent with previous reports (13, 32), NPC1−/− fibroblasts loaded with LDL and incubated with albumin alone showed ~50% less cholesteryl ester (CE) mass (Fig. 3B) and 2-3-fold more unesterified cholesterol (UC) mass (Fig. 3C) when compared with NPC1+/+ fibroblasts. ApoA-I treatment markedly depleted CE mass in NPC1+/+ cells (Fig. 3B), an effect further accentuated by addition of LXR agonist, consistent with ABCA1 preferentially mobilizing cholesterol that would otherwise be esterified by acyl-CoA:cholesterol acyltransferase (33). This effect was not seen in NPC1−/− cells treated with apoA-I alone; addition of LXR agonist to the apoA-I incubation resulted in significant (33%) depletion of CE mass in NPC1−/− cells. Incubation with 10 µg/ml apoA-I for 24 h failed to reduce UC mass significantly in either NPC1+/+ or NPC1−/− cells (Fig. 3C). Addition of LXR agonist to the apoA-I incubation resulted in a 30% drop in UC mass in NPC1+/+ cells, and a 28% drop in NPC1−/− cells (Figs. 3C and 4). These results suggest correction of ABCA1 expression also restores the ability of apoA-I to deplete NPC1−/− cells of cholesterol mass, even in the presence of NPC1 mutations.
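The percentage depletions quoted above are simply the change in sterol mass expressed relative to the albumin-only control. A minimal sketch of that calculation follows; the masses used here are invented to reproduce depletions of roughly the magnitude reported, not the measured values from Fig. 3:

```python
def percent_depletion(mass_control: float, mass_treated: float) -> float:
    """Percent decrease in cellular sterol mass relative to the albumin-only control."""
    return 100.0 * (mass_control - mass_treated) / mass_control

# Illustrative masses (ug/dish), not measured data
uc_control, uc_apoa1_lxr = 95.0, 68.0
ce_control, ce_apoa1_lxr = 12.0, 8.0
print(f"UC depleted by {percent_depletion(uc_control, uc_apoa1_lxr):.0f}%")  # ~28%
print(f"CE depleted by {percent_depletion(ce_control, ce_apoa1_lxr):.0f}%")  # ~33%
```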
A hallmark of NPC disease cells is accumulation of large amounts of cholesterol in late endosomal/lysosomal compartments, as determined by heavy staining with the unesterified cholesterol-specific dye filipin in the same pattern as LAMP1 or LAMP2 staining for these intracellular compartments (34, 35). Filipin staining of NPC1+/+ and NPC1−/− fibroblasts was performed to assess changes upon incubation with apoA-I in the absence or presence of LXR agonist. Twenty-four and 48-h incubations of NPC1−/− fibroblasts with apoA-I resulted in no significant alteration in the intense filipin staining when compared with cells treated with albumin alone (Fig. 5). Addition of LXR agonist during the apoA-I incubations resulted in decreased filipin staining at 24 h, an effect that was accentuated at 48 h. These results are consistent with the drop in UC mass in these cells (Fig. 3C), and suggest up-regulation of LXR-responsive genes in NPC1−/− cells is capable of depleting late endosomal/lysosomal cholesterol in the absence of NPC1 protein function.
HDL Particle Species in NPC Disease Plasma and NPC Disease Fibroblast-conditioned Medium-
We previously reported the presence of low plasma HDL-C levels in more than 80% of homozygous NPC patients studied (16). To attempt to correlate this finding with changes in cholesterol mass efflux to apoA-I from NPC disease fibroblasts, we performed two-dimensional gel electrophoresis of HDL particles in the plasma of an NPC disease patient and in apoA-I-conditioned medium of NPC1−/− cells. HDL in the plasma of a 12-month-old male NPC disease subject showed a near absence of large α-1 and preα-1, and a decrease in α-2 and preα-2 HDL particles compared with the HDL species seen in an age- and sex-matched control subject and a 43-year-old female control subject (Fig. 6A). These changes in HDL species in NPC disease plasma are similar to those previously reported for Tangier disease heterozygote (ABCA1+/−) subject plasma, which also showed decreased levels of α-1, preα-1, α-2, and preα-2 HDL particles (Fig. 6B) (10). These results suggest that decreased ABCA1 activity in NPC disease, as in heterozygous Tangier disease, is primarily responsible for the absence of larger α-HDL in NPC patient plasma. The absence of these larger, cholesterol-rich α-HDL, which have a longer half-life in plasma than smaller HDL particles (29, 36), likely explains the diminished plasma HDL-C concentration in NPC disease.
Two-dimensional gel electrophoresis of HDL particles formed in the medium of NPC1−/− cells incubated with 10 µg/ml apoA-I for 24 h similarly showed a near absence of larger α-HDL, and fewer small α-HDL, when compared with apoA-I-conditioned medium of NPC1+/+ human fibroblasts (Fig. 7A). Addition of LXR agonist during the apoA-I incubation resulted in a marked increase in α-HDL particle formation in both cell types, and converted the HDL species formed by NPC1−/− cells to a normal pattern (Fig. 7B). These results provide further evidence of correction of HDL particle formation by an LXR agonist in the presence of NPC1 mutations. Impaired ABCG1 Expression in NPC Disease-In addition to reduced expression of ABCA1, the impaired cholesterol trafficking and oxysterol generation in NPC disease could be expected to reduce the expression of all LXR-dependent genes (37). Whereas ABCA1 is believed to be the primary promoter of the initial lipidation of apoA-I in HDL particle formation (3), further delivery of cholesterol to HDL particles occurs in part through the actions of another ABC transporter, ABCG1, whose expression is also LXR dependent (4, 5, 38). The level of ABCG1 expression in human NPC disease has not previously been reported. Like ABCA1 (16), we found lower basal levels of ABCG1 mRNA in LDL-loaded human NPC1−/− fibroblasts when compared with NPC1+/+ cells (Fig. 8). ABCG1 mRNA rose to a similar level in both cell types upon incubation with LXR agonist. These results suggest the correction of HDL particle formation in LXR agonist-treated NPC1−/− fibroblasts may be mediated in part by increased ABCG1 expression, in addition to increased ABCA1 expression.
Additional LXR response genes expected to show impaired expression in NPC disease and involved in cholesterol efflux and modulation of HDL particle size include ABCG4 (5, 39), apoE (40), CETP (41), and PLTP (42). No ABCG4 mRNA was detected in Me2SO- or TO-901317-treated NPC1+/+ or NPC1−/− human fibroblasts (Fig. 8). ApoE mRNA was not found in either of our fibroblast lines, consistent with a previous report in human fibroblasts (43). CETP mRNA was also not found in our fibroblast lines in the absence or presence of LXR agonist. PLTP is thought to mediate formation of larger HDL particles through particle fusion (44), and is LXR-agonist responsive (42). We were, however, unable to detect diminished PLTP mRNA in NPC1−/− fibroblasts, nor an increase in PLTP expression following incubation with TO-901317 in either NPC1−/− or NPC1+/+ fibroblasts. LCAT mediates the maturation of HDL particles by esterifying cholesterol on the HDL particle surface; LCAT has not been reported to be LXR responsive. Cholesteryl ester levels were very low in the efflux medium from both NPC1+/+ and NPC1−/− fibroblasts (data not shown), further suggesting altered LCAT activity is unlikely to explain the increase in HDL particle size seen in LXR agonist-treated NPC1−/− fibroblasts. We were unable to detect LCAT mRNA in Me2SO- or TO-901317-treated NPC1+/+ or NPC1−/− fibroblasts.
DISCUSSION
The present studies demonstrate that an exogenous agonist of the nuclear receptor LXR can correct the impaired regulation and activity of ABCA1 in human NPC disease cells. The increased ABCA1 protein in NPC1−/− cells is functional, as indicated by near normalization of phosphatidylcholine efflux and markedly increased cholesterol efflux to apoA-I. ABCA1 is believed to preferentially mobilize to the plasma membrane the regulatory substrate pool of intracellular cholesterol, likely residing in late endosomes/lysosomes (45), that would otherwise be delivered to the endoplasmic reticulum for esterification by acyl-CoA:cholesterol acyltransferase. Removal of this acyl-CoA:cholesterol acyltransferase substrate pool was suggested by the ability of apoA-I to deplete cholesteryl ester mass in LXR agonist-treated NPC1−/− cells, as seen in NPC1+/+ cells incubated in the absence or presence of LXR agonist. In addition, LXR agonist treatment resulted in a marked decrease in unesterified cholesterol mass, as well as late endosome/lysosome cholesterol assessed by filipin staining, in NPC1−/− cells.

FIGURE 6. HDL particle species in control, NPC disease, and ABCA1+/− human plasma. A, fasting plasma from a 43-year-old female (HDL-C 1.35 mmol/liter) and a 13-month-old male (HDL-C 1.07 mmol/liter) control subject and a 12-month-old NPC1−/− male subject (HDL-C 0.86 mmol/liter) were analyzed by non-denaturing two-dimensional gel electrophoresis to assess HDL particle species. HDL particles were separated by charge on agarose gel in the first dimension and by size on the non-denaturing polyacrylamide gel in the second dimension, followed by electrotransfer to a nitrocellulose membrane and detection of apoA-I-containing particles by immunoblot as described (10).

(Fragment of a later figure legend.) An equivalent volume of Me2SO (vehicle for TO-901317) was added to control cultures. Total RNA was extracted from cells as well as Npc1+/+ Balb/c mouse liver and brain, and analyzed by semi-quantitative reverse transcriptase-PCR. HepG2 cell RNA was isolated as the positive control for CETP amplification. Cyclophilin is shown as a loading control. Results are representative of two experiments with similar results.
Increased expression and activity of ABCA1 by LXR agonist TO-901317 treatment has been shown in multiple cell lines and tissues (31, 46 -48). In the current studies we show the ability of increased expression of ABCA1 by this agonist to markedly increase the mobilization of total and late endosomal/lysosomal cholesterol even in the presence of dysfunctional NPC1 protein. Increased ABCA1 in NPC1 Ϫ/Ϫ fibroblasts could be mobilizing cholesterol mainly from the plasma membrane, with secondary depletion of late endosome/lysosome cholesterol by an NPC1-independent mechanism, to replenish plasma membrane cholesterol lost to efflux. Alternatively, or in addition, ABCA1 might be mobilizing cholesterol from the late endosome/lysosome compartment directly, by a mechanism also not requiring functional NPC1. Several studies have suggested internalization of ABCA1 for direct mobilization of cholesterol from the late endosomal/lysosomal compartment represents a quantitatively important component of total ABCA1mediated cholesterol efflux to apoA-I (35, 49 -51). Regardless of whether ABCA1 is removing cholesterol from the late endosomal/lysosomal compartment directly or indirectly, our results indicate up-regulation of ABCA1 can largely bypass NPC1 mutations and increase cholesterol trafficking and removal from NPC disease cells.
The near normalization of phospholipid efflux to apoA-I from LXR agonist-treated NPC1 Ϫ/Ϫ fibroblasts suggests NPC1 function is not necessary for this aspect of ABCA1 activity, either at the plasma membrane or in intracellular compartments. The failure of LXR agonist treatment to completely correct cholesterol efflux to apoA-I, despite normalization of ABCA1 expression, suggests two possibilities. The first is that incubation of NPC1 Ϫ/Ϫ fibroblasts with LXR agonist and apoA-I requires incubations longer than 24 h to see a more pronounced effect. Increased depletion of UC at 48 h, as indicated by a further drop in filipin staining at this time point (Fig. 5), suggests this might be the case. The second possibility is that increased ABCA1 can largely but not completely correct cholesterol mobilization from late endosomes/lysosomes in the presence of NPC1 mutations, and that NPC1 function is necessary for a portion of cholesterol removal from this compartment independent of ABCA1. Whereas this might also be the case, our results suggest increased expression of ABCA1 alone is able to bypass the majority of the effects of NPC1 mutation on intracellular cholesterol trafficking for efflux to apoA-I, and as such, that NPC1 function is not absolutely essential for this process.
The persistent increase in cholesterol content in NPC1−/− cells even after LXR agonist and apoA-I treatment could be due in part to increased de novo cholesterol synthesis in these cells.
Previous studies showed similar decreases in de novo cholesterol synthesis in response to the LXR agonist 25-hydroxycholesterol in NPC1+/+ and NPC1−/− cells (15). This suggests the persistently higher UC levels in NPC1−/− cells are more related to the need for longer incubations to see further declines in this pool than to an over-compensation of new cholesterol synthesis by the NPC1−/− cells.
We also found a near absence of larger α-1 and reduced α-2 HDL particles in the plasma of a 1-year-old NPC disease patient. Subjects heterozygous for ABCA1 mutations show the same pattern of loss of larger α-HDL species (Fig. 6) (10), and approximately half-normal plasma HDL-C levels (9). These results suggest that the reduction in HDL-C and loss of larger α-HDL in NPC patient plasma are due mainly to reduced ABCA1 expression and activity in this disease. Treatment with LXR agonist corrected the pattern of absent large α-HDL and reduced smaller α-HDL in NPC1−/− fibroblast apoA-I-conditioned medium to the pattern of HDL species seen in NPC1+/+ cell apoA-I-conditioned medium. These results provide further evidence that NPC1 protein dysfunction in NPC1−/− cells can be bypassed to normalize the cholesterol trafficking required for HDL particle formation.
The current studies also demonstrate impaired expression of ABCG1 in human NPC disease cells. ABCG1 is an additional LXR response gene involved in cholesterol efflux. Expression of ABCG1, along with ABCA1 and other LXR response genes including apoE, would be expected to be decreased in cholesterol-replete or cholesterol-loaded tissues in NPC disease, due to the sequestration of unesterified cholesterol in late endosomes/lysosomes, and the consequent defect in generation of LXR-activating oxysterols (18). LXR agonist treatment also corrected ABCG1 expression in human NPC1 Ϫ/Ϫ fibroblasts (Fig. 8). Despite our conclusion that the primary mediator of the correction of cholesterol mobilization and HDL particle formation by LXR agonist in NPC1 Ϫ/Ϫ cells was increased ABCA1 activity, these results suggest ABCG1 may also have played a role in this effect. ABCG1 has been shown to facilitate delivery of cell cholesterol to pre-formed HDL but not to lipidfree apoA-I, by a mechanism that does not involve HDL particle binding to the cell surface (5). Overexpression of ABCG1 redistributes cholesterol to cell surface domains where the cholesterol is accessible to removal by HDL (52). LXR agonist TO-901317 treatment of mouse macrophages results in redistribution of ABCG1 from intracellular compartments to the plasma membrane, with a concomitant increase in cholesterol efflux to HDL (53). The relative importance of increased ABCA1-mediated cholesterol efflux to apoA-I versus ABCG1mediated cholesterol efflux to preformed HDL in correcting the trafficking and removal of cholesterol from NPC disease cells requires further investigation. The absence of expression of LXR response genes apoE and ABCG4 in human fibroblasts ruled out a role for these genes in the correction of lipid mobilization and HDL formation in our studies, but would not rule out an important role of these genes in the impaired cholesterol homeostasis in other NPC disease tissues, including the brain.
Whether the results of studies in human NPC disease cells can be corroborated using cells from mouse models of NPC disease is not yet clear. Npc1-deficient mice show no decrease in plasma HDL-C (54,55), whereas more than 80% of human NPC patients studied to date do (16). This difference is seen despite the recent report of low ABCA1 expression in Npc1 Ϫ/Ϫ mouse fibroblasts (56), as seen in our previous studies of human NPC disease fibroblasts (16) and the current studies. These findings suggest differences in HDL metabolism in mice compared with humans might make the Npc1 Ϫ/Ϫ mouse an unsuitable model to study this aspect of NPC disease. Studies using the same LXR agonist as used here have shown increased ABCA1 and ABCG1 mRNA in the brains of wild type mice treated with the agonist orally in vivo (48), and increased ABCA1 and ABCG1 mRNA in wild type mouse glial cells in culture (48,57), although different effects of the agonist on cholesterol efflux from these cells to apoA-I were reported (48,57). Although no differences were seen in ABCA1 and ABCG1 mRNA levels in the cerebellum of wild type and Npc1 Ϫ/Ϫ mice, the increase in cerebellar ABCA1 and ABCG1 expression with TO-901317 treatment might have contributed to the protection against Purkinje cell death and extended lifespan seen in Npc1 Ϫ/Ϫ mice treated with the neurosteroid allopregnanolone (56). Additional studies are needed to know how well expression of ABCA1 and ABCG1 in fibroblasts correlates with expression of these transporters in the brain of human patients with NPC disease, and whether up-regulation of these transporters in the brain might contribute to increased survival in these patients.
In summary, we have demonstrated the ability of an agonist of the nuclear receptor LXR to up-regulate ABCA1 expression and activity and largely correct phospholipid and cholesterol efflux to apoA-I, as well as HDL particle formation, in the presence of NPC1 mutations. In addition, ABCG1 expression was found to be low in NPC1−/− cells, and was also corrected with LXR agonist treatment. Addition of LXR agonist during apoA-I incubations markedly decreased late endosomal/lysosomal cholesterol staining by filipin, as well as total unesterified cholesterol mass, in NPC1−/− cells. These results suggest the NPC1 protein is non-essential for the trafficking and removal of cell cholesterol if the downstream defects in ABCA1 and ABCG1 expression induced by dysfunctional NPC1 can be corrected (37). Whether correction of ABCA1 and/or ABCG1 expression would be sufficient to have a major therapeutic benefit in human NPC disease, or whether NPC1 has another essential role in addition to mobilization of cholesterol from late endosomes/lysosomes, are questions of major importance. | 7,911.4 | 2006-12-01T00:00:00.000 | [
"Biology"
] |
Handwritten Geez Digit Recognition Using Deep Learning
Amharic is the second most spoken language in the Semitic family after Arabic. In Ethiopia and neighboring countries, more than 100 million people speak the Amharic language. There are many historical documents that are written using the Geez script. Digitizing historical handwritten documents and recognizing handwritten characters is essential to preserving valuable documents. Handwritten digit recognition is one of the tasks of digitizing handwritten documents from different sources. Currently, research works on handwritten Geez digit recognition are very few, and there is no organized dataset available to public researchers. A convolutional neural network (CNN) is preferable for pattern recognition tasks such as handwritten document recognition because it extracts features from different styles of writing. In this work, the proposed model recognizes Geez digits using a CNN. Deep neural networks, which have recently shown exceptional performance in numerous pattern recognition and machine learning applications, are used to recognize handwritten Geez digits, but this has not previously been attempted for Ethiopic scripts. Our dataset, which contains 51,952 images of handwritten Geez digits collected from 524 individuals, is used to train and evaluate the CNN model. The application of the CNN significantly improves performance over several machine-learning classification methods. Our proposed CNN model has an accuracy of 96.21% and a loss of 0.2013. In comparison to earlier research works on Geez handwritten digit recognition, the study was able to attain higher recognition accuracy using the developed CNN model.
Introduction
The Amharic language is the only African language with its own alphabet and writing system, while most other African languages use Latin or Arabic alphabets for their writing systems [1]. The Federal Democratic Republic of Ethiopia and other regional states use the Amharic language as their official working language. It is the mother tongue of over 50 million people and the second language of over 100 million people in Ethiopia [1]. Arabic is the only Semitic language spoken more widely than Amharic in the world. Amharic is also spoken by some people in neighboring countries such as Eritrea, Djibouti, and Somalia. There are many historical documents written in Geez script found in Ethiopia. There are around 80 different languages spoken in Ethiopia, with up to 200 dialects. The Geez alphabet is used as the writing system in some of these languages. Amharic, Geez, and Tigrinya are the most spoken languages in Ethiopia that use the Geez alphabet [1].
Geez script consists of 265 characters, including 27 labialized characters (characters representing two sounds), 20 symbols for numerals, and 8 punctuation marks [2]. Our research focuses only on the Geez digits. Geez numerals have been used in Ethiopian calendars, Geez Bibles, and historical documents. Geez numbers consist of twenty different symbols that represent the numerical values. Unlike Latin numbers, 0 is not represented by any symbol. Twenty numbers are represented by independent symbols, namely 1-9, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, and 10,000, as shown in Figure 1. Other numbers are represented by combinations of those twenty symbols. Each digit symbol has a dash (horizontal line) above and below the digit character.
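The idea that every other value is composed from the twenty basic symbols can be illustrated with a short Python sketch. The Unicode codepoint range used here (U+1369-U+137C for the Ethiopic numerals) and the simple tens-plus-units rule for values below 100 are assumptions of this illustration, not material from the paper, which covers the full system including the symbols for 100 and 10,000:

```python
# Ethiopic numeral symbols: 1-9, then 10-90 by tens (100 and 10,000 are separate symbols).
# Codepoint range U+1369..U+137C is assumed here for illustration.
ONES = {i: chr(0x1368 + i) for i in range(1, 10)}        # 1..9
TENS = {i * 10: chr(0x1371 + i) for i in range(1, 10)}   # 10..90

def geez_digits(n: int) -> str:
    """Compose a Geez numeral for 1..99 from the basic symbols (simplified sketch)."""
    if not 1 <= n <= 99:
        raise ValueError("this sketch handles 1-99 only; 100 and 10,000 need extra symbols")
    tens, units = divmod(n, 10)
    return (TENS[tens * 10] if tens else "") + (ONES[units] if units else "")

print(geez_digits(45))  # the symbol for 40 followed by the symbol for 5
```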
Handwritten character and digit recognition has been studied for different languages to improve recognition efficiency when digitizing historical and handwritten documents [4]. Digit recognition is a well-known problem that has been used for document indexing using dates such as the document date, birth date, marriage date, and death date [5].
Digit recognition and detection have been utilized in a variety of applications, including the automated reading of bank cheques, postal numbers and codes, tax forms, and document indexing based on dates [6]. There are two types of architectures for handwritten digit string recognition: detection-free and segmentation-based recognition [7]. In a segmentation-based system, the numerical string, which may contain multiple digits, is first detected; the digits must then be split before recognition so that each digit is isolated [8,9]. In contrast, a detection-free approach recognizes each digit without any splitting or detection preprocessing [10].
Random Forest, SVM, KNN, and other machine learning techniques have been developed to recognize handwritten digits. Deep learning methods such as CNNs achieve the highest accuracy compared to the most commonly used machine learning algorithms for handwritten digit recognition [11,12]. CNNs are used for both pattern recognition and large-scale image classification. Handwritten character recognition is a research field in computer vision, artificial intelligence, and pattern recognition [1]. A computer application that performs handwriting recognition can acquire and recognize characters in photographs, paper documents, and other sources and convert them to electronic or machine-encoded form. Deep learning is a popular field of machine learning that uses hierarchical structures to learn high-level abstractions from data. According to references [13,14], the availability of hardware such as CPUs, GPUs, and hard drives, together with machine learning algorithms and large datasets such as the MNIST handwritten digit dataset and ImageNet, are key factors in deep learning's success. Handwritten digit recognition, facial recognition, computer vision, audio and visual signal analysis, voice recognition, disaster recognition, and automated language processing are all areas where deep learning is applied [15].
Nowadays, deep learning has become a popular technique for learning to recognize patterns and extract deep features from a given dataset. It is supported by diverse libraries for extracting patterns from images, recognizing them, and classifying them. Among deep learning algorithms, the CNN is efficient and performs well in image classification, image recognition, pattern recognition, feature extraction, and related tasks.
Related Works
Kusetogullari et al. [5] introduced a deep learning architecture known as DIGITNET to detect and recognize English handwritten digits found in historical documents in Sweden. The authors also created a large-scale public handwritten digit dataset known as DIDA. The data were collected from Swedish handwritten historical documents written by different priests in the nineteenth century, and the dataset consists of 100,000 handwritten digit images. DIGITNET consists of two architectures, one to detect digits and one to recognize them: DIGITNET-dect detects digit strings in handwritten documents, and DIGITNET-rec recognizes the handwritten digits. The authors used a deep learning approach to train both models and regression-based deep CNN methods to detect the digits; a YOLOv3-based detector was used to detect and classify digits in an image. In the recognition phase, three different CNN architectures were proposed, each including convolutional, batch normalization, max-pooling, fully connected, and SoftMax layers. The work still has limitations: some of the image data have high resolution, which increases the computational cost of training; some digits are not labeled because of their poor appearance; and digit detection accuracy is low because of negative sampling.
Chen et al. [16] compared five machine learning classification models for recognizing handwritten digits offline. The authors compared the performance of KNN, a neural network, random forest, decision tree, and bagging with gradient boosting. 70,000 digit images were used to develop the classifier models. The KNN and neural network showed better accuracy than the other classifiers, and KNN ran about 10 times faster than the neural network model. The preprocessing stage is a crucial part of a handwritten recognition system, and the authors used several preprocessing techniques to enhance the data: normalization to give equal weight to each attribute, followed by a median filter for noise reduction. Image sharpening and image attribute reduction were the other preprocessing steps. The work still has limitations: the bewilder tool was not effective for preprocessing handwritten image data, no suitable threshold value was found for binarization (so the binarization technique was dropped), and the images were blurred after the median filter and sharpening steps.
Beyene [3] proposed a multilayered feed-forward propagation ANN for offline handwritten and machine-printed Amharic (Geez) number recognition. The author collected only 560 samples for the model, using 460 for training and 100 for testing, and gathered the data manually because there is no public dataset for Geez handwritten digits. The overall classification accuracy is 89.88%, which is poor because a very small amount of data was used to develop the model [3]. Little research has been conducted in the specific area of handwritten digits of the ancient Semitic language (Geez). Some other researchers have worked on recognition of all Geez characters, but the author of [3] focused specifically on Geez digits. The work still has limitations: a small amount of data was used to train the algorithm, no information is given about the preprocessing technique, and the accuracy of the proposed model is low. Hossain and Ali [17] proposed handwritten digit recognition using a CNN on the MNIST handwritten dataset. The authors used MatConvNet to speed up building the proposed model; MatConvNet is a MATLAB toolbox that supports efficient computation on CPU and GPU, allowing the training of complex models on large datasets such as ImageNet ILSVRC. However, the work has some limitations: it gives no information about the preprocessing technique, and the number of hidden convolution layers in the proposed model is small. Demilew and Sekeroglu [1] proposed an ancient Geez script recognition model using deep learning. The authors developed a deep CNN model to recognize ancient Ethiopian Geez characters found in historical documents and proposed an architecture that recognizes only Geez characters, not words or full sentences. The dataset is a total of 22,913 images collected from libraries, private books, and the Ethiopian Orthodox Tewahedo Church. They developed a recognition system for twenty-six base characters only. In Geez scripts there are around 265 characters and 34 base characters, but they classified each character into its base character class rather than its specific character; there are 7 characters in each base class, including the base character itself. One of the challenges in recognizing handwritten Geez script is the similarity between characters in the same base class. The authors classified all seven characters in a class into one base class and thus avoided this difficult task in their model. The work still has problems: low image quality, an unbalanced number of instances per character, no description of the methods used for character detection, and the fact that all seven characters in a class are collapsed into one base class.
Gondere et al. [2] designed a handwritten Geez character recognition system using a CNN. The authors used multitask learning to improve the model by exploiting the relationships between characters. They ran the experiment with the following CNN hyper-parameters: a batch size of 100, a keep probability of 0.3 for dropout, a learning rate of 0.0001, and an L2 regularization of 0.01. They assembled a dataset from different previous research works. The work still has some problems: the unique handwritten dataset used affected the performance of the models, and the preprocessing technique is not described. Ali et al. [18] proposed a model to recognize handwritten digits. The authors used a CNN algorithm, implemented with deeplearning4j, for the recognition system. The CNN performs two main tasks. The first is feature extraction: each layer takes as input the output of the previous layer and forwards its output to the next layer. The second task is feature classification, in which the network produces the predicted output. The authors used the MNIST dataset, with 60,000 handwritten digit images used for training and testing the model. The work has some limitations: the proposed model uses a large kernel size in the convolution layer, which increases training time, and no detailed information is given about the preprocessing technique.
Most researchers have worked on digit recognition for English numerals and achieved high performance using different methods. For English handwritten digits, there are many resources and datasets ready for the research community to use, which encourages researchers to focus on that area. For Geez handwritten digits, however, there are no organized public data for researchers to use for handwritten digit recognition. Some researchers have worked on Geez character recognition for machine-printed and handwritten characters, but they did not focus on digits, especially handwritten ones. The author of [3] was the first researcher to work on recognizing handwritten Geez digits, but the dataset used was very small and the performance achieved was low.
Data Collection Method
For this study, handwritten data were collected from a variety of people with various writing styles. Instead of manual feature extraction, which is difficult for humans to perform, deep learning models are used, since they extract features efficiently and with high accuracy. A data-gathering form was created for this purpose and designed so as to make the pre-processing easier. The form is A4 size and contains the symbols of all 20 Geez numbers in a box of 2 rows and 10 columns, plus same-sized empty boxes, repeated 5 times, as shown in Figure 2. This means each individual handwrites 100 instances of digits. The data were collected from 524 different individuals, each providing 100 digit instances, so 52,400 instances were obtained. People from many demographic groups participated in the data collection: elementary pupils, high school students, high school staff members, university students, and university academic staff (lecturers). The majority of the data were acquired from roughly 250 university students at Adama Science and Technology University.
The data collection at the university was successfully conducted with the help of Computer Science and Engineering Club ASTU (CSEC-ASTU) members. The club had 100 members at the time of data collection, so data were gathered from them and through their connections on campus. As mentioned earlier, data were obtained from 250 university students, 150 male and 100 female. After collection, the data must be converted from paper to digital format before they can be processed. The documents were scanned using a TECNO mobile phone with a 50-megapixel camera and a software app called CamScanner. The advantage of using CamScanner is that it detects the paper and provides only the digital image of the paper after removing the background and reducing noise.
Python's OpenCV library was used for data extraction during the pre-processing stage. The input to this program is one partition, and its output is the extracted data. Once prepared for one partition, the same procedure is applied to the others.
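A minimal sketch of how such box-wise extraction could be implemented with OpenCV is shown below. The file names, grid layout, margin, and cell size are hypothetical placeholders and are not taken from the paper.

```python
import cv2

# Hypothetical sketch: split a scanned form into equally sized cells and
# save each cell as one digit image. Grid layout and paths are assumptions.
def extract_cells(scan_path, rows=10, cols=10, out_size=32, margin=5):
    page = cv2.imread(scan_path, cv2.IMREAD_GRAYSCALE)
    h, w = page.shape
    cell_h, cell_w = h // rows, w // cols
    cells = []
    for r in range(rows):
        for c in range(cols):
            cell = page[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
            # Trim a small margin so box borders do not appear in the digit image
            cell = cell[margin:-margin, margin:-margin]
            cells.append(cv2.resize(cell, (out_size, out_size)))
    return cells

cells = extract_cells("scanned_form.png")
for i, cell in enumerate(cells):
    cv2.imwrite(f"digit_{i:03d}.png", cell)
```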
Data Preprocessing
The second phase of the proposed model is preprocessing, which takes place after the digital image has been produced. The digitized image is first checked for skew before being preprocessed to reduce noise. Preprocessing is necessary to create data that are easy for handwritten digit recognition systems to recognize; the goal is to reduce background noise, enhance the image's region of interest, and produce a clear distinction between foreground and background. The study uses the Python OpenCV library for the preprocessing steps.
Resize Image.
Because the data come in a range of sizes, they must be resized to fit the network's input size. All images are resized to 32 × 32 pixels in this work. This scaling is important for reducing computational complexity and for concentrating on the region of interest by cropping it.
RGB to Grayscale
Conversion. The simplest color model is grayscale, which specifies colors using only one component: lightness. A value ranging from 0 (black) to 255 (white) defines the brightness. All the original images in the dataset are in RGB color format. Converting RGB to grayscale reduces the number of color channels and therefore the computational complexity compared with RGB images. In our proposed model the input images are grayscale, so the original images must be converted to grayscale format.
Color Inversion.
The dominant color of the original image is white, which has a value of 255. For a grayscale image dataset, changing the dominant color to black is preferable to reduce the complexity of the mathematical operations: because black has a value of 0, convolution operations over the dominant (zero-valued) region are computationally cheaper. Figure 3 shows the preprocessing techniques used on our dataset. As shown in Figure 3(d), the dominant part of the image is the background. In the color inversion step, the background is converted from white to black, as shown in Figure 4.
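A minimal OpenCV sketch of the preprocessing steps described above (resize to 32 × 32, grayscale conversion, and color inversion) is given below; the file names are hypothetical and the skew check is omitted for brevity.

```python
import cv2

def preprocess(image_path, size=32):
    """Resize, convert to grayscale, and invert a scanned digit image."""
    img = cv2.imread(image_path)                  # original color image (BGR in OpenCV)
    img = cv2.resize(img, (size, size))           # 32 x 32 pixels
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single lightness channel, 0-255
    inverted = cv2.bitwise_not(gray)              # white background (255) -> black (0)
    return inverted

sample = preprocess("digit_000.png")
cv2.imwrite("digit_000_preprocessed.png", sample)
```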
Proposed Model
A convolutional neural network (CNN) is proposed to address Geez handwritten digit recognition. To recognize the digits, a CNN-based digit classifier is used. Six different CNN-based handwritten digit classifiers were built from layers such as convolutional, max-pooling, dropout, flatten, fully connected, and SoftMax layers to achieve high recognition accuracy. Training was performed using backpropagation with stochastic gradient descent.
Finally, based on the evaluation metric, the best model for recognizing digits is chosen. Each classifier is constructed with a different number of convolutional layers, kernel sizes, and filters. The parameters applied in all six classifiers are summarized in Table 1. Model 6, for example, shown in Figure 5, has 8 convolutional layers, 4 max-pooling layers, 3 dropout layers, 2 fully connected layers, and 20 output nodes. The kernel size, stride, and number of filters in the first convolutional layer are 3 × 3, 1, and 32 (3 × 3@1@32), respectively. The second and third convolution layers are similar to the first. After these three convolution layers, a max-pooling layer (2 × 2@2@32) is applied. The fifth layer is a convolutional layer (3 × 3@1@64) with 64 filters, a kernel size of 3 × 3, and a stride of 1. The following two layers are convolution layers with the same hyperparameters as the fifth. A max-pooling layer (2 × 2@2@64) is applied as the eighth layer, followed by dropout. A further convolutional layer (3 × 3@1@64) with 64 filters, kernel size 3 × 3, and stride 1 is applied next, followed by another max-pooling layer (2 × 2@2@64).
After this max-pooling layer, dropout is applied. A convolutional layer (3 × 3@1@128) follows, consisting of 128 filters with a kernel size of 3 × 3 and a stride of 1. A max-pooling layer together with a dropout layer is used before the fully connected layers. The first fully connected layer consists of 128 nodes. ReLU is used as the activation function in the convolutional and fully connected layers, and SoftMax is used in the last layer to compute the probabilities of the output classes; the class with the highest probability is taken as the result. The number of epochs is 30 and the batch size is 32. The other five classifiers have varying numbers of convolutional and fully connected layers, as well as different layer organizations. In all cases, the first fully connected layer contains 128 neurons and the second contains 20 neurons.
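A minimal Keras sketch of a Model 6-style architecture as described above (8 convolutions, 4 max-pooling layers, 3 dropouts, 2 fully connected layers, 20 classes) is shown below. The dropout rate of 0.2 follows the 20% figure mentioned later for the other cases; the padding, the exact placement of dropouts, and the optimizer settings are assumptions, not values taken verbatim from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of a Model 6-style CNN for 32x32 grayscale Geez digit images (20 classes).
def build_model6(input_shape=(32, 32, 1), num_classes=20, dropout_rate=0.2):
    m = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Dropout(dropout_rate),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Dropout(dropout_rate),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Dropout(dropout_rate),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Stochastic gradient descent with backpropagation, as stated in the text;
    # the learning rate here is an assumed value.
    m.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

model = build_model6()
model.summary()
# model.fit(x_train, y_train, epochs=30, batch_size=32, validation_split=0.1)
```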
Result and Discussion
The CNN experiments are used to observe the differences in accuracy among the handwritten Geez digit models. Training and validation accuracy were measured over 30 epochs, varying the hidden layers across different combinations of convolution layers and using a batch size of 32 in all cases. Figures 6, 7, 8, 9, 10, and 11 illustrate the accuracy of the CNN, and Figures 12, 13, 14, 15, 16, and 17 show the loss of the CNN with various convolution and hidden layer combinations. Table 1 shows the maximum and minimum training and validation accuracies of the CNN for six different cases with different hidden layers, and Table 2 shows the maximum and minimum training and validation loss in the various cases for the recognition of Geez handwritten digits. Table 3 describes the CNN configuration and parameters for the six cases. The models have varying numbers of convolutional and fully connected layers, as well as different layer organizations. In all cases, the first fully connected layer contains 128 neurons and the second contains 20 neurons.
The first hidden layer in the first case, presented in Figures 6 and 12, is convolutional layer 1, which is used for feature extraction. It has 32 filters with a kernel size of 3 × 3 pixels and uses ReLU as an activation function. The next hidden layer is convolutional layer 2, which consists of 32 filters with a kernel size of 3 × 3 pixels and ReLU. To minimize the spatial size of the output of a convolution layer, pooling layer 1 is defined, with max-pooling and a pool size of 2 × 2 pixels. The next layers are two convolutional layers with 64 filters, a kernel size of 3 × 3 pixels, and the ReLU activation function. Max-pooling layer 2 is applied after convolution layer 4, and next to pooling layer 2 a regularization dropout layer is used to reduce overfitting. Figures 7 and 13 correspond to case 2, where the first hidden layer is convolutional layer 1, used for feature extraction; it has 32 filters with a kernel size of 3 × 3 pixels and ReLU as an activation function. The next hidden layer is convolutional layer 2, which consists of 32 filters with a kernel size of 3 × 3 pixels and ReLU. To minimize the spatial size of the output of a convolution layer, pooling layer 1 is defined, with max-pooling and a pool size of 2 × 2 pixels. The next layers are two convolutional layers with 32 filters, a kernel size of 3 × 3 pixels, and the ReLU activation function. Max-pooling layer 2 is applied after convolution layer 4. The next two hidden layers are convolution layers made up of 64 filters with a kernel size of 3 × 3 pixels; max-pooling and dropout layers are applied after them. The next two layers are convolution layers with a channel size of 64, followed by a max-pooling layer. The next hidden layer is convolution layer 9 with 128 filters of 3 × 3 kernel size, and a max-pooling layer with dropout is applied after it. Rectified Linear Units (ReLU) are used as the activation function in all convolution layers. The dimensions and hyperparameters used in this and the following cases are the same as those used in case 1. The overall test accuracy is found to be 94.71%. The minimum training and validation accuracies occur at epoch 1: the training accuracy is 85.01% and the validation accuracy is 89.00%. Epoch 28 has the highest training accuracy, while epoch 20 has the highest validation accuracy; the maximum accuracies for training and validation are 98.74% and 94.99%, respectively. The total model loss is approximately 0.2928. In case 3, shown in Figures 8 and 14, two convolution layers with 32 filters of kernel size 3 × 3 are taken one after the other, followed by a max-pooling layer. Two further convolution layers with the same parameters as the first two are applied before a max-pooling layer and a dropout layer. The next layers are three consecutive convolution layers with 64 filter channels and a 3 × 3 kernel size, followed by a max-pooling layer. Before the flatten layer, two convolutional layers, a max-pooling layer, and a dropout layer are applied; the two convolution layers have 64 and 128 channels, respectively, both with a kernel size of 3 × 3. A flatten layer is followed by the two fully connected layers.
The overall test accuracy is found to be 94.98%. At epoch 1, the minimum training accuracy is 85.96%, whereas the minimum validation accuracy is 89.63%. The maximum training and validation accuracies are 98.63% and 95.28%, found at epochs 26 and 20, respectively. The total model loss is approximately 0.2908.
For case 4, shown in Figures 9 and 15, three consecutive convolution layers are applied one after the other; the number of channels is 32 and the kernel size is 3 × 3. A max-pooling layer is applied after the three convolutional layers, followed by three convolution layers with 64 channels and a 3 × 3 kernel size, which are in turn followed by a max-pooling layer. Case 5 is shown in Figures 10 and 16; for this case, three consecutive convolution layers are applied one after the other, with 32 channels and a 3 × 3 kernel size. A max-pooling layer is applied after the three convolutional layers. Next to the pooling layer, a regularization dropout layer is used to reduce overfitting by randomly eliminating 20% of the neurons in the layer. The next layers are three convolution layers followed by a max-pooling layer and a dropout layer. A flatten layer is then followed by the two fully connected layers.
The overall test accuracy was found to be 94.42%. At epoch 1, the minimum training accuracy is 87.41%, while the minimum validation accuracy is 89.90%. Epoch 29 has the highest training accuracy and epoch 27 the highest validation accuracy; the maximum accuracies for training and validation are 99.77% and 94.84%, respectively. The total test loss of the model is 0.5504. The validation loss of the model increases as training proceeds, showing that the model overfits the training data. The highest model loss among all six cases occurs here, and the lowest model accuracy among all cases also occurs in case 5. This shows that overfitted models give a high loss and low accuracy on a new test dataset.
Finally, in case 6 (Figures 11 and 17), three convolutions are taken one after the other, followed by a pooling layer; the three convolution layers have 32 channels. Three convolution layers with 64 channels are next, followed by a max-pooling layer. Next to pooling layer 2, a regularization dropout layer is applied to reduce overfitting by randomly eliminating 20% of the neurons in the layer. Convolutional layer 7, which has 64 channels, is the next hidden layer, followed by a max-pooling layer and a dropout layer. The next layer is convolution layer 8 with 128 channels and a kernel size of 3 × 3; all convolution layers have the same filter size. Max-pooling layer 4 with dropout is applied after convolution layer 8. The flatten layer, followed by two fully connected layers, is applied. The overall test accuracy was found to be 96.21%. At epoch 1, the minimum training and validation accuracies were found (training accuracy 88.77%). The training loss decreases as the number of epochs increases, but the validation loss fluctuates for the first 10 epochs and then remains roughly constant for the remaining epochs. By varying the hidden layers, the changes in accuracy for handwritten digits were observed over 30 epochs in the experiment. Accuracy curves for the six cases were generated using the handwritten Geez digit dataset; the six cases behave differently due to the different combinations of hidden layers. The maximum and minimum accuracies for the hidden layer variations were recorded using a batch size of 32. As shown in Figure 18, the highest test accuracy among all the observations was 96.21% over 30 epochs in case 6 (Conv1, Conv2, Conv3, pool1, Conv4, Conv5, Conv6, pool2 with dropout, Conv7, pool3 with dropout, Conv8, pool4 with dropout, flatten layer, 2 fully connected layers).
This higher accuracy helps the machine perform Geez handwritten digit recognition more efficiently. In case 5, however, the lowest accuracy among all observations, 94.42%, was found (Conv1, Conv2, Conv3, pool1, Conv4, Conv5, Conv6, pool2, flatten layer, and 2 fully connected layers). Furthermore, the highest total model loss, 0.5504, occurs in case 5, while the lowest total model loss, around 0.2013, occurs in case 6 with dropout (Figure 19). With this minimal loss, the CNN is better able to handle image quality and noise. From the observed results, the study chooses the model among the six cases with the highest test accuracy and lowest test loss: the case 6 model, with the highest accuracy of 96.21% and the lowest loss of 0.2013, is therefore the proposed model for this research work.
The previous work on Geez handwritten digit recognition was done by the author of [3], who achieved 89.88% accuracy using an ANN model. This study evaluates CNN models with different layers and different hyperparameters. Compared with the previous work, the study improves the recognition accuracy from 89.88% to 96.21% by using a CNN, increasing the dataset size, and enhancing image quality with pre-processing techniques.
Conclusion and Future Scope
In this research work, a convolutional neural network was used to recognize Geez handwritten digits across 20 digit classes. CNNs are the current state-of-the-art algorithm for classifying image data and are widely used. Using a prepared data-collection form, a large number of Geez handwritten digits were collected from individual handwriting. The handwritten documents were scanned and preprocessed to obtain 32 × 32-pixel digit images. The study offers a new public Geez handwritten digit dataset, which is open to all researchers. A CNN architecture from the deep learning family was used to develop the Geez handwritten digit recognition system, and a considerable amount of trial-and-error tuning of the network configuration was carried out to find the best-fitting CNN-based model. In comparison to earlier research on Geez handwritten digit recognition, the study achieves higher recognition accuracy with the developed CNN model: the proposed model achieved an accuracy of 96.21% and a model loss of 0.2013. Although much work has been done on recognizing handwritten digits in English, only a small amount has been done for the Amharic language, and the lack of research in this area makes it challenging to obtain datasets. The amount of data collected is enough to train the model, but it is not a large dataset, and students dominate the respondents of the data gathering; consequently, the model performs well for students but less well for other groups. The dataset does not include historical documents or manuscript images; the data were collected only from individuals and not from other sources. In this research, a dataset was developed that can be used by other researchers in the future. In future work, historical data will be added to the dataset, and while the current work only supports single handwritten Geez digits, support for multi-digit recognition will be added.
Data Availability
The data used to support the findings of this study are available at https://drive.google.com/file/d/1abJWvSYSyw8mLQ5Blg_lYAJng1K3LtGS/view?usp=sharing.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 7,583.8 | 2022-11-08T00:00:00.000 | [
"Computer Science"
] |
Construction of Theoretical Capital Progress Curve Model of Power Grid Infrastructure Project Based on Contract Payment Terms
This work supports the central government's major deployment on preventing and punishing statistical falsification, improving the authenticity of statistical data, and further improving the scientific basis and accuracy of power grid project funding forecasts. By studying the fund payment rules of power grid infrastructure projects and constructing a theoretical capital progress curve model, the progress and timing of fund payments at each stage of the whole project process can be effectively controlled. The model provides a reference for scientifically predicting fund demand at all stages of the project life cycle, arranging financing, and controlling payments.
Introduction
Fund payment for a power grid infrastructure project is based on the signed contracts. According to the payment terms of the contracts, combined with the project construction progress and cost entry, it reflects the actual tax-inclusive capital flow paid by the project to the construction party, suppliers, and other relevant units. To further strengthen the lean management of infrastructure project funds, a theoretical capital progress curve model for 35 kV and above power grid infrastructure projects is established based on the payment terms of material and service contracts. The model enables fund prediction over the whole project life cycle and provides a sufficient basis for medium- and long-term financing plans, thereby supporting the company's investment and financing decisions, coordinating its fund arrangements, and improving fund efficiency and effectiveness.
Relationship among power grid construction, cost and capital
The construction progress of a power grid project is the core management index of the construction department; it reflects the actual construction progress of the project, expressed as the percentage of completion of the quantities of each sub-part of the project. The entry cost is the core project management index of the financial department; it reflects the actual financial expenditure on the various construction costs of the project on the basis of the signed contracts.
Fund payment is carried out according to the project construction progress, cost entry, and contract payment conditions. By sorting out the relationship among construction, cost, and capital, the conditions and process by which project fund payments arise are clarified, and the relationship between cost and funds is refined.
The contract types are divided into service contracts and material contracts. Material contract payment is generally divided into advance payment, arrival payment, operation payment, and quality assurance deposit; the relationship between material contract fund payment and material cost entry is shown in the corresponding flow chart. Service contract payment is generally divided into advance payment, progress payment, settlement payment, and quality assurance deposit; the relationship between fund payment and service cost entry (taking an engineering construction contract as an example) is likewise shown in the corresponding flow chart.
Construct the theoretical capital progress curve model
Based on the WBS elements of the project, and by sorting out the business logic among construction progress, project cost, and fund payment, the theoretical capital progress curve model of a power grid infrastructure project is constructed from the project budget estimate, the milestone plan, and the contract payment terms. The model forecasts the monthly fund payment from the time the project enters the plan to the end of the quality assurance period, draws the theoretical capital progress curve of the project, and enables multi-dimensional prediction of power grid project funds on long-term, annual, and monthly scales.
Sorting out the fund payment rules stipulated in the contract
Project fund payment is based on the payment terms stipulated in the project contracts. The payment terms of the material contract determine the fund payment rules for the equipment purchase cost and the installation material cost.
Service contracts correspond to costs other than equipment purchase and installation material costs, such as construction engineering cost, installation cost, survey and design fees, and engineering supervision cost. They mainly cover design, construction, supervision, installation and commissioning, technical service consultation, land acquisition compensation, and similar items.
By sorting out the payment rules of service and material contracts, two important factors for the fund forecast are determined: the payment time point and the payment proportion.
Payment rules of material contract
At present, the State Grid Corporation of China adopts different forms of material procurement according to voltage level and material type. The organizational forms of material procurement are divided into agreement inventory and batch purchase. Investigation shows that batch procurement is often used for material procurement in projects with a voltage level above 35 kV. The payment proportions corresponding to different material types and contract amounts purchased in batches are shown in Table I.
PAYMENT PROPORTION STIPULATED IN DIFFERENT BATCH PURCHASE CONTRACTS
According to the above table, except for towers and UHV equipment, the payment proportions of the other equipment are basically the same, and the contract amount of project batch procurement is generally more than 500,000 yuan. Therefore, the payment proportion of batch-procured materials for substation projects is simplified and uniformly adopted as 1:6:2.5:0.5.
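As an illustration, the sketch below splits a batch-procurement contract amount under the 1:6:2.5:0.5 proportion. Mapping the four parts to advance payment, arrival payment, operation payment, and quality assurance deposit (in that order) is an assumption based on the payment stages listed earlier, and the contract amount is a hypothetical figure.

```python
def split_material_contract(amount, ratio=(1, 6, 2.5, 0.5)):
    """Split a contract amount according to a payment ratio such as 1:6:2.5:0.5."""
    stages = ("advance payment", "arrival payment", "operation payment", "quality assurance deposit")
    total = sum(ratio)
    return {stage: amount * part / total for stage, part in zip(stages, ratio)}

# Example: a 1,000,000-yuan batch-procurement contract (hypothetical figure)
print(split_material_contract(1_000_000))
# -> {'advance payment': 100000.0, 'arrival payment': 600000.0,
#     'operation payment': 250000.0, 'quality assurance deposit': 50000.0}
```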
Payment rules of service contract
Service contracts correspond to construction engineering fees, installation fees, survey and design fees, and engineering supervision fees, i.e., costs other than equipment purchase and installation material fees. The payment terms of the various service contracts are sorted out, and the payment times and proportions are derived from them, as follows:
Payment terms and time of construction contract
The engineering construction contract corresponds to the engineering construction cost and the installation engineering cost. By sorting out the payment terms of the engineering construction contract, the payment time and proportion of the construction cost and installation engineering cost at each stage are obtained, as shown in Table II.
Land compensation agreement and policy treatment agreement
The land use compensation agreement and the policy treatment agreement correspond to the land acquisition and clearance fee. From the payment terms of these agreements, the payment time and proportion of the land acquisition and clearance fee are obtained, as shown in Table III.
Preliminary work cost contract
By sorting out the preliminary work cost contracts, such as the geological hazard risk assessment and feasibility study commission contracts, the payment time and proportion of the project's preliminary work costs are obtained, as shown in Table IV.
Contract for prospecting and designing
By sorting out the payment terms of the survey and design contract, the payment time and proportion of the survey and design fee at each stage are obtained, as shown in Table V.
Project supervision contract
By sorting out the payment terms of the project supervision contract, the payment time and proportion of the project supervision fee at each stage are obtained, as shown in Table VI.
Loan contract
A general loan contract typically requires interest to be paid in March, June, September, and December of each year, starting from the interest accrual date.
Other
Other expenses include the design document review fee, project legal person management fee, bidding fee, production preparation fee, project settlement audit fee, etc. This item covers many expenses, but the amounts are small. Based on fund payment experience, these expenses are apportioned evenly over the project construction period.
Prediction of fund payment time point combined with milestone node in engineering construction
By sorting out the payment terms of service and material contracts, the contractually agreed payment times and proportions of the various expenses are determined. Combined with the milestone nodes of project construction, the fund payment time points of the various expenses over the whole project process are determined; the predicted payment times of the various expenses are shown in Table VII (for example, loan interest is paid every March, June, September, and December from the interest date, and other expenses such as design document review, bidding, legal person management, and production preparation are apportioned evenly over the construction period). By sorting out the time points and proportions of fund payment, the fund forecast allocation rules are determined. Combined with the project budget data and considering the project balance, the forecast fund expenditure for each month of the whole process, from the early stage of the project to the completion of the quality assurance deposit payment, is calculated; the fund payment progress is determined, and the theoretical capital progress curve is drawn.
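A minimal sketch of how such a monthly forecast and cumulative (theoretical capital progress) curve could be assembled is shown below; the expense items, amounts, payment months, and proportions are hypothetical placeholders, not values from the paper's tables.

```python
import pandas as pd

# Hypothetical payment schedule: (expense item, payment month index, proportion of that item's budget)
schedule = [
    ("equipment purchase", 0, 0.10), ("equipment purchase", 6, 0.60),
    ("equipment purchase", 18, 0.25), ("equipment purchase", 30, 0.05),
    ("construction cost", 1, 0.10), ("construction cost", 12, 0.60),
    ("construction cost", 20, 0.25), ("construction cost", 32, 0.05),
]
budget = {"equipment purchase": 30_000_000, "construction cost": 25_000_000}  # yuan, hypothetical

rows = [(month, item, budget[item] * share) for item, month, share in schedule]
df = pd.DataFrame(rows, columns=["month", "item", "payment"])

monthly = df.groupby("month")["payment"].sum().sort_index()
curve = monthly.cumsum() / sum(budget.values())   # theoretical capital progress (0..1)
print(curve)
```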
Model validation
Typical power grid infrastructure projects with different voltage levels and types are selected; the theoretical capital progress curve of each project is calculated and drawn and compared with the actual capital progress curve. Taking a 220 kV substation project as an example, a case study is carried out. The project was planned to start on August 29, 2016, and be put into operation on May 28, 2018, with an estimated amount of 75.43 million yuan. The estimated amounts of the various expenses, including tax, are shown in Table VIII, the details of other expenses in Table IX, and the annual milestone plan and construction schedule in Table X. According to the project budget estimate and milestone plan, the monthly fund payment progress of the project is predicted and the theoretical capital progress curve is drawn by applying the fund prediction rules above; the model is then validated by comparing the theoretical progress curve with the actual payment curve. The comparison shows that the theoretically predicted fund payment progress follows the same trend as the actual fund payment, with only a small difference. Only in the civil engineering stage is the deviation large: analysis shows that some materials arrive, and are paid for, during the civil construction stage, which causes the deviation between the actual fund payment and the theoretical prediction. At the expiration of the project warranty period, the difference between the theoretical and actual curves is caused by the difference between the actual and theoretical balance rates of the project. The model validation shows that the theoretical capital progress curve model can predict the progress of power grid project fund payments well.
Conclusion
With the theoretical capital progress curve model for power grid engineering, a whole-process fund demand forecast can be prepared automatically during the project implementation stage. This assists the annual fund demand forecast, provides a sufficient basis for medium- and long-term financing plans, and lays a solid foundation for reasonable planning of the financing strategy and optimization of financial resource allocation. | 2,447.2 | 2021-01-01T00:00:00.000 | [
"Engineering"
] |
Near-Infrared Spectroscopy and Machine Learning for Accurate Dating of Historical Books
Non-destructive, fast, and accurate methods of dating are highly desirable for many heritage objects. Here, we present and critically evaluate the use of near-infrared (NIR) spectroscopic data combined with three supervised machine learning methods to predict the publication year of paper books dated between 1851 and 2000. These methods provide different accuracies; however, we demonstrate that the underlying processes refer to common spectral features. Regardless of the machine learning method used, the most informative wavelength ranges can be associated with C–H and O–H stretching first overtone, typical of the cellulose structure, and N–H stretching first overtone from amide/protein structures. We find that the expected influence of degradation on the accuracy of prediction is not meaningful. The variance-bias decomposition of the reducible error reveals some differences among the three machine learning methods. Our results show that two out of the three methods allow predictions of publication dates in the period 1851–2000 from NIR spectroscopic data with an unprecedented accuracy of up to 2 years, better than any other non-destructive method applied to a real heritage collection.
Table of Contents
Table S1 reports the results of previous studies that dated paper using mid-IR and NIR spectral data combined with PLS. In order to easily compare the reported results, since different date ranges were analyzed, the Normalized Root Mean Square Error of Prediction (NRMSEP) is also reported, computed as NRMSEP = RMSEP / (y_max − y_min), where RMSEP is the Root Mean Square Error of Prediction and y_max and y_min are, respectively, the maximum and minimum reference values of the property of interest (i.e., the date). Table S1. Summary of the data reported in the literature on the use of mid-IR and NIR spectral data to date paper by PLS. The spectral preprocessing algorithms used are reported, i.e., Savitzky-Golay (SG).
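Although the study's analysis was carried out in R, a minimal Python sketch of the RMSEP/NRMSEP computation defined above is given here for illustration; the reference and predicted years are placeholders.

```python
import numpy as np

def nrmsep(y_true, y_pred):
    """Normalized root mean square error of prediction: RMSEP / (y_max - y_min)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmsep / (y_true.max() - y_true.min())

# Placeholder reference and predicted publication years
years_true = np.array([1855, 1900, 1942, 1978, 1999])
years_pred = np.array([1858, 1897, 1945, 1975, 2001])
print(nrmsep(years_true, years_pred))
```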
S2.1 Samples and Sampling Strategy
The analyses were designed to explore the underlying process in dating models provided by SML methods using NIR spectroscopic data, as well as the possible sources of uncertainty associated with the publication dates of the books and the sampling method.
The books analyzed are from the general collection of the National and University Library of Slovenia. To obtain a sample set representative of the period 1851-2000, a stratified sampling strategy was designed with the decade of publication as the criterion for stratification. A total of 100 books was analyzed. Table S2 reports the number of books analyzed per decade.

Table S2. Sample sizes as the number of items (books) analyzed in each stratum (decade) of publication date.

| Decade | Sample size |
|---|---|
| 1851-1860 | 6 |
| 1861-1870 | 6 |
| 1871-1880 | 7 |
| 1881-1890 | 7 |
| 1891-1900 | 7 |
| 1901-1910 | 7 |
| 1911-1920 | 7 |
| 1921-1930 | 6 |
| 1931-1940 | 6 |
| 1941-1950 | 7 |
| 1951-1960 | 7 |
| 1961-1970 | 7 |
| 1971-1980 | 7 |
| 1981-1990 | 7 |
| 1991-2000 | 6 |

The books were randomly selected from within each decade, and Figure S1 shows the number of randomly selected samples per publication year. Metadata including bibliographic information (e.g., title, author, and publication year) were recorded and are reported in the Supporting Information file (Dataset_S1).
S2.2 NIR Spectroscopy
Diffuse reflectance spectra were acquired in the range of 350-2500 nm using a portable UV-VIS-NIR ASD LabSpec® 5000 spectrometer (Malvern Panalytical Ltd, UK), equipped with a built-in light source and three separate detectors: a 512-element silicon photo-diode array detector for the spectral interval 350-1000 nm, and two TE-cooled, extended range InGaAs photo-diodes for the spectral intervals 1000-1800 nm and 1800-2500 nm. The sampling interval was 1 nm, while the spectral resolutions were 3 and 10 nm in the intervals 350-1000 and 1000-2500 nm, respectively. Spectra, each an average of 200 scans, were acquired using a fiber-optic probe (Malvern Panalytical Ltd, UK) with a spot diameter of approximately 2 mm in close contact with the samples, using Indico™ Pro software, version 6.0.3 (Analytical Spectral Device, USA). Splice correction for the light source was used to achieve a continuous spectrum. An ASD Spectralon® reference target (Malvern Panalytical Ltd, UK) was used for baseline measurement. In each book, 10 different pages were measured: 3 pages in the front, 4 pages in the middle, and 3 pages at the back of the book block. On each page, 3 different points were measured: the gutter, center, and outer margin of the page (see Figure S2). Thus, a total of 30 spectra were taken for each book (taking about 30 min) in an area without ink and visible signs of localized degradation (e.g., foxing). The stack of paper below the measured page was used as the background for the spectra acquired in the inner (gutter) and outer margins, while a Spectralon® reference target was used for the spectra acquired in the center of the page to avoid interference due to the ink from the pages below. The penetration depth of NIR radiation in a matrix of organic substances is typically 1-3 mm [5], and it has been previously estimated that information is returned from up to 4-5 layers of purified cellulose sheets (approximately 0.5 mm) [1]. Therefore, the spectra can be considered representative of both surface and bulk properties. All raw spectra were visually inspected, and no outliers were detected. Some spectra in some books exhibit different spectral features; they were not considered outliers, as they can express variability associated with different kinds of paper in the same book block or with the extent of degradation.
To see whether there is any influence associated with compositional changes and the sampling method (i.e., the pages and the points on the page analyzed), each spectrum was treated as an independent observation, as it is representative of a unique combination of page and point where the measurement was made.
Spectroscopic analyses were conducted in a repository room of the NUK Library, where all the books were assembled at least two weeks before the analyses to let the paper acclimatize to the well-controlled environmental conditions. Temperature (19.0 ± 0.5 °C) and relative humidity (52 ± 1%) were measured using a Hobo MX100 datalogger (Onset Computer Corporation, USA).
The raw spectra are reported in Supporting Information files (Dataset_S2-S4), including the naming convention adopted for the spectra.
S2.3 Data Analysis
The workflow shown in Figure S3 illustrates the main steps of data analysis. Figure S3. Data analysis workflow.
Preprocessing steps and SML models were implemented in R (vers. 4.2.1) [6]. The R packages used were: prospectr 0.2.5 [7] for spectral preprocessing, GA 3.2.2 [8] for GA-based variable selection, Boruta 7.0.0 [9] for RF-based variable selection, pls 2.8-1 [10] for PLS, ranger 0.14.1 [11] for RF, and mlr 2.19.0 [12] for kNN, together with an additional package for data manipulation and plotting. Table S4 reports the seven combinations we tested of two commonly used algorithms: Standard Normal Variate (SNV), [13] which normalizes each spectrum by subtracting its own mean and dividing it by its own standard deviation, and the Savitzky-Golay (SG) algorithm, [14] a smoothing-based derivatization method. Figure S4 shows the truncated reflectance spectra (1000-1899 nm and 2001-2300 nm) of two books as collected by the LabSpec 5000 (raw spectra), and as preprocessed by the preprocessing algorithms. As variable selection method we tested GA, employing PLS, RF, and kNN to compute the fitness function (see Table S5); size_pop is the population size, p_cross is the probability of crossover between pairs of individuals, p_mut is the probability of mutation in a parent individual, and iter_max is the maximum number of iterations to run before the GA search is halted. To calculate the fitness value for PLS-GA and kNN-GA, the normalized value of RMSE_CV, i.e. NRMSE_CV, was computed, while for RF-GA the normalized value of RMSE_OOB, i.e. NRMSE_OOB, was used.
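The study itself used R (prospectr) for preprocessing; purely as an illustration, a minimal Python sketch of the two algorithms named above, SNV and a Savitzky-Golay first derivative, is given below. The window length, polynomial order, and data are placeholder values, not the settings used in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard Normal Variate: center and scale each spectrum by its own mean and SD."""
    spectra = np.asarray(spectra, dtype=float)
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

def sg_derivative(spectra, window=15, polyorder=2, deriv=1):
    """Savitzky-Golay smoothing-based derivative along the wavelength axis."""
    return savgol_filter(spectra, window_length=window, polyorder=polyorder,
                         deriv=deriv, axis=1)

# Placeholder: 30 spectra x 1200 wavelengths
rng = np.random.default_rng(0)
raw = rng.random((30, 1200))
pre = sg_derivative(snv(raw))
print(pre.shape)
```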
S2.3.1 Preprocessing
m_try is the number of attributes tried at each split, and n_trees is the number of trees in the forest.
| Abbreviation | Variable selection method and parameter settings |
|---|---|
| PLS-GA | Genetic algorithm (size_pop = 50, p_cross = 0.8, p_mut = 0.1, iter_max = 1000) with NRMSE_CV of PLS, using 100 PLS components, to calculate the fitness value |
| RF-GA | Genetic algorithm (size_pop = 50, p_cross = 0.8, p_mut = 0.1, iter_max = 1000) with NRMSE_OOB of RF, using m_try = √p and n_trees = 500, to calculate the fitness value |
| kNN-GA | Genetic algorithm (size_pop = 50, p_cross = 0.8, p_mut = 0.1, iter_max = 1000) with NRMSE_CV of kNN, with previously tuned k values, to calculate the fitness value; Euclidean distance; optimal kernel |
| Boruta | Variable selection using random forest with m_try = p/3 and n_trees = 500 |

Preliminary analyses (Figures S5-S7 and Tables S6-S8) were carried out using all variables (wavelengths).
As a result, for the computation of fitness values in the GA-based selection methods, we considered: 100 PLS components, choosing the error at its minimum, to calculate the fitness function using PLS (Table S6 and Figure S5); the number of neighbors (k) corresponding to the minimum error for each spectral preprocessing algorithm (Table S7 and Figure S6) to estimate the fitness function using kNN; and 500 trees to estimate the fitness function using RF. At each node of the trees, a given number of randomly selected input variables (m_try) is tried, the same for all trees in the forest. In general, m_try can be chosen as some function of the number of variables (p); √p and p/3 are the usual default values for classification and regression, respectively. However, the performance of RF is affected very little over a wide range of m_try values, except near the extremes (i.e., m_try = 1 or p) [15]. Based on preliminary tests (Figure S7), and because of the computational times of the GA implementation, we used a random subsample of √p of all available predictors (selected wavelengths) to determine the best split.
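The paper fitted its random forests in R (ranger); as an illustrative analogue, a minimal scikit-learn sketch of a 500-tree forest with √p candidate features per split and an out-of-bag error estimate is shown below. The data are random placeholders, not the study's spectra.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder NIR-like data: 300 spectra x 1200 wavelengths, target = publication year
rng = np.random.default_rng(0)
X = rng.random((300, 1200))
y = rng.integers(1851, 2001, size=300).astype(float)

rf = RandomForestRegressor(
    n_estimators=500,      # n_trees = 500
    max_features="sqrt",   # m_try = sqrt(p) candidate wavelengths per split
    oob_score=True,        # out-of-bag estimate, analogous to the OOB error used for the GA fitness
    random_state=0,
)
rf.fit(X, y)

oob_rmse = np.sqrt(np.mean((y - rf.oob_prediction_) ** 2))
nrmse_oob = oob_rmse / (y.max() - y.min())
print(f"OOB RMSE: {oob_rmse:.1f} years, NRMSE_OOB: {nrmse_oob:.3f}")
```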
Figure S5. NRMSE_CV as a function of the number of components for PLS. The minimum is reached before 100 components for all spectral preprocessing algorithms, as reported in Table S6. Figure S6. NRMSE_CV as a function of the k values. The minimum corresponds to values of k ranging from 2 to 4, as reported in Table S7. Figure S7. NRMSE_OOB sufficiently converges to a constant level after 400 trees for all spectral preprocessing algorithms.
The results of the GA-based wavelength selection are shown in Figures S8-S10, demonstrating that a smooth convergence to the maximum fitness value of the selected variables (wavelengths) is reached.
S2.3.2 Simulation Study
To investigate the trend of model performance according to sample size (i.e., number of spectra), we designed a simulation study in which 100 random samples for each of ten sizes (i.e., 50, 100, 200, 300, 400, 500, 600, 700, 800, 900) were generated for the groups identified, by sampling without replacement from the original data. We stop at 900 as it is the maximum sample size of the smallest subsets (i.e., "Front" and "Back" pages). Tables: Table S8. Summary of the results of the PLS models with different spectral preprocessing and variable selection methods, in terms of the error metric averaged over the 100 simulation runs, the corresponding SD, and the CI95%. Table S15. Summary of the results, in the same terms, of the RF models built using different numbers of spectra grouped by the publication date of the books (1851-1900, 1901-1950, and 1951-2000). Table S17. Summary of the results, in the same terms, of PLS, RF, and kNN built using different numbers of spectra without grouping. Table S19. Summary of the results, in the same terms, of the RF models built using different numbers of spectra grouped by the page (front, middle, or back pages of the book block) where the measurement was made. Table S21. Summary of the results, in the same terms, of the PLS models built using different numbers of spectra grouped by the point (gutter, centre, or margin of the page) where the measurement was made. Table S23. Summary of the results, in the same terms, of the kNN models built using different numbers of spectra grouped by the point (gutter, centre, or margin of the page) where the measurement was made. | 2,843.4 | 2023-05-22T00:00:00.000 | [
"Computer Science"
] |
An Exploratory Gene Expression Study of the Intestinal Mucosa of Patients with Non-Celiac Wheat Sensitivity
Non-celiac wheat sensitivity (NCWS) is a recently recognized syndrome triggered by a gluten-containing diet. The pathophysiological mechanisms engaged in NCWS are poorly understood and, in the absence of laboratory markers, the diagnosis relies only on a double-blind protocol of symptoms evaluation during a gluten challenge. We aimed to shed light on the molecular mechanisms governing this disorder and identify biomarkers helpful to the diagnosis. By a genome-wide transcriptomic analysis, we investigated gene expression profiles of the intestinal mucosa of 12 NCWS patients, as well as 7 controls. We identified 300 RNA transcripts whose expression differed between NCWS patients and controls. Only 37% of these transcripts were protein-coding RNA, whereas the remaining were non-coding RNA. Principal component analysis (PCA) and receiver operating characteristic curves showed that these microarray data are potentially useful to set apart NCWS from controls. Literature and network analyses indicated a possible implication/dysregulation of innate immune response, hedgehog pathway, and circadian rhythm in NCWS. This exploratory study indicates that NCWS can be genetically defined and gene expression profiling might be a suitable tool to support the diagnosis. The dysregulated genes suggest that NCWS may result from a deranged immune response. Furthermore, non-coding RNA might play an important role in the pathogenesis of NCWS.
Introduction
Non-celiac gluten/wheat sensitivity (NCGS/NCWS) is a syndrome dependent on gluten/wheat ingestion that presents both intestinal and extra-intestinal symptoms [1][2][3][4]. Herein, we refer to non-celiac gluten/wheat sensitivity as NCWS. This disorder was originally described in the late 1970s in patients with diarrhoea and abdominal discomfort that improved after gluten withdrawal from the diet. A more complete spectrum of NCWS clinical signs includes alternate bowel habits, bloating, abdominal pain, fatigue, foggy mind, limb numbness, dermatitis, and joint and muscle pain. These symptoms largely overlap with those of other wheat-related disorders, including celiac disease and wheat allergy [3], as well as common intestinal disorders such as irritable bowel syndrome (IBS) [5]. Given the lack of NCWS-specific biomarkers, the most advanced diagnostic algorithm implies the exclusion of confounding pathologies and the assessment of symptoms during a gluten challenge [6]. Although initially proposed as open-label [3], a more recent protocol, known as the Salerno criteria [6], entails the administration of a questionnaire in which patients have to choose at least one and up to three main symptoms and give a score (ranging from 1 to 10) to their intensities during a short double-blind placebo-controlled gluten challenge. Although this protocol may be helpful in a research setting, it has never been validated clinically, nor have its positive and negative predictive values been estimated; moreover, it may have a high nocebo effect [7] and is extremely difficult to apply in the clinical environment, where the gluten challenge is generally administered in an open-label trial. As a further limitation, this gluten-centred diagnostic protocol does not take into account other wheat components (e.g., fermentable oligo-, di-, and monosaccharides (FODMAPs) and amylase trypsin inhibitor (ATI)) that may be important in NCWS [1,2,8].
To date, we still know little about NCWS inducing factors, possible genetic predisposition, and pathological mechanisms, besides the fact that they probably differ from celiac disease and wheat allergy. As an example, the vast majority of patients with celiac disease (more than 95%) bear the human leukocyte antigen (HLA) DQ2, DQ8, or both, whereas up to 50% of NCWS patients are DQ2/DQ8-positive and the frequency of these HLA in the general population is about 30% [3,4]. Studies aimed at identifying the molecular features of NCWS reported the downregulation of forkhead box P3 (Foxp3) and the upregulation of toll-like receptor 2 and claudin-4 in the intestinal mucosa. In addition, soluble cluster of differentiation 14 (CD14), lipopolysaccharide (LPS)-binding protein, and fatty acid-binding protein 2 were shown to be increased in the plasma of these patients [9,10].
These findings suggest an involvement of innate immunity and an alteration of the intestinal permeability. Nevertheless, the intestinal mucosa of NCWS patients does not show important signs of inflammation besides a modest production of interferon gamma (IFNγ) and an increase of intraepithelial lymphocytes [4,11]. A more recent study reported a significant increase of eosinophils and clustering of T cells in the duodenal and rectal mucosa of NCWS patients as compared to controls [12], further suggesting an impact of an inflammatory immune response in NCWS.
Because NCWS presents minor signs of intestinal involvement, at least by using the classical approaches, we hypothesised that subtle changes of multiple genes might be sufficient to cause intestinal discomfort. In the present study, we tested the main hypothesis that there is a difference in gene expression between NCWS patients and non-NCWS patients. Moreover, we aimed to shed light on the molecular mechanism that regulates this disorder and to identify possible biomarkers of NCWS status that could be helpful to the diagnosis.
Therefore, we carried out a genome-wide expression analysis on RNA extracted from intestinal mucosa of 12 NCWS patients, and 7 age-matched controls.
The Intestinal Mucosa of NCWS Patients Showed a Gene Expression Pattern Different from Controls
Total RNA isolated from the duodenal biopsies of 7 controls and 12 NCWS patients (Table 1) was used to assess gene expression by microarray analysis. Gene expression levels, measured as described in the methods, were subjected to quantile normalization and log2 transformation before statistical analysis. Contextual to the normalization, we excluded aberrant/unreliable probe signals, resulting in 53,218 measurements. About half of the probes targeted protein-coding transcripts (52.4%) and the other half targeted non-coding transcripts (47.6%). We considered as non-coding RNAs the 25,334 transcripts whose gene identifiers (IDs) or symbols start with LNC, LOC, XLOC, Linc, SNORA (small nucleolar RNA), or SNORD (small nucleolar RNA, C/D box), or end with IT (intronic transcripts) or AS (antisense RNA), as well as all the Agilent probes without a gene symbol (Table 2). This last group of 8115 transcripts was included here because a manual check revealed that, although heterogeneous, the vast majority of them are non-coding RNA. Only 303 of these non-coding RNAs (i.e., SNORA and SNORD) were short non-coding RNAs, whereas the remaining 25,031 (Table 2) belong to the class of long non-coding RNA (lncRNA). The first two columns of Table 2 report a classification and the number of transcripts analysed by microarray. The column labelled "Absolute Mean Difference > 1" shows the number of transcripts whose mean expression levels differed by at least 1 unit between NCWS and controls. The column labelled "Benjamini-Hochberg Correction p-value < 0.05" shows the number of transcripts whose mean expression levels differed by at least 1 unit and were statistically different after false discovery rate (FDR) correction. The percentages of "protein-coding" and "non-coding" relate to the total number of transcripts. The percentages of Xloc, Loc, LNC, Linc, etc. relate to the number of non-coding transcripts.
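The prefix/suffix rule used above to label transcripts as non-coding is mechanical enough to sketch in code. The Python fragment below is a minimal illustration of that classification, assuming symbols follow the naming conventions listed in the text; the probe IDs and symbols in the example are hypothetical.

```python
import re

# Prefixes and suffixes the text uses to flag non-coding transcripts.
NC_PREFIXES = ("LNC", "LOC", "XLOC", "LINC", "SNORA", "SNORD")
NC_SUFFIXES = ("IT", "AS")

def is_noncoding(gene_symbol):
    """Classify a microarray probe as non-coding from its gene symbol.

    Probes without a gene symbol are put in the non-coding bucket,
    as described in the study.
    """
    if not gene_symbol:
        return True
    symbol = gene_symbol.upper()
    if symbol.startswith(NC_PREFIXES):
        return True
    # Suffix check on the last token, e.g. "FOXP1-IT1" or "HNF1A-AS1".
    last_token = re.split(r"[-_]", symbol)[-1]
    return last_token.startswith(NC_SUFFIXES)

# Example: split a (hypothetical) probe annotation table into coding / non-coding.
probes = {"A_001": "FOXP3", "A_002": "LINC00999", "A_003": None, "A_004": "HNF1A-AS1"}
noncoding = sorted(p for p, sym in probes.items() if is_noncoding(sym))
print(noncoding)   # ['A_002', 'A_003', 'A_004']
```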
The differentially expressed (DE) transcripts were identified through two subsequent steps: firstly, we selected the transcripts whose mean expression in NCWS differed by at least 1 unit from the mean expression of controls. The frequency distribution of these differences throughout the transcriptome (Figure S1) indicated that only a small pool of transcripts fulfilled our filtering criteria. Secondly, on the filtered transcripts, we performed a Student's t-test and Benjamini-Hochberg correction (false discovery rate, FDR = 5%). This approach revealed 300 DE transcripts, of which 228 (76.0%) were significantly downregulated and 72 (24.0%) were significantly upregulated (Table S1). Remarkably, only 36.7% of the DE transcripts were protein-coding RNA (Table 2). This bias toward non-coding RNAs was even more striking when we calculated the percentage of DE transcripts within the subsets of protein-coding RNAs (0.35%) and non-coding RNAs (0.79%). Hierarchical clustering of patients and DE transcripts, shown in the heat map of Figure 1, provided a first qualitative overview of the genetic differences between NCWS and controls.
Furthermore, we used the STRING database (https://string-db.org/ [13]) to evaluate the biological connectivity of the DE transcripts. STRING contains 96 of the DE transcripts identified in this study and shows 25 connections among them, whereas by chance we would have expected only 11 connections. This enrichment was statistically significant (protein-protein interaction enrichment p-value: 0.000147) and suggested that the DE transcripts are, at least partially, biologically connected as a group [13].
Figure 1. Heat map representation of the DE transcripts between NCWS and controls. Reddish and bluish colours represent upregulated and downregulated transcripts, respectively. The columns of the heat map represent the patients organised according to the Canberra distance algorithm, whereas the rows represent the transcripts organised according to correlation distance. The code shown below the columns identifies a specific patient.
Gene Expression Can Contribute to Identifying the NCWS Status
To reduce the number of variables (i.e., DE transcripts) and examine the relationship between gene expression and NCWS, we analysed the microarray data by unsupervised principal component analysis (Figure 2). The first component (principal component (PC)1) explained almost 38.9% of the variance and was associated with NCWS status, whereas PC2 explained 12.7% of the variance (Figure 2) and was not associated with the pathology. The new principal component analysis (PCA)-derived variable (PC1) was able to cluster samples into two groups, showing that distinct gene expression patterns were present in NCWS and controls. In order to define which transcripts may contribute more to discriminating the NCWS status and whether a limited set of transcripts might be sufficient for this classification, we relied on a penalized logistic regression using the LASSO (least absolute shrinkage and selection operator) method. This analysis of variables' importance identified 15 transcripts that mainly contributed to the characterisation of NCWS patients (Figure 3; Table 3). The heat map of these 15 transcripts clearly showed the different expression profiles between the NCWS and control groups (Figure S2A). The PCA of these 15 transcripts showed that NCWS patients were fairly homogeneous and this small gene expression signature had the potential to group patients and controls separately (Figure S2B). Statistical analysis of the receiver operating characteristic (ROC) area under the curve (AUC) built with the transcripts selected by LASSO indicated that indeed one transcript would be sufficient to classify NCWS patients with very high confidence (Table 3).
Possible Pathological Mechanisms Engaged in NCWS
Finally, to shed light on the pathological mechanisms of NCWS, we analysed the literature on the DE transcripts and investigated their membership in functional pathways by using ingenuity pathway analysis (IPA). Although most of the DE transcripts were uncharacterised or poorly investigated non-coding RNA, we obtained a network of proteins (Figure 4) suggestive of a possible role of immune response (e.g., AZU1, BMP7, CD70, CD72, FCAR, HLF, IL1RL1, KIT, killer cell lectin-like receptor C1 (KLRC1), MCF2L, MYBPC1, NOD-like receptor protein 5 (NLRP5), NR1D2, NRAP, NRP2, SFN, TAOK2, VTCN1), hedgehog pathway (NKX6-1, PTCH1, PTCH2, SFN, ST3GAL4), and regulators of circadian rhythm (ARNTL, CIART, HLF, NR1D2, TEF, YEATS2) in NCWS. Literature analysis further indicated an involvement of innate immune response and potentially of autoimmunity in NCWS; details about this claim are given in the Discussion section. It is worthwhile to mention that the vast majority of the DE transcripts identified in this study had previously been reported in malignancies of the intestinal tract (Figure S3). This could simply reflect the high number of studies focusing on cancers and/or the large number of molecular pathways associated with cancers (cell transformation, growth, invasion, etc.) [13].
Discussion
In susceptible individuals, the simple contact, inhalation, or ingestion of wheat derivatives can cause allergic, autoimmune, or poorly defined pathological reactions. Wheat allergy includes immunoglobulin E (IgE) immuno-mediated symptoms such as respiratory allergy, contact urticaria, food allergy, and wheat-dependent exercise-induced anaphylaxis [3,14]. Celiac disease bears a number of intestinal, systemic, and autoimmune manifestations [15]. Both pathologies rely on clinical, laboratory, and histological diagnostic criteria. On the contrary, the clinical manifestations and the pathogenic mechanisms of the recently defined NCWS are still little understood and the diagnosis is not supported by laboratory tests [6,10]. Currently, NCWS diagnosis relies on the association of gluten ingestion with symptom severity in individuals where celiac disease has been excluded by the absence of serological markers (e.g., anti-TGA and anti-EMA) [6].
We decided to investigate gene expression profiles of NCWS with the aim of shedding light on the molecular mechanisms governing this pathology and eventually identifying effective biomarkers helpful to the diagnosis. Previous studies have demonstrated that gene expression profiles have the necessary qualification to carry on this endeavour [16,17].
NCWS patients recruited for this study were middle-aged females with normal body mass index and without obvious signs of systemic or intestinal inflammation. These patients reported intestinal symptoms (e.g., abdominal pain, bloating, diarrhoea, and constipation) and a variety of extra-intestinal manifestations (e.g., fatigue, headache, brain fog, and numbness in the limbs) that worsened upon gluten ingestion. Controls did not exhibit the typical symptoms of NCWS, whereas many general diagnostic tests (e.g., erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), calprotectin, and haemoglobin (Hb)) overlapped those of NCWS patients. Celiac disease was excluded by assessing specific serum biomarkers (e.g., anti-TGA and anti-EMA) and by histological evaluation of the duodenal mucosa.
The microarray platform used in this study contained about 53,000 probes targeting both protein coding and non-coding RNAs in roughly equal amounts. In the intestinal mucosa of patients affected by NCWS, we identified 300 transcripts whose expression differed from controls. Remarkably, a minor fraction of these DE transcripts were protein-coding RNAs. The remainder of the DE transcripts were lncRNA or uncharacterised RNA. Non-coding RNAs ranked in first position among those with higher discrimination power between NCWS and control groups (LASSO analysis), further underscoring the importance of this class of transcripts in NCWS.
lncRNAs are a large and novel class of RNA that is still little understood [18]. They can be sub-classified into intronic lncRNAs, bidirectional lncRNAs, enhancer RNAs (eRNAs), and intergenic lncRNAs on the basis of the region of the genome that is transcribed or of the transcription mechanism [19,20]. The introduction of next generation sequencing led to the discovery of thousands of new lncRNAs in mammalian cells [18]. Functional roles and specific mechanisms of action have been reported for various lncRNAs; nevertheless, some researchers maintain that lncRNAs are produced by spurious transcription and are therefore mostly without function [21,22]. In contrast, accumulating evidence shows that transcription of these RNAs requires transcription factors/promoters, in analogy to protein-coding RNA, and that their expression is regulated by cell needs and in response to pathological conditions. For example, lncRNAs can control chromatin structure and gene expression, the splicing machinery, the stability of mRNA, and the availability of miRNA [22]. These activities are accomplished by generating ribonucleoprotein complexes or by interacting with other nucleic acids of the cell through base pairing. From the pathophysiological standpoint, lncRNAs are involved, among others, in inflammation and immune response [23]. For example, it has been reported that LPS influences the expression of 221 lncRNAs in monocytes [24]. The lncRNA tumor necrosis factor alpha-induced protein 3 (TNFaip3) works in concert with nuclear factor kappa B (NF-kB), whereas Lethe inhibits NF-kB-dependent gene expression downstream of tumor necrosis factor alpha (TNFα) [25,26]. Even cyclooxygenase-2 (COX2) expression is regulated by an lncRNA called p50-associated COX-2 extragenic RNA (PACER) [27]. In summary, lncRNAs appear to have important regulatory roles, but it is difficult to anticipate the effects of lncRNA dysregulation in NCWS. Further studies are needed to understand whether changes in transcript expression are the cause or the consequence of the disease.
Literature analysis of the DE protein-coding RNAs identified in this study provided some insights about the pathological mechanism of NCWS. For example, ARNTL and NR1D2, two circadian clock genes, regulate inflammatory responses in macrophages by inhibiting NF-kB [28]. Recent studies demonstrated the importance of daily genes in the control of innate immunity [29]. NLRP5, NOD-like receptor protein 5, belongs to the superfamily of pattern-recognition receptors such as the toll-like receptors. The NOD-like receptors have an important role in innate immune response and autoimmune diseases including Crohn's disease [30]. FCAR encodes the receptor for the crystallisable fragment of IgA, a transmembrane protein expressed on the plasma membrane of neutrophils and macrophages [31]. Previous studies demonstrated that the signalling triggered by FCAR upon binding with IgA negatively regulates the inflammatory response [32]. Patients with IgA deficiency are prone to autoimmune disease, including celiac disease, in line with the inhibitory role of FCAR and IgA in immune response [33]. Therefore, the downregulation of FCAR reported in this study may indicate that NCWS shares some pathological features with autoimmune diseases. AZU1 encodes an antimicrobial protein named cationic antimicrobial protein 37 kDa (CAP37) or azurocidin 1 because of its presence inside the azurophilic granules of neutrophils. This protein participates in innate antimicrobial response and monocyte chemo-attraction [34]. KLRC1 (killer cell lectin-like receptor C1) encodes the natural killer receptor G2A (NKG2A) protein, an integral membrane receptor of natural killer lymphocytes, key cells at the edge of innate and adaptive immune response. Previous studies demonstrated that NKG2A may participate in autoimmune diseases including psoriasis, rheumatoid arthritis, and ankylosing spondylitis [35][36][37]. BMP7 encodes the bone morphogenetic protein-7, a transforming growth factor-beta (TGF-β) family member that is able to polarize monocytes into M2 macrophages and increase the expression of anti-inflammatory cytokines [38]. We can hypothesise that BMP7 downregulation may contribute to the development of a subtle inflammation in NCWS. Indeed, previous studies have underscored the anti-inflammatory effect of BMP7 in the gut [39]. Notably, BRINP3 (bone morphogenetic protein/retinoic acid inducible neural-specific 3), another component of the bone morphogenetic protein family identified in this study, was reported as being downregulated in the intestinal mucosa of patients affected by ulcerative colitis [40]. CD70 is present on antigen-presenting cells and was found to be involved in T-lymphocyte activation, as well as in the pathophysiology of autoimmune diseases including lupus erythematosus and rheumatoid arthritis [41]. CD72 is a negative regulator of B cells that is also involved in the development of lupus erythematosus via toll-like receptor 7 [42]. Glycine N-methyltransferase (GNMT) is a tumour suppressor gene recently associated with T cell immune response. In a mouse model of autoimmune encephalomyelitis, the knockdown of GNMT reduced the severity of the disease [43]. CUB and sushi multiple domains 1 (CSMD1) is a complement control-related gene that inhibits the activation of complement C3 and has a possible role in lupus erythematosus and schizophrenia [44].
The RNA transcript HLA-DRB6 is considered a pseudogene; nevertheless, its expression correlates with autoimmune diseases such as rheumatoid arthritis and type I diabetes mellitus (https://www.ebi.ac.uk/gwas/search?query=HLA-DRB6).
Besides immunity, ingenuity pathway analysis (IPA) suggested that the hedgehog pathway might have a role in NCWS. It is worthwhile to mention that a few studies demonstrated the involvement of hedgehog pathway in celiac disease and alteration of immune surveillance in cancer development [45][46][47].
Our study is not devoid of limitations. First, although our data have extensive statistical validation, they are representative of a small number of patients. Second, our findings may not be applicable to patients with different characteristics because the results have not been validated in an external cohort. Finally, the diagnosis was based on an open gluten challenge. Therefore, prospective studies should be conducted on larger cohorts of patients diagnosed according to the Salerno Experts' Criteria.
Patients' Recruitment
Patients referred to the Regional Centre for Adult Celiac Disease and the Gastroenterology and Endoscopy Unit at the "SS. Annunziata" University Hospital of Chieti claiming gluten-related symptoms were examined, in line with current clinical practice, by an expert gastroenterologist according to the criteria established at the time of the study design [3]. Those presenting with gastrointestinal symptoms, a negative serology for celiac disease (IgA anti-TGA and/or anti-EMA), a preserved mucosal architecture (Marsh grade ≤1), and a normal titre of total IgA were enrolled as potential NCWS. Subjects were also required to have negative immuno-allergy tests to wheat and a negative glucose hydrogen breath test to exclude small intestinal bacterial overgrowth (SIBO). Furthermore, patients with HLA-DQ2/8 positivity were purposefully excluded from the NCWS group when a positive family history of celiac disease among first-degree relatives was reported. To the remaining potential NCWS, an experienced nutritionist prescribed an open gluten-free diet for a 6-week period, after which persistently symptom-free subjects were reintroduced to dietary wheat protein (equivalent to 10 grams of gluten). At that point, the recurrence of symptoms, together with all other clinical and diagnostic data evaluated by an expert gastroenterologist, prompted the final diagnosis of non-celiac wheat sensitivity [3,9,48], according to the diagnostic criteria set forth at the time of the study design [3]. Symptom severity was assessed using a modified diagnostic questionnaire (Gastrointestinal Symptom Rating Scale) at baseline and after gluten exclusion.
As controls, we selected patients with dyspeptic symptoms who were prescribed endoscopic examination and who reported no association of symptoms with specific dietary components at initial inclusion. Dyspepsia was defined according to the Rome III criteria. Those who accepted an elimination diet as an initial treatment strategy were prescribed a gluten-free dietary scheme by an experienced nutritionist for 4 weeks. Finally, controls were enrolled among the non-responders to a gluten-free diet who had a normal duodenal mucosa at endoscopy and were not affected by SIBO (Table 1). Note that a few controls and NCWS patients were positive for Helicobacter pylori (Table 1); however, they did not have present or past major Helicobacter pylori-related diseases, and its presence was equally distributed between groups.
All the controls and patients underwent upper endoscopy within 1 month from laboratory assessment. At least five duodenal biopsies were obtained for histological examination, including one in the duodenal bulb. Narrow band imaging and white light magnification were used to aid biopsy sampling. Biopsy samples were placed on cellulose paper to maintain orientation and prevent artefacts. An additional biopsy destined for inclusion in this study was taken from the immediate vicinity of those intended for histological examination. Biopsies used in the study were obtained prior to any gluten exclusion diet.
NCWS duodenal mucosa, examined by histological staining, scored grade 0 or I of the Marsh scale indicating no alteration or a minor infiltration of intraepithelial lymphocytes ( Table 1). Alterations of the intestinal mucosa, such as those observed in celiac disease, may cause malabsorption of nutrients, especially iron; however, ferritin and the haemoglobin levels suggested that intestinal absorption of iron was appropriate in these patients. Finally, the frequencies of HLA DQ2/Q8 were about 30%, in line with the distribution of these haplotypes in the general population.
NCWS and controls were matched for age, sex and body mass index, as well as showing comparable levels of C-reactive protein, erythrocyte sedimentation rate, haemoglobin, calprotectin, and ferritin (Table 1). These indices, although not exhaustive, suggest the absence of chronic inflammatory conditions. On the basis of their body mass index, patients were of normal weight or slightly overweight.
RNA Isolation and Microarray Analysis
Patients' biopsies were collected from the duodenum near the site used for histopathological examination, immediately submerged in RNAlater, and stored at 4 °C. Total RNA was extracted using the miRNeasy kit (Qiagen), quantified, and analysed by gel electrophoresis and Bioanalyzer (Agilent). All the samples used for microarray hybridization had an RNA integrity number >4. RNA samples were extracted over a 2-year period and microarray hybridization was performed in two different chips/experiments. Chip1 included the following NCWS patients: 11_1, 12, 13, 14, 15_1, and a technical replicate 15_2. Chip2 included the following controls: 35, 37, 38, 17, 18, 19, and 20; NCWS patients 28, 29, 30, 31, 39, 41, and 42; and a technical replicate 11_2 already evaluated on chip1. Therefore, a total of 7 controls, 12 NCWS patients, and 2 technical replicates of NCWS were analysed by microarray.
The microarray analysis, outsourced to "Consorzio Futuro in Ricerca" (previously known as Tecnopolo Ferrara), was carried out on the Agilent Technologies platform. Data transformation was applied to set all negative raw values to 1.0, followed by quantile normalization and log2 transformation.
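The preprocessing described here (negative values set to 1.0, quantile normalization, log2 transformation) can be sketched as follows. The authors' exact pipeline is not specified beyond this description, so the Python fragment below is only an illustrative reimplementation on toy values.

```python
import numpy as np

def quantile_normalize(x):
    """Quantile-normalize a (probes x samples) expression matrix.

    Every column is forced onto the same reference distribution, namely
    the mean of the sorted columns; ties are handled only approximately.
    """
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # rank of each value within its column
    reference = np.sort(x, axis=0).mean(axis=1)          # mean of sorted columns
    return reference[ranks]

# Toy raw intensities: clamp negatives to 1.0, normalize, then log2-transform.
raw = np.array([[5.0, 3.0, -2.0],
                [8.0, 6.0,  4.0],
                [2.0, 9.0,  7.0]])
raw[raw < 0] = 1.0
expr = np.log2(quantile_normalize(raw))
print(expr.round(2))
```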
Statistical Analysis
In order to estimate in advance the minimal number of patients to be recruited in the study, we assumed a common difference in mean expression between the two groups equal to or greater than 1 in absolute value and a common standard deviation equal to 0.5. On the basis of these assumptions, we performed a simulation of sample size estimation considering different proportions of differentially expressed genes. This simulation showed that 7-11 patients were necessary to obtain a statistical power of 80% with a false discovery rate (FDR) of 0.05. Considering the fact that the proportion of differentially expressed genes in our cohort was equal to 0.7, we had approximately 85% statistical power.
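As a rough illustration of this kind of simulation, the sketch below estimates the average power of a BH-corrected t-test screen under the stated assumptions (mean shift of 1, standard deviation of 0.5, a given proportion of differentially expressed genes). The gene count, number of simulation runs, and random seed are arbitrary choices for the example, not the authors' settings.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

def simulated_power(n_per_group, n_genes=5000, prop_de=0.7,
                    delta=1.0, sd=0.5, fdr=0.05, n_sim=50):
    """Average fraction of truly DE genes detected after BH correction."""
    n_de = int(n_genes * prop_de)
    detected = []
    for _ in range(n_sim):
        shift = np.zeros(n_genes)
        shift[:n_de] = delta                                    # true effects in the first n_de genes
        a = rng.normal(0.0, sd, size=(n_genes, n_per_group))
        b = rng.normal(shift[:, None], sd, size=(n_genes, n_per_group))
        _, pvals = stats.ttest_ind(a, b, axis=1)
        rejected, _, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
        detected.append(rejected[:n_de].mean())
    return float(np.mean(detected))

for n in (5, 7, 9, 11):
    print(n, round(simulated_power(n), 3))
```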
Qualitative variables were summarized as frequencies and percentages, and values of continuous variables, such as gene expression, were tested for normal distribution with Shapiro-Wilk's test and reported as mean and standard deviation (SD). The results were reported separately for each group. Mann-Whitney U test and Pearson's chi-squared test were applied to evaluate the differences in quantitative and qualitative characteristics among groups, respectively.
To identify differentially expressed (DE) transcripts, the absolute difference between mean expression in NCWS patients and controls was calculated, and the transcripts with a value > 1 were selected. Student's t-test was then applied to select the transcripts with a statistically significant difference between the mean expression of NCWS patients and controls, using the threshold FDR ≤ 0.05 [49].
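A compact sketch of this two-step selection might look like the following (the authors worked in R; this is a Python translation, and the matrix shapes and variable names are assumptions).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def select_de_transcripts(expr_ncws, expr_ctrl, min_diff=1.0, fdr=0.05):
    """Two-step DE selection: |mean difference| > min_diff, then t-test + BH.

    expr_ncws, expr_ctrl: (transcripts x samples) log2 expression matrices.
    Returns the indices of the DE transcripts and their mean differences.
    """
    diff = expr_ncws.mean(axis=1) - expr_ctrl.mean(axis=1)
    candidates = np.where(np.abs(diff) > min_diff)[0]           # filtering step
    _, pvals = stats.ttest_ind(expr_ncws[candidates], expr_ctrl[candidates], axis=1)
    rejected, _, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return candidates[rejected], diff[candidates[rejected]]

# Toy usage: 100 transcripts, 12 NCWS vs 7 control samples, 5 spiked differences.
rng = np.random.default_rng(0)
ncws, ctrl = rng.normal(size=(100, 12)), rng.normal(size=(100, 7))
ncws[:5] += 1.5
idx, diffs = select_de_transcripts(ncws, ctrl)
print(idx)
```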
An unsupervised principal component analysis (PCA) was applied to reduce the dimensionality of a microarray dataset into two components, principal component (PC) 1 and PC2. PCA was conducted as an "unsupervised" analysis to clarify the variance among microarray data from two groups using an R function "prcomp". The proportion of variance was also calculated to determine the percentage of variance explained by each PC.
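In Python, the equivalent of this prcomp-based analysis could be sketched with scikit-learn as below; the toy matrix merely stands in for the 19-sample expression data of the DE transcripts.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
expr = rng.normal(size=(19, 300))        # samples x DE transcripts (toy data)

pca = PCA(n_components=2)
scores = pca.fit_transform(expr)          # PC1 and PC2 coordinates per patient
print(scores.shape)                       # (19, 2)
print(pca.explained_variance_ratio_)      # proportion of variance explained by each PC
```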
Least absolute shrinkage and selection operator (LASSO) penalized logistic regression was used to identify DE transcripts predictive of the diagnosis of NCWS. The LASSO model allows the selection of the variables with the highest predictive ability when there is a large initial set of variables. The final values used for the logistic regression model were alpha = 1 and lambda = 0.0001 (kappa = 0.492, accuracy = 0.783). LASSO regression selects variables by shrinking the beta coefficients of unimportant variables to zero. The degree of shrinkage is determined by a penalty parameter (lambda), the value of which is identified through cross validation (10-fold, repeated five times) to select the set of variables that maximizes the area under the curve (AUC).
Finally, the ability of each selected DE transcript to predict the NCWS status was assessed using a ROC curve. The AUC was calculated as a measure of classification model performance.
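The LASSO selection and per-transcript ROC evaluation can be approximated in Python as follows. This uses scikit-learn's L1-penalized logistic regression as a stand-in for the authors' R implementation, with toy data and without the repeated cross-validation used to tune lambda.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(19, 300))                    # DE-transcript expression (toy)
y = np.array([0] * 7 + [1] * 12)                  # 7 controls, 12 NCWS

# L1-penalized ("LASSO") logistic regression; C plays the role of 1/lambda.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])         # transcripts with non-zero coefficients
print("selected transcripts:", selected)

# Per-transcript ROC AUC as a crude measure of how well each one separates the groups.
for j in selected[:5]:
    print(j, round(roc_auc_score(y, X[:, j]), 3))
```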
DE transcripts were imported into ingenuity pathway analysis (IPA, Qiagen) to generate enriched pathways and networks.
For all tests, the threshold for statistical significance was set at p < 0.05. All analyses were performed with the open-source statistical R software (version 3.4.3, The R Foundation for Statistical Computing).
Conclusions
This study indicated that NCWS can be genetically defined and gene expression profiling could be a suitable tool to support the diagnosis. The functional role of the dysregulated genes suggested that NCWS may result from a pathological immune response, especially of the innate branch. Furthermore, non-coding RNA could play an important role in the pathogenesis of NCWS.
Supplementary Materials: The following are available online at http://www.mdpi.com/1422-0067/21/6/1969/s1, Figure S1. Frequencies of gene expression differences between NCWS and controls. Figure S2. Heat map and PCA representation of the transcripts selected by LASSO. Figure S3. Intestinal malignancies sharing protein expression alterations with NCWS. Table S1. List of differentially expressed transcripts.
Funding: This work was supported by Fondazione Celiachia Onlus, Italy, grant no. 046_FC_2013.
Conflicts of Interest:
The authors declare no conflict of interest.
Fault Identification Method of Diesel Engine in Light of Pearson Correlation Coefficient Diagram and Orthogonal Vibration Signals
In order to select fault feature parameters simply and quickly and to improve the identification rate of diesel engine faults using vibration signals, this paper proposes a diesel engine fault identification method based on the Pearson correlation coefficient diagram (PCC Diagram) and orthogonal vibration signals. First, the orthogonal vibration acceleration signals are synchronously acquired in the directions of the top and side of the cylinder head, and the time-domain feature parameters are extracted from the orthogonal vibration acceleration signals to obtain the Pearson correlation coefficient (PCC). Then, the correlation coefficient diagram used for feature parameter screening is constructed by selecting the feature parameters with a correlation coefficient of more than 0.9. Finally, a generalized regression neural network (GRNN) is adopted to classify and identify fuel supply faults in the diesel engine. The results show that using the PCC Diagram can simplify the selection of the feature parameters of the orthogonal vibration signals quickly and effectively. It can also significantly improve the fault identification rate of the diesel engine when the newly proposed cross-correlation coefficients of the orthogonal vibration signals are added to the GRNN input feature vector set.
Introduction
In accordance with statistics by Sohu Auto and Ipsos in recent years, from the point of view of automotive faults, the number of faults occurring in the engine system accounts for more than 30% of the total, the highest proportion [1], thus causing great difficulties for production and daily life. Therefore, scholars have carried out many studies on collecting, extracting, and recognizing the various kinds of information that appear during the working process of engines. Many engine fault detection and identification methods have been proposed, of which the most frequently used is based on vibration signals [2][3][4]. A vibration sensor is installed somewhere on the engine to collect the corresponding vibration signal during operation, and the collected vibration signal is processed in the time domain, frequency domain, or time-frequency domain to identify specific faults at that location. Although this approach obtains a relatively high fault identification rate, it often uses the vibration signal in a single direction to detect and identify a particular fault. Since the engine is a reciprocating machine, the gas mixture in the cylinder is isotropic, and the gas pressure formed in the combustion process acts on the top and side of the cylinder head simultaneously [5,6]. That is, vibration signals are generated on the surface of the cylinder head in the vertical and horizontal directions at the same time. Therefore, the vibration characteristics of the cylinder head in both the vertical and horizontal directions should be taken into consideration when assessing the working state of the engine. In addition, calculations on some experimental data show that the correlation between the corresponding time-domain feature parameters of the orthogonal vibration acceleration signals acquired synchronously in the vertical and horizontal directions of the cylinder head is small. That is to say, the corresponding time-domain feature parameters of the orthogonal vibration acceleration signals are weakly correlated, so this pair of orthogonal signals can be used at the same time. Based on this, the PCC Diagram is adopted to optimize the feature selection of the orthogonal vibration signals and improve the fault identification rate of the diesel engine.
Taking the Weichai WD615 diesel engine as the research object, this paper carries out experimental research on faults of the fuel supply system, a common class of engine faults. In the experiments, the fuel supply system faults are represented by fuel leakage of the sixth-cylinder high-pressure fuel pipe at different severities. The experiments are conducted under four working conditions: normal fuel supply, slight fuel leakage, severe fuel leakage, and fuel cut-off. Two vibration acceleration sensors are installed on the top and side of the cylinder head, perpendicular to each other, and the pair of orthogonal vibration signals on the surface of the cylinder head is synchronously collected. In order to avoid relatively complex calculations and improve the real-time performance of signal processing, only the time-domain feature parameters of the experimental data are extracted. The PCC are calculated after normalization, and the feature parameters with a correlation coefficient of more than 0.9 are selected to construct the coefficient diagram used to screen feature parameters. Finally, the GRNN is adopted for fault classification and identification to verify the practical effect of the PCC Diagram in identifying diesel engine faults with the orthogonal vibration signals.
PCC
The correlation coefficient is a basic method to reduce the dimensionality of high-dimensional data, and the PCC [7][8][9] is one of the most commonly used. The PCC, proposed by Pearson in 1895, is based on statistics. Its mathematical calculation is simple and accurate, and it measures the linear correlation between variables well.
The PCC of variables (which can also be vectors or matrices) X and Y can be calculated by the following equation:

$$\rho_{XY} = \frac{\operatorname{cov}(X, Y)}{\sqrt{D(X)}\,\sqrt{D(Y)}} = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sqrt{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}\,\sqrt{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}}} \qquad (1)$$

In (1), cov(X, Y) is the covariance of X and Y; D(X) is the variance of X, with D(X) > 0; D(Y) is the variance of Y, with D(Y) > 0; x̄ and ȳ are the arithmetic means of X and Y, respectively. The value of ρXY ranges over [−1, 1]. The larger its absolute value is, the stronger the linear correlation between X and Y. When ρXY = 1, X and Y are completely positively correlated. When ρXY = −1, X and Y are completely negatively correlated. When ρXY = 0, X and Y are uncorrelated.
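For reference, a minimal computation of the PCC, written here in Python rather than the authors' tooling, is shown below; it simply applies Equation (1) and checks the result against numpy.corrcoef.

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson correlation coefficient of two equally long 1-D signals or vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 4.1, 5.9, 8.2])
print(pearson_corr(a, b))         # close to +1: strong positive linear correlation
print(np.corrcoef(a, b)[0, 1])    # library check gives the same value
```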
Feature Parameter Extraction and Fault Identification Procedure
Multichannel sensors are installed on the sixth cylinder of the diesel engine, as shown in Figure 1. Two vibration acceleration sensors, PCB M603C01, are attached to the top and side of the cylinder head with powerful magnets. They are used to measure the vibration acceleration at the top and side of the cylinder head. The fuel pressure sensor is fixed on the high-pressure fuel pipe by a clamp, with a dedicated charge amplifier, to collect the fuel pressure and fuel supply timing of the high-pressure fuel pipe. The Hall sensor is fixed on the flywheel housing to gather the rotational speed signal.
When the engine is running at no load, its speed is set to 800 r/min. The host computer programs are written in Labview to drive the data acquisition card to collect the signals. The sampling frequency is 65,536 Hz.
Under different working conditions of the sixth-cylinder high-pressure fuel pipe, the speed signal, the fuel pressure signal of the high-pressure fuel pipe, and the vibration acceleration signals at the top and side of the cylinder head are simultaneously acquired via the multiple sensors. For each channel, 77 sets of data are collected under the condition of normal fuel supply, 62 sets under slight fuel leakage, 102 sets under severe fuel leakage, and 50 sets under fuel cut-off.
Feature Parameter Extraction.
A complete combustion cycle of the diesel engine corresponds to a crankshaft angle range of −360 °CA to 360 °CA. The Weichai WD615 used in the experiment is a six-cylinder diesel engine with the firing order 1-5-3-6-2-4. So, for the sixth-cylinder combustion, the corresponding crankshaft angle range is 1/6 of a complete combustion cycle; that is, 60 °CA to the left and to the right of the top dead center (TDC) of the sixth cylinder, 120 °CA in total. The TDC of the sixth cylinder is calibrated from each set of fuel pressure and rotation speed signals simultaneously captured under each of the above working conditions. Taking the sixth-cylinder TDC as reference, vibration acceleration signal data with the same number of sampling points, synchronously acquired in the directions of the top and side of the cylinder head, are extracted within the crank angle range of −90 °CA to 90 °CA. The following 19 time-domain feature parameters are extracted: peak value, absolute peak value, peak-to-peak value, mean value, absolute mean value, root-mean-square (RMS) value, variance, standard deviation, root amplitude, kurtosis index, skewness index, peak index, waveform index, pulse index, margin index, peak value of the autocorrelation function, peak value of the cross-correlation function between all signals under different working conditions and the normal fuel supply signal, peak correlation value between the vibration signal at the top of the cylinder head and the corresponding vibration signal at the side of the cylinder head for each working condition, and peak-to-peak correlation value between the vibration signal at the top of the cylinder head and the corresponding vibration signal at the side of the cylinder head for each working condition. The formulas for calculating these time-domain feature parameters are given in Table 1. The last three parameters are calculated by the same formula, but the input signals are different. The 19 time-domain feature parameters are denoted F1 to F19, respectively.
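To make the feature extraction step concrete, the sketch below computes a subset of these time-domain parameters for one signal segment in Python. The exact formulas are those of Table 1, which is not reproduced here, so the definitions used below (e.g., for the kurtosis and margin indices) are common textbook forms and may differ in detail from the paper's.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(x):
    """A subset of common time-domain feature parameters of a 1-D signal segment."""
    x = np.asarray(x, dtype=float)
    abs_x = np.abs(x)
    rms = np.sqrt(np.mean(x ** 2))
    root_amp = np.mean(np.sqrt(abs_x)) ** 2
    xc = x - x.mean()
    return {
        "peak": x.max(),
        "abs_peak": abs_x.max(),
        "peak_to_peak": x.max() - x.min(),
        "mean": x.mean(),
        "abs_mean": abs_x.mean(),
        "rms": rms,
        "variance": x.var(),
        "std": x.std(),
        "root_amplitude": root_amp,
        "kurtosis_index": kurtosis(x, fisher=False),
        "skewness_index": skew(x),
        "peak_index": abs_x.max() / rms,
        "waveform_index": rms / abs_x.mean(),
        "pulse_index": abs_x.max() / abs_x.mean(),
        "margin_index": abs_x.max() / root_amp,
        "autocorr_peak": np.max(np.correlate(xc, xc, mode="full")),
    }

# Toy segment standing in for one 120 °CA window of cylinder-head vibration data.
rng = np.random.default_rng(0)
segment = rng.normal(size=2048)
print(time_domain_features(segment)["rms"])
```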
GRNN-Based Fault Identification Process
GRNN [10][11][12] is a kind of feedforward neural network with supervised learning, belonging to the family of radial basis function (RBF) neural networks. Because of its strong nonlinear mapping ability, good local approximation ability, fast learning speed, simple parameter adjustment during programming, good generalization ability, and good robustness, GRNN has been widely used in engineering.
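A minimal GRNN classifier, reduced to its Nadaraya-Watson kernel-regression core, is sketched below. This is an illustrative Python reimplementation, not the authors' code, and the smoothing parameter sigma and the toy data are arbitrary choices.

```python
import numpy as np

class GRNN:
    """Minimal generalized regression neural network (Nadaraya-Watson form).

    For classification, the targets are one-hot encoded and the predicted
    class is the one with the largest kernel-smoothed output.
    """
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        classes, idx = np.unique(y, return_inverse=True)
        self.classes = classes
        self.Y = np.eye(len(classes))[idx]          # one-hot targets
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)   # squared distances to training samples
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))                   # Gaussian kernel weights
        scores = w @ self.Y / np.clip(w.sum(1, keepdims=True), 1e-12, None)
        return self.classes[scores.argmax(1)]

# Toy usage: 17 feature parameters, 4 working conditions encoded as 0..3.
rng = np.random.default_rng(3)
X_train = rng.normal(size=(40, 17))
y_train = np.repeat([0, 1, 2, 3], 10)
X_test = X_train + 0.05 * rng.normal(size=X_train.shape)
print((GRNN(sigma=0.8).fit(X_train, y_train).predict(X_test) == y_train).mean())
```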
The process of fuel supply fault identification by GRNN is shown in Figure 2. In the first step, the vibration acceleration signals at the top and side of the cylinder head are collected by the respective vibration acceleration sensors. Taking the TDC of the sixth cylinder as the reference, the signal data are extracted within the crankshaft angle range of −90 °CA to 90 °CA. The second step is to extract the time-domain feature parameters F1 to F17 of the two vibration signals and the time-domain feature parameters F18 and F19 of the orthogonal vibration signals. In the third step, the PCC Diagrams are constructed, respectively, for the vibration acceleration signals at the top and side of the cylinder head from the feature parameters whose correlation coefficients are more than 0.9, and the respective feature parameters F1 to F17 are screened. In the fourth step, eight conditions are considered: the use of the vibration signal at the top of the cylinder head, the use of the PCC Diagram to optimize the vibration signal at the top of the cylinder head, the use of the vibration signal at the side of the cylinder head, the use of the PCC Diagram to optimize the vibration signal at the side of the cylinder head, the direct use of the cylinder head orthogonal vibration signals, the use of the PCC Diagram to optimize the cylinder head orthogonal vibration signals, the use of the cylinder head orthogonal vibration signals with the cross-correlation of the orthogonal vibration signals introduced as a feature parameter, and the use of the PCC Diagram to optimize the orthogonal vibration signals of the cylinder head with the cross-correlation of the orthogonal vibration signals introduced as a feature parameter. Under these eight conditions, the fault feature vector sets are constructed from the corresponding time-domain feature parameters and used as the input of GRNN to identify the fault. In the fifth step, the advantages and disadvantages of the above fault identification methods are compared to choose the method with the highest fault identification rate.
PCC Diagram
Under the condition of normal fuel supply of the high-pressure fuel pipe, 20 sets of the orthogonal vibration acceleration signals synchronously collected at the top and side of the cylinder head are randomly selected to examine their correlation.
Correlation of the Orthogonal Vibration Signals.
As mentioned above, the orthogonal vibration signals at the top and side of the cylinder head should be taken into account simultaneously when evaluating the working state of the engine. Each time-domain feature parameter Fi (i = 1-17) of the vibration signal at the top of the cylinder head is normalized over the selected 20 sets of data. Similarly, each time-domain feature parameter Fj (j = 1-17) of the vibration signal at the side of the cylinder head is normalized over the selected 20 sets of data. The PCC between each time-domain feature parameter Fi of the vibration signal at the top of the cylinder head and the corresponding time-domain feature parameter Fj (i = j) of the vibration signal at the side of the cylinder head is then obtained, as shown in Table 2. It is not difficult to find that the PCC of the corresponding time-domain feature parameters of the orthogonal vibration acceleration signals at the top and side of the cylinder head are basically all below 0.5; that is, the linear correlation is weak overall. So, this pair of orthogonal signals can be analysed at the same time. Though this conclusion is obtained from 20 randomly selected sets of data, its correctness has been confirmed on the entire dataset.
PCC Diagram of Vibration Signal at Cylinder Head Top.
Dimensionality reduction in high-dimensional data relying on the PCC can be accomplished by calculating the linear correlation coefficients between variables. However, the linear correlation among multiple variables is often complex and net-like. Therefore, when filtering the feature parameters through the PCC, the feature parameters cannot simply be selected directly according to the value of the correlation coefficient. In this paper, the concept of the correlation coefficient diagram is proposed. The correlation coefficient diagram is established by selecting the feature parameters with a certain level of correlation, and then the selection of feature parameters is carried out in combination with Graph Theory.
Each time-domain feature parameter Fi (i = 1-17) of the vibration signal at the top of the cylinder head is normalized over the 20 sets of data selected above, and the PCC between each pair is calculated, that is, the PCC between Fi and Fj (i, j = 1-17, i ≠ j). Selecting the feature parameters with a correlation coefficient of more than 0.9 (the threshold value 0.9 is chosen based on the experimental data) and connecting them to each other with line segments, a correlation network graph is formed, that is, the PCC Diagram, as shown in Figure 3. The correlation coefficient diagram obtained in this way is an undirected graph. Each node of the graph represents a time-domain feature parameter, and each edge of the graph indicates that the two connected time-domain feature parameters have a correlation coefficient of more than 0.9.
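Constructing such a PCC Diagram is straightforward to express in code. The sketch below, in Python with networkx, builds the graph from a (samples x features) matrix using the 0.9 threshold; the toy data and the forced F2-F14 correlation are only for demonstration.

```python
import numpy as np
import networkx as nx

def pcc_diagram(features, threshold=0.9):
    """Build the PCC Diagram: nodes are feature parameters F1..Fn, edges connect
    pairs whose Pearson correlation coefficient exceeds the threshold.

    features: (samples x n_features) matrix of normalized feature parameters.
    """
    corr = np.corrcoef(features, rowvar=False)
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(1, n + 1))                    # nodes labelled 1..n for F1..Fn
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                g.add_edge(i + 1, j + 1)
    return g

# Toy usage with random data (real data: the 20 normalized sets of F1-F17).
rng = np.random.default_rng(4)
data = rng.normal(size=(20, 17))
data[:, 13] = data[:, 1] + 0.01 * rng.normal(size=20)    # force an F2-F14 correlation
print(sorted(pcc_diagram(data).edges()))                 # [(2, 14)]
```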
Combining this with the relevant knowledge of Graph Theory, the reservation and removal of feature parameters are analysed to obtain linearly independent feature parameters. In Figure 3(a), if F2 is reserved, F14 and F15 are removed while F12 is reserved, so that F2 and F12 are linearly independent of each other. If F14 or F15 is reserved, the other three feature parameters can be removed. Listing the reservation and removal options for F2, F12, F14, and F15 by the combination method, three combinations can be obtained. In Figure 3(b), the linear correlation between F6, F7, F8, and F16 is strong, so any three of them can be removed; enumerating the reservation and removal options for these four feature parameters by the combination method gives four combinations in total. Then, in accordance with the combination method, the reservation and removal options of the two sets of feature parameters F2, F12, F14, F15 and F6, F7, F8, F16 are enumerated. There are twelve combinations of the feature parameters reserved and removed from F1 to F17 in total, as shown in Table 3.
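In graph terms, each admissible combination of reserved parameters corresponds to a maximal independent set of the PCC Diagram. The brute-force sketch below enumerates them for one small component; the edge list used for Figure 3(a) is an assumption inferred from the description above, since the figure itself is not reproduced here.

```python
from itertools import combinations
import networkx as nx

def retained_combinations(g):
    """Enumerate maximal sets of feature parameters that can be kept together,
    i.e. maximal independent sets of the PCC Diagram (brute force; the
    diagrams here only contain a handful of nodes).
    """
    nodes = list(g.nodes())
    independent = []
    for r in range(len(nodes), 0, -1):                   # largest subsets first
        for subset in combinations(nodes, r):
            sub = set(subset)
            has_edge = any(g.has_edge(u, v) for u, v in combinations(subset, 2))
            if not has_edge and not any(sub < found for found in independent):
                independent.append(sub)                  # keep only maximal sets
    return independent

# Assumed edges for the component of Figure 3(a): F2-F14, F2-F15, F12-F14, F12-F15, F14-F15.
g = nx.Graph([(2, 14), (2, 15), (12, 14), (12, 15), (14, 15)])
print(retained_combinations(g))   # [{2, 12}, {14}, {15}] -> three combinations
```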
PCC Diagram of Vibration Signal at Cylinder Head Side.
Similarly, each time-domain feature parameter Fm (m = 1-17) of the vibration signal at the side of the cylinder head is normalized over the above-mentioned 20 sets of data, and the PCC between each pair is obtained, that is, the PCC between Fm and Fn (m, n = 1-17, m ≠ n). Selecting the feature parameters with a correlation coefficient of more than 0.9 and connecting them to each other with line segments, a correlation network graph is formed, that is, the PCC Diagram, as shown in Figure 4. It is also an undirected graph: each node represents a time-domain feature parameter, and each edge indicates that the two connected time-domain feature parameters have a correlation coefficient of more than 0.9. The connections of the time-domain feature parameters in Figure 4(a) are similar to those in Figure 3(a). If F2 is reserved in Figure 4(a), then F3, F14, and F15 are removed while F12 is reserved, so that F2 and F12 are linearly independent of each other. If F14 is reserved, then F2, F12, and F15 are removed, and F3 and F14 are linearly independent of each other. If F15 is reserved, F2, F12, and F14 are removed, and F3 and F15 are linearly independent. If F12 is reserved, F14 and F15 are removed, and F2 and F12 are linearly independent of each other (already covered) or F3 and F12 are linearly independent of each other. Listing the reservation and removal options for F2, F3, F12, F14, and F15 by the combination method, four combinations can be obtained. In Figure 4(b), if F5 is reserved, then F6, F7, F8, F9, and F16 are removed, and F5 and F10 are linearly independent of each other or F5 and F13 are linearly independent of each other. If any one of F6, F7, F8, or F16 is reserved, the other three are removed together with F5 and F13, and the reserved feature parameter is linearly independent from F9 and F10; there are four such combinations. If F13 is reserved, then F6, F7, F8, F10, and F16 are removed, and F5 and F13 are linearly independent of each other (already covered) or F9 and F13 are linearly independent of each other. Listing the reservation and removal options for F5, F6, F7, F8, F9, F10, F13, and F16 by the combination method, there are seven combinations in total. Then, the combination method is used to enumerate the reservation and removal options of the two sets of feature parameters F2, F3, F12, F14, F15 and F5, F6, F7, F8, F9, F10, F13, F16. So, there are 28 combinations of the feature parameters reserved and removed from F1 to F17 in total, as shown in Table 4.
Fuel Supply Fault Identification
Considering the conditions of normal fuel supply, slight fuel leakage, severe fuel leakage, and fuel cut-off together, the fuel supply fault identification results for the various conditions are shown in Figure 5. The division of the data for GRNN under the different working conditions is shown in Table 5.
As for the distribution of fault identification results, most existing articles use planar (two-dimensional) graphs, but this display method is not intuitive and its effect is limited. Figure 5 shows the distribution of fault identification results more intuitively from the perspective of three-dimensional space.
When GRNN is used to identify the diesel engine faults, the time-domain feature parameters F1 to F17 of the vibration signal at the top of the cylinder head are used as the input of GRNN, and the result of diesel engine fault identification using the vibration signal at the top of the cylinder head is obtained, as shown in Figure 5(a). The result of diesel engine fault identification using the vibration signal at the side of the cylinder head is obtained by taking the time-domain feature parameters F1 to F17 of the vibration signal at the side of the cylinder head as the input of GRNN, as shown in Figure 5(b). As mentioned earlier, the linear correlation of the corresponding time-domain feature parameters of the orthogonal vibration acceleration signals at the top and side of the cylinder head is weak overall. So, when this pair of orthogonal vibration signals is used to identify the diesel engine fault, the corresponding time-domain feature parameters of the pair of orthogonal vibration signals
can be directly used as the input of GRNN at the same time.
That is, the result of diesel engine fault identification can be directly obtained by using the orthogonal vibration signals of the cylinder head, as shown in Figure 5(c).
Furthermore, the feature parameters of the different combinations reserved in Table 3 are used as the input of GRNN, and the corresponding fault identification results are compared to find the feature parameter combination with the highest fault identification rate; this is the diesel engine fault identification rate obtained by optimizing the selection of the feature parameters of the signal at the top of the cylinder head using the PCC Diagram, as shown in Figure 5(d). Taking the feature parameters of the different combinations reserved in Table 4 as the input of GRNN and comparing the corresponding fault identification results, the feature parameter combination with the highest fault identification rate is also found; that is the diesel engine fault identification rate obtained by optimizing the selection of the feature parameters of the signal at the side of the cylinder head using the PCC Diagram, as shown in Figure 5(e). Taking the two sets of feature parameters obtained by optimizing the selection of the vibration signal feature parameters at the top and side of the cylinder head using the PCC Diagram as the input of GRNN, the corresponding fault identification results are compared to obtain the combination of feature parameters with the highest fault identification rate; that is the diesel engine fault identification rate obtained by optimizing the selection of the feature parameters of the cylinder head orthogonal vibration signals using the PCC Diagram, as shown in Figure 5(f).
Moreover, on the basis of identifying the diesel engine fault using the orthogonal vibration signals of the cylinder head directly, the orthogonal vibration signal cross-correlation coefficients F18 and F19 are added to the input of GRNN. The corresponding fault identification results are compared to find the feature parameter combination with the highest fault identification rate, that is, the diesel engine fault identification rate obtained by using the orthogonal vibration signals of the cylinder head with the cross-correlation coefficients of the orthogonal vibration signals introduced as feature parameters, as shown in Figure 5(g). The diesel engine fault identification result obtained by optimizing the orthogonal vibration signal feature parameters of the cylinder head using the PCC Diagram and introducing the cross-correlation coefficients of the orthogonal vibration signals F18 and F19 as feature parameters is shown in Figure 5(h).
It is not difficult to find that a graphical display method such as that in Figure 5 shows the distribution of fault identification results intuitively and clearly. At the same time, it shows the degree of dispersion of the fault identification results intuitively and presents the results in a clear and visually appealing way.
Of course, in order to illustrate the results of fault identification more accurately with specific numerical values, Table 6 is introduced.
The results of fault identification are compared and analyzed in combination with Figure 5 and Table 6 as follows. Compared with using only the vibration signal at the top of the cylinder head (Figure 5(a)) or only the vibration signal at the side of the cylinder head (Figure 5(b)), it is not difficult to find that a higher fault identification rate is more easily obtained by using the orthogonal vibration signals of the cylinder head (Figure 5(c)). Respectively comparing Figure 5(a) with Figure 5(d), Figure 5(b) with Figure 5(e), and Figure 5(c) with Figure 5(f), it can be found that after optimizing the vibration signals by using the PCC Diagram, the number of feature parameters is significantly reduced and the fault identification rates under the four working conditions are obviously improved. Moreover, relying on the PCC Diagram to optimize the orthogonal vibration signals of the cylinder head yields a higher fault identification rate. Comparing Figure 5(g) with Figures 5(a), 5(b), and 5(c), it is observed that introducing the orthogonal vibration signal cross-correlation coefficients as feature parameters gives a higher fault identification rate under the four working conditions than using only the vibration signal at the top (Figure 5(a)) or the side (Figure 5(b)) of the cylinder head or the orthogonal vibration signals of the cylinder head (Figure 5(c)). Comparing Figures 5(h) and 5(g), it is found that after using the PCC Diagram to optimize the orthogonal vibration signals of the cylinder head and introducing the cross-correlation coefficients of the orthogonal vibration signals as feature parameters, the fault identification rates under the four working conditions are all above 90%: 91.89%, 95.45%, 96.88%, and 100%, respectively. In comparison with the former methods, this approach improves the fault identification rate to a greater extent and reduces the dispersion of wrongly identified results.
Conclusion
Experimental research on a large number of fuel supply faults of the Weichai WD615 diesel engine shows that, using only time-domain feature parameters, a higher fault identification rate is readily obtained when the orthogonal vibration signals are used to identify diesel engine faults. The PCC Diagram can be used to simplify feature parameter screening visually and vividly, thereby improving the diesel engine fault identification rate to a greater extent. Furthermore, the cross-correlation coefficient of the orthogonal vibration signals is a very important time-domain feature parameter identified in the orthogonal vibration tests; adding it to the feature vector set used as the input of the GRNN clearly improves the fault identification rate obtained from the orthogonal vibration signals of the cylinder head. In particular, after screening the feature parameters with the PCC Diagram, fault identification rates of more than 90% are obtained under all working conditions. This fault identification method, developed for a diesel engine, can also be applied to information extraction and fault identification for other reciprocating machines.
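For reference, the cross-correlation coefficient used above as an extra time-domain feature for the two orthogonal cylinder-head signals can be computed as the zero-lag normalised cross-correlation, i.e. the Pearson correlation between the top- and side-measured signals of one working cycle; this exact formulation is an assumption, since the paper does not spell out the formula for F18 and F19.

# Assumed formulation of the cross-correlation coefficient feature.
import numpy as np

def cross_correlation_coefficient(sig_top, sig_side):
    a = sig_top - sig_top.mean()
    b = sig_side - sig_side.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 0.1, 2000)                          # one working cycle
    top = np.sin(2 * np.pi * 350 * t) + 0.1 * rng.normal(size=t.size)
    side = 0.6 * np.sin(2 * np.pi * 350 * t + 0.3) + 0.1 * rng.normal(size=t.size)
    print("cross-correlation feature:", round(cross_correlation_coefficient(top, side), 3))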
Figure 1: Sensors and the key setup. Panels: (a) vibration sensors and magnets, and the fuel pressure sensor amplifier; (b) amplifier and NI data acquisition card.
Figure 2: Flow of fuel supply fault identification by GRNN.
Figure 3: PCC Diagram of the cylinder head top signal.
Figure 4: PCC Diagram of the cylinder head side signal.
Figure 5: Fault identification results for the compared schemes, including optimizing the cylinder head orthogonal vibration signals, using the cylinder head orthogonal vibration signals with the cross-correlation of the orthogonal vibration signals introduced as the feature parameter, and using the PCC Diagram to optimize the orthogonal vibration signals of the cylinder head while introducing the cross-correlation of the orthogonal vibration signals as the feature parameter. Comparing the use of only the vibration signal at the top of the cylinder head (Figure 5(a)) with the use of only the vibration signal at the side of the cylinder head (Figure 5(b)), a higher fault identification rate is more easily obtained by using the orthogonal vibration signals of the cylinder head (Figure 5(c)). Comparing Figure 5(a) with Figure 5(d), Figure 5(b) with Figure 5(e), and Figure 5(c) with Figure 5(f), after the vibration signals are optimized with the PCC Diagram the number of feature parameters is significantly reduced and the fault identification rates under the four working conditions are clearly improved; relying on the PCC Diagram to optimize the orthogonal vibration signals of the cylinder head gives a still higher identification rate. Comparing Figure 5(g) with Figures 5(a), 5(b), and 5(c), introducing the cross-correlation coefficients of the orthogonal vibration signals as feature parameters yields a higher fault identification rate under the four working conditions than using only the top signal (Figure 5(a)), only the side signal (Figure 5(b)), or the orthogonal vibration signals alone (Figure 5(c)).
Table 1: Formulas for the time-domain feature parameters.
Table 2: Correlation coefficients of the orthogonal vibration signals.
Table 3: Combinations of cylinder head top signal feature parameters.
Table 4: Combinations of cylinder head side signal feature parameters.
Table 6: Identification rates in percent corresponding to Figure 5; the best scheme reaches 91.89, 95.45, 96.88, and 100 under the four working conditions.
Note: the values shown in bold in the table are the highest fault identification rates obtained under the different working conditions when using the different fault identification methods. | 6,426 | 2019-02-27T00:00:00.000 | ["Engineering", "Computer Science"] |
GAP1IP4BP contains a novel Group I pleckstrin homology domain that directs constitutive plasma membrane association.
The Group I family of pleckstrin homology (PH) domains are characterised by their inherent ability to specifically bind phosphatidylinositol 3,4,5-trisphosphate (PtdIns(3,4,5)P3) and its corresponding inositol head-group inositol 1,3,4,5-tetrakisphosphate (Ins(1,3,4,5)P4). In vivo this interaction results in the regulated plasma membrane recruitment of cytosolic Group I PH domain-containing proteins following agonist stimulated PtdIns(3,4,5)P3 production. Amongst Group I PH domain-containing proteins, the Ras GTPase-activating protein GAP1IP4BP is unique in being constitutively associated with the plasma membrane. Here we show that although the GAP1IP4BP PH domain interacts with PtdIns(3,4,5)P3 it also binds, with a comparable affinity, phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P2) (Kd values of 0.5 ± 0.2 and 0.8 ± 0.5 μM, respectively). Intriguingly, whereas this binding site overlaps with that for Ins(1,3,4,5)P4, consistent with the constitutive plasma membrane association of GAP1IP4BP resulting from its PH domain binding PtdIns(4,5)P2, we show that in vivo depletion of PtdIns(4,5)P2, but not PtdIns(3,4,5)P3, results in dissociation of GAP1IP4BP from this membrane. Thus the Ins(1,3,4,5)P4-binding PH domain from GAP1IP4BP defines a novel class of Group I PH domains that constitutively targets the protein to the plasma membrane and may allow GAP1IP4BP to be regulated in vivo by Ins(1,3,4,5)P4 rather than PtdIns(3,4,5)P3.
Background
Pleckstrin homology (PH) domains are protein modules of approximately 120 amino acids that were initially identified as regions of weak sequence homology repeated in pleckstrin (1,2). Subsequently PH domains have been identified in more than 100 different proteins that despite possessing low sequence similarities (10-20%) have been shown, or predicted to have, very similar overall topologies (3).
Many of these PH domain-containing proteins are involved in intracellular signalling, where the majority require membrane association for their function. Recent work has highlighted a pivotal role for PH domains in this membrane targeting (4,5).
The GAP1 IP4BP PH/Btk domain is the sole requirement for plasma membrane association.
To
Mutations within the PH/Btk domain that inhibit Ins(1,3,4,5)P 4 -binding result in GAP1 IP4BP dissociating from the plasma membrane.
In order to determine the correlation between Ins(1,3,4,5)P 4 -binding to GAP1 IP4BP and the ability to associate with the plasma membrane, the subcellular localisation of the GAP1 IP4BP mutants was determined. As seen in Figure 2, GAP1 IP4BP mutants with dramatically reduced abilities to bind Ins(1,3,4,5)P 4 , -K 585 →R, -A 587 →F, -N 597 →D, -F 598 →Q, -R 601 →K and -R 601 →C, no longer associate with the plasma membrane but were instead primarily cytosolic. This contrasted with GAP1 IP4BP -K 614 →E which, consistent with its ability to bind Ins(1,3,4,5)P 4 , retained a predominant plasma membrane localisation (Fig. 2). However, detailed image analysis revealed a detectable increase in cytosolic fluorescence compared to wild type (Fig. 3). Thus there does appear to be a strong correlation between the ability of the PH/Btk domain to bind phosphorylated forms of inositol and an ability to associate with the plasma membrane. Such a relationship suggests a potential molecular explanation for the plasma membrane localisation of GAP1 IP4BP ; namely that the PH/Btk domain may directly bind phosphoinositides present within the inner plasma membrane leaflet.
In unstimulated cells GAP1 IP4BP is not localised to the plasma membrane via an ability to bind basal PtdIns(3,4,5)P 3 .
To address whether the plasma membrane association of GAP1 IP4BP may result from binding to a low resting level of PtdIns(3,4,5)P 3 , we transiently (Fig. 6). Again GFP-GAP1 IP4BP remained predominantly localised to the plasma membrane. Together these data emphasise that the plasma membrane association of GAP1 IP4BP is unlikely to result from its PH/Btk domain-binding resting levels of PtdIns(3,4,5)P 3 .
PtdIns(4,5)P 2 .
If the plasma membrane association of GAP1 IP4BP is a consequence of its PH/Btk domain binding PtdIns(4,5)P 2 , then manipulation of the concentration of this lipid should result in the dissociation of GAP1 IP4BP from the plasma membrane. Treatment with the Ca2+ ionophore ionomycin activates PLC and results in a subsequent decrease in plasma membrane PtdIns(4,5)P 2 levels (35). As shown in Figure 7, addition of ionomycin caused the dissociation of GFP-GAP1 IP4BP from the plasma membrane and a simultaneous appearance of GFP fluorescence in the cytosol (Fig. 7A). This dissociation was rapid and paralleled control experiments using the well characterised in vivo PtdIns(4,5)P 2 -binding PH domain from PLC-δ 1 (Fig. 7B (35,36)). To test the role of PLC in the ionomycin-induced GAP1 IP4BP dissociation, we incubated GFP-GAP1 IP4BP -expressing HeLa cells with the relatively specific PLC inhibitor U73122 (37). Before ionomycin addition the localisation of GFP-GAP1 IP4BP was unaltered, and upon ionomycin addition U73122-treated cells failed to show the plasma membrane dissociation observed in control cells (Fig. 7C).
As a second, independent, approach we made use of the observation that high concentrations of wortmannin (1 µM) induce a gradual fall in the levels of plasma membrane PtdIns(4,5)P 2 due to an inhibition of type III PtdIns 4-kinase (38,39).
Discussion
The similar affinity of GAP1 IP4BP for PtdIns(4,5)P 2 and PtdIns(3,4,5)P 3 suggests, given that plasma membrane PtdIns(4,5)P 2 is more abundant than PtdIns(3,4,5)P 3 even after agonist stimulation (34), that the constitutive plasma membrane association of GAP1 IP4BP may occur as a consequence of its PH/Btk domain binding plasma membrane PtdIns(4,5)P 2 . If such a mechanism were correct, then one would expect that, since GAP1 m is not constitutively associated with the plasma membrane, it should have a lower affinity for PtdIns(4,5)P 2 . As stated above, this is indeed the case. Furthermore, any mechanism that induces a significant depletion of plasma membrane PtdIns(4,5)P 2 should result in the dissociation of GAP1 IP4BP . Experimentally, we have presented direct evidence that reducing the PtdIns(4,5)P 2 content within the inner leaflet of the plasma membrane does indeed result in the plasma membrane dissociation of GAP1 IP4BP . Together, therefore, these data strongly suggest that, in an unstimulated cell, the plasma membrane association of GAP1 IP4BP results from its PH/Btk domain binding PtdIns(4,5)P 2 .
The conclusion that GAP1 IP4BP constitutes a receptor for Ins(1,3,4,5)P 4 rather than PtdIns(3,4,5)P 3 has a significant bearing on the molecular events that may Future experiments will need to address the downstream consequences of these interactions on GAP1 IP4BP -regulated Ras signalling. | 1,436.4 | 2000-09-08T00:00:00.000 | ["Biology", "Computer Science"] |
PREDICTION OF PEDAL CYCLISTS AND PEDESTRIAN FATALITIES FROM TOTAL MONTHLY ACCIDENTS AND REGISTERED PRIVATE CAR NUMBERS
: Accident prevention is a relatively complex issue, considering the effectiveness of injury prevention technologies as well as the need for a more detailed assessment of the complex interactions between road conditions, the vehicle, and the human factor. For many years, highway agencies and vehicle manufacturers have made great efforts to reduce the injuries resulting from vehicle crashes, and many researchers have used a broad range of methods to evaluate the impact of various factors on traffic accidents and injuries. Recent developments make it possible to determine the effects of these factors. According to the World Health Organization (WHO), cyclists and pedestrians accounted for 1.6% and 16.3% of traffic crash fatalities, respectively, in 2013. In Turkey, crash fatalities for pedestrians and cyclists were 20.6% and 3%, respectively, according to Turkish Statistical Institute data for 2013. The relationship between cycling and pedestrian rates and injury rates over time is also unknown. This paper aims to predict crash severity from the traffic injury data of Konya City in Turkey by implementing Artificial Neural Networks (ANN), Regression Trees (RT), and Multiple Linear Regression Modelling (MLRM).
Introduction
Crash intensity prediction models are very important for transportation planning studies, and they are frequently applied in transportation safety work. To ensure safety, traffic authorities need to understand the causes of a particular accident in order to propose appropriate solutions. Based on the statistics of the Turkish Statistical Institute (TUIK, 2013), more than 337,351 accidents were reported in Turkey in 2013, and in these accidents 226,677 people were killed or injured. In the literature, there are many studies that investigated the role of demographic, socioeconomic, land use, and network characteristics in accidents, and many crash prediction models have been proposed, such as Poisson models, negative binomial models, linear regression models, and empirical analysis techniques. However, conventional methods such as regression analysis may be used for traffic accident problems (Ozgan, 2008; Türe, 2008). One of the important roles of safety key performance indicators is to evaluate the performance of action plans; therefore, business-as-usual prediction plays a significant role in evaluating the system. For this purpose, investigations of monthly or annual data sets are found in the literature; however, there is a gap in the literature in predicting cyclist and pedestrian accidents. Besides, predicting driver and vehicle characteristics is important for a better understanding of the causes of accidents, and many studies in the literature deal with this topic. In recent years, neural networks have played a significant role in designing sophisticated models for traffic management, crash prediction, and travel demand. Application of the ANN is a common solution for many engineering problems. Despite the aforementioned efforts, no study in the literature has considered the application of neural networks to directly predicting crashes of pedestrians and cyclists. This paper aims to predict crash severity from the traffic injury data of Konya City in Turkey by implementing artificial neural networks (ANN), Regression Trees (RT), and multiple linear regression modelling (MLRM).
Methodology
Artificial Neural Networks (ANN) are used for modelling the statistical data. This method is a reliable way of representing the non-linear relations between the inputs and outputs of a system. An ANN can capture the complex relations inside the data and tries to generalize. A typical ANN model can be applied to various problem types such as pattern recognition, classification, prediction, and optimization on a statistical dataset. Another advantage of the ANN is that there is no need to select a regression model for the data; in this way, an ANN model generally fits the proposed data well. The ANN is basically a mathematical representation of the human brain, which consists of numerous neurons connected to each other. Similar to the human brain, an ANN is able to recognize the patterns in data and adapts well to the nonlinear characteristics of a system. An ANN consists of several data-processing nodes called neurons. The neurons or nodes are grouped in several layers called the input layer, the output layer, and one or several hidden layers. A three-layer artificial neural network is shown in Figure 1. In ANN modelling, attention must be paid to the large number of errors expected in a massive data set: the raw data used in the modelling process are generally not clean, and errors usually exist due to equipment-related problems and transcription errors made while manually writing the accident records.
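As an illustration only (the study itself used MATLAB and SPSS, not Python), a three-layer network of the kind shown in Figure 1 can be sketched as follows; the hidden-layer size, activation function, and synthetic monthly data are assumptions made for demonstration.

# Sketch of a three-layer ANN mapping monthly inputs to crash counts.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(200, 600, 60),            # monthly total accident counts
    rng.integers(250_000, 320_000, 60),    # registered passenger cars
]).astype(float)
y = 0.02 * X[:, 0] + 1e-5 * X[:, 1] + rng.normal(0, 1, 60)   # e.g. pedestrian crashes

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                                   max_iter=5000, random_state=0))
model.fit(X[:48], y[:48])                  # roughly 80% of the 60 monthly records
print("R^2 on the last 12 months:", round(model.score(X[48:], y[48:]), 3))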
Data analyzing
In the scope of this study, the traffic accident data were gathered from the local police department of Konya City in Turkey, and the monthly registered passenger car data set was obtained from the Turkish Statistical Institute (TUIK), as shown in Figures 2 and 3, respectively. The daily based data have interrupted properties, which is the main reason for using monthly based data. Cyclist and pedestrian accidents were used as the outputs of the models; these data were defined as dependent variables in two different cases. Total accident numbers and registered passenger car numbers were defined as input (independent) variables. Hence, modelling was carried out using ANN and MLR simultaneously.
Known as a black-box computational method, the ANN is one of the most popular analysis methods among modelling techniques for problems with a high degree of uncertainty; therefore, it was used in the analysis and predictions of the present research. Using the radial basis function (RBF) network in comparison with the generalized regression network (GRN), multiple linear regression (MLR), and regression trees (RT), four models were investigated in total. The results were then monitored by the coefficient of determination (R2) and the mean squared percentage error (MSPE). In order to train the models, 80 percent of the data were used in an arbitrary manner (48 records for each input and output), and the rest of the data were used for prediction (20%, or 12 records for each input and output). MATLAB© software was used for modelling the ANN techniques, and SPSS software was used for the MLR models. Models were developed to estimate the total number of cyclist and pedestrian crashes as well as crashes based on severity. The results of each case are presented in Figure 4(a-b). The outcomes of the modelling and prediction over the calibration and prediction procedures were handled carefully, and the results were extracted and compared from different aspects and with different ratios (see Table 1 and Figure 5). As can be seen in Figure 5, all models presented unacceptable results except for the GRN model for pedestrian accidents, with most predictions lying above the trend line. The GRN model ensured significantly lower errors when compared with the statistical and regression tree models for pedestrians. However, all models predicted lower values than the observations, especially for cyclist accidents. The main reason is the complex relationship between cyclist and vehicle accidents and also the absence of values of Vehicle Travel Kilometer (VKT), Bike Travel Kilometer (BTK), and Pedestrian Travel Kilometer (PKT) in the modelling.
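A minimal sketch of the evaluation step is given below, assuming the conventional definitions of R2 and MSPE (the paper does not state its exact MSPE formula) and the same 80/20 split of the 60 monthly records.

# Evaluation metrics used to compare the four models (assumed formulas).
import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mspe(y_true, y_pred):
    # mean squared percentage error, in percent
    return np.mean(((y_true - y_pred) / y_true) ** 2) * 100.0

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    y_obs = rng.integers(5, 40, 12).astype(float)      # 12 held-out months
    y_hat = y_obs * (1 + rng.normal(0, 0.15, 12))      # some model's predictions
    print("R^2 :", round(r_squared(y_obs, y_hat), 3))
    print("MSPE:", round(mspe(y_obs, y_hat), 2), "%")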
Conclusions
In
Fig. 1. A typical ANN structure with three layers (input, hidden, and output).
ANNs are widely applied to the modelling of nonlinear systems, and the implementation of ANNs in civil engineering problems in the field of transportation is quite common. Gilmore et al. (1993) applied neural networks in traffic management studies. In another study, Nakatsuji et al. (1994) implemented neural networks for signal timing optimization. Ledoux et al. (1995) and Yin et al. (2002) tried to model traffic flow using neural and fuzzy-neural networks. Smith and Demetsky (1997), Wilde (1997), and Smith et al. (2002) studied short-term traffic flow forecasting by implementing neural networks. Ledoux (1996) integrated neural networks into urban traffic systems. Srinivasan et al. (2007) studied artificial-intelligence-based congestion prediction. Delen et al. (2006) implemented an ANN application for predicting the contribution of environmental factors, driver characteristics, and road conditions to road accidents. In a different study, Hashemi et al. (1995) compared the capability of a neural network (NN) model with multiple discriminant analysis and logistic regression to predict vessel accidents. Akgüngör and Doğan (2006) proposed an ANN model for prediction of accidents (number of accidents, injuries, and deaths). In their study, Altun et al. (2005) compared ANN and nonlinear models according to various error expressions; the results showed that the ANN model performed better than the nonlinear regression models.
Table 1. Methods and results.
The ANN was applied during training, testing, and validation with 14 input nodes, 1 hidden layer, and 1 output layer for each case (cyclists and pedestrians). Dummy parameters were also defined to include monthly trends in the modelling. The best-fit model was chosen according to the R2 value and the mean squared percentage error (MSPE). The highest R2 value was obtained for the GRN variant of the ANN for pedestrian accidents, demonstrating that the ANN gave the best prediction results for pedestrian accidents. The comparison of the models' performance indicates that the GRN model ensured significantly lower errors than the statistical and regression tree models for pedestrians. However, all models predicted lower values than the observations, especially for cyclist accidents. The main reason is the complex relationship between cyclist and vehicle accidents and also the absence of values of Vehicle Travel Kilometer (VKT), Bike Travel Kilometer (BTK), and Pedestrian Travel Kilometer (PKT) in the modelling. | 2,086 | 2015-01-01T00:00:00.000 | ["Computer Science"] |
INVESTIGATION OF CRITICAL MATERIAL ATTRIBUTES OF NANOCELLULOSE IN TABLETS
Objective: The present work aims to compare the powder flow properties and post-compression characteristics of acid hydrolysed nanocellulose (AH-NC), a novel excipient, with those of microcrystalline cellulose (MCC PH200) to demonstrate the application and performance of AH-NC. Methods: An I-optimal design was applied separately for each excipient, i.e., MCC PH200 and AH-NC. Independent variables were MCC PH200 as diluent (X1), AH-NC as diluent (X1), starch as disintegrant (X2), and PVP K30 as dry binder (X3). The dependent variables in the design were Carr's index (CI) (R1), angle of repose (AR) (R2), hardness (R3), friability (R4), disintegration time (DT) (R5), and T90 (R6). Results: Fourier-transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC) studies showed the compatibility of the drug with the excipients. CI was found in the range of 8%–17.84% for MCC PH200 and 5.25%–11.94% for AH-NC. AR was found in the range of 31.48–37.66 for MCC PH200 and 29.62–35.30 for AH-NC. The values of friability, DT, and T90 were almost identical in both cases. Conclusion: Not only does AH-NC demonstrate better flow properties, but problems of weight variation and content uniformity are also not observed when compared with MCC PH200. Hence, AH-NC is more suitable as an excipient for
INTRODUCTION
The production of nanocellulose (NC) and its application in different areas have gained increasing attention recently due to its low density, high surface area to volume ratio, higher Young's modulus, higher tensile strength, thermal stability, and biodegradable nature [1]. Extraction of NC has been carried out by acid hydrolysis (AH), enzymatic hydrolysis, homogenization, microfluidization, grinding, cryocrushing, and ultrasonication [2,3] from algae, tunicates, bacteria, and natural plants [4]. The application of NC as a nanocomposite has been studied [5-8]. However, the application of NC in the pharmaceutical field has not been reported so far, which is the objective of this study: to evaluate the usability of NC as a novel tableting excipient produced by processing corn husk, an agricultural waste, through AH. For that purpose, AH-NC was compared to commercially available grades of microcrystalline cellulose (MCC): Avicel PH 200. MCC was chosen for comparison due to its similarity to AH-NC in chemical structure. Besides, Avicel PH 200 is the most common grade of MCC used in tableting. Glibenclamide (GLB), an oral hypoglycemic agent for the treatment of non-insulin-dependent diabetes mellitus, was selected as a model drug in the present study [9-11]. Design Expert® Version 12 was used for the data treatment of the I-optimal design to ensure optimum use of time and cost and to obtain high-quality powder flow properties using direct compression [12,13].
Differential scanning calorimeter (DSC)
Thermal properties of the drug and of the physical mixtures of the drug and excipients were investigated by DSC on a thermal analyzer (DSC-Thermal Analysis: Shimadzu Corporation). About 20 mg of each sample was heated from room temperature to 300°C at a rate of 10°C/min under nitrogen [14-17].
Hausner's ratio (HR)
HR is the ratio of the tapped density to the bulk density and was calculated according to the following equation [18-20]: HR = tapped density/bulk density.
Angle of repose (AR)
The AR is defined as the angle between the free surface of a pile of powder and the horizontal plane. In the present study, the AR was determined using the fixed cone method [18-20]. The sample was carefully poured through the funnel until the apex of the cone thus formed just touched the tip of the funnel. The mean radius (r) and height (h) of the heap were measured, and the AR was calculated from the following equation: tan(AR) = h/r.
Hardness
Hardness is termed the tablet crushing strength and is defined as the force required to break a tablet in a diametric compression test. It was recorded using a Monsanto hardness tester. The hardness of three tablets was measured, and the mean and standard deviation were calculated and reported in kg/cm2 [21-23].
Friability
The friability of the tablets was tested using a Roche friability tester. Ten tablets of initial weight Fs were placed in the friabilator and operated at 25 rpm for 4 min [21-23]. Afterward, the fines were removed by sieving through a 250-μm mesh, and the weight of the fraction above the 250-μm mesh (Fa) was used to calculate the friability of the tablets according to the following equation: % friability = (Fs − Fa)/Fs × 100.
Disintegration time (DT)
In vitro DT was determined with a USP disintegration apparatus at 50 rpm. Phosphate buffer (pH 6.8), 600 ml, was used as the disintegration medium, the temperature was maintained at 37°C ± 2°C, and one tablet was placed in each of the six basket tubes of the apparatus with one disc added to each tube. The time taken for complete disintegration of the tablet was noted [21].
Weight variation
Twenty tablets were weighed individually and the average weight was calculated. The weight of each individual tablet was then compared with the average. The tablets pass the test if no more than two tablets are outside the percentage limit [21].
In vitro dissolution
GLB release was determined using a USP type II (paddle) dissolution apparatus. The tablet was placed with sinkers in a dissolution medium consisting of 900 ml of 0.05 M, pH 7.5 phosphate buffer stirred at 50 rpm at 37°C ± 0.5°C. Five ml of sample was withdrawn at defined time intervals and replaced with the same volume of fresh dissolution medium. The samples were analyzed spectrophotometrically (UV-1700, Shimadzu Corp., Kyoto, Japan) at 231.5 nm (Cruz-Antonio 83). Dissolution tests (n = 3) were carried out for all the batches, and the percentage of drug released was calculated using a standard calibration curve [24].
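A small sketch of the pre- and post-compression calculations described above is given below (illustrative Python, not the authors' worksheet); the numerical inputs are placeholders, and the formulas are the conventional ones for Carr's index, Hausner's ratio, the fixed-cone angle of repose, and percent friability.

# Flow-property and friability calculations (standard definitions, placeholder data).
import math

def carrs_index(bulk, tapped):
    return (tapped - bulk) / tapped * 100.0

def hausner_ratio(bulk, tapped):
    return tapped / bulk

def angle_of_repose(height_cm, radius_cm):
    return math.degrees(math.atan(height_cm / radius_cm))

def friability_percent(fs_g, fa_g):
    return (fs_g - fa_g) / fs_g * 100.0

if __name__ == "__main__":
    print("CI =", round(carrs_index(0.42, 0.46), 2), "%")        # example densities, g/ml
    print("HR =", round(hausner_ratio(0.42, 0.46), 3))
    print("AR =", round(angle_of_repose(2.1, 3.5), 1), "deg")
    print("Friability =", round(friability_percent(6.50, 6.46), 2), "%")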
FTIR
The spectra of GLB and of the physical mixture of GLB with excipients are shown in Fig. 1. GLB showed carbonyl stretching at 1712.67/cm, symmetrical and asymmetrical sulfonyl stretching at 1161.07 and 1344.29/cm, respectively, and amide stretching at 3315.41 and 3365.55/cm. Comparison of the functional group peaks of GLB and the physical mixture revealed no major changes in the characteristic peaks, which confirms the absence of interaction between the drug and the excipients [10].
DSC
DSC thermograms of GLB and of the physical mixtures of GLB with the other excipients are compared in Fig. 2. A sharp endothermic peak at 175.58°C was observed in the thermogram of GLB. The characteristic peak of GLB was observed at 175.22°C in the physical mixture containing MCC PH200, whereas the peak of the mixture containing AH-NC was observed at 175.23°C. This confirmed that there were no major changes in the characteristic peaks, showing compatibility between the drug and the excipients used in the formulation [11].
Responses for GLB tablet
Experimental trials (16 batches) and their observed responses are shown in Tables 3 and 4. From the results, it is suggested that the quadratic model is the best fit model for all responses.
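As a hedged illustration of how such a quadratic (response-surface) model can be fitted to a 16-run design, the sketch below regresses a response on the three factors, their two-factor interactions, and their squares, and reports R2; the data are random placeholders, not the observed responses of Tables 3 and 4.

# Least-squares fit of a quadratic response-surface model in three factors.
import numpy as np

def quadratic_design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

def fit_quadratic(X, y):
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return beta, r2

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, size=(16, 3))      # coded levels of X1, X2, X3
    y = 12 - 3 * X[:, 1] + 2 * X[:, 2] + 1.5 * X[:, 0] ** 2 + rng.normal(0, 0.3, 16)
    beta, r2 = fit_quadratic(X, y)
    print("R^2 =", round(r2, 4))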
CI (R1)
The model F-values for MCC PH200 (479.04) and AH-NC (6.01) imply that the models are significant. The results of multiple regression analysis indicate fairly high values of the correlation coefficient, i.e., 0.9979 for MCC PH200 and 0.8662 for AH-NC. It is concluded that the value of CI can be predicted within the design space with fair accuracy. The interaction terms are statistically significant, as shown in Table 5.
The highest value of the coefficient in the mathematical model was seen with starch. Starch is the reason for the poor fluidity, due to the presence of moisture in it and its fine particles. Curved lines in the response surface plot indicate a non-linear relation between the independent and dependent variables, as shown in Fig. 3.
AR (R2)
The model F-values for MCC PH200 (34.99) and AH-NC (5.18) imply that the models are significant. The results of multiple regression analysis indicate fairly high values of the correlation coefficient, i.e., 0.972 for MCC PH200 and 0.8493 for AH-NC. It is concluded that the value of the AR can be predicted within the design space with fair accuracy. The interaction terms are statistically significant, as shown in Table 6.
Starch showed an insignificant effect on the AR due to its lowest coefficient value, i.e., 16.81 for MCC PH200 and 10.95 for AH-NC. As shown in Fig. 4, the red region in the left corner shows that the AR is on the higher side. High amounts of PVP K30 and MCC PH200 or AH-NC led to a decline in the AR.
Hardness of tablets (R3)
The model F-values for MCC PH200 (17.72) and AH-NC (7.72) imply that the models are significant. The results of multiple regression analysis indicate fairly high values of the correlation coefficient, i.e., 0.947 for MCC PH200 and 0.8911 for AH-NC. It is concluded that the value of the hardness can be predicted within the design space with fair accuracy. The interaction terms are statistically significant, as shown in Table 7.
The coefficient associated with starch is negative, i.e., −3.75 for MCC PH200 and −8.25 for AH-NC. As shown in Fig. 5, if the amount of starch is increased in the powder blend, the hardness of the tablets will reduce. PVP K30, MCC PH200, and AH-NC showed positive coefficients. The hardness of the tablets should increase if PVP K30 and/or MCC PH200/ AH-NC are increased in the powder blend.
Friability of tablets (R4)
The model F-values for MCC PH200 (53.88) and AH-NC (160.83) imply that the models are significant. The results of multiple regression analysis indicate fairly high values of the correlation coefficient, i.e., 0.981 for MCC PH200 and 0.9938 for AH-NC. It is concluded that the value of the friability can be predicted within the design space with fair accuracy. The interaction terms are statistically significant, as shown in Table 8.
As shown in Fig. 6, plots show a steep change in the values of friability of the GLB tablets. It may be concluded from the contour plot of friability of GLB tablets that low concentration of PVP K30 is not favorable to keep the friability below 1%. PVP K30 played a key role in managing the mechanical strength of the tablets.
DT of GLB tablets (R5)
The model F-values for MCC PH200 (221.25) and AH-NC (6.67) imply that the models are significant. The results of multiple regression analysis indicate fairly high values of the correlation coefficient, i.e., 0.9866 for MCC PH200 and 0.8771 for AH-NC. It is concluded that the value of the DT can be predicted within the design space with fair accuracy. The interaction terms are statistically significant, as shown in Table 9.
The coefficient associated with starch is negative, i.e., −742.6 for MCC PH200 and −733.13 for AH-NC. As shown Fig. 7, if the amount of starch is increased in the powder blend, the DT of the tablets will reduce.
T90 of GLB tablets (R6)
The model F-values for MCC PH200 (2.05) and AH-NC (4.81) imply that the models are significant. The results of multiple regression analysis indicate fairly high values of the correlation coefficient, i.e., 0.9325 for MCC PH200 and 0.8405 for AH-NC. It is concluded that the value of T90 can be predicted within the design space with fair accuracy. The interaction terms are statistically significant, as shown in Table 10.
Overlay plot for GLB tablet
To check the predictive ability of the mathematical models, two points were randomly chosen, as shown in Fig. 9 for MCC PH200 and AH-NC.
DISCUSSION
Good flow is one of the primary requirements for a high-speed tablet press, as shown in Fig. 10. The CI for AH-NC was 5.24 (low), 7.7 (mean), and 11.9 (high), i.e., good to excellent flow, while for MCC PH200 it was 8 (low), 11.17 (mean), and 17.84 (high), i.e., fair to excellent flow. The AR for AH-NC was 29.62 (low), 32.55 (mean), and 35.3 (high), i.e., good to excellent flow, while for MCC PH200 it was 31.48 (low), 34.48 (mean), and 37.66 (high), i.e., good to fair flow [18]. The hardness of tablets depends on interparticle bonding and is important for maintaining the tablet shape during manufacture and transit; for an average tablet, the required hardness is 4 kg/cm2 [18]. In the case of AH-NC, the hardness was 3.13 (low), 4.77 (mean), and 6.4 (high), while for MCC PH200 it was 3.3 (low), 4.41 (mean), and 4.9 (high). The major reason for the better consolidation of particles in AH-NC could be the presence of small particles, between which stronger particle-particle bonds might have formed. The values of friability, DT, and T90 were almost identical in both cases. Based on these results, it can further be concluded that problems of weight variation and content uniformity variation were not observed in the case of AH-NC. Hence, between AH-NC and MCC PH200, AH-NC is the better choice for large-scale tablet production.
CONCLUSION
Good-quality tablets depend on the attributes of the diluents, as these are used in the formulation to increase the bulk of the formulation and to bind the other inactive ingredients with the active pharmaceutical ingredient (API). GLB, being a low-dose API, requires a large fraction of diluent. Therefore, the attributes of diluents such as MCC PH200 and AH-NC become extremely important, as they influence the quality of the finished tablets. The present study, focusing on powder flow properties, confirms the use of AH-NC prepared from agricultural waste as an effective directly compressible vehicle for formulation design and product development when compared with MCC PH200. Furthermore, it is anticipated that this work will kindle more research and confidence in the utilization of natural excipients extracted from agricultural waste in solid oral dosage forms. Finally, based on the results of multiple regression analysis and ANOVA, it can be concluded that the three main effects, i.e., X1: starch, X2: PVP K30, and X3: MCC PH200/AH-NC, are critical material attributes. | 3,180.8 | 2019-05-29T00:00:00.000 | ["Materials Science"] |
Status and prospects of beef and veal production in Ukraine in the context of international economic integration
Beef production is driven by the need to ensure the country's food security, meet the processing industry's demand for raw materials, and increase state budget revenues from exports. The purpose of this study was to highlight the status and trends of production in the world and Ukraine, to identify issues, and to find areas of development considering international economic integration. The methods employed were analysis, synthesis, generalisation, specification, mathematical, and graphical.
INTRODUCTION
The significance of beef production is driven by the need to provide food for the population, while it is also important for raw materials for processing companies in various industries, state budget revenues, job creation, exports, etc. In Ukraine, this development is facilitated by natural and climatic conditions, production of considerable volumes of crops, including fodder crops, the availability of relevant government programmes, the benefits of regional trade agreements, etc. Considering the importance of meeting the beef consumption needs of the majority of the population of Ukraine and the world, and the fact that cattle breeding and processing produce meat products, as well as such vital products as milk, cheese, leather, gelatin, etc., there is a need to expand research into the economic aspects of its production.
Investigating the state of livestock production in Ukraine, O. Izhboldina et al. (2021) note negative trends in the industry, addressing the decrease in livestock numbers and a considerable decrease in livestock productivity and the quality of products. V. Lavruk et al. (2021) argue that the development and gradual revival of livestock production will mainly depend on the efficiency of the economic mechanism, as well as the interdependence of its principal components, namely, the establishment of price, credit and tax policies, state regulation, investment, and innovation processes. N. Lialina (2018) notes that one of the key trends at the current stage of development of livestock production in Ukraine is a decrease in the concentration of production for most enterprises, which leads to a decrease in the role of the industry in their economy. This causes a noticeable decline in the efficiency of the industry.
According to O. Kravchenko (2019; 2020), the economic interests of an agricultural producer as a business entity are not satisfied by its share in the retail price structure, which for beef and milk amounted to 30.5% and 32.8%, respectively (January 2019), which is confirmed by the decrease in production. Furthermore, the issue of food security should be considered, and research on food policy in Ukraine and the world is becoming increasingly relevant (Kuts & Bokiy, 2020). Studies on foreign trade in meat are essential as well (Kovalenko, 2021), as it also considerably affects production.
The authors of this study believe in the importance of the key areas proposed by K. Andriushchenko et al. (2021), namely, modernisation of the technical and technological base and production processes according to export priorities, research, training, and implementation of the best world practices in product processing and farming, improvement of innovative developments at food enterprises, development of modern strategies for innovative development of agricultural enterprises, etc.
The issue of efficient beef production is vital not only for Ukraine, as evidenced by research by foreign scientists. B. Abebe et al. (2022) note the significance of cattle breeding in Ethiopia as the main source of meat-producing animals for internal and external markets. Compared to other African countries, Ethiopia has a massive number of cattle (about 65 million heads), but the quality and quantity of beef consumed per capita is rather low (8.4 kg per year). It is expected that the increase in production will be driven by population growth, high demand in the internal and external markets, etc. While investigating the issue of beef production, P. Greenwood (2021) notes that beef is a high-quality source of protein, and demand for it in the global market is growing. It is noted that digital and other technologies that allow for the rapid collection and use of data on the environment and cattle productivity should increase the productivity, efficiency, and welfare of animals.
When investigating beef production, researchers also paid attention to the specifics of its quality and consumption. D. Magalhaes et al. (2023) studied changes in beef consumption and consumer behaviour trends in Brazil, Spain, and Turkey. The study analysed the impact of economic factors, aspects of trust, health concerns, and lifestyle influences on beef consumption, as well as purchase decision factors. Furthermore, C. Whitton et al. (2021) investigated meat consumption in a number of countries, considering gross domestic product. One of the crucial factors that changed consumer behaviour and led to a decline in consumption, mainly among Brazilian and Turkish consumers, was the availability of products. It was shown that lifestyle factors, such as eating out, availability of time for cooking, etc., change consumption patterns and should be carefully considered by the industry, factoring in the cultural differences and consumer needs.
V.-B. Hoa et al. (2023) paid particular attention to the product quality factor. They conducted research on the quality, taste, and flavour of meat from Korean cattle of various breeds. It was found that under identical feeding conditions, the breed has a considerable impact on the nutritional quality of beef. The issue of increasing beef production is being raised more often, which requires finding the reasons that hinder it and the factors that will facilitate its development. This determines the relevance of the present study. The purpose of this study was to highlight the status and investigate the trends in beef production in the world and Ukraine, to identify the principal problems, and to determine the areas of its effective development. It is advisable to determine the prospects for beef production in Ukraine based on the current situation, global trends, and participation in integration processes.
MATERIALS AND METHODS
The study employed the methods of analysis and synthesis to assess the development of beef production in Ukraine and the world, the abstract and logical method to draw conclusions, generalization and concretization to develop proposals, and mathematical and graphical methods to investigate and display trends and the state of production. The information base of the study included the scientific papers of Ukrainian and foreign scientists, statistical data of the international organisation FAO (n.d.), and the State Statistics Service of Ukraine (n.d.). The study focused on global production volumes, i.e., in total and by country, imports, consumption, production by category of farms, considering zones and regions of Ukraine, production profitability, average consumer prices, population, cattle supply to processing enterprises, etc.
To assess the place and prospects of Ukraine in world production and the world market, FAO data for the period from 1961 to 2021 were used for all countries, and for Ukraine from 1992. Considering the significance and impact of the hostilities in Ukraine, the pre- and post-war periods were used. This study used the main statistical data on beef production in Ukraine and its regions, as well as global and country-specific data for comparison.
To determine beef self-sufficiency, the study employed the approach to food self-sufficiency proposed by B. Paskhaver (2018). The study analysed the trends and status of beef production in the world and Ukraine, considering the largest producers, natural and climatic conditions, and population. Calculations were made on the supply of live animals to processing enterprises, considering military operations. The study examined the production of beef and veal depending on the structure of producers and specific regional features, factoring in the profitability of production, the dynamics of average consumer prices, and the population, including migration processes. The prospects of post-war recovery are identified as a result of the analysis of production in recent years. A SWOT analysis of the prospects for beef production in Ukraine was carried out, the reasons for the decline in cattle numbers were analysed and classified, and recommendations for the development of production were developed.
RESULTS AND DISCUSSION
Agri-food production is becoming increasingly important, specifically due to population growth and climate change. The second of the UN Sustainable Development Goals is to end hunger, and it calls for less food to be thrown into landfills and more support for farmers, as a third of the world's food is wasted, while 821 million people are malnourished (Sustainable Development Goals, n.d.). Ukraine is one of the world's largest producers of wheat, corn, soybeans, sunflower seeds, and other crops, i.e., it produces mainly crop products. Global production of all types of meat reached 361 million tonnes (slaughter weight equivalent) in 2022, up 1.4%, although slower than the 4.5% growth in 2021 compared to 2020 (Table 1). The expansion was mainly driven by the rapid growth in meat production in China and strong growth in Brazil, Australia, and Vietnam. At the same time, the relative static nature of global production was partially offset by a drop in production in the EU, the US, Canada, Iran, and Argentina. Total meat production in China increased to 96 million tonnes, up 4.4% year-on-year. Global trade in meat and meat products reached 42 million tonnes (in slaughter weight equivalent).
At the same time, global beef production (fresh or frozen) increased by 2.6 times from 1961 to 2022, from 27.7 to 73.2 million tonnes, and the growth was quite steady. Therewith, the share of beef in global meat production in 2022 was 20.1%, although in some years it was much higher, e.g., in 1961 it was 38.8% (in 2000 it was 24.3%). Notably, poultry meat was produced the most, with a share of 34%. Major beef producers include countries in North and South America, Asia, and Australia. The top 10 producers accounted for 63.1% of global production in 2022, and the share of each country did not fall below 2%, unlike other countries. More than a third of beef is produced in the US and Brazil, with shares of 17.7% and 13.2%, respectively. It is advisable to consider the change in the share of production of the main producing countries in world volumes (Table 2). The table shows that the US share in world production was the highest in 2022, as well as in 1961, 2000, and 2010, although it has been declining. In 1961, the USSR held the second position (10.3%), but later, among its republics, only Russia entered the top ten largest producers, while Ukraine (from 2020) and Uzbekistan (2010 and 2021) entered the top twenty. Thus, the largest producers have largely stayed so for decades, including the United States, Russia, Brazil, and Argentina, although their positions have shifted somewhat, for instance, with Brazil's growing and Argentina's declining. The shares of Canada, the UK, Germany, and Italy decreased, but they stayed among the top 20 producers. China, India, and Mexico have seen noticeable increases in beef production, and although the latter two countries were among the 15 largest producers in 1961, the opposite is true of China, which has been among the top ten in the following years. This suggests that countries can considerably increase their exports, although the top positions in the global market have been held by the main supplier countries for decades. This, admittedly, also applies to Ukraine, but special attention should be paid to product quality.
Among the largest beef producers are countries with which Ukraine has regional trade agreements, including Turkey, France, Germany, Italy, the UK, and Canada. Considering the benefits of international economic integration, including liberalisation of foreign trade, access to innovative technologies, and foreign direct investment, it is expected that production and trade in these products will increase. Among the EU countries, France and Germany produce the most beef, but their shares in global volumes in 2022 were 2.2% and 1.7%, respectively, and 9.3% for the EU-27 as a whole. However, it is also decreasing for them; specifically, in 1961 and 2000, for the countries that became members of the grouping, it was 19.4% and 13.6%, respectively. Italy, Spain, and Poland are also among the largest producers in the association, but each of them has a share of less than 1%.
In 2022, the EU produced 6.7 million tonnes of beef, down 1.1% from 2021. At the same time, with a decline in production in 2022, the EU increased exports of fresh and frozen beef by 1.4% compared to 2021, to 463 thous. t. Around 50% of this beef comes from the UK, with volumes largely unchanged from 2021. Deliveries grew to Asian markets, including Hong Kong and Japan, as well as to North American markets such as Canada and the United States. This and growth in other markets were enough to outweigh losses in other export destinations, especially Algeria.
On the other hand, imports dropped by 21% year-on-year to 236.4 thous. t. Volumes declined from all key suppliers, but primarily from the UK and Brazil. Figure 1 shows the main beef producing countries, which accounted for 85% of total EU beef production. This is likely to be caused by the pandemic or a disruption in demand for food services in the EU. Beef production in the EU in the first half of 2023 was 4.5% lower than in the same period a year ago, with cattle slaughter in key producing countries limited. The largest drop in production in the EU was observed in Italy (-23%), followed by key producers France, Spain, Ireland, and Poland. The only major producers to see growth were Germany and the Netherlands. However, in Germany, this growth is contrary to the overall long-term trend of declining production. Overall, the total slaughter of adult cattle (bulls, steers, heifers, and cows) was 8.1 million heads, down 3.6% from the same period a year ago. Therewith, the total slaughter of cows was just under 3 million head, down 3.7% year-on-year.
In several countries, including France, Poland, and Spain, there has been a marked decline in cow slaughter rates. Cow production in France is currently at its lowest level in at least five years, which is likely to contribute to the continued high French prices on the EU market. Livestock slaughter in Poland is also historically low. Conversely, cow slaughter in Spain has increased in recent years and, although lower than last year's record level, stays historically high in 2023. Cow slaughter in Germany stayed relatively stable year-on-year after several years of decline.
Meanwhile, cow slaughter in the Netherlands has increased considerably compared to last year (+13%) and will stay at the same level as in 2021. From a market balance standpoint, considering the reduction in production and trade, it is shown that stocks available for consumption are lower across the bloc. Price inflation is affecting beef consumption in the EU, as it does in the UK, with consumption and retail data from key EU countries pointing to a fall in demand. Domestic consumption of beef in France fell by 2.5% in the first half of 2023, while average prices increased by 9.1%. Purchases of beef by households in Germany from January to July (inclusive) decreased by 6.2% year-on-year, while the average price increased by 6.9%, and the level of meat consumption is declining. Purchases of Italian beef fell by 4% year-on-year in the first half of 2023, to a level that is also lower than in the previous two years. Beef consumption in Spain is also declining.
Cattle prices in the EU have generally been on the decline since March. However, in recent weeks, cattle prices in a range of countries have shown an upward trend. Prices for young bulls and cows in Ireland, France, Germany, and Poland increased, while the price of young bulls also rose in Spain. In other countries, cattle prices continue to be lower, e.g., in the Netherlands and Italy. Ukraine's share of global production has declined significantly, from a high of 3.1% in 1992 to a low of 0.4% in 2022, and has not even reached 1% since 2005. Ukraine ranked 39th among beef producers in the world in 2022, behind even African countries such as Nigeria, Sudan, Ethiopia, Chad, and Zimbabwe, which are not the largest producers of grain and, accordingly, feed. Notably, Ukraine's beef production decreased by 5.3 times between 1992 and 2022, from 1.7 to 0.3 million tonnes, while global production increased by 1.4 times, from 53.3 to 72.4 million tonnes (Fig. 1).
Beef production in Ukraine
The figure suggests that the trends of beef production in the world and Ukraine are markedly different, but the presence of demand in the Ukrainian and global markets and consumption by the majority of the population should stimulate an increase in beef production in Ukraine. In 2022, the Ukrainian meat market increased beef imports to 2.8 (Table 3).
It is essential to know how much beef the main producing countries produce per capita (Table 4).
Source: calculated based on FAO (2022)
Notably, the highest beef production per capita among the top 20 producers is observed in New Zealand, which ranked only 19th in terms of total volumes, as well as in Australia, Argentina, even Zimbabwe, Brazil, and the United States. However, such prominent producers as China and India, which occupy the third and fourth positions, produce the least per capita of the countries represented, only 4.9 kg and 3.0 kg, respectively, with Pakistan only slightly higher at 5.3 kg. This is the basis for researching demand in this market and increasing exports. For Ukraine, the situation is still better, with 7.1 kg of beef produced per capita.
In the context of declining global food security, a special place is occupied by Ukraine, where agriculture is currently the leading sector of the economy. Active Russian hostilities have had a direct impact on global food and agricultural markets. Despite a steady decline in the industry's performance, Ukraine stays an active player in the global beef and veal production market (Table 5). The calculations suggest that the downward trend continues: the total supply of live animals to processing enterprises in the post-war year was 7.9% lower, while the supply to enterprises increased by 2.2% and to household farms decreased by 3.7%. The catastrophic drop in the purchase of animals from the population is linked to the active military operations, which suggests that people are abandoning animals to cover their own beef needs. Positive dynamics is observed only in calves under 1 year of age, by 6.4% (by 1.8% in enterprises); in bulls over 2 years of age, by 12.5% in enterprises; and in heifers over 2 years of age in the population, by 23.4%. An interesting fact is the positive dynamics of changes in the average live weight of one head of cattle purchased by processing enterprises in the pre-war and post-war years (Table 6). Studies have shown that the average live weight of one head of cattle purchased by processing companies increased by 0.2% in the post-war year. This is especially true for calves under 1 year of age, by 47.1%. An increase is also observed in heifers aged 1-2 years and heifers over 2 years old (by 3.9% and 2.2%, respectively). In enterprises, there was a 1.3% increase in cows, a 2.3% increase in calves under 1 year old, a 51.1% increase in calves under 1 year old, and a 2.0% and 1.3% increase in heifers over 2 years old, respectively. In household farms, the study observed an increase of 5.5% in bulls aged 1-2 years, 8.7% in bulls over 2 years, and 40.5% in heifers aged over 2 years. The positive dynamics in beef and veal production amid the hostilities gives hope for an accelerated post-war recovery in the livestock sector and the livestock industry as a whole. It is also advisable to determine the level of self-sufficiency of Ukraine in beef and veal (Table 7). The above data show that in 2016-2023, production of beef and veal exceeded consumption at all times. There was a 2.5% decrease in initial stocks and a 16.0% decrease in beef production in 2021 compared to 2019. The war years only intensified the downward trend (Table 8).
Source: calculated according to data from the State Statistics Service of Ukraine (n.d.)
Imports of these products in the pre-war period (from 2019 to 2021) increased by 21.4%, while exports dropped by half. In 2022, the decline in these indicators only intensified. Beef meat consumption per capita also decreased by 0.8 kg (13.11%) in 2021 compared to 2016, and in 2023 the figure became critical, at 6.5 kg. The above data show that in 2016-2023, production of beef and veal consistently exceeded consumption. Notably, beef and veal production in Ukraine in 2021 decreased by 2.3 times compared to 1961, and by 6.4 times since 1990, with a 7.2 times and 23.2 times decrease in enterprises, respectively, but a 1.5 times and 1.3 times increase in household farms, which, however, then decreased in comparison to 2000, 2010, 2015, and 2019-2021.
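A minimal sketch of the self-sufficiency check discussed above is given below, assuming the conventional definition of the self-sufficiency level as production divided by domestic use (production + imports − exports ± change in stocks); the figures are placeholders rather than the values of Tables 7 and 8.

# Self-sufficiency level in percent (assumed conventional definition, placeholder data).
def self_sufficiency(production, imports, exports, stock_change=0.0):
    domestic_use = production + imports - exports - stock_change
    return production / domestic_use * 100.0

if __name__ == "__main__":
    # year: (production, imports, exports), e.g. in thousand tonnes
    for year, (prod, imp, exp) in {2019: (350.0, 10.0, 45.0),
                                   2021: (295.0, 12.0, 22.0)}.items():
        print(year, round(self_sufficiency(prod, imp, exp), 1), "%")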
Active hostilities have also substantially reduced this figure, almost by half.
Although livestock keeping in household farms appears to be less costly, and its decline was smaller than in enterprises, it is worth considering the processes of migration, urbanisation, and the keeping of small numbers of livestock by household farms. However, under difficult conditions, livestock preservation by household farms is more likely than by large-scale enterprises, and therefore the activities of all categories of farms are significant. It is interesting to observe the production of beef and veal in the pre-war and post-war periods, in the context of further post-war recovery and Ukraine's place in the world market (Table 9). The overall growth rate of meat production (in total) in the pre-war period was 1.2%, which is clearly insufficient for both post-war recovery and the livestock sector's recovery from the protracted crisis. Today, overcoming current challenges and risks is becoming an arduous task for Ukraine. Beef production is largely dependent on the support of European and global partners, and it will also determine our country's place in the global market. Notably, the structure of beef and veal producers has changed considerably (Table 10), with the share of enterprises ranging from 91.1% (1990) to 30.1% (in 2022), while it increased for farm enterprises, from a minimum of 0.8% (1990) to a maximum of 2.3% (2022), and for household farms (69.9% in 2022).
Source: calculated according to data from the State Statistics Service of Ukraine (n.d.)
However, the authors of this study do not consider this to be a basis for positive conclusions, as output has been declining, with a direct dependence on the share of enterprises in production (Fig. 2).
Figure 2. Beef and veal production and share of enterprises in total production, 1960-2022. Source: calculated according to data from the State Statistics Service of Ukraine (n.d.)
Beef production by enterprises is more efficient due to greater opportunities to apply modern technologies and technical equipment, to attract highly qualified personnel, etc. At the regional level, beef and veal production in Ukraine is mainly concentrated in the Forest-Steppe zone (Table 11), accounting for more than 43% of the total in 2019-2022. In 2010, 2015, and 2022, the lowest production was in the Steppe, although the value did not fall below 22.1%, and in 1990 and 2000 it was in Polissia. The largest producers are the Lviv and Ivano-Frankivsk regions, especially in 2010, 2015, and 2022, as well as Kyiv, Vinnytsia, Kharkiv, etc. Therewith, production volumes in these areas are declining. This decline in beef and veal production is conditioned by a decrease in the number of cattle: as of 1 January 2022, compared to 1961 and 1991, it fell by 6.7 times and 9.3 times, respectively. The largest numbers were in the Khmelnytskyi, Odesa, and Zakarpattia regions, accounting for 24% of the total in Ukraine, as well as in the Lviv, Zhytomyr, Vinnytsia, and Ivano-Frankivsk regions. That is, almost half of the country's livestock (49.7%, 815.5 thous. heads) is kept in 7 regions. Moreover, the share of enterprises fell to 38.0% (2022), although in 1991 it was 85.6%.
As for the level of profitability of cattle meat production, with the exception of 1990-1994 and 2017, it was negative, and in 2020 it was -24.2% (Level of profitability of agricultural production in enterprises), which was also primarily caused by a decrease in production volumes. It is also worth noting the increase in average consumer prices (Consumer Price Indices for 2021. Statistical Collection) for beef in Ukraine: while in January 2021 the price was 149.51 UAH/kg, in December it was 182.84 UAH/kg. In December, the highest prices were in Kyiv city, Kyiv, and Kirovohrad regions, at 206.54 UAH/kg, 200.48 UAH/kg, and 193.80 UAH/kg, respectively, and the lowest in Chernihiv, Chernivtsi, and Poltava regions, at 169.38 UAH/kg, 168.98 UAH/kg, and 168.08 UAH/kg, respectively, which does not contribute to an increase in demand. When investigating self-sufficiency, it is also worth considering the change in the population: while in 2000 it was 49 million people, in 2020 it was about 42 million people, and this trend continues. Thus, in 2020, the population of Ukraine decreased by 314,062 people, with a decrease in all regions except Kyiv, where the growth was 7,486 people. Each region experienced a natural population decline. At the same time, in 2020, despite the overall population decline, there was migration growth, specifically in the Kyiv, Ivano-Frankivsk, Kharkiv, Odesa, Lviv, Poltava, and Khmelnytskyi regions. The ongoing hostilities in Ukraine have also led to migration processes, and thus, when reviving livestock production, it is essential to focus mainly on regions with a prospect of growing demand for agri-food products, including livestock. As a result of this study, a SWOT analysis (Table 12) was also carried out on the prospects for further beef production in Ukraine, which helped to identify its strengths and weaknesses.
Table 12. SWOT analysis of the prospects for further beef production in Ukraine
Strengths: availability of suitable natural conditions for raising livestock; favourable natural and climatic conditions for growing crops, including fodder; historically established system of cattle breeding in all regions of Ukraine; availability of highly qualified personnel; availability of state development programmes; consumption of products by the majority of the Ukrainian population.
Weaknesses: insufficient and unbalanced fattening of cows; insufficient technological and technical equipment; lack of breeding work with the herd; simple reproduction of the industry due to low profitability of production; insufficient competitiveness of enterprises; deformed production structure (dominated by individual farms); curtailment of production; insignificant share in meat consumption by the population.
Opportunities: introduction of a system for monitoring prices for products; favourable pricing policy for producers; coordination of standard production costs, price levels and incomes; attraction of foreign investment; preferential lending to producers; leasing for equipment supply; introduction of new technologies in the production of feed and livestock products; intensification of integration processes between meat producers and processors; creation of cooperative associations; increase in exports to the world market; improvement of customs and tariff protection of Ukrainian producers.
Threats: high cost of fodder and other material and technical resources; lack of support for the promotion of meat products on the foreign market; weak commercial and integration processes and reduced competitiveness; growth of low-quality imports; bankruptcy of enterprises; military operations.
Source: compiled by the authors of this study based on personal findings
Thus, it is necessary to promote an increase in production, considering the identified strengths, weaknesses, opportunities, and threats. It is advisable for the state to take appropriate action through direct and indirect influence on production, including direct payments, the tax system, etc. Considering the significant decline in beef and veal production due to a decrease in the number of cattle, the reasons for this are identified and are proposed to be distinguished as general reasons and those related to enterprises and household farms (Fig. 3); among them are the growth rate of the cost of feed and animal husbandry outstripping the growth rate of the cost of livestock products (e.g., milk), personnel problems, and the high cost of modern technologies.
Some of these issues can be addressed at the micro level, namely through the involvement of highly skilled workers, while others require government intervention, primarily through the adoption of relevant regulations. Furthermore, the impact factor should also be considered, as resolving the common causes affects both businesses and household farms. For instance, lower feed costs should help to increase beef production in all categories of farms, while harmonisation of quality standards with European ones should increase beef exports, which will lead to a rise in production.
Considering the significance of exports for increasing production volumes, it is advisable to attach particular importance to raising quality standards, harmonising them with global and European standards, establishing joint ventures with business entities from countries that are significant exporters on the world market, and having trade agreements and free trade zones in place between countries. Thus, Ukraine has trade agreements with dozens of countries, both individual countries and integration groups, including the EU, EFTA, Canada, the UK, Turkey, Israel, etc. While integration implies free trade and the removal or reduction of trade barriers, it also requires considering non-tariff barriers and country-specific features. Thus, while for EU countries these are primarily European quality standards, for the latter two it is the consumption of halal and kosher products. In an effort to expand its position in the markets of Eastern countries, and given the growing consumption of kosher products in developed countries, it is advisable to stimulate the production of halal and kosher beef.
Considering the state and trends of beef and veal production in Ukraine and the world, the study outlined certain areas for promoting its post-war development, as follows: attracting foreign investment on favourable terms; establishing joint ventures, including with the support of research institutions and advisory services for training, providing advice on efficient beef production, livestock maintenance, feeding optimisation, etc.; and state support, which includes direct payments (the amount of which will depend on the number of livestock, land availability, location, climate zone, and region) and indirect measures (through the mechanism of preferential loans, the insurance system, and a procedure for providing high-yield cows for temporary use, whereby the calves stay in the agricultural enterprise or household farm and the cows are transferred to other farms).
Furthermore, it is proposed to introduce a procedure for granting benefits or tax holidays for enterprises/businesses that raise new types of highly productive livestock or produce organic products, introduce modern technologies, or provide the processing industry with quality raw materials. Therewith, it is necessary to stimulate the use of modern technologies and promote production efficiency, specifically feeding, etc. Moreover, it is worth promoting the introduction of innovative technologies and the research results of Ukrainian scientific institutions, which may be cheaper and more suitable for local conditions. In addition, when increasing production, it is necessary to consider the specific features of the regions, namely the zones and regions where production has historically been most developed, where the most favourable natural and climatic conditions and feed resources are, as well as where production is noticeably declining, etc.
Consideration of the findings of this study and the recommendations provided will help to increase production volumes both in individual enterprises, by eliminating the identified problems, and in the industry as a whole, by reducing existing threats and exploiting opportunities. The analysis suggests a rather negative trend of declining beef production in Ukraine, particularly in contrast to the global dynamics. To improve the efficiency of the development and functioning of the beef market in Ukraine, especially in the context of its convergence with the EU markets, V. Lyakhovets (2018) suggests the introduction of international experience. However, the authors of the present study believe that direct borrowing of the classical European model of the economic mechanism for regulating the beef market, without factoring in the local organisational and economic conditions of the market environment, is impossible. This is caused not only by the prohibitive cost of implementing its main principles against the background of the current crisis in the Ukrainian economy, but also by the need to find Ukraine's own, more efficient way of developing the beef market. The authors of this study believe that the amount of state support for the beef market depends on the real capabilities of the budget and on the tax, credit, price, investment, export-import, customs, and monetary policies, which affect the ability of enterprises to pay for production, material, and technical resources.
A. Sakhno and I. Salkova (2021) agree with this, noting that for meat sold on the internal market at prices higher than world prices, competitiveness should in future be increased not by raising state support but by reducing costs and improving the quality of meat products. The authors of the present study came to the same conclusion. As noted above, and considering the findings of this study on the state, trends, features, and reasons for the decline in beef production, the issues of product competitiveness, increased profitability, consumer preferences, identification of regional reserves for production development, and improvement of product quality remain open. Yu. Sinyavina and T. Butenko (2021) note that quality improvement is an additional reserve for the economic efficiency of the industry. In Ukraine, this is one of the main prerequisites for both efficiency and growth in demand for products in the internal and external markets. Furthermore, the authors of the present study believe it is necessary to focus on quality as well, since it primarily affects public health and long-term demand.
The study proved the need for capital investment in the development of agricultural enterprises as a basis for effective meat production. The authors of this study fully share the opinion of N.G. Kopitets and V.M. Voloshyn (2020) that meat production is provided by a range of agricultural and industrial sectors of the country, which requires a clear definition of priorities for the development of the meat market and mechanisms of state support for the livestock sector.
Therefore, considering the above, when developing the main aspects of the strategy for sustainable development of the country's agricultural sector in the future, the Ukrainian meat market should be given priority in the system of state regulation. M.O. Karpyak (2018) shares the same opinion, emphasising that today livestock production underlies the sustainable development of crop production. In other words, the implementation of a state policy to support the development of livestock is a prerequisite for the sustainable development of the agricultural sector and has a considerable impact on improving food security and preserving the Ukrainian countryside. As noted above, consideration of the findings of this study on the state, trends, features, and reasons for the decline in beef production may help to increase it, but the issues of product competitiveness, increased profitability, consumer preferences, identification of regional reserves for production development, etc. remain open. These are promising areas for further research.
CONCLUSIONS
Thus, the study showed that global beef production is growing steadily, with its share not falling below 20.3% of total meat production in 1961-2022. Its main producers are primarily the countries of North and South America, Asia, and Australia, and among European countries, Germany and France. Beef production in the EU declined, although there was an increase in exports and a decrease in imports, primarily from Brazil and the UK. Ukraine exports beef mainly to the East. The largest consumers include Argentina, the US, and Brazil. Ukraine's share of global production is declining. The world's largest beef producers do not always produce the most beef per capita, as observed in China and India.
In terms of food self-sufficiency in Ukraine, consumption of beef and veal in 2016-2022 was consistently lower than production. It was found that the share of agricultural enterprises in the structure of beef production has noticeably decreased, and there is a direct correlation between changes in the share of enterprises and production volumes. Notably, enterprises have greater opportunities to apply modern technologies, attract highly qualified personnel, enter foreign markets, etc.
Regionally, most beef and veal are produced in the Forest-Steppe zone, with over 43% since 2019, and the other two zones alternating behind it. The regions that stand out are Lviv, Ivano-Frankivsk, Kyiv, Vinnytsia, and Kharkiv. The decline in production was primarily caused by a significant reduction in the number of livestock (by 6.7 times compared to 1961) and the negative profitability of cattle meat production. The increase in average consumer prices is not helping to boost demand for beef. It is proposed to divide the primary reasons for the decrease in the number of cattle into general ones and those specific to enterprises and household farms, which should help to accelerate their resolution, including through the influence of the state.
Therefore, considering the growth of beef production in the world, its consumption by the majority of the population, favourable natural and climatic conditions, and one of the world's largest grain production volumes in Ukraine, as well as the production of other products and raw materials for processing enterprises, it is advisable to promote cattle breeding and beef production, specifically through the adoption of the necessary regulations, improving product quality, introducing modern technologies, preferential insurance, lending, and a range of other measures by the state and individual enterprises.
The study found a decrease in the supply of live animals to processing enterprises in the post-war period, with a significant decrease in the purchase of animals from the population due to military operations. However, positive developments in production give grounds for the resumption of cattle breeding in the post-war period. Still, there is a decrease in meat stocks, particularly beef, in Ukraine. An analysis of beef production before and after the start of hostilities, including growth rates, suggests that its recovery may depend heavily on cooperation with foreign partners. In the future, it is advisable to expand the research on the development of beef production, considering the climate crisis and the increase in the share of Ukraine's exports in the world market.
Figure 1. Beef production in the world and Ukraine, 1992-2022. Source: calculated based on FAO (2022) and data from the State Statistics Service of Ukraine (n.d.)
Figure 3. Reasons for the decline in cattle numbers. Source: developed and constructed by the authors of this study based on personal findings
Table 1. Continued. ... thousand tonnes, up 34.2% from 2021. The world's largest beef producers are the US, Brazil, China, and the EU. In 2022, they produced 50.1% of the world's beef. Brazil, the US, India, and Australia were the largest exporters of beef, accounting for 54.2% of all beef exports. The average level of beef consumption in 2022 was 9.2 kg per person. Argentina consumed 5.1 times more beef per person than the global average, while the US and Brazil consumed 4.3 times more. In India, this figure is 8.7 times lower, while in China it is almost 30% lower. The world's key beef exporters were Brazil (2,327 thous. t), the USA (1,637 thous. t), India (1,336 thous. t), Australia (1,245 thous. t), and EU countries (901 thous. t in total) (Table 3).
Table 3. Beef production, consumption and exports in 2022. Columns: main producing countries; production; consumption per person per year, kg; main exporting countries. Source: calculated based on FAO (2022).
Table 4. Comparison of the shares of beef production in global volumes and per capita in the main beef producing countries, 2022
Table 6. Dynamics of changes in the average live weight of one head of cattle purchased by processing enterprises in the pre-war and post-war years. Source: calculated according to data from the State Statistics Service of Ukraine (n.d.)
Table 8. Annual balance of beef meat in Ukraine in 2019-2023, thous. t
Table 9. Livestock production (all categories of farms)
Table 10. Beef and veal production by category of farms, % | 8,920.8 | 2024-01-10T00:00:00.000 | [
"Agricultural and Food Sciences",
"Economics"
] |
Approaches to Stand-alone Verification of Multicore Microprocessor Caches
The paper presents an overview of approaches used in verifying the correctness of multicore microprocessor caches. Common properties of memory subsystem devices and those specific to caches are described. We describe the method used to maintain memory consistency in a system using a cache coherence protocol. Approaches for designing a test system, generating valid stimuli and checking the correctness of the device under verification (DUV) are introduced. Adjustments to the approach to support generation of out-of-order test stimuli are provided. Methods of test system development at different abstraction levels are presented. We provide the basic approach to checking device behavior: implementing a functional reference model whose reactions can be compared to the device reactions; a miscompare indicates an error. Methods for verification of functionally nondeterministic devices are described: the "gray box" method, based on eliminating nondeterministic behavior using internal interfaces of the implementation, and a novel approach based on dynamic refinement of the behavioral model using device reactions. We also provide a way to augment a stimulus generator with assertions to further increase the error detection capabilities of the test system. Additionally, we describe how test systems for devices that support out-of-order execution can be designed. We present an approach that simplifies checking of nondeterministic devices with out-of-order execution of requests using a reference order of instructions. In conclusion, we provide a case study of using these approaches to verify caches of microprocessors with the "Elbrus" and "SPARC-V9" architectures.
Introduction
The key feature of modern microprocessor architecture is multicoreness: combining several computational cores on a single system on a chip (SoC). To reduce the time needed to access RAM (Random Access Memory), a device can incorporate several levels of cache hierarchy. Access to smaller caches can be executed faster than access to the larger caches of the next level of the hierarchy. Caches can keep data for a single computational core or serve as data storage for several of them at the same time. A memory subsystem of a multicore microprocessor must maintain coherence of the memory. The task of maintaining a correct memory state is usually solved by implementing a cache coherence protocol that defines a set of data states and the actions on transitions between states in a cache [1]. To optimize the design and implementation of the coherency protocol, caches can include a local directory, a device which keeps information on the states of data in different components of the memory subsystem. The considerable complexity of protocols and their implementations in multilevel memory subsystems can lead to hard-to-find errors. To ensure the robustness of a microprocessor, one must thoroughly verify its memory subsystem. The importance of functional verification, the checking of correspondence between specifications of designs and their implementations, is obvious for many reasons. This activity can take more than 70% of the total design development time. The two main approaches to functional verification of microprocessors are formal verification and simulation-based methods [2]. Formal methods are exhaustive and based on analyzing a static formal model. Models are large, and formal verification techniques face the "combinatorial explosion" issue. Simulation-based methods are not exhaustive, but they are much more flexible and thereby employed at different stages. We can verify not only the static model of the system, but also the implementation. The object of simulation-based verification is the RTL (Register Transfer Level) model of the device. One of the approaches to microprocessor verification is the execution of test programs on the microprocessor model and on the reference implementation of its instruction set, and comparison between them. Such an approach is called system verification. It should be noted that caches are often invisible from the point of view of a programmer. That is why designing programs capable of sufficiently verifying microprocessor caches is a complex task. One way to shorten the design of microprocessors is the application of unit-based verification. It is assumed that the system is divided into a set of components and the general functionality of the components does not change [3]. Such a way of verification is called stand-alone verification. This paper addresses the problem of stand-alone verification of microprocessor caches of different levels. The rest of the paper is organized as follows. Section 2 suggests an approach to the problem. Section 3 presents the test stimuli generation methods. Section 4 reviews the existing techniques for designing test oracles. Section 5 describes a case study on using the suggested approach in an industrial setting. Section 6 concludes the paper.
Common View on Stand-alone Verification of Microprocessor Caches
The object of stand-alone verification is a model of the device under verification (DUV) implemented in a hardware description language (usually Verilog or VHDL). It defines the behavior of the device at the register transfer level. The device specification defines a set of stimuli and reactions based on the state of the device.
To check the correctness of the device, it is included in a test system: a program that generates test stimuli, checks the validity of reactions and determines verification quality. Based on its functions, a test system can be divided into separate modules: a stimulus generator, a correctness checking module (test oracle) and a coverage collector. Methods of estimating verification quality are similar to those for other devices: information on functional code coverage is used to identify unimplemented test scenarios and refine stimulus generation by adding new test scenarios and improving the existing stimulus generator. This approach is called coverage-driven constrained random verification. Besides this, there are some existing approaches to microprocessor cache verification. In paper [4] the authors propose using decomposition and abstraction for stand-alone verification. In our previous projects, we used the decomposition method for L2-cache verification: the L2-cache was divided into several submodules for which reference cycle-accurate bit-to-bit models and test systems were implemented [3]. This approach allowed finding bugs in submodules, but did not give the chance to check the cache as a whole. We can also use a SystemC reference model as presented in [5], but this is worthwhile only if SystemC models are used in other stages of the ASIC design flow. In paper [6] an approach to test oracle development for nondeterministic models is presented. However, this approach addresses only test oracle development for in-order cache execution and gives no recommendations for caches with out-of-order execution. Thus, the main goal of this work lies in developing new techniques for stand-alone verification of microprocessor caches with different ordering of stimulus execution. Cache behavior exhibits a set of properties that should be considered while designing a test system for verification of the device:
• Transactions (or requests) in the microprocessor system can be separated into three groups: primary requests (requests from subscribers, such as other caches or cores, to perform an operation with the memory, load/store), secondary requests (responses of the test system to some reaction of the cache), and reactions (output transactions from the cache)
• A device implements a part of the cache coherence protocol
• A device works independently with different cache lines, i.e. areas of memory of fixed size
• Requests that work with the same cache line are serialized, meaning that requests complete in the same order as they are received
• A device implements a data eviction mechanism and a protocol to determine the victim line (usually some variant of the least recently used (LRU) algorithm)
Using these properties of the device under verification while designing a test system leads to a simplified structure with separate stimulus generators: the primary requests generator and the secondary requests generator. We can also use the fact that requests are serialized when checking the correctness of caches with out-of-order execution.
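The last two properties in particular can be exploited directly by a reference model or scoreboard. Below is a minimal Python sketch, given purely as an illustration (the class name, the per-line queue and the simplified LRU policy are assumptions of this sketch, not details of the verified designs), of tracking per-cache-line request serialization together with least-recently-used victim prediction.

```python
from collections import OrderedDict, deque

class CacheLineScoreboard:
    """Tracks outstanding requests per cache line and predicts LRU victims."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # line address -> deque of pending requests
        self.victims = []            # predicted eviction order (oldest first)

    def issue(self, line_addr, request):
        """Record a primary request; same-line requests are serialized."""
        if line_addr not in self.lines:
            if len(self.lines) >= self.capacity:
                # Predict the victim: the least recently used line.
                victim_addr, _ = self.lines.popitem(last=False)
                self.victims.append(victim_addr)
            self.lines[line_addr] = deque()
        self.lines.move_to_end(line_addr)   # mark line as most recently used
        self.lines[line_addr].append(request)

    def complete(self, line_addr, request):
        """Check that completions for one line arrive in issue order."""
        pending = self.lines.get(line_addr)
        assert pending and pending[0] == request, (
            f"out-of-order completion on line {line_addr:#x}")
        pending.popleft()

# Usage sketch: two requests to one line must complete in issue order.
sb = CacheLineScoreboard(capacity=4)
sb.issue(0x100, "load-A")
sb.issue(0x100, "store-B")
sb.complete(0x100, "load-A")
sb.complete(0x100, "store-B")
```

A real implementation would distinguish cache sets and ways; here the whole cache is modeled as a single fully associative set to keep the sketch short.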
The common approach
Test stimuli are usually generated at a more abstract level than register transfers and interface signals. Based on logical and functional similarity, groups of device ports are combined into interfaces. Interfaces are used to transfer transaction-level packets [7]. To transform packets between their representations at the signal and transaction levels, serializer and deserializer modules are implemented [8]. A test system should generate stimuli similar to those in a real system. It should be noted that primary requests in a real microprocessor are consequences of some memory access operation (loading, storing data, eviction, prefetch, atomic swap, etc.). Secondary requests are answers to reaction packets from the device. It is usually convenient to use only a sequence of primary requests as a test sequence and to generate secondary requests automatically in the corresponding modules. Properties of secondary requests can be changed through the configuration of the secondary request generation modules. In the test system, interfaces are combined into groups that represent interaction with certain devices. A test system should simulate the state of these devices to generate correct responses from them.
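To make the distinction between the transaction level and the signal level concrete, the following Python sketch shows what a transaction-level packet and a simple secondary-request responder might look like. All names, fields and the backing-store behavior are assumptions chosen for illustration; in the actual test systems these roles are played by UVM sequence items and components.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Op(Enum):
    LOAD = auto()
    STORE = auto()
    EVICT = auto()
    PREFETCH = auto()

@dataclass
class Packet:
    """Transaction-level packet transferred over one interface (port group)."""
    op: Op
    address: int
    data: int = 0
    tag: int = 0

class SecondaryResponder:
    """Simulates an environment device (e.g. next-level memory) and answers
    reaction packets coming out of the cache with secondary requests."""

    def __init__(self, default_data=0):
        self.memory = {}               # simplistic backing store
        self.default_data = default_data

    def respond(self, reaction: Packet) -> Packet:
        if reaction.op in (Op.LOAD, Op.PREFETCH):
            data = self.memory.get(reaction.address, self.default_data)
            return Packet(reaction.op, reaction.address, data, reaction.tag)
        # Stores and evictions update the simulated device state.
        self.memory[reaction.address] = reaction.data
        return Packet(reaction.op, reaction.address, tag=reaction.tag)

# A serializer would drive Packet fields onto interface signals cycle by
# cycle; a deserializer performs the reverse mapping for device reactions.
```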
Generation of Primary Requests for Caches with Out-of-order Execution
The properties of devices that support out-of-order execution should be considered while designing a stimulus generator:
• The order of primary requests can differ from the order of memory accesses in the initial program
• A primary request can be divided into several messages accepted at different times; the messages of one primary request are identified by a common value of the tag field
• A request canceling mechanism is present.
To support out-of-order execution of memory access requests in a cache, the common approach was augmented. The module responsible for transferring primary requests was replaced with a high-level module that includes components working with the interfaces of the primary request parts. The order of requests seen by the module is identical to that of the test program, and reordering of request parts is performed according to the module settings.
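A minimal Python sketch of this splitting and reordering step is given below; the message kinds, the tagging scheme and the cancel probability knob are illustrative assumptions, not the interface of the real module.

```python
import random
from dataclasses import dataclass

@dataclass
class Message:
    tag: int        # all parts of one primary request carry the same tag
    kind: str       # "op", "addr" or "data"
    payload: object

def split_request(tag, op, addr, data=None):
    """Split one primary request into separately delivered messages."""
    parts = [Message(tag, "op", op), Message(tag, "addr", addr)]
    if data is not None:
        parts.append(Message(tag, "data", data))
    return parts

def reorder(messages, seed=None, cancel_prob=0.0):
    """Shuffle delivery of request parts; optionally cancel whole requests.

    The program order of the requests themselves is kept elsewhere as the
    reference order; only the delivery of their parts is randomized here.
    """
    rng = random.Random(seed)
    cancelled = {m.tag for m in messages if rng.random() < cancel_prob}
    kept = [m for m in messages if m.tag not in cancelled]
    rng.shuffle(kept)
    return kept, cancelled

# Usage: parts of two requests interleaved in a pseudorandom order.
stream = split_request(1, "load", 0x40) + split_request(2, "store", 0x80, 0xAB)
shuffled, cancelled_tags = reorder(stream, seed=7)
```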
Correctness Checking
Let us consider the existing approaches to reaction checking. Richard Ho suggested two main methods: self-checking tests and co-simulation [9]. Co-simulation is a method of reaction checking in which an independent reference model is used along with the target design model [4]. The two models are co-simulated using the same stimuli and their reactions are compared. A reference model is implemented either in a general-purpose programming language (C, C++) or in a specialized hardware verification language (SystemVerilog, "e", Vera). If the test stimuli are the same, a difference between model and device reactions means an error somewhere in the system [8]. Reference models can be cycle-accurate or untimed functional. To implement a cycle-accurate model, the behavior of the device must be specified at the register transfer level. The behavior of caches is usually defined at a higher level of abstraction, because a cache is not an essential part of the computational pipeline of a microprocessor. A cache is not subject to strict timing requirements. Besides this, the development of a cycle-accurate model is labor-intensive when the design specification is changing and not stable through the verification phase. To simplify the development of reference models, TLM (Transaction Level Modeling) is often used [4]. To verify caches we also propose to implement functional models working at the transaction level.
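The co-simulation loop itself is simple; a hedged Python sketch is shown below. The `step` interface on both objects is an assumption of this sketch: the real reference models and DUV wrappers expose their own interfaces.

```python
def co_simulate(stimuli, duv, reference_model):
    """Drive identical transaction-level stimuli into the DUV and the
    untimed functional reference model and compare their reactions."""
    mismatches = []
    for stim in stimuli:
        observed = duv.step(stim)          # reactions deserialized from signals
        expected = reference_model.step(stim)
        if observed != expected:
            mismatches.append((stim, expected, observed))
    return mismatches   # an empty list means no errors were detected
```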
Checking of nondeterministic caches
To develop a functional model of a cache, its specification must have the property of transaction-level determinism. That is, identical transaction-level traces of stimuli (a set of RTL traces is mapped into a single transaction-level trace) must cause identical transaction-level reaction traces. It should be noted that caches often include a set of components (an eviction arbiter, a primary request arbiter serving different requesters) that do not hold that property. That is, different RTL traces that are mapped into the same transaction-level trace can lead to different reaction traces. There are several methods to check the behavior of such nondeterministic devices.
"Gray box" checking
One of the ways to solve the aforementioned problem is to replace the usual "black box" method of device verification. That is, we should not consider only the external interfaces of the device while analysing its behavior. To determine which variant of behavior occurred in the cache, one can use "hints" from the implementation. To use this approach, a set of internal interfaces and signals is defined and their behavior is specified. These interfaces must be chosen in such a way that information on their state can be used to eliminate nondeterminism. In general, for caches such signals are the results of primary request arbitration and the interfaces of the finite automata of the cache eviction mechanism. Additionally, that information can be used in a request generator and for the estimation of verification quality. This method is usually easy to implement. The drawbacks of this method are additional requirements on the specification and reliance on interfaces that could themselves exhibit erroneous behaviour.
Dynamic refinement of transaction level model
Another approach is to create additional instances of the model for each variant of behavior whenever a nondeterministic choice occurs in the device [6]. Each reaction is checked against every spawned model instance. If a reaction is impossible for one variant of behavior, that instance is removed from the set. If the set of possible states becomes empty after some reaction, the system must report an error. In general, this approach may cause exponential growth of the number of states with each consecutive choice. However, for caches it can be implemented efficiently because of several of their properties: serialization of requests and cache line independence. Information on which nondeterministic choice was made in the device (for use in a request generator or for verification quality estimation) can also be extracted from the reactions. The strong point of this approach compared to the "gray box" method is the elimination of reliance on implementation details of the device. The drawback is the additional complexity of implementation.
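The core of the approach can be expressed in a few lines of Python. The sketch below assumes a model object with two illustrative methods, `branches(stimulus)` (one refined model per possible nondeterministic choice) and `accepts(reaction)`; these names are not taken from the paper.

```python
import copy

def check_with_refinement(initial_model, events):
    """Dynamic refinement of a transaction-level reference model.

    `events` is a sequence of ("stimulus", s) and ("reaction", r) items in
    the order they are observed on the DUV interfaces.
    """
    candidates = [copy.deepcopy(initial_model)]
    for kind, event in events:
        if kind == "stimulus":
            # Each candidate spawns one instance per possible behavior.
            candidates = [b for m in candidates for b in m.branches(event)]
        else:
            # Keep only candidates that could have produced this reaction.
            candidates = [m for m in candidates if m.accepts(event)]
            if not candidates:
                raise AssertionError(f"no model variant explains {event!r}")
    return candidates   # surviving variants also tell which choices were made
```

Because requests to different cache lines are independent and same-line requests are serialized, such a candidate set can be kept per cache line, which keeps its size small in practice.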
Assertions
A test system generator imitates the environment of the DUV. It should also be noted that the interaction between the device and its environment must adhere to some protocol. Based on that protocol, we can include the functional requirements of the protocol as assertions in the generator. Then, a violation of an assertion signals an error. The usage of assertions is an effective method of detecting a broad class of errors. In addition to assertions that are common for all memory subsystem devices, several cache-specific assertions can be included. They represent invariants of the cache coherence protocol. To check these invariants, the coherence of the states of a single cache line is analyzed in all parts of the test system after each change.
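As an illustration of such an invariant check, the Python sketch below asserts MESI-style single-writer/multiple-reader invariants for one cache line. The coherence protocols of the verified designs are not described here, so MESI is used purely as an assumed example.

```python
def check_line_coherence(line_states):
    """Assert single-writer invariants for one cache line.

    `line_states` maps an agent (core cache, L2, L3, ...) to its state for
    the line, one of "M" (modified), "E" (exclusive), "S" (shared), "I"
    (invalid). MESI is an illustrative assumption of this sketch.
    """
    owners = [a for a, s in line_states.items() if s in ("M", "E")]
    sharers = [a for a, s in line_states.items() if s == "S"]

    # At most one agent may hold the line exclusively or modified.
    assert len(owners) <= 1, f"multiple owners: {line_states}"
    # Exclusive ownership must not coexist with shared copies.
    assert not (owners and sharers), f"owner coexists with sharers: {line_states}"

# The generator re-runs such checks after every state change of a line.
check_line_coherence({"core0": "M", "core1": "I", "L3": "I"})
```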
Checking caches with out-of-order execution
Caches that support out-of-order request execution exhibit properties of limited nondeterminism. That is, memory access requests are received by the device in multiple parts from several interfaces, with different, unspecified timing characteristics. On the other hand, there is the "reference" order of memory access operations presented in the original test program. If out-of-order execution introduces an error with respect to this canonical order, the device must be cleaned and the erroneous transactions must be restarted. The results of operations that complete successfully are deterministic. Based on these properties of the device, we propose to implement models of two types:
• "Ignoring the cancelled transactions" mode
• Strict checking mode
In the first mode, checking is delayed until the moment of full request completion. If completion was unsuccessful, the checks are not made. In the strict mode, we use an approach similar to the dynamic refinement of the model. A set of possible device states is maintained, and it is augmented with each stimulus and reaction. The number of possible states is limited by the number of simultaneously executed out-of-order requests. The shortcomings of the first mode are the delay between an erroneous transaction and the execution of the actual check, and the reduction of the set of errors that can be detected (for example, an unnecessary cancel of a request will not be detected). On the other hand, implementing that mode is a much simpler task, so verification can be started sooner.
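A hedged Python sketch of the first mode is shown below; the callback names and the tag-keyed bookkeeping are assumptions chosen for illustration. Expected results are computed once from the reference order of operations, and cancelled requests are simply dropped before any comparison is made.

```python
class DeferredChecker:
    """'Ignoring the cancelled transactions' checking mode."""

    def __init__(self, expected_by_tag):
        self.expected = dict(expected_by_tag)   # tag -> reference result
        self.observed = {}                      # tag -> last observed result

    def on_reaction(self, tag, result):
        # Remember the reaction, but do not check it yet.
        self.observed[tag] = result

    def on_cancel(self, tag):
        # The device restarted this request; discard what was seen so far.
        self.observed.pop(tag, None)

    def on_complete(self, tag):
        # Only a fully completed request is compared with the reference.
        expected = self.expected[tag]
        got = self.observed.get(tag)
        assert got == expected, f"tag {tag}: expected {expected!r}, got {got!r}"

# Usage sketch: tag 2 is cancelled, retried and only then checked.
chk = DeferredChecker({1: "data-A", 2: "data-B"})
chk.on_reaction(1, "data-A"); chk.on_complete(1)
chk.on_reaction(2, "stale");  chk.on_cancel(2)
chk.on_reaction(2, "data-B"); chk.on_complete(2)
```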
Case Study
The approaches described above were used for stand-alone verification of the L2-cache [3] and the L3-cache [6] of the microprocessor with the "Elbrus" architecture and of the L1 data cache (L1dc) of the microprocessor with the "SPARC-V9" architecture. The test systems for stand-alone verification of these caches were developed using the Universal Verification Methodology (UVM) [10].
Checking the "SPARC-V9" L1Data-cache with out-of-order execution
L1dc supports out-of-order execution of memory access operations. The test system structure for L1dc is presented in Fig. 1.
Fig. 1. The principal structure of the test system for the L1 data cache of the "SPARC-V9" microprocessor.
The test sequences for L1dc are memory access assembly instructions. They are sent to the model of the computational core and the reordering buffer (Core, ROB Model). In this module, the instructions are split into multiple messages (containing either the operation type, the address or the data). These messages are reordered and sent to the DUV. Additionally, this module keeps information about the initial order of the received instructions in order to send completion messages in the correct order.
Checking the "Elbrus" L3-cache with nondeterministic behavior
The test stimulus generator was developed to verify the L3-cache of the "Elbrus" microprocessor [6]. It is based on a simplified model of the microprocessor core with the L2-cache and a model of the system commutator that simulates operation in a multiprocessor environment. If multiple cores request access to a single cache line, the order of their execution is unspecified and defined by the device microarchitecture. The internal structure of the cache is also subject to change due to changing physical design requirements. To verify the device, the approach based on dynamic refinement of the behavioral model was chosen. To supplement that approach, a set of assertions was implemented in the stimulus generator to check the validity of the system state. The approach allowed using the same test system with minimal alteration for the next iteration of the "Elbrus" microprocessor.
Conclusion
The approaches described in this paper allow avoiding some shortcomings. They help to avoid, on the one hand, excessive subdivision of the verified unit into small subdevices and developing cycle-accurate models of them (as we did in our previous projects) and, on the other hand, the development and maintenance of complex cycle-accurate reference models of caches. The approaches were used for stand-alone verification of caches of microprocessors developed by MCST. Stand-alone verification allowed finding several errors in different caches. The intermediate results of applying the introduced approaches to multicore microprocessor cache verification are presented in Table 1. We had already verified the L3-cache of the "Elbrus" microprocessor using another approach, and we found 7 new errors with the help of the developed test system based on the nondeterministic cache checking approach.
Verified cache               Number of bugs
L2-cache "Elbrus"            4
L3-cache "Elbrus"            7
L1 data cache "SPARC-V9"     12
The test systems are developed as UVM environments. They were implemented to be flexible enough to run both pseudorandom and directed test sequences. Using the aforementioned approaches while developing the test systems helped find new errors and simplified the development of the test systems. The approaches can be used to verify other caches of different multicore microprocessors regardless of their architectures. Our future research is connected with improving error diagnostics and the localization of found bugs.
"Computer Science",
"Engineering"
] |
Thermal transport characterization of carbon and silicon doped stanene nanoribbon: an equilibrium molecular dynamics study
Equilibrium molecular dynamics simulation has been carried out for the thermal transport characterization of nanometer sized carbon and silicon doped stanene nanoribbon (STNR). The thermal conduction properties of doped stanene nanostructures are yet to be explored and hence in this study, we have investigated the impact of carbon and silicon doping concentrations as well as doping patterns namely single doping, double doping and edge doping on the thermal conductivity of nanometer sized zigzag STNR. The room temperature thermal conductivities of 15 nm × 4 nm doped zigzag STNR at 2% carbon and silicon doping concentration are computed to be 9.31 ± 0.33 W m−1 K−1 and 7.57 ± 0.48 W m−1 K−1, respectively whereas the thermal conductivity for the pristine STNR of the same dimension is calculated as 1.204 ± 0.21 W m−1 K−1. We find that the thermal conductivity of both carbon and silicon doped STNR increases with the increasing doping concentration for both carbon and silicon doping. The magnitude of increase in STNR thermal conductivity due to carbon doping has been found to be greater than that of silicon doping. Different doping patterns manifest different degrees of change in doped STNR thermal conductivity. Double doping pattern for both carbon and silicon doping induces the largest extent of enhancement in doped STNR thermal conductivity followed by single doping pattern and edge doping pattern respectively. The temperature and width dependence of doped STNR thermal conductivity has also been studied. For a particular doping concentration, the thermal conductivity of both carbon and silicon doped STNR shows a monotonic decaying trend at elevated temperatures while an opposite pattern is observed for width variation i.e. thermal conductivity increases with the increase in ribbon width. Such comprehensive study on doped stanene would encourage further investigation on the proper optimization of thermal transport characteristics of stanene nanostructures and provide deep insight in realizing the potential application of doped STNR in thermoelectric as well as thermal management of stanene based nanoelectronic devices.
Introduction
The synthesis as well as characterization of graphene, due to its intriguing electronic, 1 thermal 2 and mechanical 3 properties, has instigated enormous research interest into two dimensional (2D) nanomaterials. [4][5][6][7][8] Recently, the synthesis of the 2D structures of heavier group-IV elements, namely silicene, germanene and stanene, 9,10 has incited attention due to their graphene-like honeycomb structure. Stanene is a 2D buckled hexagonal allotrope of tin (Sn) 11 with enhanced thermoelectricity 12 and quantum anomalous Hall effect. 13 It has promising potential as a topological insulator 14 and a topological superconductor 15 as well as a quantum Hall insulator. 16,17 Furthermore, spin-orbit coupling (SOC) induces a bulk bandgap of ~0.1 eV for freestanding stanene, while it shows zero bandgap without spin-orbit coupling. 11 It has gapless edge states with band dispersion in the bulk gap as well as helical edge states with spin-momentum locking, which can be used for dissipationless conduction. 11,18 Moreover, there are reports of very low thermal conductivity 17,19 and high carrier mobility 20 of pristine stanene, which make stanene a promising candidate for next generation thermoelectric applications. Significant improvement in the thermoelectric figure of merit (zT) can be achieved in a system with simultaneously good electrical and low phonon transport. This fact urges the investigation of the electrical as well as thermal transport characteristics of stanene nanostructures to explore the prospects of stanene in thermoelectric applications.
Chemical doping of materials with foreign atoms is an effective way to alter material properties. Wei et al. synthesized nitrogen doped graphene using chemical vapor deposition and observed that it shows n-type behavior with decreased mobility, hence decreased conductivity, but an enhanced on/off ratio. 21 Panchakarla et al. also synthesized bilayer structures of boron and nitrogen doped graphene, which resulted in p-type and n-type doping respectively, and reported that both types of doping caused an increase in electrical conductivity in the bilayer structure. 22 On the other hand, the quantum anomalous Hall effect and tunable topological states have been reported by Zhang et al. in 3d transition metal doped silicene. 23 Garg et al. performed density functional theory calculations and reported band gap opening in stanene doped with boron-nitride, 24 whereas Shaidu et al. observed superconductivity in lithium and calcium doped stanene. 25 The doping characteristics of 31 different adatoms on monolayer stanene have also been investigated by Naqvi et al. 26 On the other hand, the thermal transport characterization of doped stanene is yet to be explored. However, the thermal transport in graphene doped with nitrogen 27 and hydrogen 28 has been studied. The tunable thermal conductivity of silicene by isotopic doping 29 and germanium doping 30 has also been reported. The stanene analogues among the 2D hexagonal group-IV elements, carbon and silicon, i.e. graphene and silicene of nanometer size, are reported to have much higher thermal conductivities 31,32 compared to that of the stanene nanostructure. 33 The calculated thermal conductivity of a 10 nm × 3 nm pristine stanene nanoribbon (STNR) is 0.95 W m−1 K−1. 33 These facts suggest that a detailed investigation of the thermal transport characteristics of doped stanene nanostructures is significant for the proper understanding of possible industrial applications of stanene nanostructures.
In this study, we perform equilibrium molecular dynamics (EMD) simulation for the calculation of the thermal conductivity of carbon and silicon doped zigzag STNR. We investigate how the carbon and silicon doping concentrations influence the STNR thermal conductivity along with the calculation of the heat current autocorrelation function (HCACF) and phonon density of states (PDOS). We also carry out a comparative study on the thermal transport variation in STNR due to carbon and silicon doping. Subsequently, the effect of various types of doping patterns, namely single doping, double doping, and edge doping, on the thermal transport of STNR has been evaluated. Finally, the impact of varying temperature as well as nanoribbon width on the thermal conductivity of doped STNR has been examined at different carbon and silicon doping concentrations. Zigzag STNR structures are considered in this study. The geometric optimization process was carried out involving energy minimization with the steepest descent algorithm accompanied by equilibration and thermalization. The Sn-Sn bond length in the equilibrated STNR is 2.83 Å with a geometry-optimized buckling height of 0.88 Å and a lattice constant of 4.68 Å. These values are consistent with the reported literature values. 24,26,34-36 We have modeled three types of doping patterns in STNR with carbon and silicon atoms of different concentrations, as shown in Fig. 1(c)-(e), and investigated the impact of doping on the thermal transport of STNR. The single doped structures result from the random substitution of a tin atom by the dopant atom, either carbon or silicon, as represented in Fig. 1(c). Fig. 1(d) depicts the double doped structure, which is generated by replacing a pair of bonded tin atoms by a pair of bonded dopant atoms. Edge doping is considered a particular form of single doping which involves the substitution of a tin atom by the dopant atom only on the edge of the nanoribbon structure, as shown in Fig. 1(e).
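The three substitution patterns translate into a simple site-selection step when the doped structures are built. The Python sketch below illustrates that step only (which Sn sites get replaced); all function and variable names are assumptions of this sketch, and the actual structure generation (coordinates, buckling, simulation input files) is done separately.

```python
import random

def choose_dopant_sites(atom_ids, bonded_pairs, edge_ids, pattern, fraction, seed=0):
    """Pick Sn atoms to replace by C or Si for a given doping pattern.

    atom_ids     : indices of all Sn atoms in the ribbon
    bonded_pairs : (i, j) index pairs of bonded Sn atoms
    edge_ids     : indices of atoms on the ribbon edges
    pattern      : "single", "double" or "edge"
    fraction     : doping concentration, e.g. 0.02 for 2%
    """
    rng = random.Random(seed)
    n_dopants = max(1, round(fraction * len(atom_ids)))

    if pattern == "single":          # random substitution anywhere in the ribbon
        return rng.sample(atom_ids, n_dopants)
    if pattern == "edge":            # substitution restricted to edge atoms
        return rng.sample(edge_ids, n_dopants)
    if pattern == "double":          # replace bonded Sn-Sn pairs as units
        pairs = rng.sample(bonded_pairs, max(1, n_dopants // 2))
        return [i for pair in pairs for i in pair]
    raise ValueError(f"unknown pattern: {pattern}")

# Usage: 2% single doping of a toy 200-atom index list.
sites = choose_dopant_sites(list(range(200)), [(0, 1), (2, 3)], [0, 1, 198, 199],
                            "single", 0.02)
```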
Simulation details
EMD simulations using LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) 37 have been carried out in order to compute the thermal conductivity of carbon and silicon doped STNR. The Sn-Sn bond interaction in stanene has been modeled using the optimized Tersoff-type bond order potential parameters proposed by Cherukara et al. 19 On the other hand, for describing the C-C and Si-Si atomic interactions, the optimized Tersoff and Brenner empirical potential 38 and Stillinger-Weber (SW) potential 39 parameters have been used, respectively. Furthermore, the Sn-C and Sn-Si bonding interactions are described by employing the standard 12-6 Lennard-Jones (LJ) potential V(r) as follows:

V(r) = 4ε [(σ/r)^12 − (σ/r)^6], r < r_c (1)

where ε and σ are the energy parameter and distance parameter, respectively, and r and r_c are the interatomic distance and cutoff distance, respectively. In this work, the Universal Force Field 40 has been used for computing the LJ potential parameters. The calculated values of these parameters for the tin-carbon interaction of the carbon doped stanene sample are ε_Sn-C = 10.58 meV, σ_Sn-C = 3.664 Å and r_c,Sn-C = 3.5 σ_Sn-C = 12.824 Å. The same set of parameters for silicon doping has been calculated to be ε_Sn-Si = 20.69 meV, σ_Sn-Si = 3.8615 Å and r_c,Sn-Si = 3.5 σ_Sn-Si = 13.515 Å. We applied a periodic boundary condition along the zigzag direction in our EMD simulation. The system energy was minimized using the steepest descent algorithm, and the velocity-Verlet integrator was employed for the numerical integration of the equations of atomic motion with a time step of 0.5 fs. The system equilibration as well as thermalization was performed applying the Nose-Hoover thermostat for 1.6 × 10^5 time steps followed by the NVE ensemble for 2 × 10^5 time steps. The linear response theorem 41 is applied for calculating thermal conductivity in EMD. In this case, the heat current vectors along with their correlations are computed throughout the simulation. Thermal conductivity is related to the ensemble average of the HCACF by the well-known Green-Kubo formulation:

K_x = (1 / (V k_B T^2)) ∫_0^s ⟨J_x(t) · J_x(0)⟩ dt (2)

Here, K_x is the thermal conductivity in the x direction, V is the system volume, k_B is the Boltzmann constant, J_x(t) is the heat current in the x direction and T is the system temperature. The STNR surface area and van der Waals thickness, i.e. the stanene interplanar separation (3.3 Å), 36,42 are multiplied in order to compute the system volume. s represents the time required for a reasonable HCACF decay, termed the correlation time. The term with the angular brackets in eqn (2) represents the ensemble average of the HCACF. For the implementation of eqn (2) in the EMD computation, the integral term is employed as the summation of discrete terms 43,44 shown in the following equation:

K_x = (Δt / (V k_B T^2)) Σ_{m=1}^{M} [ (1 / (N − m)) Σ_{n=1}^{N−m} J_x(m + n) J_x(n) ] (3)

where the molecular dynamics (MD) simulation time step is denoted by Δt, N is the total number of simulation steps and M represents the number of time steps required for the HCACF such that MΔt corresponds to the correlation time s. J_x(m + n) and J_x(n) denote the heat current in the x direction at MD time steps (m + n) and n, respectively.
We recorded the heat current data every 5 steps in order to obtain the HCACFs. Subsequently, 10 of the obtained HCACFs were averaged for computing the heat current autocorrelation values. The thermal conductivity values were calculated applying eqn (2). Finally, the converged value of the average thermal conductivity is taken as the average over 5 independent microcanonical ensembles, each with a different initial velocity.
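A minimal NumPy sketch of this post-processing (the discrete correlation of eqn (3), the running Green-Kubo integral and the ensemble averaging) is given below. Variable names, unit handling and the sampling interval are assumptions of the sketch; it is not the LAMMPS-side implementation.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hcacf_and_kappa(jx, volume, temperature, dt, n_corr):
    """Return the HCACF and the running Green-Kubo integral for K_x.

    jx     : 1D array of the heat current J_x sampled every `dt` seconds
             (here every 5 MD steps of 0.5 fs, i.e. 2.5 fs between samples)
    volume : ribbon area times the 3.3 A van der Waals thickness
    n_corr : number of correlation lags M, with M*dt = correlation time s
    Consistent (e.g. SI) units are assumed throughout.
    """
    n = len(jx)
    hcacf = np.array([np.mean(jx[m:] * jx[:n - m]) for m in range(n_corr)])
    kappa_running = np.cumsum(hcacf) * dt / (volume * K_B * temperature**2)
    return hcacf, kappa_running

def ensemble_kappa(jx_runs, volume, temperature, dt, n_corr):
    """Average the HCACFs of independent runs (different initial velocities)
    before integrating, then report the converged thermal conductivity."""
    hcacfs = [hcacf_and_kappa(j, volume, temperature, dt, n_corr)[0]
              for j in jx_runs]
    hcacf_avg = np.mean(hcacfs, axis=0)
    kappa = np.cumsum(hcacf_avg) * dt / (volume * K_B * temperature**2)
    return kappa[-1]
```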
The fix phonon command 45 of LAMMPS has been employed for evaluating the phonon density of states (PDOS). It involves the direct calculation of the dynamical matrices from the MD simulation based on the fluctuation-dissipation theorem. Once the dynamical matrices were obtained, the PDOS was calculated using an auxiliary post-processing code called 'phana'. In this study, a tricubic 46 interpolation method with uniform q (wave vector) points was taken under consideration for the calculation of the PDOS.
Results and discussion
Our estimated thermal conductivity for the 15 nm × 4 nm pristine STNR at room temperature is 1.204 ± 0.21 W m−1 K−1. Khan et al. 33 reported the room temperature thermal conductivity for a 10 nm × 3 nm zigzag STNR to be 0.95 ± 0.024 W m−1 K−1 using EMD, which is in good agreement with our result. Moreover, Cherukara et al. 19 estimated a thermal conductivity value of 2.8 ± 0.2 W m−1 K−1 at 300 K for an 80 nm × 80 nm zigzag stanene sheet, and they predicted the lowering of this thermal conductivity value with nanostructuring. This also conforms well to our obtained result. On the other hand, using first principles calculations, Nissimagoudar et al. 47 computed the room temperature thermal conductivity of a zigzag stanene sheet to be 10.83 W m−1 K−1 and Peng et al. 17 reported the room temperature stanene thermal conductivity to be 11.6 W m−1 K−1. These authors also expected further reduction in the thermal conductivity values with decreasing dimensionality, and this is in accordance with our result as well. However, STNR doped with carbon and silicon exhibits the thermal conductivity variation shown in Fig. 2(a) and (b), respectively. For both doping materials, the thermal conductivity of the STNR increases with increasing doping concentration. The calculated room temperature thermal conductivities of doped STNR at 2% doping concentration of carbon and silicon are 9.31 ± 0.33 W m−1 K−1 and 7.57 ± 0.48 W m−1 K−1, respectively.
As STNR is doped with impurities, most of the high frequency phonons are localized by the impurity centers. 48 Therefore, the contribution of high frequency phonons to heat conduction is largely suppressed. As a result, the low frequency phonons with longer wavelengths play the dominant role in heat transport under these circumstances. Now, due to the low Debye temperature of pure stanene, there is an elevated scattering rate of high frequency phonons, resulting in their low phonon group velocity and thus the low thermal conductivity of pristine stanene. 17 On the other hand, low frequency phonons in stanene have comparatively high group velocities and hence low scattering. Therefore, the majority of the thermal transport contribution in pristine stanene comes from these low frequency phonon modes. 17 The impurity centers due to doping localize and suppress the high frequency phonons, which have greater scattering rates. As a result, the weakly scattering low frequency phonon modes conducive to thermal conduction become more dominant. Hence, there is an overall improvement in the thermal conductivity of the carbon and silicon doped STNR. This fact is further illustrated by the HCACF profiles depicted in the insets of Fig. 2(a) and (b) for the carbon and silicon doped structures, respectively. There is enhanced localization, hence suppression, of high frequency phonons having greater scattering rates with increasing doping concentration. Consequently, the HCACF profiles decay at slower rates with increasing doping concentration for both carbon and silicon doping. Slower decay rates of the HCACF profile result in the calculation of higher thermal conductivity of doped STNR. The thermal transport in doped stanene can be further explained considering the phonon density of states (PDOS) for pristine stanene as well as its carbon and silicon counterparts, graphene and silicene, respectively, as shown in Fig. 3. The PDOS profiles of graphene and silicene both have large peaks at higher frequency regions (~50 THz and ~10 THz, respectively) compared to that of stanene (~2 THz). These peaks at high frequency regions for both graphene and silicene result in their much larger thermal conductivity values than that of stanene. 49,50 Therefore, the incorporation of these comparatively high thermal conductivity materials into a low thermal conductivity nanostructure such as stanene would enhance the thermal transport property of the overall system.
As can be seen from Fig. 4, the thermal conductivity of carbon doped STNR is higher than that of the silicon doped nanostructure, since the ratio of the thermal conductivities of carbon and silicon doped STNR is greater than one at all concentrations. This can be attributed to the mass effect of these elements. Carbon doped STNR has a smaller average atomic mass than silicon doped STNR. A smaller average atomic mass results in a higher Debye temperature, which corresponds to a higher value of thermal conductivity. 51 This can be further explained by the fact that high atomic masses lower the sound velocity in materials, thereby reducing the thermal conductivity. 52 As a result, doping stanene with the heavier atom, i.e. silicon, leads to a smaller increase in thermal conductivity compared to doping with the lighter atom, i.e. carbon.
Next, we consider the impact of doping patterns on the thermal conductivity of STNR. Fig. 5(a) and (b) show the thermal conductivity variation of single, double and edge pattern doped STNR as a function of carbon and silicon doping concentration, respectively. The results suggest that the thermal conductivity of STNR increases with increasing doping concentration for all three types of doping patterns. The thermal conductivity of the double doped structure is higher than that of the other two patterns, while of the remaining two patterns, single doping has a greater thermal conductivity enhancement impact than edge doping.
In the case of the double doping pattern, the doping centers act more like molecular (i.e. C-C, Si-Si) doping centers, and the number of localized low frequency phonons is low. 53 Hence, the delocalized low frequency phonons available in the double doping pattern contribute to the large thermal conductivity enhancement.
For the single doping pattern, the single doping centers cause degeneracy in the low-frequency region around the discrete single dope centers, which results in the localization of more low-frequency phonon modes compared to the double doping pattern. 53 As a result, the thermal conductivity enhancement due to low-frequency modes in the single-doped structure is not as high as that of the double-doped STNR. Furthermore, since edge doping is a special case of the single doping pattern, along with the enhanced localization of low-frequency phonon modes, edge dope centers additionally cause phonon edge scattering. This, in turn, limits its thermal conductivity enhancement impact in comparison with the single and double doping patterns. To understand this phenomenon further, Fig. 5(c) and (d) can be considered, which depict the decay of the HCACF profiles as well as their envelopes for the single, double, and edge doping patterns with carbon and silicon doping, respectively. In both of these figures, it can be observed that the HCACF profile decays in the shortest time for the edge doping pattern, followed by the single and double doping patterns, respectively, thus substantiating the thermal conductivity variations found for these doping patterns. Fig. 6(a) and (b) depict the total energy during the simulation time for several STNR doping patterns at 0.6% and 1% carbon and silicon doping concentration, respectively. In both cases, it can be observed that the energy variations of the doped STNRs are negligible. This, in turn, implies that the STNR structures of various doping patterns with carbon and silicon dopants are energetically stable.
Next, the temperature dependence of the thermal conductivity of doped STNR has also been investigated for different doping concentrations. Fig. 7(a) and (b) present the thermal conductivity of a 15 nm × 4 nm STNR doped with carbon and silicon atoms, respectively, as a function of temperature for doping concentrations ranging from 0.3% to 1.6%. The thermal conductivity of STNR decays monotonically with increasing temperature for a specific doping concentration. This trend is in agreement with the studies of thermal conductivity for doped graphene by Goharshadi et al. 27 It also conforms well to the results of Ye et al., who reported that the thermal conductivity of graphene nanoribbon (GNR) is reduced with increasing temperature due to a significant decrease in relaxation time. 54 Furthermore, Peng et al., 17 Cherukara et al. 19 and Khan et al. 33 also reported similar temperature dependence in the thermal transport of pristine stanene nanostructures.
The drooping of thermal conductivity with increasing temperature at a particular doping concentration can be explained by considering phonon-phonon anharmonic interaction, or Umklapp scattering, at elevated temperature. At high temperature, Umklapp scattering becomes highly significant 55 and the thermal conductivity is dominated by the highly energized, thermally excited phonons. As a result, thermal conductivity decreases with increasing temperature. It is observed that thermal conductivity initially maintains an inverse relation with temperature (~T^-1), but at much higher temperatures this functional relation is no longer applicable. At sufficiently high temperatures, enhanced anharmonic interactions between the two acoustic phonon modes are accompanied by higher-order scattering processes [56][57][58] which result in non-linear thermal resistivity. Similar decaying characteristics of thermal conductivity with increasing temperature are observed for the other doping concentrations, while the curves shift upward with increasing doping concentration. This is in agreement with the earlier observation that, for a specific temperature, the thermal conductivity of doped stanene increases with increasing doping concentration.
The width dependence of STNR thermal conductivity for different carbon and silicon doping concentrations has been studied, as depicted in Fig. 8(a) and (b), respectively. The figures display the change in thermal conductivity of STNR with nanoribbon width ranging from 2 nm to 6 nm for carbon and silicon doping concentrations of 0.5%, 0.7%, 0.9%, 1.2%, and 1.6%, while the length of the ribbon is kept constant at 15 nm. The thermal conductivity increases with increasing width for a specific doping concentration. This result is in line with the investigations of the width dependence of thermal conductivity by Khan et al. 33 for pristine stanene, by Sevik et al. 59 for pristine hexagonal boron nitride nanoribbons, as well as by Cao 60 and Yang et al. 61 for graphene nanoribbons. Ye et al. also found a decreasing trend of GNR thermal conductivity with reduced width and attributed it to more intensified boundary scattering at smaller nanoribbon widths. 54 The set of curves in Fig. 8(a) and (b) drifts upwards for increasing doping concentrations.
Two factors, namely the edge-localized phonon (boundary scattering) effect and the anharmonic phonon-phonon scattering effect, need to be considered to gain better insight into the width effect on STNR thermal conductivity, since both factors adversely affect the thermal conductivity. As the doped STNR width increases, the impact of boundary scattering is reduced, resulting in a rise in thermal conductivity. Moreover, with increasing ribbon width, the probability of Umklapp scattering is heightened as the number of available phonons increases.
As these two processes contend with each other, the thermal transport characteristics are regulated by the more dominant one. For comparatively narrow STNRs, which is the case in our study, the reduction of the boundary scattering effect in wider ribbons dominates over the intensified Umklapp scattering effect, and therefore the thermal conductivity rises with increasing ribbon width. 62-65
Conclusions
We investigated the impact of carbon and silicon doping concentration as well as doping patterns, namely single doping, double doping, and edge doping, on the thermal transport characteristics of STNR, employing equilibrium molecular dynamics simulations in this study. The thermal conductivity of STNR follows an increasing trend with increasing doping concentration for both carbon and silicon dopants. This can be attributed to the localization of the high-frequency phonon modes, which have greater scattering rates, allowing the weakly scattering low-frequency phonon modes to contribute to the thermal conductivity enhancement. STNR doped with carbon atoms shows higher thermal conductivity than silicon-doped STNR owing to the mass difference between carbon and silicon, carbon being the lighter of the two. The double doping pattern, among the three patterns considered, is found to be the most influential in improving the thermal transport of STNR, as this pattern causes the least localization of low-frequency phonon modes. On the other hand, the edge doping pattern yields the smallest thermal conductivity variation. We also investigated the thermal conductivity as a function of temperature and ribbon width. Both carbon- and silicon-doped STNR show a decaying thermal conductivity with increasing system temperature at a particular doping concentration due to enhanced phonon-phonon (Umklapp) scattering. Moreover, the doped nanoribbon thermal conductivity continues to increase with increasing nanoribbon width, since the boundary scattering in doped STNR decreases as the width increases and the Umklapp scattering process is least dominant for the range of nanoribbon widths considered in this study. Our results provide valuable insight into the possible application of doped stanene nanostructures in thermoelectric and nanoelectronic devices.
Conflicts of interest
There are no conflicts to declare. | 5,485.2 | 2018-09-05T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Sustainable power management in light electric vehicles with hybrid energy storage and machine learning control
This paper presents a cutting-edge Sustainable Power Management System for Light Electric Vehicles (LEVs) using a Hybrid Energy Storage Solution (HESS) integrated with Machine Learning (ML)-enhanced control. The system's central feature is its ability to harness renewable energy sources, such as Photovoltaic (PV) panels and supercapacitors, which overcome traditional battery-dependent constraints. The proposed control algorithm orchestrates power sharing among the battery, supercapacitor, and PV sources, optimizing the utilization of available renewable energy and ensuring stringent voltage regulation of the DC bus. Notably, the ML-based control ensures precise torque and speed regulation, resulting in significantly reduced torque ripple and transient response times. In practical terms, the system maintains the DC bus voltage within a mere 2.7% deviation from the nominal value under various operating conditions, a substantial improvement over existing systems. Furthermore, the supercapacitor excels at managing rapid variations in load power, while the battery adjusts smoothly to meet the demands. Simulation results confirm the system's robust performance. The HESS effectively maintains voltage stability, even under the most challenging conditions. Additionally, its torque response is exceptionally robust, with negligible steady-state torque ripple and fast transient response times. The system also handles speed reversal commands efficiently, a vital feature for real-world applications. By showcasing these capabilities, the paper lays the groundwork for a more sustainable and efficient future for LEVs, suggesting pathways for scalable and advanced electric mobility solutions.
renewable energy sources like photovoltaic (PV) panels offers an added sustainability dimension to LEVs. PV panels can harness solar energy to charge the energy storage system, reducing the reliance on grid electricity and further enhancing the environmental benefits of LEVs 8,9 . Compact and efficient power trains are essential for light motor solar electric vehicles, significantly impacting their productivity. The size of the power electronic interface plays a pivotal role in determining the design of lighter power trains for photovoltaic (PV)-assisted electric vehicles 10,11 . This study aims to investigate two critical aspects of the power electronic interface: the development of a lighter hybrid PV, battery, and supercapacitor power supply (HPS) and a lighter SRM converter for electric vehicle (EV) power trains 12,13 . Additionally, this study delves into the realm of efficient and coordinated control through machine learning, presenting a means of achieving an efficient drive system 14,15 . Various hybrid power systems, including PV, battery, fuel cell, and others 16 , have been extensively reviewed for their application in light solar EVs. To interface multiple sources to the DC bus, multi-input non-isolated converters have been proposed 17,18 . These converters, integrated with fuzzy logic control, can dynamically determine the instantaneous power share among the various sources, contributing to an optimized power management scheme 19,20 . Furthermore, a novel battery-supercapacitor energy storage system 21 has been developed with a joint control strategy for average and ripple current sharing. This system addresses the dynamic energy storage and discharge requirements of light EVs, contributing to improved performance and efficiency. The development of a light and efficient power electronic interface, alongside intelligent and coordinated control strategies, is pivotal for the widespread adoption and success of PV-assisted light electric vehicles in the future 22,23 . In the domain of power electronics, bi-directional power flow has emerged as a vital feature for facilitating regeneration during braking in light motor solar electric vehicles. For this purpose, interfacing converters have been equipped with bi-directional power flow capabilities, enabling the integration of hybrid power from photovoltaic (PV) and battery sources 24 . Furthermore, enhanced DC bus regulation has been achieved through the development of an additional stage for battery interfacing using three-level converters. This advancement not only reduces the size and stress of components but also facilitates battery charging while ensuring power factor correction during charging from the utility grid 25,26 . The single-stage integration of hybrid power eliminates the need for a maximum power point converter at the PV interface, thereby simplifying the topology 27 . Efforts have also been made towards optimizing the sizes of power sources according to specific applications, improving bi-directional power conversion capability, integrating various functions into a single converter, conducting thermal stability analysis, and integrating auxiliary functions into the interface converter [28][29][30][31][32][33][34][35] . However, these advanced topologies, with their merits of multiple source interfaces, have also led to complex interfaces and an increased number of power converters and associated filter components 36,37 .
In the realm of control strategies, various models, including model-based, predictive control, and heuristic approaches, have been developed for efficient power sharing and rapid dynamic responses in the switched reluctance motor (SRM) drive [38][39][40] . These approaches encompass heuristic methods such as genetic algorithms 38 , energy scheduling based on predictive demand 41 , and hierarchical power allocation predicated on the C-rate of the battery and PV power availability 42,43 , aimed at facilitating current sharing among the available sources in a hybrid power supply 44 . Genetic algorithms, for instance, provide an approach to optimizing the current distribution among the different power sources to meet the load requirements, enhancing the overall efficiency and responsiveness of the system 38 . Other strategies include model predictive current reference generation, which leverages mathematical models to predict future current demands 45 , driving-cycle-based power demand estimation and sharing function determination, which use historical data on driving patterns to estimate future power requirements 46 , and anticipatory demand control, which anticipates future demand changes based on a range of inputs, such as weather conditions and driver behavior 47 . Recent advancements in control coordination have introduced machine learning techniques such as artificial neural network (ANN)-based deep reinforcement learning 48 , ANNs for system dynamics estimation 49 , and virtual energy hubs 50,51 , which are being utilized for the control of power conversion. ANN-based methods have the ability to learn from data and adjust control strategies accordingly, making them highly adaptable to varying conditions and requirements. Notable innovations in SRM current control involve the use of fuzzy logic to determine the torque reference and instantaneous current 52 , supervised learning for torque ripple minimization 53 , and modified output voltage shapes with multi-level converters for improved torque response. Fuzzy logic control provides a more intuitive way to control torque and current in an SRM, whereas supervised learning methods can be used to fine-tune control parameters based on real-world data, enhancing overall efficiency and performance. Modified output voltage shapes with multi-level converters, meanwhile, can provide better torque response and smoother operation by adjusting the voltage waveform to match the motor's requirements 54 . Additionally, dead-beat control based on the motor model has been employed to minimize torque ripple 55 , and online learning techniques have been used for the torque sharing function to enhance the steady-state and dynamic drive response. Dead-beat control, for instance, uses a motor model to predict future torque demands and adjust control parameters accordingly, while online learning techniques enable the control system to adapt and improve its performance over time based on real-time feedback.
The research problem addressed in this paper is the optimization of power management in light electric vehicles (LEVs) through the integration of a hybrid energy storage solution (HESS) and machine learning-enhanced control. Specifically, the focus is on achieving optimal power flow between batteries, supercapacitors, and photovoltaic (PV) panels to improve vehicle performance, extend battery life, and increase the sustainability of LEVs. Traditionally, LEVs have relied solely on batteries for energy storage, which can be limiting due to their energy density, charging times, and life cycle limitations. The integration of supercapacitors offers a solution to these limitations, as supercapacitors have high power density, rapid charge-discharge characteristics, and longer lifespans compared to batteries. Additionally, the use of renewable energy sources such as PV panels further enhances the sustainability of LEVs by reducing the reliance on grid electricity. However, effectively managing the power flow between batteries, supercapacitors, and PV panels is challenging, especially in dynamic and nonlinear LEV systems. Traditional control strategies may struggle to optimize power flow in real time, resulting in suboptimal performance and reduced battery life.
To address this challenge, this paper proposes a novel control strategy that integrates a HESS comprising batteries, supercapacitors, and PV panels with machine learning algorithms. By leveraging ML's ability to learn and adapt to complex and changing systems, the proposed control strategy aims to optimize power flow in real time, ensuring optimal performance and efficiency.
The key contributions of this paper include:
• The development and implementation of a novel control strategy for LEVs that integrates a HESS with machine learning algorithms.
• The demonstration of the feasibility and effectiveness of the proposed control strategy in a real-world LEV application, showcasing its ability to optimize power flow, enhance vehicle performance, and extend battery life.
• The validation of the proposed control strategy's ability to increase the sustainability of LEVs by reducing their reliance on grid electricity and enhancing their overall efficiency.
The findings of this research have significant implications for the design and operation of LEVs, as they offer a more sustainable and efficient alternative to traditional battery-powered vehicles.Additionally, the proposed control strategy has the potential to be applied to other types of electric vehicles, as well as other energy storage and renewable energy systems, further expanding its impact on the field of sustainable transportation.
The paper is organized as follows: In Section "System modelling", we detail the hybrid energy storage solution (HESS), outlining its integration of batteries, supercapacitors, and photovoltaic panels. In this section, we also present the mathematical models that describe the dynamics and behavior of the proposed drive system. Section "Controller modelling" covers the control structure for the proposed converters, including the machine learning-enhanced control strategy designed to optimize power flow between the various energy storage elements. In Section "Simulation results and performance evaluation", we share the simulation setup, including performance metrics and results from the validation of the proposed system. We discuss improvements in power efficiency, battery life, and overall LEV performance. Finally, in Section "Conclusion and future research directions", we offer a summary of the key findings and contributions of the study, along with implications for future research and development in sustainable transportation and energy management.
System modelling
With the objective of reducing the size of the power conversion interface for the electric vehicle drive, firstly, a Hybrid Power Supply (HPS) is developed that integrates battery power into the DC bus in two cascaded stages and PV power in one stage, as shown in Fig. 1 56,57 . The power converter associated with the PV source is a unidirectional converter which feeds PV power into the DC bus through a boost converter 58,59 . The objective of the boost converter control is maximum power absorption and transfer to the DC bus. The power converters associated with the battery and supercapacitor are bi-directional converters. Switch S 1 facilitates the buck mode of operation for transferring power from the DC bus to the battery, while switch S 3 facilitates the transfer of power from the battery to the DC bus. Similar operation is achieved for the supercapacitor with switches S 2 and S 4 , respectively. L Bat and L sc serve as filter inductors for the transfer of power. The battery feeds the supercapacitor bus in the first stage, which feeds the DC bus in the second stage. The proposed topology has two advantages. First, the size of the inductor between the battery and supercapacitor interface, L Bat , is reduced compared with the conventional topology for the same allowable current ripple. Second, the voltage stress on the power switches at the battery-supercapacitor interface is reduced compared to the conventional topology. Secondly, the number of power switches in the SRM power converter is also reduced to four by keeping one switch common to the commutation of each phase, as shown in Fig. 1. The operation of this converter is similar to that of an asymmetric bridge converter, with the duty cycle of the common switch being three times that of the other switches. Switch G 1 , which is connected to the high side of the HPS, commutates in common for all three phases. Switches G 2 , G 3 , and G 4 commutate, respectively, for each phase connected to the low side of the HPS. The 6/4-pole SRM is controlled through a direct torque control scheme with the reference generated through machine learning-based torque estimation, as seen in Fig. 1. Space vector modulation is utilized for the current control of the drive.
Hybrid power supply dynamics
The differential equations governing the switching of the PV converter are given in (1) and (2), where i PV and V PV are the instantaneous current and voltage of the PV source, d PV is the duty cycle of the converter, V Bus is the DC bus voltage, L PV is the filter inductor of the interface, and A is the material constant of the PV array. The maximum power condition is reached at the instant where, according to Eq. (3), dP PV /di PV = 0. Discretizing Eqs. (1) and (5), where t s is the sampling time (the reciprocal of the switching frequency), d PV (k + 1) is calculated from (6) with sampled values satisfying Eq. (7), which corresponds to maximum power point operation.
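To make the maximum-power condition concrete, the following sketch shows how a sampled incremental-conductance test of dP PV /di PV = 0 could drive the duty-cycle update of the PV boost stage. It is an illustrative Python sketch, not the paper's Eqs. (1)-(7): the gain k_step, the sign convention, and the function name are assumptions.

```python
def mpp_duty_update(v_pv, i_pv, v_prev, i_prev, d_prev, k_step=0.005):
    """One sampling-period update of the PV boost-converter duty cycle.

    A minimal incremental-conductance sketch of the MPP condition dP/di = 0
    (equivalently dP/dV = 0): at the maximum power point the incremental
    conductance dI/dV equals the negative of the instantaneous conductance I/V.
    k_step is a hypothetical tuning gain; the paper's Eqs. (6)-(7) use the
    discretized converter model instead of a fixed step.
    """
    dv = v_pv - v_prev
    di = i_pv - i_prev
    if abs(dv) < 1e-6:                      # avoid division by zero on a flat voltage
        error = di                          # heuristic: current change alone drives the step
    else:
        error = i_pv / v_pv + di / dv       # zero at the maximum power point
    # Sign convention assumes a boost stage in which a larger duty cycle
    # lowers the PV-side voltage, so a positive error (V below V_mpp) reduces d.
    d_new = d_prev - k_step * error
    return min(max(d_new, 0.0), 1.0)        # clamp to a physical duty cycle
```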
The differential equation governing the switching of the supercapacitor interface converter is given in (8), where i sc and V sc are the instantaneous current and voltage of the supercapacitor, d 1 is the duty cycle of the supercapacitor interface converter, V Bus is the DC bus voltage, and L sc is the filter inductor of the interface.
Discretizing the differential equation, and taking the current to be generated in the next sample as the reference value i sc *, the duty cycle for the next sample is estimated accordingly. The differential equation governing the switching of the battery-supercapacitor interface converter is given in (11), where i Bat and V Bat are the instantaneous current and voltage of the battery, d 2 is the duty cycle of the battery interface converter, V sc is the supercapacitor bus voltage, and L Bat is the filter inductor of the interface.
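The discretized duty-cycle estimation described above can be illustrated with a dead-beat style update of a boost-type stage. The sketch below assumes the averaged inductor dynamics L·Δi/t_s = v_in − (1 − d)·v_out; the variable names and the clamping are illustrative and not the paper's exact Eqs. (8)-(11).

```python
def deadbeat_duty(i_ref, i_now, v_in, v_out, L, t_s):
    """Model-referred duty-cycle estimate for one bi-directional boost-type stage.

    Sketch of the discretized inductor dynamics
        L * (i[k+1] - i[k]) / t_s = v_in - (1 - d) * v_out,
    solved for the duty cycle d that drives the inductor current to the reference
    i_ref in the next sample (a dead-beat style update). Variable names are
    illustrative; the paper's equations define the exact per-stage forms.
    """
    required_v = L * (i_ref - i_now) / t_s        # average inductor voltage needed
    d = 1.0 - (v_in - required_v) / v_out          # invert v_L = v_in - (1 - d) * v_out
    return min(max(d, 0.0), 1.0)                   # clamp to a physical duty cycle
```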
Dynamics of SRM:
The magnitude of the rotor flux space vector and its position are very important aspects in designing DTC. The rotational d-q coordinate system can easily be constructed with the help of the rotor magnetic flux space vector [60][61][62]. As in many existing methods, the flux model implemented in this paper uses the monitored rotor speed together with the stator voltages and currents. It is obtained in the basic stationary reference frame (α, β) associated with the stator. The rotor flux space vector is then resolved into its α and β components as follows 63,64 , where L s and L r are the stator and rotor self-inductances, L m is the motor magnetizing inductance, R r and R s denote the rotor and stator resistances, ω is the angular speed of the rotor, P p is the number of pole pairs in the SRM, T r is the rotor time constant, T s is the stator time constant, and σ is the leakage constant.
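A generic current-model rotor-flux estimator in the stationary (α, β) frame, of the kind this flux model alludes to, can be sketched as follows. The exact equations are not reproduced in the extracted text above, so this is only an assumed illustrative form using the monitored speed and stator currents.

```python
import math

def rotor_flux_step(psi_a, psi_b, i_sa, i_sb, omega, L_m, T_r, t_s):
    """One integration step of a current-model rotor-flux estimator (stationary frame).

    Uses the generic relations
        dpsi_a/dt = (L_m / T_r) * i_sa - psi_a / T_r - omega * psi_b
        dpsi_b/dt = (L_m / T_r) * i_sb - psi_b / T_r + omega * psi_a
    as a sketch of a flux observer driven by stator currents and rotor speed.
    This is not the paper's exact model; names and the simple Euler step are assumptions.
    """
    dpsi_a = (L_m / T_r) * i_sa - psi_a / T_r - omega * psi_b
    dpsi_b = (L_m / T_r) * i_sb - psi_b / T_r + omega * psi_a
    psi_a += t_s * dpsi_a
    psi_b += t_s * dpsi_b
    angle = math.atan2(psi_b, psi_a)       # flux position, used later for sector selection
    magnitude = math.hypot(psi_a, psi_b)   # flux magnitude for the flux hysteresis loop
    return psi_a, psi_b, magnitude, angle
```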
Controller modelling
The control strategy of the proposed system is sophisticated and involves several interconnected layers, each serving specific purposes to ensure the efficient operation of the PV-assisted EV drive 65,66 . The first layer, which is akin to a pattern recognition machine learning algorithm, is responsible for setting the instantaneous torque based on the detected driving pattern, estimating the PV power output, and tracking the maximum available power from the PV system 67,68 . This layer relies on historical data and real-time inputs to make accurate predictions and optimize torque and power output. The second layer operates using mathematical models of the system and the motor itself. It employs these models to estimate the speed of the motor without relying on traditional speed sensors, thereby reducing cost and complexity. Additionally, it controls the hybrid power supply, adjusting the flow of power from the PV, battery, and supercapacitor to meet the instantaneous power demand of the drive 69,70 . The final layer is focused on coordinating the power flow throughout the entire interface. It ensures that power is distributed optimally among the different sources to maintain a stable DC bus voltage, regulate the system's response to load changes, and ensure efficient utilization of all available energy sources. This coordination is vital for the overall performance and reliability of the PV-assisted EV drive, as it ensures that the drive system operates efficiently and reliably under various operating conditions.
Machine learning for torque and PV power estimation, MPP tracking
The machine learning algorithm in the proposed system is fed with three main types of input data: the difference between the actual motor speed and the reference speed for torque reference generation, the irradiance level for PV power estimation, and the error in the conductance for maximum power point (MPP) determination [71][72][73] . The algorithm employs a multi-layered approach, consisting of two inner layers, to establish a relationship between the input data and the desired output values. In the first inner layer, pattern recognition techniques are used to identify the appropriate torque reference, PV power level, or MPP reference. This process is illustrated in Fig. 2, which outlines the implementation of pattern recognition for each of these outputs. The structure of the machine learning model is carefully designed, and the weights associated with each connection between nodes are updated in each iteration based on a predetermined criterion. This iterative process allows the algorithm to learn and improve its performance over time, ultimately leading to more accurate torque references, PV power estimations, and MPP determinations.
The pattern recognition-based machine learning algorithm utilized in this study incorporates a deep understanding of motor dynamics and solar irradiance variation to predict and optimize the electric vehicle's performance 74,75 . Specifically, the algorithm determines optimal torque settings based on input parameters like the error function of motor speed, reference speed, and irradiance for PV power estimation. In the initial layer, the algorithm estimates the required torque through a unique multi-layered machine learning model, which relies on deep neural networks. The model processes the input parameters to predict the output torque, taking into account the highly nonlinear characteristics of the electric vehicle's drive system. The training process employs an extensive dataset consisting of 14,000 samples. This dataset encompasses a wide range of driving scenarios, including various combinations of vehicle speeds, load profiles, and ambient conditions. The machine learning model undergoes iterative adjustments to its internal weights, improving its accuracy and predictive capability with each training cycle. The training process involves both forward and backward propagation techniques, refining the network's internal structure to enhance its performance 76,77 . This iterative learning process continues
until the algorithm achieves a satisfactory level of accuracy in predicting the desired torque. The performance of the machine learning algorithm is evaluated through rigorous testing, ensuring its accuracy, precision, and robustness across diverse driving conditions. The algorithm's superior predictive capabilities are showcased through its ability to accurately determine torque references, enabling optimal power management and efficient energy utilization in light electric vehicles. These advancements in machine learning-based control algorithms not only enhance the efficiency and performance of electric vehicle drives but also pave the way for future innovations in autonomous driving and intelligent transportation systems. The algorithm for the multi-layered ML pattern recognition model implementation is shown in Fig. 3.
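As an illustration of the multi-layered, sigmoid-activated pattern-recognition model described above (two inner layers mapping a scalar input to a scalar reference), a minimal trainable network could look like the sketch below. The layer widths, learning rate, and plain gradient-descent update are assumptions; the paper's actual network and training configuration are summarized in Table 2 and Fig. 3.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoLayerEstimator:
    """Minimal two-hidden-layer network of the kind described above.

    Maps a scalar input (speed error, irradiance, or conductance error) to a
    scalar output (torque reference, PV power, or MPP voltage). All sizes and
    the training rule are illustrative assumptions, not the paper's exact setup.
    """
    def __init__(self, hidden=8, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(1, hidden))
        self.w2 = rng.normal(scale=0.5, size=(hidden, hidden))
        self.w3 = rng.normal(scale=0.5, size=(hidden, 1))
        self.lr = lr

    def forward(self, x):                 # x: (N, 1) column of inputs
        self.h1 = sigmoid(x @ self.w1)
        self.h2 = sigmoid(self.h1 @ self.w2)
        return self.h2 @ self.w3

    def train_step(self, x, target):
        y = self.forward(x)
        err = y - target                                    # mean-squared-error gradient
        g3 = self.h2.T @ err
        d2 = (err @ self.w3.T) * self.h2 * (1 - self.h2)    # backprop through sigmoid
        g2 = self.h1.T @ d2
        d1 = (d2 @ self.w2.T) * self.h1 * (1 - self.h1)
        g1 = x.T @ d1
        for w, g in ((self.w1, g1), (self.w2, g2), (self.w3, g3)):
            w -= self.lr * g                                # plain gradient descent
        return float(np.mean(err ** 2))
```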
Model based SRM Speed estimation
Speed estimation is a critical aspect of motor control in electric vehicle (EV) systems. It is traditionally achieved through the use of speed sensors, which can be costly and introduce complexity to the system [78][79][80] . To address these challenges, we propose an innovative approach that leverages mathematical models and a model reference adaptive controller (MRAC) to estimate speed without the need for physical speed sensors. This approach is illustrated in Fig. 4, which shows a block diagram of the speed estimation process. In this system, the output of the switched reluctance motor (SRM) converter depends on both the voltage at the DC bus (V Bus ) and the pulses generated by the pulse width modulation (PWM) generator. These converter voltages can be accurately estimated using mathematical expressions based on the motor and converter models. This eliminates the need for physical voltage sensors, significantly reducing the cost and complexity of the system. The core of the speed estimation process lies in the mathematical model of the SRM converter, which accurately describes the relationship between V Bus , the PWM pulses, and the motor speed. This model is utilized in the MRAC to adaptively estimate the motor speed based on the observed behavior of the converter. Overall, this approach offers a cost-effective and reliable alternative to traditional speed sensing methods, making it an attractive option for EV applications.
The following equations can estimate the speed:
Model based current control of HPS
The proposed control structure for the Hybrid Power Supply (HPS) system in Light Electric Vehicles (LEVs) is a novel approach that combines principles of Proportional-Integral (PI) control for current reference generation and a Model Reference Adaptive Controller (MRAC) for duty cycle generation. The main objectives of this control algorithm are to regulate the DC bus voltage to its permissible value and to facilitate instantaneous power sharing between the battery and supercapacitor for varying load conditions. The control scheme, as depicted in Fig. 5, consists of two primary components: current reference generation and duty cycle generation. The first part focuses on generating the appropriate current references for the battery and supercapacitor based on the desired DC bus voltage. It involves the use of a PI controller that adjusts the current references to maintain the DC bus voltage within acceptable limits. The second part of the control scheme involves the generation of the duty cycles for the converters that interface with the battery and supercapacitor. These duty cycles are calculated based on the power sharing requirements and the load variations. The MRAC plays a crucial role in ensuring that the duty cycles are adjusted in real time to meet the dynamic power demands of the system. Overall, the proposed control structure offers a robust and efficient solution for regulating the HPS system in LEVs. It provides precise control over the DC bus voltage and enables seamless power sharing between the battery and supercapacitor.
The PV interface converter, situated within the Hybrid Power Supply (HPS) system of Light Electric Vehicles (LEVs), performs a critical role in managing power distribution efficiently. It operates independently from the battery and supercapacitor converters, ensuring that the Direct Current (DC) bus receives the maximum available power from the solar panels at all times. This autonomous operation ensures the optimal utilization of solar energy in the system. Meanwhile, the battery and supercapacitor converters complement the power supply by providing additional power when the PV system alone cannot meet the demand. The battery and supercapacitor converters are designed to distribute the remaining power needed to meet the load demand equitably. This ensures a balanced and consistent power supply to the vehicle. To facilitate seamless power distribution among the PV, battery, and supercapacitor converters, a sophisticated control scheme has been developed. This control strategy is based on a model-referred duty estimation-based Proportional-Integral (PI) current regulation approach. This approach continually assesses the current states and references of the converters to generate optimized switching pulses. These pulses regulate power flow, maintain the DC bus voltage, and enable effective power sharing among the converters. As a result, the model-referred duty estimation-based PI current regulation scheme ensures efficient and balanced power distribution within the HPS system of LEVs. This innovative approach significantly contributes to the advancement of sustainable and eco-friendly electric transportation by improving vehicle performance, reliability, and energy efficiency.
The error in the DC bus voltage serves as the input to the PI regulator, which determines the magnitude and direction of the current supplied by the hybrid combination of battery and supercapacitor. Then, the weighted average current estimator separates out the battery current reference, and the weighted transient current estimator separates out the reference current to be absorbed or delivered by the supercapacitor at that instant. The weights of the average and ripple current estimators are the factors (1-d 2nom ) and (1-d 1nom ), respectively, applied to the hybrid reference current i h *, where d 2nom and d 1nom are the nominal duty cycles of the stage 1 and stage 2 interface converters, respectively. During average and ripple extraction from the reference current, the averaged reference current is limited by the C-rate of the battery, and the extracted ripple is supplied by the supercapacitor instantaneously. Further, the model equations described in (6), subject to the condition in (7), generate the instantaneous duty cycle for the PV interface converter. The duty cycle thus generated is compared with a constant-frequency triangular carrier waveform to generate the switching pulses for S PV . Also, Eq. (16) serves as the reference for generating the duty cycle of the stage 2 converter, while Eq. (10) serves as the reference for duty cycle generation for stage 1.
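The average/ripple split of the PI-generated hybrid current reference can be sketched as a low-pass decomposition with duty-cycle weighting and a C-rate clamp. All numeric values and the interpretation of the (1 − d nom ) weighting below are assumptions made for illustration; the paper's exact estimators are defined by its equations and Fig. 5.

```python
def split_hybrid_current(i_h_ref, i_avg_prev, t_s, tau=0.05,
                         d1_nom=0.5, d2_nom=0.5, i_bat_max=30.0):
    """Split the PI-generated hybrid current reference between battery and supercapacitor.

    A first-order low-pass filter (time constant tau, an assumed value) extracts the
    slowly varying component for the battery; the residual ripple goes to the
    supercapacitor. The (1 - d_nom) factors stand in for the nominal duty-cycle
    weighting described above, and i_bat_max stands in for the battery C-rate limit.
    """
    alpha = t_s / (tau + t_s)
    i_avg = i_avg_prev + alpha * (i_h_ref - i_avg_prev)     # low-pass: average demand
    i_bat_ref = max(-i_bat_max, min(i_bat_max, i_avg * (1.0 - d2_nom)))
    i_sc_ref = (i_h_ref - i_avg) * (1.0 - d1_nom)           # ripple handled by the supercapacitor
    return i_bat_ref, i_sc_ref, i_avg
```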
Coordinated control of drive
Coordinated control for optimal current regulation in the Switched Reluctance Motor (SRM) for speed and torque commands plays a crucial role in ensuring the smooth and efficient operation of the SRM drive [81][82][83] . The control scheme is depicted in Fig. 6, where the SRM model estimates torque based on the phase voltages and currents. The instantaneous torque reference obtained from the supervised model is then compared with the estimated torque, resulting in a torque hysteresis signal. Similarly, a flux hysteresis signal is developed from the SRM model, as illustrated in Fig. 6. These two hysteresis components serve as inputs for determining the instantaneous voltage vector, as presented in Table 1. In Fig. 7, the corresponding voltage vectors are generated from the integration of the estimated speed to identify the sector. However, due to the specific topology of the converter with four switches, one of which is common to all three phases, the generated vectors are identified differently, as shown in Fig. 7. Accordingly, the corresponding switches of the leg are turned ON to control the current flow into the SRM.
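A minimal hysteresis-and-lookup skeleton of the direct torque control step described above is sketched below. The switching-table contents are placeholders, since Table 1 is not reproduced in this text; the band widths and names are illustrative.

```python
def dtc_select_vector(torque_err, flux_err, sector,
                      torque_band=0.5, flux_band=0.01, table=None):
    """Hysteresis-based voltage-vector selection in the spirit of Table 1 and Figs. 6-7.

    Comparators quantize the torque and flux errors; the (flux state, torque state,
    sector) triple indexes a switching table. The table here is a placeholder -
    the paper's actual vector assignments are given in its Table 1.
    """
    if torque_err > torque_band:
        t_state = 1                       # torque must be increased
    elif torque_err < -torque_band:
        t_state = 0                       # torque must be decreased
    else:
        return "V0"                       # inside the band: apply a zero vector
    f_state = 1 if flux_err > flux_band else 0
    if table is None:
        table = {}                        # placeholder: fill from Table 1 of the paper
    return table.get((f_state, t_state, sector), "V0")
```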
Simulation results and performance evaluation
A detailed simulation of the proposed drive was conducted using MATLAB/SIMULINK, wherein the load was modeled to reflect real-world electric vehicle drive cycles, encompassing scenarios like acceleration, maintaining a constant velocity, and vehicle deceleration. The parameters employed in the system simulation are outlined in Table 2, covering various aspects such as the power sources, the motor itself, the power switches, the filter elements, and specifications pertinent to machine learning. Throughout the simulation, the proposed drive underwent rigorous testing to assess its performance across a spectrum of critical metrics. Initially, the accuracy and real-time viability of the machine learning algorithm were scrutinized for its capacity to generate torque references, estimate PV power, and identify the maximum power point (MPP) voltage. Subsequently, the regulation of the hybrid power supply (HPS) and the distribution of power among the diverse sources were evaluated. Furthermore, the drive was put through a battery of tests to evaluate its response in different operational scenarios. This encompassed examining steady-state torque ripple, the transient response of torque and speed, and the response when reversing the speed command. Through this exhaustive testing, the performance characteristics and the efficacy of the proposed drive were gauged, ensuring a thorough understanding of its capabilities and limitations in varied operating conditions.
Performance of supervised learning
The training and validation processes of the machine learning algorithm were meticulously monitored and evaluated (Table 3) 84,85 . Moreover, Fig. 9 illustrates the gradient of the error, showcasing how it stabilizes after eight iterations. The validation checks are also graphically represented in Fig. 9. Figure 10 presents an error histogram for a sample set of twenty data points, exhibiting the frequency distribution of errors encountered during the validation process. Encouragingly, for 95 percent of these data points, the mean square error was observed to be within a negligible range of 0.1 percent. To further validate the efficacy of the algorithm, Fig. 11 offers an in-depth analysis of the mean squared error, with specific emphasis placed on the zeroing of the mean squared error from the eighth iteration onwards. This meticulous analysis of the training and validation processes serves to affirm the reliability and robustness of the developed machine learning algorithm in accurately estimating torque reference, speed, and PV power.
Performance of HPS
In Fig. 12, the voltage profiles of the DC bus, supercapacitor bus, and battery bank are depicted, showing their precise regulation to nominal values, as demonstrated in Fig. 6. A comprehensive examination of this regulation process and its associated voltage stress is provided in the subsequent subsection. Notably, the ensuing discussion reveals an admirably stringent regulation standard, wherein a deviation of under 5 percent is observed across the entire span of load variations within the nominal range.
In the simulated scenario, the voltage regulation is meticulously maintained to nominal values, ensuring precise control over the distribution of power among the sources.As displayed in Fig. 13, the power shares reflect the current allotments among the different components.Furthermore, the figure visually represents how the PV-generated power, which is dependent on irradiance, is channeled to the DC bus.Meanwhile, the battery and supercapacitor share the remaining power requirements, with the supercapacitor rapidly accommodating any sudden load variations.This flexible arrangement ensures the efficient and seamless adaptation of the system to changing conditions, optimizing the performance of the light electric vehicle under different driving scenarios.
Performance of SRM control
The performance of the drive in response to a 50 N-m torque increase was examined through simulation. As shown in Fig. 14, the drive torque response exhibits precise tracking of the new torque demand, with a transient time of just 0.01 s and zero steady-state error. This rapid adjustment is accompanied by a minor dip of 15 rpm in speed, as depicted in Fig. 15, which is resolved within 0.4 s of the changeover. These results underscore the drive's ability to efficiently adapt to abrupt variations in torque demand, ensuring a smooth and uninterrupted driving experience. The implementation of a multi-layered machine learning algorithm, including pattern recognition for instantaneous torque setting and PV power estimation, contributes significantly to the drive's agility and accuracy in responding to dynamic torque demands.
Simulating the drive for an 80 N-m step change in speed demand provides further insight into its robust performance. As illustrated in Fig. 16, the drive speed precisely tracks the new speed demand, exhibiting zero steady-state error and a transient time of merely 0.08 s. Concurrently, a surge of 10 N-m in drive torque is observed in Fig. 17 during the transition, settling within 0.01 s. The torque response under these conditions highlights the drive's effective management of sudden changes in speed demand, showcasing its adaptability and reliability in varying driving scenarios. The implementation of the multi-layered machine learning algorithm contributes significantly to this precise and agile response, underscoring its role in ensuring smooth and consistent drive performance.
Simulating a scenario of a sudden speed reversal from +80 rpm to -80 rpm provides crucial insights into the drive's resilience and performance under extreme conditions. In this experiment, we examined how the drive responds to such abrupt changes in speed demand, ensuring the safety and stability of the vehicle in unpredictable situations. As depicted in Fig. 18, the drive's speed tracking capabilities are commendable, showcasing an error-free transition and a remarkably swift transient time of just 0.08 s. This rapid response underscores the drive's agility and adaptability, vital attributes for navigating dynamic and ever-changing environments. However, the transition also exposes a brief dip in drive torque, as illustrated in Fig. 19. This temporary dip occurs due to the absence of a load during the transient speed reversal, but it is rapidly corrected within a mere 0.01 s. This quick recovery reflects the drive's robustness and its ability to maintain consistent performance even during the most challenging conditions. By simulating scenarios such as these, we can better understand the drive's capabilities and potential areas for improvement. Furthermore, it allows us to refine control strategies and drive algorithms, ultimately enhancing the overall performance, safety, and efficiency of electric vehicles.
Comparison to existing power supplies
To provide a comprehensive evaluation of the proposed hybrid power supply (HPS) system and its accompanying control system, we conducted a rigorous comparison with existing power supplies commonly used in PV-assisted electric vehicle (EV) drives. This comparison aimed to assess the robustness and accuracy of the proposed HPS and control mechanism across various performance metrics, including DC bus regulation, stress on the supercapacitor for transient requirements, and optimal sizing of power supply components.
Drive component sizing comparison
The merit of the proposed HPS topology in terms of the steady-state ripple in the battery interface inductor and the series switch voltage stress is evaluated in this section. The mathematical expression for the inductor current ripple is obtained as follows: for a bi-directional converter with the inductor on the battery side in the conventional topology, the governing differential equation is used, and for the cascaded converter topology, the differential equation in (14) applies. Substituting the considered nominal values of V Bus , V Bat , and V sc , the following expression for the battery inductor size is obtained for the conventional topology,
And for cascaded converter topology it is
The percentage change in battery inductor size for the considered nominal values of the voltages is obtained accordingly. Now, the voltage sizing of the diodes and switches in the SRM converter was obtained from the blocking voltage level during the turn-OFF interval of the respective switch or diode. In these intervals, the blocking voltage across the switch combination was obtained as V sw = V S /2. The diode, when turned OFF, should block the maximum value of the source voltage. Therefore, the voltage rating of any diode was V D = V S /2. The RMS current rating of the power switches is determined from the power to be delivered by the converter. The RMS current rating of G x or D x , where x = 2, 3, 4, is obtained accordingly, and that of G 1 and D 1 is 3I rms,X . The battery interface converter, a critical component in electric vehicles (EVs) using photovoltaic (PV) power, was subjected to rigorous analysis in this study. A comparison was made between the conventional topology and the proposed cascaded converter topology, focusing on the reduction of component sizes while maintaining or improving performance. The battery interface inductor, an essential element, was computed using Eqs. (19) and (20) for both topologies. It was found that the proposed cascaded converter topology led to a substantial reduction in the inductor's size. Additionally, the voltage stress on the series switch S2 was evaluated under OFF conditions. The results showed a significant decrease in voltage stress, from 1 pu in the conventional topology to only 0.16 pu in the cascaded converter topology. This reduction in voltage stress, along with the downsizing of the battery interface components, is a testament to the effectiveness of the proposed topology. Furthermore, the sizing of the switches in the switched reluctance motor (SRM) converter was optimized, resulting in fewer switches and improved efficiency without compromising performance. The results of this comparative analysis underscore the potential of the proposed topology to enhance the performance and efficiency of battery interface converters in EVs using PV power.
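The inductor-sizing argument can be illustrated numerically with the standard boost-stage ripple relation. The voltages, switching frequency, and ripple target in the sketch below are assumed values chosen only to show why a low-voltage cascaded stage needs a much smaller inductor; they are not the paper's nominal ratings or its Eqs. (19)-(20).

```python
def boost_inductor_size(v_in, v_out, delta_i, t_s):
    """Inductor needed to keep the current ripple of a boost-type stage below delta_i.

    Uses the averaged relation delta_i = v_in * d * t_s / L with d = 1 - v_in / v_out.
    """
    d = 1.0 - v_in / v_out
    return v_in * d * t_s / delta_i

# Illustrative numbers only (not the paper's nominal ratings):
t_s, delta_i = 1.0 / 20e3, 2.0          # 20 kHz switching, 2 A allowed ripple
L_conventional = boost_inductor_size(48.0, 400.0, delta_i, t_s)   # battery -> DC bus directly
L_cascaded     = boost_inductor_size(48.0, 72.0,  delta_i, t_s)   # battery -> supercapacitor bus
print(L_conventional, L_cascaded, 1.0 - L_cascaded / L_conventional)
```

With these assumed numbers the cascaded stage already needs roughly 60 percent less inductance; with the paper's own nominal voltages the reported reduction is 93.75 percent.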
Comparison to existing drive output characteristics
The performance of the proposed hybrid power supply (HPS) with the proposed control scheme was compared to existing power supplies typically used in photovoltaic (PV)-assisted electric vehicle (EV) drives. Additionally, the proposed control strategy was also applied to a conventional power supply to assess its robustness and effectiveness. Table 6 provides a detailed comparison of the proposed control scheme with the HPS in terms of DC bus regulation, supercapacitor stress for transient requirements, and power supply component sizing. The results demonstrate the robustness and accuracy of the proposed control strategy, particularly when used in conjunction with the HPS. The analysis indicates that the proposed control scheme can effectively regulate the DC bus voltage, manage transient requirements without placing excessive stress on the supercapacitor, and optimize power supply component sizing. These findings underscore the potential of the proposed control scheme and HPS in enhancing the performance and efficiency of PV-assisted EV drives.
Conclusion and future research directions
In conclusion, this paper has presented a comprehensive study on the development and performance evaluation of a novel PV-assisted EV drive system with a focus on efficient and sustainable power management. We introduced a unique topology and mathematical model for the proposed drive, which integrates hybrid energy storage solutions and advanced control strategies, including machine learning. Our simulation results demonstrate the effectiveness and real-time feasibility of the machine learning algorithm for torque reference generation, PV power estimation, and MPP voltage identification, with a mean squared error within 0.1 percent for 95 percent of samples after the eighth iteration. Additionally, we showcased the robustness and accuracy of our control scheme through various performance indices such as DC bus regulation, power sharing among various sources, and transient response, with stringent regulation of less than 5 percent observed for all possible variations in the nominal range of the load. Our study also introduced a new approach to current control in a hybrid power system that addresses load changes effectively and efficiently. This approach, based on model reference adaptive control, offers improved performance over traditional methods. Additionally, our proposed control scheme for the SRM drive provides precise torque control, reduced torque ripple, and fast transient response. Our simulation results confirm that our proposed control strategy successfully handles changes in torque demand and speed commands, ensuring accurate and rapid responses, with a torque ripple of 0.04 pu and a speed settling time of 0.5 s for a step change in reference speed. We compared the performance of our proposed HPS with existing power supplies for PV-assisted EV drives, showcasing superior DC bus regulation and reduced supercapacitor voltage stress, with a DC bus regulation as low as 2.7 percent and a supercapacitor voltage stress as low as 1.6 percent. Moreover, we presented a detailed analysis of the sizing of drive components, including the battery interface inductor and series switch, demonstrating significant reductions in size and voltage stress with our proposed topology, with a 93.75 percent reduction in battery interface inductor size and a 0.16 pu series switch voltage stress.
Overall, this study makes several significant contributions to the field of PV-assisted EV drives. We introduce a novel topology and mathematical model, propose efficient control strategies, and provide detailed simulations and analyses of the performance of the proposed system. Our work demonstrates the feasibility and benefits of integrating PV, battery, and supercapacitor energy storage systems in an EV drive, paving the way for more sustainable and efficient electric mobility solutions. Furthermore, our findings contribute to the development of advanced control and power management strategies for renewable energy-based transportation systems, promoting the adoption of PV-assisted EV drives and supporting the transition towards a greener and more sustainable future.
Future research directions for PV-assisted EV drives encompass several key areas. One such area is the advancement of control strategies. Deep reinforcement learning and artificial intelligence have shown promise in enabling real-time optimization of PV-assisted EV drives. Research in this domain can lead to more sophisticated and adaptive control algorithms that optimize energy efficiency and overall performance. Another area of interest is the exploration of advanced multi-level converter designs. These converters have the potential to improve power density and reduce component stress, thereby enhancing the overall efficiency and reliability of PV-assisted EV drives. Innovative battery management techniques also offer promising avenues for future research. Energy storage integration is critical for the effective operation of PV-assisted EV drives, and developing novel battery management systems can improve the overall energy efficiency and lifespan of these systems. Continuous system optimization and performance evaluation are also important areas for future research. By rigorously evaluating the performance of PV-assisted EV drives under various operating conditions, researchers can identify areas for improvement and fine-tune the design and control strategies to enhance the system's reliability and efficiency. Furthermore, researchers can extend the scope of their work to include other renewable energy sources for hybrid energy systems. This can involve integrating technologies such as wind power or geothermal energy to create more robust and resilient energy systems for EVs. Rigorous real-world testing and validation are crucial for ensuring the reliability and safety of PV-assisted EV drives. Researchers should collaborate with industry partners and government agencies to conduct extensive testing and validation under various operating conditions to ensure that these systems meet the highest standards of safety and performance. Finally, accelerating the commercialization and adoption of PV-assisted EV drives is essential for realizing their full potential. This can be achieved through industry-government partnerships and incentives that encourage the widespread adoption of these systems. By focusing on these key areas, researchers can help advance the state of the art in PV-assisted EV drives and contribute to a more sustainable and resilient future in the realm of electric mobility.
Figure 1. Schematic of HPS-fed SRM drive for light electric vehicle.
Discretizing the differential equation, and taking the current to be generated in the next sample as the reference value i Bat *, the duty cycle for the next sample is estimated accordingly. SRM converter dynamics: the switches G 1 and G 2 are turned ON as shown in Fig. 1, which results in a + V Bus voltage level at the Phase A output terminals. When the switches G 1 and G 2 are turned OFF, their complementary action forces diodes D 1 and D 2 to turn ON, which results in a − V Bus voltage level at the Phase A output terminals, and the energy in the phase A winding is freewheeled into the source. Similar dynamics apply to the other phases: during energizing and de-energizing of phase B, and during energizing and de-energizing of phase C.
Figure 5. Model referred duty estimated PI current control for HPS.
Figure 7. Vector based instantaneous switch combinations for SRM current control.
Figure 9. Gradient of mean squared error and validation checks.
Figure 10. Error histogram for twenty test samples.
Figure 11. Performance of supervised learning pattern.
Figure 12. Voltage of DC Bus, Supercapacitor and Battery bank.
Figure 13. Power delivered by sources and load power demand.
Table 1. Current Control Space Vector Dynamic Switching.
Table 2. Simulation Parameters.
Sample training data, with corresponding speed and PV power estimations, are given in Table 3. The subsequent analyses of training and validation performance are captured through various figures. For instance, Fig. 8 offers insight into the tracking of target values across iteration cycles, with each iteration cycle manifesting a distinct fitness level denoting the learning capability of the artificial neural network (ANN) for the training pattern (inputs: e(N), or I, or ΔG; outputs: T*, or P PV , or V PV *; error bound: 0.01; activation function: sigmoid).
Table 3. Sample Training Data for ANN.
Table 4 serves as a visual representation of the comparative analysis, highlighting the key attributes and performance characteristics of the proposed HPS and control system. It illustrates how the proposed system fares against conventional power supplies in terms of addressing transient load demands, maintaining the stability of the DC bus voltage, and ensuring the overall reliability and efficiency of the power delivery. Through this comprehensive comparison, we aim to demonstrate the superiority of the proposed HPS and control mechanism in terms of robustness, accuracy, and performance, setting a new standard for PV-assisted EV drives.
Torque response for a step change in reference speed.
Torque response for reversal of speed.
Table 4. Comparison of Power Supplies for PV assisted EV Drive.
Table 6. Comparison of Power Supplies for PV-assisted EV Drive. | 9,928.8 | 2024-03-07T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Non-Immersion Ultrasonic Cleaning: An Efficient Green Process for Large Surfaces with Low Water Consumption
: Ultrasonic cleaning is a developed and widespread technology used in the cleaning industry. The key to its success over other cleaning methods lies in its capacity to penetrate seemingly inaccessible, hard-to-reach corners, cleaning them successfully. However, its major drawback is the need to immerse the product into a tank, making it impossible to work with large or anchored elements. With the aim of revealing the scope of the technology, this paper will attempt to describe a more innovative approach to cleaning large area surfaces (walls, floors, façades, etc.) which involves applying ultrasonic cavitation onto a thin film of water, which is then deposited onto a dirty surface. Ultrasonic cleaning is an example of the proliferation of green technology, requiring 15 times less water and 115 times less power than conventional high-pressurized waterjet cleaning mechanisms. This paper will account for the physical phenomena that govern this new cleaning mechanism and the competition it poses towards more conventional pressurized waterjet technology. Being easy to use as a measure of success, specular surface cleaning has been selected to measure the degree of cleanliness (reflectance) as a function of the process’s parameters. A design of experiments has been developed in line with the main process parameters: amplitude, gap, and sweeping speed. Regression models have also been used to interpret the results for different degrees of soiling. The work concludes with the finding that the proposed new cleaning technology and process can reach up to 98% total cleanliness, without the use of any chemical product and with very low water and power consumption.
Introduction
Ultrasonic cleaning is based on removing dirt particles through the acoustic cavitation of a liquid. During the process, a great number of gaseous microbubbles implode, releasing strong shockwaves, high-speed microjets, and soaring high temperatures [1]. The physical principles of the cavitation bubbles have been widely studied by many authors, from their nucleation [2] to their collapse [3] and clustering [4].
Acoustic cavitation is the foundation of one of the most sophisticated cleaning methods on the market [5], "ultrasonic cleaning." Gallego and Graff [6] reviewed this technology in detail and, in agreement with other authors such as Fuchs [7], concluded that, depending on the size of the bubbles and the implosion intensity, ultrasound can remove dirt from a surface but can also modify the solid surface, or even the liquid itself. In this sense, there are many other applications which benefit from acoustic cavitation, such as surface modification [8], catalysis [9], adsorbent regeneration [10], plant extraction [11], immobilization [12], and nano-emulsification [13].
Regarding the implementation of this technology for cleaning purposes, the most widespread method is based on adhering one or more ultrasonic transducers (piezoelectric or magnetostrictive) to the base of a metal tank. Some authors, such as Tangsopa and Thongri [14], have tested the distribution of the transducers in the tank in order to reach optimal working conditions. These elements are electrically connected to a signal generator which converts the mains frequency (50-60 Hz) into the one required for the process (>20 kHz). When the vessel is filled with the solvent liquid (usually water and a surfactant) and the transducer is activated, a high-frequency acoustic field is generated, leading to cavitation throughout the volume of the tank. By introducing a solid object into the vessel, its surface will work as a nucleation area, and, therefore, cavitation will focus on it. The same approach can also improve the efficiency of other processes. For instance, Charee, Tangwarodomnukun, and Dumkum [15] used ultrasonic cavitation to remove the debris formed during the laser micromachining of silicon. From an environmental point of view, ultrasonic cleaning is a "green" alternative to conventional cleaning methods that rely on polluting solvents and surfactants [16,17]. Other mechanical approaches, such as brushing, can achieve better cleaning results but can also damage the surface due to friction between solid components [18]. Scratching becomes especially critical when the particles to be removed are made from hard materials [19]. Apart from ultrasonic cleaning, high-pressurized waterjets provide the only mechanical and noncontact solution. This technology is widely used in the cleaning industry and can handle nonregular surfaces, but, as will be shown below, it consumes a great amount of power and water. The key to this technology is the nozzle geometry itself [20,21]. In places where water scarcity is a major problem, the use of pressurized water is not only an economical issue, but also an environmental one [18].
Ultrasonic cleaning is a very interesting alternative for this reason; however, the need to fully immerse the workpiece makes the cleaning of large or anchored surfaces, such as floors, walls, ceilings, tables, cars, planes, etc., impractical. Certain developments and improvements can be used to try to overcome these constraints. "Ultrasonically Activated Stream", or "UAS", techniques, for instance, are based on applying ultrasonic vibration through a waterjet mechanism. This is the case for the so-called StarStream UAS ® , developed by Offin and Leighton [22], which uses a waterjet that flows through a 23 kHz transducer. Studies performed by Howlin et al. [23] show that the UAS device is able to effectively remove various types of biofilms from dental plaque (Streptococcus mutans, Actinomyces naeslundii, and Streptococcus oralis). In the semiconductor industry, megasonic cleaning is very widespread. Due to the needs of on-line manufacturing, researchers from Honda, such as Chen et al. [24], have developed a method of transferring megasonic cavitation through waterjets. In cases where the component to be cleaned exceeds the dimensions of the waterjet, the same company has developed curtain and waterfall configurations to circumvent this issue.
By analyzing how the same principle at ultrasonic frequency is applied to nonsubmersible surfaces, this paper suggests a radically different approach by introducing ultrasonic vibration into a thin liquid layer, which is then deposited onto a dirty surface. The main problem with this procedure, however, is that liquids tend to atomize when high ultrasonic energy is applied [25]. Each liquid presents a different atomization threshold [26] that must be adjusted to avoid loss of mass. However, vibration can dramatically increase wettability between solid and liquid phases [27] owing to the vibration-induced force on the surface [28]. This phenomenon helps to form a stable capillary bridge between the vibrating surface and the dirty one. The capillary bridge assumes the same function as the conventional tank but obviates the need for immersing the dirty surface in a confined volume.
With the aim of shedding light on this novel approach, a thorough understanding of the effect of the main process parameters on non-immersion ultrasonic cleaning is needed. For the sake of producing a justifiable evaluation of cleaning results, specular surfaces (mirrors) have been selected. This is because many authors, such as Fernandez-Garcia et al. [29], demonstrate that relative reflectance measurement is directly proportional to levels of cleanliness.
Working Principle
A key concept in understanding the mechanism which generates cavitation in a thin film of liquid is the atomization threshold. According to the work of Lozano et al. [26], when ultrasound is applied to a small volume of liquid, the fluid will atomize depending on the amplitude and frequency of the sound, and, moreover, the physical properties of the liquid itself. Thus, for each combination of liquid and frequency, a cavitation amplitude threshold can be established, as shown in Figure 1. In order to clean a surface, the vibration amplitude should, ideally, be close to the atomization limit, but never exceed it. By doing this, two critical conditions are satisfied: first, it is possible to generate as much cavitation as possible without atomization, which proves decisive for the resulting level of cleanliness. Second, the wettability between the surface, the liquid, and the vibrating element can be increased. This dramatically enhances the liquid layer volume between solid elements.
As shown in Figure 2, the cleaning procedure is based on sweeping the resonating element (sonotrode) over the dirty surface, always ensuring the presence of the liquid film between the two solid elements. In this case, the cleaning device has been assembled on a trolley that sweeps the tip of the sonotrode while the floor is being soaked by means of a 5 mm diameter hose. Depending on the physical properties of both the surface and the dirt, some liquid will remain behind the sonotrode. To guarantee a constant film, two strategies can be adopted:
- Wetting the dirty surface prior to sweeping.
- Renewing the liquid interface continuously by means of a hose.
The picture on the right shows the same device cleaning a dirty floor by sweeping the sonotrode. The difference between the dirty and clean areas is clearly visible to the naked eye.
Equipment
The main commercial devices used during the experimentation were as follows:
Experimental Set-Up
Commercial and specific components were assembled for experimental characterization. Regarding the ultrasonic cleaning system, the main difference lies in the sonotrode or ultrasonic horn, which was designed and manufactured for the present purpose of cleaning. The equipment consisted of the following:
- commercial Bandelin ultrasonic generator: it transformed the electric signal from 50 Hz to 20 kHz and increased the output voltage. It autotuned the frequency, and the output power could be manually adjusted from 10 to 100%.
- commercial Bandelin piezoelectric transducer: it turned the high-frequency output of the generator into a vibration by means of piezoelectric rings. The assembly resonated at about 20 kHz so that the vibration was maximized.
- sonotrode: this was the working tool that came into contact with the liquid layer. It amplified the output of the booster and resonated at about 20 kHz. Being specifically designed for this purpose and manufactured using aluminium, it was mechanically attached (screwed) to the transducer. The main difference to other standard sonotrodes is its rounded tip, which increases the attaching surface and eases the sweeping operation, as shown in Figure 3.
With the aim of controlling the sweeping speed and gap between the sonotrode and the dirty surface, the whole device was assembled on the 3-axis Kondia Maxim CNC milling machine, as shown in Figure 4. A specific holder was manufactured to attach the transducer and the sonotrode to the machine header. The cleaning experiments were performed on a DIN A4 mirror, whose reflectance was measured before and after the cleaning by means of a 15R Devices and Services Reflectometer, which was placed on the mirror surface and measured the reflectance through the emission and reception of a laser spot. The mirror was soiled artificially with a mixture of real desert dust and tap water, and then left to dry. In this way, the dust was cemented and could not be removed by blowing or the force of gravity.
Design of Experiment
The main factors governing the non-immersion ultrasonic cleaning process were the acoustic power, the gap between the sonotrode and the surface, and the sweeping speed. The acoustic power P [W] of the sonotrode tip was modulated through the power limitation of the generator. Its upper limit is 60 W, since, above this value, the cleaning liquid (water) is atomized. Working at said power, the tip of the sonotrode vibrated at 15 µm. The output of the experiment was the cleanliness factor, which was indirectly measured through the relative reflectance. The relative reflectance R_rel, or cleanliness, is the ratio between the absolute reflectance R measured by the reflectometer and the maximum value R_max that the clean mirror can achieve:

R_rel = R / R_max.    (1)

The determination of the maximum values is explained later on. A perfectly clean mirror would reach a value of 1 in terms of relative reflectance.
Due to the inhomogeneity of the dirt, 5 measurement points were considered for each cleaning experiment. The sample's size was a standard DIN A4 (210 × 297 mm). The maximum power consumption of the device is 60 W. As for water consumption, a 1 mm layer on the whole mirror sample, equivalent to 0.06237 L, would do in the worst-case scenario.
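As a small worked example of the measure just described, the following sketch averages the relative reflectance over the five measurement points of a sample. It is not the authors' script, and the numeric readings are illustrative only.

```python
import statistics

def cleanliness_factor(readings, r_max):
    """Average relative reflectance (cleanliness factor) of a mirror sample.

    readings : reflectometer values taken at the 5 measurement points
    r_max    : reference reflectance of the perfectly clean mirror
    """
    return statistics.mean(r / r_max for r in readings)

if __name__ == "__main__":
    # Illustrative numbers only (not taken from the paper's tables).
    print(round(cleanliness_factor([0.83, 0.85, 0.84, 0.86, 0.82], 0.88), 3))
```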
The procedure of characterizing and soiling the mirrors started with measuring the absolute maximum reflectance of each mirror under perfect cleaning conditions (R max ). All the mirrors were cleaned with deionized water, soap and a microfibre cloth. After cleaning and rinsing the mirrors, ethanol and a hot air dryer were used to remove the remaining water residue. Once the mirrors were perfectly clean, the absolute reflectance of each one was measured on 5 different points, and the average value was then calculated.
Once the reference values for perfect cleanliness had been established, the mirrors were artificially soiled. For this purpose, sieved real desert dust from Almeria (Spain) was used. The maximum particle size after sieving was 150 µm. In order to increase the adhesion between the dust and the mirror, the dust was moistened with tap water at the same volume proportion (Figure 5). The mixture was applied to the surface of the mirrors and dried with hot air. This led to a cementation process which significantly increased adhesion. This kind of soil cannot be removed by simply blowing or pouring water on it. However, to distinguish between the different degrees of soiling, the present procedure will be denominated "slight" or "soft" soiling. Table 1 shows the results of all the measurements before and after the slight soiling procedure. With the aim of understanding not only the isolated effect of each variable, but also the interaction between them, the experiments were based on a 2^k + 1 factorial design. This design was based on a linear regression model which included the cross and triple interactions of certain factors. It included a central point that determined whether quadratic factors were relevant or not. Considering the three variables (factors) mentioned, it required at least 9 experiments to saturate the equations (Figure 6). The response function was the relative reflectance (cleaning factor) measured after each experiment. Table 2 summarizes the factors to be included in the experimental design and their limitations. As will be shown later on, the 9-point experiment was focused on understanding the general behavior of the process under slight and intense soiling conditions. Once the influence of each parameter was known, an optimized experimental design was performed. As a way of evaluating the cleaning efficiency for harder soil, the whole experiment was repeated for intensively soiled mirror samples. The sample preparation procedure was the same as for the slightly soiled ones, but, this time, the mirrors were dried and soiled twice. In this way, the number of particles on the surface was visibly higher, reaching cleanliness values between 0.13 and 0.06. Table 3 shows the reflectance values for each mirror sample. The results of the non-immersion ultrasonic cleaning were compared with pressurized waterjets. This technology is based on pressurizing a water flow and releasing it into the atmosphere at a very high speed. The strong turbulence at the outlet mixes the water with the air and expands the jet into a conical shape. If the surface to be cleaned is too far from the nozzle, particle removal will be lower, making the process dependent on the working distance instead. The experiment was based on testing the cleaning at different distances between the mirror and the nozzle (Figure 7). The mirrors were cleaned for 5 s (equivalent to 3.6 m min−1), consuming 1.25 L per probe.
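The coded design can be generated mechanically. The sketch below (an assumption, not the authors' run plan) enumerates the 2^3 corner points at levels ±1 plus the single center point, which is the nine-run design discussed above.

```python
from itertools import product

# Minimal sketch: coded design matrix for a 2^k + 1 factorial design with
# k = 3 factors (power P, gap H, speed V) at levels -1/+1, plus one center
# point (0, 0, 0) used to check for curvature -- nine runs in total.
def factorial_design_2k_plus_center(k: int = 3):
    runs = [list(levels) for levels in product((-1, +1), repeat=k)]
    runs.append([0] * k)  # center point
    return runs

if __name__ == "__main__":
    for run in factorial_design_2k_plus_center():
        print(run)  # 8 corner runs followed by the center point
```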
Results and Discussion
The experiments were each repeated three times, in every soiling scenario, in order to accurately assess and evaluate the validity of the results. Since reflectance is measured on a single point, it was measured in three different locations after each cleaning experiment. The results from the experimentation are summarized in Table 4 (slightly soiled) and Table 5 (intensively soiled). All the factors are represented in their codified value (1, −1 or 0). The output for each experiment is the absolute average in relation to relative reflectance (cleanliness factor, CF). The table also includes the standard deviation (SD) for each set of experiments. The data above make obvious the great sensitivity and care involved in the process. With the proper selection of power, gap, and speed, the cleaning factor reaches up to 0.94, while an improper combination turns into a cleaning factor below 0.2, in the worst-case scenario. Figure 8 shows the cleaning results for experiments 4 and 5, where the difference can be visually appreciated due to the intensive soiling. Both experiments represent the best and worst cleaning conditions. The intense cleaning conditions (experiment 5) allow for a cleanliness factor of up to 0.968, where few particles remain on the surface. The reason for not reaching the maximum values could be related to the limitations of the ultrasonic cleaning mechanism itself and/or to the rinsing process.
Compared to the intensive cleaning process, poor cleaning shows a nonhomogeneous finishing. In spite of the undesirable results, poor cleaning conditions provide an important clue about the cleaning mechanics. Insufficient cleaning occurs in a branched shape (Figure 8), the so-called "streamed cavitation", similarly described by Lauterborn. The areas affected by the streamers show similar cleaning results to the intensely cleaned area, but the streamers do not cover the whole surface and the cleaning is, therefore, insufficient for its intended purpose. Another very interesting point about Figure 8 is the fact that the distribution of the branches presents a well-defined periodic distribution, which should be studied in more detail. This branched cleaning does not only affect visual appearance, but, also, the dispersion of results. As mentioned before, the measurement of the reflectance is taken on several points. Consequently, a nonhomogeneous cleaning approach leads to a greater dispersion of cleaning results. This is especially significant in the case of the intensively soiled mirrors, where the difference between clean and dirty areas is easily visible to the naked eye. It could explain why the standard deviation reaches its highest values in the worst cleaning conditions. This first result, together with the visual appearance of the mirrors, concludes that the main mechanism for the removal of particles is cavitation itself. If there are no bubbles collapsing on the surface of the mirror, the particles do not detach from the surface. In this sense, collateral cleaning mechanisms, such as dissolution or turbulence of the aqueous medium, are discarded. The ideal cleaning occurs when the aforesaid cavitation streamers cover the whole surface of the mirror seamlessly.
From Mason's [5] theoretical point of view, for a given sound frequency, cavitation intensity is related to acoustic power. This fact is clearly reflected in the cleaning factors in Tables 4 and 5. However, the results also show that lower gaps lead to significantly better results than large ones. In terms of sound propagation theories, this does not match with the theoretical approach. The acoustic impedance of water is relatively low, and, as such, cleaning efficiency should be almost the same with a gap of 5 mm or 2 mm. It may be that cavitation bubbles can nucleate more easily on solid surfaces. In ultrasonic cleaning baths, it is very common to see how the cavitation streamers are formed close to the ultrasonic transducer. In this case, two solid surfaces come into play: the mirror and the sonotrode tip. If a gap of 5 mm is enough to reduce cleaning effects, most parts of the cavitation must be nucleating directly onto the surface of the sonotrode tip, as shown in Figure 9. Only the largest streamers reach the surface of the mirror, resulting in a branched and inhomogeneous cleaning. If the gap is reduced to 2 mm, more streamers will reach the mirror surface, resulting in a more homogeneous cleaning. As can be deduced from Figures 8 and 9, cleaning must ensure that the streamers reach the mirror surface and that the scanning speed is, at the same time, slow enough to allow the streamers to cover the whole dirty area. In case the speed does not satisfy the productivity criteria, it is possible to increase the power, thus affecting the size and intensity of the streamers. However, the power is restricted by an atomization limitation for each liquid. A linear regression model has been used for each scenario so that the results are linearized as a 4-dimensional equation (Figure 10). The value of the determination factor R2 was 75.74% for the first set of slightly soiled mirrors, and 83.67% for the intensively soiled ones, with each one taking into account the contribution of each codified factor, summarized in Table 6. Also taking into account that, in both cases, R2 is higher than 75%, it can be ascertained that the global behavior of the system is linear enough (quadratic factors are not relevant). Table 6 shows that the effect of each factor varies according to the intensity of the soiling. It can be observed that, for soft soiling, the most significant factors are the gap (H), the speed (V), and the power (P), with a very similar effect. However, the cleaning behavior in the case of intensively soiled samples depends, firstly, on the power (P), and, secondly, on the gap (H). In fact, the crossed term, VHP, carries more weight than the speed (V) itself. This phenomenon could be linked to the cavitation streamers already mentioned. If they are not powerful enough [P] or they are too separated from the mirror [H], they will not detach the soil particles and the cleaning will always show poor results, regardless of the speed factor [V].
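The regression analysis summarized above (main effects plus cross and triple interactions of the coded factors, with R² as the fit measure) can be sketched as a plain least-squares fit. This is an assumed reconstruction, not the authors' script, and the cleaning-factor values below are placeholders rather than the measurements of Tables 4 and 5.

```python
import numpy as np

# Sketch: OLS fit of the cleaning factor CF on the coded factors V (speed),
# H (gap), P (power) and their interactions:
#   CF ~ 1 + V + H + P + VH + VP + HP + VHP
def fit_cleaning_model(design, cf):
    D = np.asarray(design, dtype=float)
    y = np.asarray(cf, dtype=float)
    v, h, p = D[:, 0], D[:, 1], D[:, 2]
    X = np.column_stack([np.ones_like(v), v, h, p, v * h, v * p, h * p, v * h * p])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coef
    r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return coef, r2

if __name__ == "__main__":
    runs = [(-1, -1, -1), (1, -1, -1), (-1, 1, -1), (1, 1, -1),
            (-1, -1, 1), (1, -1, 1), (-1, 1, 1), (1, 1, 1), (0, 0, 0)]
    cf = [0.30, 0.25, 0.45, 0.40, 0.70, 0.60, 0.94, 0.90, 0.55]  # placeholders
    coef, r2 = fit_cleaning_model(runs, cf)
    print(np.round(coef, 3), round(r2, 3))
```

With replicated runs, as in the paper, the model is no longer saturated and R² becomes an informative measure of fit.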
With linear regression models, it can be concluded that cleanliness tends to maximize its value with: (1) Higher power (P) (2) Lower gap (H) (3) Lower speed (V) The power, P [W], can be increased by up to 60% of its nominal value; otherwise, atomization takes place and the process is no longer stable. The gap can be minimized but it should stay big enough to avoid physical contact (scratching). The minimum gap is defined by the geometrical accuracy of the whole equipment. In this case, values below 1 mm are too risky, as the sonotrode might scratch the mirror. Sweeping speed determines the exposure time of the mirror to the ultrasonic vibration, on one hand, and the productivity of the process, on the other hand. From Table 6, it can be concluded that the three main process parameters have an optimum operating point which does not depend on the soiling intensity, but, instead, on external constraints. The power must be at maximum output without exceeding the atomization limit. The gap must be at a minimum, without compromising the contact between the surfaces. The sweeping speed must also be at a minimum, without compromising productivity.
Following the previous optimization criteria, the power and the gap have been set to 60% and 1 mm, respectively. The speed has been varied from 1000 to 4000 mm min−1. The procedure for soiling and reflectance measurement is the same as in the previous experiments, considering a slightly contaminated mirror and another intensely contaminated one. Table 7 shows the cleanliness factors for each case. As can be seen, the results are quite similar to each other, and generally quite high. Indeed, a cleanliness factor of up to 0.98 can be reached, which is very close to the perfect cleaning result (CF = 1).
It can also be concluded that, once the process's conditions reach certain thresholds in terms of power and gap, the amount of soil particles does not affect the final cleaning results; but, obviously, the sweeping speed does. This could be explained by the fact that the strong turbulence generated by cavitation can easily remove large particles; however, since the flow speed of the hydrodynamic boundary layer is almost zero, submicron sized particles are much harder to remove.
Regarding water and power savings compared to conventional waterjet cleaning, the results are summarized in Tables 8 and 9. Table 8 shows few cleaning differences in pressurized waterjet cleaning when distance is modified. This could be accounted for by the fact that the minimum speed required to remove particles is already obtained at 1.5 m. However, for larger surfaces, a 0.5 m distance would not be realistic enough, as the effective cleaning diameter would be below 50 mm. Table 9 concludes that, from an environmental point of view, pressurized waterjet technology cannot compete with non-immersion ultrasonic cleaning, needing 15 times more water and 115 times more electric power than the latter. At first glance, it might seem that the low cleaning speed of the ultrasonic method penalizes the power and water consumption but, as can be observed, both variables are much lower than for pressurized waterjets. Any improvement regarding the sweeping speed would only increase the efficacy of the process. The reason for these highly positive results lies in the particle removing mechanism itself. As demonstrated, ultrasounds use cavitation, while pressurized waterjets are based on hydrodynamic flow. It happens to be that, for small sized particles, the boundary layer between the solid and the liquid elements drastically reduces the flow of speed. As shown in Figure 11, if the particles are smaller than the boundary thickness, they become extremely difficult to remove using the pressurized waterjet method, as the relative speed in this area is almost zero. This is the reason why ultrasonic cavitation is much more effective. However, the present non-immersion ultrasonic cleaning approach is limited to flat surfaces, while pressurized waterjets can handle any kind of surface geometry. Gaps between 2 and 5 mm can easily be controlled for precise surfaces; but for general purposes, such as floor cleaning, adaptative mechanisms, such as pneumatic actuators or springs, should be considered. The main advantages and disadvantages of the ultrasonic method are summarized in Table 10.
Conclusions and Outlook
The work presented describes the research performed in the field of non-immersion ultrasonic cleaning for specular surfaces. The process is based on sweeping an ultrasound-emitting sonotrode over a liquid layer without exceeding atomization levels. The effects of gap, power, and sweeping speed have been studied, and linear regression models have been used to support the following conclusions:
•
Regardless of soiling intensity, the conditions to obtain the maximum relative reflectance of 0.98 are to work at the maximum power that does not cause atomization (36 W), with a minimum gap of 1 mm (no contact) and a minimum sweeping speed of 1000 mm min−1.
•
The effect of each process parameter is different depending on the soiling conditions. • For soft soiling conditions, power, gap, and sweeping speed affect the process on balance. • For hard soiling conditions, power and gap are the most significant factors.
•
Poor cleaning conditions lead to inhomogeneous cleaning because of the formation of cavitation streamers.
•
Once the power and gap are tuned to optimum settings, the relationship between the cleaning factor and the sweeping speed is almost linear. The degree of soiling is not relevant under the aforesaid conditions. • Compared to pressurized waterjets, the present technology consumes 115 times less power and 15 times less water, even in the most conservative scenarios. • A 1 mm gap combined with a 1000 mm min−1 sweeping speed satisfies the main technical and productivity constraints of the technology.
This paper has focused on the cleaning of reflective surfaces soiled with cemented desert dust. Obviously, an immediate recommendation would be for the cleaning of optical elements located in desert areas, such as solar concentration plants, photovoltaic panels, or even telescope lenses. However, this technology can also be successfully applied and used in many sectors, e.g., urban floor and wall cleaning, indoor cleaning, vehicle maintenance, industrial machinery, etc. Further research should focus on the implementation of the existing technology and/or further material combinations, such as metal and oil or ceramics and organic matter, per se. On the other hand, the main drawbacks regarding sweeping speed and gap should be addressed in a separate paper. New sonotrode designs, cleaning fluids, and adaptative guidance systems could help increase the productivity and accessibility of the technology under evaluation. | 6,487.6 | 2021-03-26T00:00:00.000 | [
"Materials Science"
] |
The Conditional-Potts Clustering Model
This article presents a Bayesian kernel-based clustering method. The associated model arises as an embedding of the Potts density for class membership probabilities into an extended Bayesian model for joint data and class membership probabilities. The method may be seen as a principled extension of the super-paramagnetic clustering. The model depends on two parameters: the temperature and the kernel bandwidth. The clustering is obtained from the posterior marginal adjacency membership probabilities and does not depend on any particular value of the parameters. We elicit an informative prior based on random graph theory and kernel density estimation. A stochastic population Monte Carlo algorithm, based on parallel runs of the Wang–Landau algorithm, is developed to estimate the posterior adjacency membership probabilities and the parameter posterior. The convergence of the algorithm is also established. The method is applied to the whole human proteome to uncover human genes that share common evolutionary history. Our experiments and application show that good clustering results are obtained at many different values of the temperature and bandwidth parameters. Hence, instead of focusing on finding adequate values of the parameters, we advocate making clustering inference based on the study of the distribution of the posterior adjacency membership probabilities. This article has online supplementary material.
INTRODUCTION
Clustering with the Potts model is a rather recent nonparametric technique introduced by Blatt, Domany, and Wiseman (1996, 1997) under the name of super-paramagnetic clustering. There is a large literature on the subject in the physics community (Agrawal and Domany 2003; Ott et al. 2004; Reichardt and Bornholdt 2004). Its impact has reached the medical (Stanberry, Murua, and Cordes 2008), bioinformatics (Getz et al. 2000; Einav et al. 2005), and the computer science and machine learning communities as well (Domany et al. 1999; Quiroga, Nadasdy, and Ben-Shaul 2004). It has also been mentioned in the statistical literature, but as Potts model clustering (Murua, Stanberry, and Stuetzle 2008), where its link with other kernel-based methods and nonparametric density estimation was presented. A similar, simpler model has also been used as a probabilistic framework for K-nearest-neighbor classification (Cucala et al. 2009). One of the main advantages of superparamagnetic clustering over other kernel-based methods is the simultaneous estimation of the clustering and the number of clusters. That is, there is no need to specify the number of clusters a priori. The method uncovers it. From the statistical point of view, one of the advantages of the method is its connection to the well-known Potts model density, also known as the Boltzmann density. Having a probabilistic framework aids in making inferences about the clusters and their number. In fact, the clustering is estimated by Markov chain Monte Carlo (MCMC) simulation of the labels' distribution.
Let X = {x_i ∈ R^p, i = 1, . . . , n} be our data. In the super-paramagnetic clustering framework, the observations form the vertices of a graph. Let us denote this data graph by G(X). We say that two observations x_i, x_j are neighbors if their corresponding vertices are joined by an edge of the graph G(X). In this case, we will write i ∼ j. The edge weights are given by the similarities between the neighboring points. We assume that the similarities are given by a Mercer kernel k_ij = k(x_i, x_j), that is, a continuous symmetric nonnegative definite kernel (Girolami 2002). The similarities usually depend on the distances between the observations ||x_i − x_j||, and a scale parameter σ, the bandwidth, that controls the relative spread of the distances, so that

k_ij = k_ij(σ) = k(||x_i − x_j|| / σ).    (1)

In practice, the data graph is made rather sparse by setting k_ij = 0 for observations that are too far apart. One way to accomplish this is by using the K-nearest-neighbor graph, that is, a graph where the only edges associated with each x_i are those associated with the K nearest-neighbor points of x_i. In this case, the work of Blatt, Domany, and Wiseman (1997) recommends using K = 10 for high-dimensional data, whereas the work of Stanberry, Murua, and Cordes (2008) recommends using a moderate value of K in the range 5 < K ≤ 30. In many applications, such as in bioinformatics (e.g., microarray data), the similarities correspond to the correlation between two signals (e.g., signals coming from two different probe sets or genes that measure the expression levels under several experimental conditions or over certain elapsed time). In this case, ||x_i − x_j||² = 2(1 − corr(x_i, x_j)). When the Euclidean distances are used, one may want to control for different variations in spread of the different covariates making up each data point x_i. This may be achieved by considering a multi-bandwidth model (one σ_j for each covariate j ∈ {1, . . . , p}), or by standardizing the covariates. In practice, it is hard to estimate just one bandwidth σ, let alone p of them. We prefer to standardize the covariates. Most of the literature treats the kernel bandwidth as a fixed parameter. It is usually set to the mean of the data point similarities (Blatt, Domany, and Wiseman 1996, 1997). However, Murua, Stanberry, and Stuetzle (2008) showed that treating it as variable may give better clustering results. Although they suggest the use of a data-driven adaptive bandwidth, we believe that its incorporation as a parameter of the model is more appropriate.
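A short sketch of the graph construction just described follows. It is not the authors' code; the Gaussian kernel is an assumption made only for illustration (the model merely requires a Mercer kernel of the scaled distance), and the covariates are standardized as advocated above.

```python
import numpy as np

def knn_similarity_graph(X, K=10, sigma=1.0):
    """Sparse K-nearest-neighbor graph with kernel edge weights.

    X     : (n, p) data matrix
    K     : number of neighbors per point
    sigma : kernel bandwidth
    Returns a dict {(i, j): k_ij} over the edges i ~ j (with i < j).
    A Gaussian kernel k(d/sigma) = exp(-d^2 / (2 sigma^2)) is assumed here.
    """
    X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize covariates
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    edges = {}
    n = X.shape[0]
    for i in range(n):
        for j in np.argsort(d[i])[1:K + 1]:           # skip the point itself
            a, b = min(i, int(j)), max(i, int(j))
            edges[(a, b)] = float(np.exp(-d[a, b] ** 2 / (2.0 * sigma ** 2)))
    return edges

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
    graph = knn_similarity_graph(X, K=5, sigma=1.0)
    print(len(graph), "edges")
```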
We assume that given the data, the class labels follow the Potts model density. This assigns labels ℓ ∈ {1, . . . , q} to each observation x_i, i = 1, . . . , n, so that observations similar to each other are likely to be assigned the same label. Statistically, let z_iℓ = 1 if x_i has been assigned to the ℓth label class, and zero otherwise. The Potts model density is given by

p({z_iℓ} | σ, T, X) = (Z(σ, T, X))^{-1} exp{ −(1/T) Σ_{i∼j} k_ij(σ) (1 − δ_ij) },    (2)

where δ_ij := Σ_{ℓ=1}^q z_iℓ z_jℓ = 1 if and only if x_i and x_j are assigned the same label; T, the temperature, is one of the main parameters of interest in this work, and

Z(σ, T, X) = Σ_{{z_iℓ}} exp{ −(1/T) Σ_{i∼j} k_ij(σ) (1 − δ_ij) }    (3)

is the normalizing constant of the Potts density. Since any label configuration {z_iℓ} defines a partition of the data, we will refer to them as partitions. They can be efficiently sampled by MCMC algorithms (Swendsen and Wang 1987). The labels do not necessarily indicate cluster membership. However, two observations lying in the same cluster must share the same label. The labels are rather auxiliary variables that help to merge similar observations together. The information on the data clustering is obtained from the probabilities Q_ij(σ, T) that two given data points lie in the same cluster (under the Potts model),

Q_ij(σ, T) = p(δ_ij = 1 | σ, T, X).

We refer to the Q_ij(σ, T)'s as the membership adjacency probabilities (they are also known as the spin-spin correlations in the physics literature). In practice, the Q_ij(σ, T)'s are estimated from several partition samples generated sequentially using MCMC.
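For concreteness, the quantity inside the exponent of (2) can be evaluated directly for any label configuration. The following minimal sketch (an illustration, not the authors' code) computes H/T with H = Σ_{i∼j} k_ij (1 − δ_ij), so that the Potts density is proportional to exp(−H/T).

```python
import numpy as np

def potts_energy(labels, edges, T):
    """Unnormalized negative log-density of a label configuration.

    labels : array of length n with the label of each observation
    edges  : dict {(i, j): k_ij} of edge weights on the data graph
    T      : temperature
    """
    # Only edges whose endpoints carry different labels (delta_ij = 0) contribute.
    H = sum(k for (i, j), k in edges.items() if labels[i] != labels[j])
    return H / T

if __name__ == "__main__":
    edges = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.1, (3, 4): 0.7}
    labels = np.array([1, 1, 1, 2, 2])          # two putative clusters
    print(potts_energy(labels, edges, T=0.5))   # only the (2, 3) edge contributes
```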
The Conditional-Potts Model. Note that the dependency on the data in the interactions or edge weights k_ij(σ) is present only through the mutual distances (or similarities) between the data points (for convenience in the notation, we will sometimes drop the σ from the k_ij's). Therefore, in what follows, it will be useful to think of the mutual distances (or similarities) between the data points as the observed data. We set (i.e., define) our likelihood for the parameters (σ, T) to

p(X | σ, T) = Z(σ, T, X) / Z(σ, T).    (4)

Given a prior distribution for the parameters, π(σ, T), we consider the Bayesian embedding of the super-paramagnetic clustering whose density is proportional to

π(σ, T) (Z(σ, T))^{-1} exp{ −(1/T) Σ_{i∼j} k_ij(σ) (1 − δ_ij) }.    (5)

As a consequence, in this framework, the Potts model density corresponds to the marginal conditional p({z_iℓ} | σ, T, X) of the labels given the data and the parameters. In practice, we are interested in estimating the posterior distribution of the parameters (σ, T). It is straightforward to see that this posterior is given by

π(σ, T | X) ∝ π(σ, T) Z(σ, T, X) / Z(σ, T),    (6)

where Z(σ, T) is the overall normalizing constant of the density given by (5) (i.e., the integral over the data of Z(σ, T, X)), and, as before, Z(σ, T, X) is the Potts model density normalizing constant. Our procedure to unveil the clustering of the data is based on estimates of this posterior. Note that all we need to estimate the data clustering are the posterior adjacency probabilities Q_ij(σ, T). Equation (6) allows us to get estimates of the marginal posteriors Q_ij = p(δ_ij = 1 | X) by sampling from (6) and then from (2). We will refer to the model given by (5) or (6) as the conditional-Potts clustering model, as opposed to the Potts model or super-paramagnetic clustering procedure given by (2). A clustering of the data is obtained by consensus: having obtained several partitions of the data, the probabilities of cluster membership adjacency of two data points are readily estimated from the proportion of times these two data points are found in the same component (Sokal 1996). A new graph, the consensus graph, is constructed using these probabilities. An edge between two data points exists in the consensus graph if and only if the frequency of cluster membership adjacency between these points is larger than a predefined threshold, say Q. A consensus clustering is formed by the connected components of this graph. Note that changing the threshold Q may result in different clusterings of the data. The super-paramagnetic clustering procedure fixes it at Q = 0.5. Choosing the threshold has not received much attention in the literature. However, we show in this work that, together with the temperature, this is a key parameter of the procedure. Our experiments of Section 5 show that dramatic clustering differences may occur depending on its value. Therefore, it is advantageous to conceive a more principled way to obtain a suitable clustering of the data. That is in part what we propose in this article. We advocate shifting the focus of the problem from trying to estimate optimal values of the parameters (σ, T) to trying to obtain good estimates of the marginal posterior adjacency membership probabilities Q_ij. Note that, in contrast to the Q_ij(σ, T) used in the super-paramagnetic clustering, these quantities do not depend on the parameters (σ, T). Consequently, we can use these probabilities to get posterior consensus clusterings that are independent of the values of (σ, T). Their distribution implies an inherent distribution on the number of clusters and hence on the clusterings of the data.
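The consensus step itself is a simple graph operation: threshold the estimated adjacency probabilities and take connected components. The sketch below (a minimal illustration with a union-find structure, not the authors' implementation) makes this concrete.

```python
def consensus_clusters(n, Q, threshold=0.5):
    """Connected components of the consensus graph.

    n         : number of data points
    Q         : dict {(i, j): Q_ij} of estimated adjacency membership probabilities
    threshold : an edge is kept iff Q_ij > threshold (super-paramagnetic default: 0.5)
    Returns a list of cluster ids, one per data point.
    """
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for (i, j), q in Q.items():
        if q > threshold:
            parent[find(i)] = find(j)

    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]

if __name__ == "__main__":
    Q = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.2, (3, 4): 0.95}
    print(consensus_clusters(5, Q, threshold=0.5))  # e.g. [0, 0, 0, 1, 1]
```

Changing the threshold changes the number of connected components, which is exactly why the distribution over thresholds studied below matters.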
This latter distribution is given by the frequency of each particular consensus clustering, say π (c), where c ∈ {1, 2, . . . , n} denotes the number of clusters, as a function of a random membership probability threshold drawn from the distribution of the Q ij 's. We suggest to draw inference on the data clustering from π (c). We show that, in general, several different clusterings are admissible and that some of them have more weight than others. We try to formalize this point by defining for each clustering a measure of cluster evidence (see Section 3).
The advantages of our conditional-Potts clustering model led Linard et al. (2012) to apply our methodology to their large evolutionary biology history data. Evolutionary systems biology studies the evolution of biological networks by integrating evolutionary information extracted from multiple biological levels, from the DNA level (genome context) to protein and phylum levels (protein sequence and structure, species sharing common genes, etc.) (Medina 2005;Loewe 2009). Aiming at easing the analysis of complex evolutionary histories with a human-genome perspective, Linard et al. (2012) developed a formalism to put together diverse sources of data from 17 vertebrates species whose genomes are comparable with the human genome. In this work, we apply our clustering methodology to identify genes that share similar evolutionary histories. That is, to find a meaningful clustering of the barcodes from an evolutionary biology systems history point of view (see Section 6).
Another important contribution of this work is the introduction of a less computationally intensive procedure than the super-paramagnetic clustering and the conditional-Potts clustering for carrying out the clustering with the Potts model. This procedure relies heavily on a data-driven estimate of a very informative prior, which is derived from random graph theory and the connection between kernel-based methods and kernel density estimation (Murua, Stanberry, and Stuetzle 2008). We refer to this latter procedure as the informed conditional-Potts clustering. We show in Section 5 that its performance is similar to that of the main stochastic Population Monte Carlo method introduced in this work for Potts model clustering (see Section 2.3).
The article is organized as follows. The stochastic procedure to estimate the posterior densities of the parameters is described in Section 2. The estimation of the consensus clustering is described in Section 3. The informed conditional-Potts clustering method is introduced in Section 4 together with the elicitation of the informative priors for the temperature and kernel bandwidth parameters. The results of the application of our model to certain artificial and real datasets are shown in Section 5. Section 6 presents the application of our methodology to the clustering of evolutionary history of the human proteome barcode data. Section 7 presents some conclusions and a discussion. The proofs of the theorems that led to the elicitation of an informative prior for the temperature, and that established the convergence of our algorithm are shown in the online supplementary materials.
SAMPLING FROM THE POSTERIOR DISTRIBUTION
The normalizing constants Z(σ, T ) and Z(σ, T , X) of the conditional-Potts clustering model are intractable when the data size is moderately large. Therefore, sampling directly from the posterior π (σ, T |X) is not possible. In addition, even MCMC methods such as the Metropolis-Hastings algorithm to generate samples (σ , T ) from the posterior do not work, since knowledge of the ratios Z(σ, T )/Z(σ , T ), Z(σ, T , X)/Z(σ , T , X) are necessary. The literature suggests several possibilities to overcome this problem. The most popular technique seems to be based on path sampling (Ogata 1989;Richardson and Green 1997;Gelman and Meng 1998). However, this involves the estimation of the normalizing constant for all (σ, T ), or at least for a reasonable grid of values of the parameters; it does not take into account the uncertainty in the estimators arising from the Monte Carlo integration. Instead, we follow a different adaptive MCMC procedure inspired by the work of Atchade, Lartillot, and Robert (2013). Section 2.3 describes our adaptation of their algorithm to the conditional-Potts clustering model. It relies on the Wang-Landau algorithm (Wang and Landau 2001) to produce a "flat histogram" on the parameter space. Like the Atchade et al.'s algorithm (Atchade, Lartillot, and Robert 2013), our algorithm generates a stochastic process for which the distribution of (σ m , T m ) approaches the desired posterior distribution as m → +∞. The process is not necessarily Markovian, although it involves Metropolis-Hastingslike sampling. The main idea is to replace the normalizing constants Z(σ, T ), Z(σ, T , X) by a series of stochastic approximations of them. These are derived through population Monte Carlo techniques using a small fixed grid of values of the parameter space. A variant of the Wang-Landau algorithm is used to sample on this parameter-space grid.
THE WANG-LANDAU ALGORITHM
In the original Wang-Landau algorithm, the goal is to sample from the target distribution, say π(u), for u in one of the discrete "energy" states g ∈ {1, 2, . . . , d}. But π(u) is only known up to a normalizing constant. Wang and Landau (2001) suggested sampling instead from π_c(u) ∝ Σ_{g=1}^d [π(u)/c(g)] 1_g(u), where c(·) is a function of the energies, and the indicator function 1_g(u) = 1 if and only if u ∈ g, and is zero otherwise. Once an energy state g is visited, c(g) is modified so as to make another visit to g more unlikely. In fact, at iteration m + 1 of the sampler, c_{m+1}(g) = c_m(g)(1 + γ_m 1_g(U_{m+1})), where U_{m+1} denotes the current state of U. The sequence γ_m is a slowly decreasing random sequence that controls the amount of penalty given to currently visited energies. The goal is to make the visits to all energy states uniform as m → ∞. If this is the case, the normalized weight c_m(g)/Σ_{g′=1}^d c_m(g′) may be used as an estimate of the probability of the energy state g. The algorithm seems to be very efficient (Dayal et al. 2004; Ghulghazaryan, Hayryan, and Hu 2006; Zhou et al. 2006). Atchade, Lartillot, and Robert (2013) extended it to more sophisticated target densities without assuming a discrete energy space. We have adapted this latter procedure to our problem.
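A toy run of the Wang-Landau weight update on a small discrete state space helps fix ideas. This sketch is an assumption for illustration only (uniform proposal, γ_m = γ_0/m step sizes); it is not the algorithm of Section 2.3.

```python
import math, random

def wang_landau(pi, d, n_iter=20000, gamma0=1.0, seed=0):
    """Toy Wang-Landau run on states {0, ..., d-1}.

    pi : unnormalized target probabilities (list of length d)
    Returns the exp-normalized log-weights, which estimate the state
    probabilities under pi once the histogram of visits flattens.
    """
    rng = random.Random(seed)
    log_c = [0.0] * d
    u = rng.randrange(d)
    for m in range(1, n_iter + 1):
        v = rng.randrange(d)                      # uniform proposal
        # Metropolis ratio for the penalized target pi(u) / c(u).
        log_acc = (math.log(pi[v]) - log_c[v]) - (math.log(pi[u]) - log_c[u])
        if math.log(rng.random()) < log_acc:
            u = v
        gamma_m = gamma0 / m                      # slowly decreasing step size
        log_c[u] += math.log1p(gamma_m)           # c(u) <- c(u) * (1 + gamma_m)
    w = [math.exp(lc - max(log_c)) for lc in log_c]
    s = sum(w)
    return [x / s for x in w]

if __name__ == "__main__":
    print([round(p, 2) for p in wang_landau([1.0, 2.0, 7.0], d=3)])
```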
DECOMPOSITION OF THE NORMALIZING CONSTANT
As mentioned above, the data X enter into the model through the distances D_ij = ||x_i − x_j|| used to evaluate the edge weights. Without loss of generality, we may assume that all distances lie in the interval [0, 1] (the scale parameter σ will absorb the differences in scaling). To simplify the calculation, we will assume that there is no interaction among the distances, that is, we will assume that the domain of integration of the distances is a rectangular region. This assumption yields

Z(σ, T) = Σ_{{z_iℓ}} exp{ −β(σ, T) Σ_{i∼j} (1 − δ_ij) },    (7)

where β(σ, T) = −log( ∫_0^1 e^{−k(D_ij/σ)/T} dD_ij ) (this quantity can be easily computed for all (σ, T) that need to be evaluated). Note that Z(σ, T) turns out to be the normalizing constant of the Potts density with constant interaction β(σ, T).
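The constant interaction β(σ, T) is a one-dimensional integral and can be tabulated by quadrature. The sketch below assumes a Gaussian-type kernel only for illustration; the integral structure is the one given above.

```python
import math

def beta(sigma, T, kernel=lambda u: math.exp(-u * u / 2.0), n_grid=1000):
    """beta(sigma, T) = -log( integral_0^1 exp(-k(D/sigma)/T) dD ).

    Midpoint rule on [0, 1]; `kernel` is k(.) applied to the scaled distance
    D/sigma (a Gaussian-type kernel is assumed here for illustration).
    """
    h = 1.0 / n_grid
    integral = sum(
        math.exp(-kernel(((i + 0.5) * h) / sigma) / T) * h for i in range(n_grid)
    )
    return -math.log(integral)

if __name__ == "__main__":
    print(round(beta(sigma=0.3, T=0.5), 4))
```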
Let κ(σ, T; σ′, T′) be a kernel function summing up to unity on a fixed grid {(σ_g, T_g), g = 1, . . . , d} for all (σ, T). A population Monte Carlo estimate of Z(σ, T) can be derived from the following identity, which combines (7) with the decomposition of the normalizing constant suggested in Atchade et al. (2010):

Z(σ, T) = Σ_{g=1}^d κ(σ, T; σ_g, T_g) Z(σ_g, T_g) E_{p(·|β(σ_g, T_g))}[ exp{ −(β(σ, T) − β(σ_g, T_g)) Σ_{i∼j} (1 − δ_ij) } ].    (8)

This hints at estimating Z(σ, T) by sampling from the Potts densities p(·|β(σ_g, T_g)), g = 1, . . . , d. Similarly, a population Monte Carlo estimate of Z(σ, T, X) can be derived from (3) by

Z(σ, T, X) = Σ_{g=1}^d κ(σ, T; σ_g, T_g) Z(σ_g, T_g, X) E_{p(·|σ_g, T_g, X)}[ exp{ −Σ_{i∼j} (k_ij(σ)/T − k_ij(σ_g)/T_g) (1 − δ_ij) } ].    (9)

As before, this hints at estimating Z(σ, T, X) by sampling from the Potts densities p(·|σ_g, T_g, X), g = 1, . . . , d. However, (8) and (9) still require the knowledge of Z(σ_g, T_g) and Z(σ_g, T_g, X). We overcome this problem by considering a stochastic Monte Carlo algorithm based on parallel runs of the Wang-Landau algorithm: one to estimate Z(σ_g, T_g, X), and a second one to estimate Z(σ_g, T_g). The proposals (σ′, T′) are generated from stochastic approximations of the posterior distribution based on the approximations of (8) and (9). The size of the grid, d, is a parameter to be chosen, which depends on the problem at hand. Atchade, Lartillot, and Robert (2013) suggested using a moderate value, for example, 100 (see also Section 5).
A POPULATION MONTE CARLO ALGORITHM FOR POTTS MODEL CLUSTERING (PMC2)
Our algorithm follows closely that of Atchade, Lartillot, and Robert (2013). The main difference lies in the incorporation of a parallel run of the Wang-Landau algorithm as well as a mixing step to generate the parameters. In what follows, a draw from the Potts model density p(·|σ, T, X) will be denoted simply as {z^m_iℓ} (for ℓ = 1, . . . , q and i = 1, . . . , n) or {δ^m_ij} (for i, j = 1, . . . , n), whereas a draw from the Potts model p(·|β(σ, T)) will be denoted as {z^m_β;iℓ} or {δ^m_β;ij}. The superscript m will denote the iteration or sample number during the run of the algorithm. We will refer to the algorithm that follows as Population Monte Carlo for Potts Model Clustering, and will denote it by PMC2 for short. Given the current state of the parameters at iteration m, ({z^m_iℓ}, {z^m_β;iℓ}, T_m, σ_m), and I_m, I_{β;m} ∈ {1, . . . , d}: Step 1.
Note that Equations (13) and (14) resemble Equations (8) and (9), respectively. The expectations have been approximated by the population Monte Carlo sampling estimates.
Steps 1 and 2 correspond to the two parallel Wang-Landau algorithms.
Step 3 is the sampling step. This differs from the original Wang-Landau algorithm in that instead of only sampling from the grid of parameter values, the sampling is also done on the whole space of parameter values. Substeps 1.2 and 2.2 ensure that the sampling on the grid is done so as to obtain on the long run a flat-histogram, that is, that on the long run all points in the grid are equally likely to be sampled.
Step 4 is added so that the posterior adjacency membership probabilities may be estimated marginally, that is, independently of the parameters (σ, T ).
The label vectors z β, i and z i are initialized via the corresponding Potts model density using the Swendsen-Wang algorithm (Swendsen and Wang 1987;Sokal 1996). We suggest monitoring the convergence of the algorithm by computing the relative Euclidean distance between the estimated marginal adjacency probabilities yielded by successive iterations of the algorithm. Recall from the introduction that these probabilities are the quantities of interest to estimate the clustering structure of the data.
ASYMPTOTIC PROPERTIES
At convergence, e^{c_m(g)} (respectively, e^{c_{β;m}(g)}) is proportional to the corresponding normalizing constant Z(σ_g, T_g, X) (respectively, Z(σ_g, T_g)). The rejuvenation step-size parameters γ_m, γ_{β;m} control the speed of convergence of e^{c_m(g)} and e^{c_{β;m}(g)}, respectively. They should slowly decrease to zero. We choose to update them according to the heuristic suggested in Atchade, Lartillot, and Robert (2013, sec. 2.4). The convergence a.s. of these sequences is assured by the Wang-Landau algorithm (Atchade and Liu 2010; Atchade, Lartillot, and Robert 2013), since the two sequences are built independently of each other.
Let {F_m} denote the filtration generated by the algorithm up to iteration m. The convergence in probability of the samples (σ_m, T_m) is established as in Atchade, Lartillot, and Robert (2013) by an application of their general stochastic convergence Theorem 2.1. To ensure that the assumptions of this theorem hold, we only need that:

(P1) The sequences γ_m and γ_{β;m} are strictly positive random sequences adapted to {F_m} satisfying Σ_m γ_m = Σ_m γ_{β;m} = +∞, and, with probability one, Σ_m γ_m² < +∞ and Σ_m γ_{β;m}² < +∞.

(P2) The proposal q_m(σ′, T′|σ, T) used to generate the samples (σ_{m+1}, T_{m+1}) in Step 3 of the PMC2 algorithm is bounded from above and away from zero independently of m and the pairs (σ′, T′) and (σ, T).

(P3) The kernel κ(σ′, T′; σ, T) is bounded from above and away from zero independently of the pairs (σ′, T′) and (σ, T).
(P4) The prior density π (σ, T ) is bounded from above and away from zero.
CLUSTERING EVIDENCE
Having estimates of the marginal posterior adjacency membership probabilities Q_ij, we can obtain a consensus clustering of the data as the connected components of a consensus graph. As mentioned earlier, a consensus graph depends on a threshold Q on the probabilities Q_ij. It is clear that as the threshold increases, more clusters are found. We note that in the super-paramagnetic clustering the optimal clustering is chosen by fixing the threshold to Q = 0.5. We show in our experiments that, in general, this choice does not necessarily give the best results, unless the temperature T is explicitly chosen so as to give good consensus clusterings for Q = 0.5. To avoid having to select a threshold, we adopt a strategy based on what we call the clustering evidence associated with each possible clustering. For each possible value of Q ∈ Q = {Q_ij, i, j = 1, . . . , n}, we obtain the corresponding consensus clustering P(Q), and the number of clusters c(Q) in it. A histogram of c(Q) may be used as support for favoring certain clusterings over others. It may be interpreted as follows: the chances of selecting a clustering with c(Q) components are given by the frequency of such a clustering had the threshold been chosen randomly according to the distribution of the posterior adjacency membership probabilities. The normalized frequencies (or "probabilities") derived from this histogram will be denoted by π(c(Q)). For any c ∈ {c(Q) : Q ∈ Q}, let F(c) = Σ_{c′≤c} π(c′). The clustering evidence odds of a clustering with at most c clusters against a clustering with more than c clusters is defined as

odds(c) = F(c) / (1 − F(c)).

We consider the pair (odds(c), π(c)) as the evidence supporting a clustering with c components. The best clusterings are chosen as the clusterings with the best clustering evidence: ideally, π(c) must be large in comparison with the other normalized frequencies π(c′), and odds(c) must be much larger than 1.0. Clusterings with odds against them that are too large are discarded. In our experiments (see Section 5), we note that sometimes it is sufficient to look at π(c) to decide on the clustering, since some datasets present a large mode at a specific number of clusters. There are occasions where π(c) presents several modes of roughly the same order of magnitude. In these cases, it is better to look at the odds of the different modes. Often, there are clear jumps in magnitude at specific numbers of clusters c. The preferred clusterings are associated with these values. Figure 1 illustrates these observations on the clustering evidence for two of the datasets used in our experiments of Section 5. The Gaussian dataset, which is composed of 50 Gaussian clumps, gives strong evidence for 49 clusters (the peak in π(c)). However, the Olive oil dataset gives evidence for three, four, six, or nine clusters. The evidence for more than 11 clusters is much weaker, since the odds against them are too large. Hence, one could choose nine clusters as the preferred clustering. But it may be worth investigating the solutions with three and six clusters. Further analysis of these and other datasets can be found in Sections 5 and 7.
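The computation of π(c), F(c), and odds(c) only requires sweeping the threshold over the observed Q_ij values and counting connected components at each threshold. The sketch below is an illustration under those definitions (not the authors' code), reusing a union-find component count like the one sketched earlier.

```python
from collections import Counter

def clustering_evidence(n, Q):
    """Normalized frequencies pi(c) and odds(c), with the threshold swept over
    the observed adjacency probabilities Q = {(i, j): Q_ij}."""
    def n_components(threshold):
        parent = list(range(n))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for (i, j), q in Q.items():
            if q > threshold:
                parent[find(i)] = find(j)
        return len({find(i) for i in range(n)})

    counts = Counter(n_components(q) for q in Q.values())
    total = sum(counts.values())
    pi = {c: counts[c] / total for c in sorted(counts)}
    odds, F = {}, 0.0
    for c in sorted(pi):
        F += pi[c]
        odds[c] = F / (1.0 - F) if F < 1.0 else float("inf")
    return pi, odds

if __name__ == "__main__":
    Q = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.2, (3, 4): 0.95, (0, 2): 0.85}
    pi, odds = clustering_evidence(5, Q)
    print(pi)
    print(odds)
```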
THE INFORMED CONDITIONAL-POTTS CLUSTERING (IPRIOR + PMC)
In our experiments, we observe that we can obtain similar clusterings by fixing the temperature and bandwidth parameters to their maximum a posteriori (MAP) estimates (σ M , T m ). In this case, the clustering evidence is given by the adjacency membership probabilities Q ij (σ M , T m ), and not by the posterior probabilities Q ij as above. In what follows, we will refer to this method of choosing the clustering as MAP+PMC. One of the main results derived from our experiments (see next section) is that a good clustering may be obtained by maximizing a data-driven prior of the critical temperature and kernel bandwidth instead of using the MAP estimates. The prior is elicited in the next section. It is used in Section 4.2 to build the informed conditional-Potts clustering method.
ELICITATION OF SENSIBLE PRIORS FOR THE PARAMETERS
Consider an extension of the Potts model where on each edge of the data graph a Bernoulli variable b ij , the bond, is introduced. This bond is set to 1 with probability p ij (σ, T ) = p ij (σ, T , {k ij }) = 1 − exp(−k ij (σ )/T ), if i ∼ j and x i and x j share the same label; otherwise the bond is set to zero. The resulting joint label/bond model is known as the Fortuin-Kasteleyn-Swendsen-Wang model (Sokal 1996). The model was discovered in an effort to develop an efficient MCMC algorithm to sample the labels {z i }. The resulting algorithm is the Swendsen Wang algorithm (Swendsen and Wang 1987). The bonds are auxiliary variables that help split the clusters. The marginal over the bonds is known as the random cluster model. Searching for clusterings in the data corresponds to searching for splits in the connected components of the random graph generated by the bonds given the labels. For example, a large variance in the size of the largest component indicates an imminent split of the component. There is a vast literature on this subject for the random cluster model with constant edge-weights (for a list of references, see, e.g., the excellent notes of Sokal 1996, and the book of Bollobas 2001). In this latter model, p ij (σ, T ) = p(σ, T ), for all x i , x j . The probability of having a bond b ij = 1 is then constant. The corresponding random cluster model density reduces to q C p(σ, T ) n 1 (1 − p(σ, T )) n 0 , where n 1 = number of edges with bond b ij = 1, n 0 = number of edges with bond b ij = 0 among neighboring data points, and C = C({b ij }) = number of connected components in the graph. This corresponds to a random graph with parameter p = p(σ, T ). If the original data graph is a r-nearest-neighbor graph, then the resulting random graph is a r-regular random graph, that is, each vertex has exactly r edges. The search for a cluster split in the data corresponds to the appearance of a giant (connected) component in the graph (Bollobas 2001, p. 138, chap. 6), which is a connected component consisting of more than half the number of vertices of the graph. It can be shown that if p is large enough, then this is the only nontrivial component of the graph. The appearance of a giant component is also referred to as a phase transition in the graph. The probability p causing it is the phase transition probability or the critical probability. Since, in general, our weights k ij (σ ) are not constant, we cannot directly use these results to find a good temperature for clustering. However, we can still devise a method to elicit a good prior density for the sought temperatures based on random graph theory. Consider for i, j fixed, the graph G(k ij (σ )) with the same vertices and edges of the original data graph G(X), but whose edges have now constant weight, so that p = exp{−k ij (σ )/T } for all existing edges in G(X) (we stress here that all edge weights are equal to the constant k ij (σ )). If m is the number of edges in G(X), then there are m graphs G(k ij (σ )), one for each different edge weight. The idea is to find estimates of the random graph phase transition probabilities for each of the graphs G(k ij (σ )), i, j ∈ {1, . . . , n}. These are then used to find estimates of the critical temperatures T ij associated with each particular value of the edge weight k ij (σ ). We show below in Section 4.1.1 the explicit connection between these probabilities and the critical temperatures. The temperatures so obtained are used to construct a prior density for the critical temperatures. 
Since this density depends on a given bandwidth σ, we are only able to elicit a conditional prior, which will be denoted by π(T|σ). Its formal derivation is given below.
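The following sketch illustrates the bond-sampling step described above, using the stated probability p_ij(σ, T) = 1 − exp(−k_ij(σ)/T); the data structures (an edge list and a weight dictionary) are illustrative choices and not the authors' implementation.

```python
import numpy as np

def sample_bonds(edges, labels, k, T, rng=None):
    """One Swendsen-Wang bond-sampling sweep over the data graph.

    edges  : iterable of (i, j) index pairs from the data graph
    labels : current label z_i of each data point
    k      : dict mapping (i, j) -> edge weight k_ij(sigma)
    T      : temperature
    Returns a dict of Bernoulli bond variables b_ij; the connected
    components of the bonded graph are the candidate clusters.
    """
    rng = np.random.default_rng() if rng is None else rng
    bonds = {}
    for (i, j) in edges:
        if labels[i] == labels[j]:
            p_ij = 1.0 - np.exp(-k[(i, j)] / T)   # bond probability
            bonds[(i, j)] = bool(rng.random() < p_ij)
        else:
            bonds[(i, j)] = False                  # no bond across different labels
    return bonds
```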
The Temperature Prior: The Connection to Random Graphs.
The critical temperatures are inferred by using the random cluster model associated with the Potts model with equal edge weights. We consider random r-regular graphs of order n = the number of data points in X. In the random cluster model, the degree r = r(T) of the vertices depends on the probabilities of having bonds equal to one (b_ij = 1), which are given by p_ij = 1 − exp{−k_ij(σ)/T}. The idea is to increase the degree r until a giant component appears. This would signal a merging of the clusters, and hence a cluster transition. Since the critical degree r_c depends on the temperature, the transition must occur at a particular temperature T_c = r^{-1}(r_c). This computation is done separately for every value p_ij. The prior for the critical temperatures is then obtained by estimating the density associated with the set of critical temperatures {T_c(i, j)}.
To simplify the calculation, we suppose that each possible r-regular graph G(n, r) generated from the data has the same probability. The problem can be formulated as finding the value of r, so that, with high probability, G(n, r) is connected. To find a bound on this probability, we follow the second proof of Theorem 7.3 in Bollobas (2001). This gives the probability that a giant component appears as the number of edges r increases. Let p G denote the probability of having a connected component in G(n, r) of order at most n/2. Note that there cannot be a component of less than r + 1 vertices as each vertex is connected to r other vertices.
Theorem (probability bound). Let
Then p_G ≤ s^{r+1}/(√(2π) (r + 1)^{5/2} (1 − s)). The proof can be found in the online supplementary materials. Suppose now that this bound is upper bounded by p_B. Then with probability at least 1 − p_B there is only a giant component, as isolated vertices cannot exist. We choose the smallest value of r, say r_c = r(p_B), that satisfies s^{r+1}/(√(2π) (r + 1)^{5/2} (1 − s)) ≤ p_B. Typically, we set p_B to be equal to a small value such as 0.01. Note that the number of edges necessary for the connectivity of the graph is then r_c n/2. Assuming that all edges are equally likely to occur, we have p = p_c = r_c n/(2m), where m is the number of possible edges in the graph. For a K-nearest-neighbor graph, m is bounded by nK. The corresponding critical temperature is then given by Equation (16).
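As a rough illustration of the recipe just described, the sketch below finds the smallest degree r satisfying the bound, converts it to the critical probability p_c = r_c n/(2m), and then to a critical temperature. The quantity s is taken from the theorem statement (not reproduced in this text), and the final step assumes the bond relation p_c = 1 − exp(−k_ij(σ)/T_c), so the exact form of Equation (16) may differ.

```python
import math

def critical_degree(s, p_B=0.01, r_max=10_000):
    """Smallest r with s^(r+1) / (sqrt(2*pi) * (r+1)^(5/2) * (1-s)) <= p_B."""
    for r in range(1, r_max + 1):
        bound = s ** (r + 1) / (math.sqrt(2.0 * math.pi) * (r + 1) ** 2.5 * (1.0 - s))
        if bound <= p_B:
            return r
    raise ValueError("no admissible degree found below r_max")

def critical_temperature(k_ij, n, m, s, p_B=0.01):
    """Illustrative critical temperature for a single edge weight k_ij(sigma)."""
    r_c = critical_degree(s, p_B)
    p_c = min(r_c * n / (2.0 * m), 1.0 - 1e-12)   # critical probability, clipped
    return -k_ij / math.log(1.0 - p_c)            # assumes p_c = 1 - exp(-k_ij / T_c)
```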
The Bandwidth Prior: The Connection With Kernel Density Estimation.
Exploiting the connection between Potts model clustering and kernel density estimation, we derive a prior for the bandwidth parameter based on an adaptive bandwidth kernel density estimator (Abramson 1982; Silverman 1986, sec. 5.3). Having an estimate p̂(x) of the data density, the adaptive bandwidth at observation x_i is given by Equation (17). We note that in many applications the data graph corresponds to a K-nearest-neighbor graph. A quick estimate p̂_knn(x) of the data density is readily available in this case. We will use this observation to derive a data-driven procedure to estimate an optimal bandwidth and temperature for clustering.
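A minimal sketch of this construction is given below; it uses the common Abramson-style form h_i = h_0 (p̂(x_i)/g)^{-1/2}, with g the geometric mean of the pilot density values, together with a k-nearest-neighbor pilot density. Both formulas are standard choices assumed here for illustration; the paper's exact Equation (17) is not reproduced in this text.

```python
import numpy as np
from math import gamma, pi
from scipy.spatial import cKDTree

def knn_density(X, K=10):
    """Quick k-nearest-neighbor pilot density: p(x_i) ~ K / (n * V_d * r_K^d)."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    r_K = cKDTree(X).query(X, k=K + 1)[0][:, -1]   # distance to the K-th neighbor
    V_d = pi ** (d / 2) / gamma(d / 2 + 1)         # volume of the unit ball in d dims
    return K / (n * V_d * np.maximum(r_K, 1e-12) ** d)

def adaptive_bandwidths(pilot_density, global_bandwidth):
    """Abramson-style adaptive bandwidths h_i = h0 * (p(x_i) / g)^(-1/2)."""
    p = np.asarray(pilot_density, dtype=float)
    g = np.exp(np.mean(np.log(p)))                 # geometric mean of the pilot values
    return global_bandwidth * np.sqrt(g / p)
```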
THE INFORMED CONDITIONAL-POTTS CLUSTERING
The idea is to find empirical estimates, that is, data-driven estimates, of the random graph phase transition probabilities for each of the graphs G(k_ij(σ)), i, j ∈ {1, . . . , n}. These are used to find empirical estimates of the critical temperatures T_ij associated with each particular value of the edge weight k_ij(σ). The explicit relation is given by Equation (16). The sample of temperatures so obtained (one for each edge weight) is used to construct a density estimate of the critical temperature, say π̂(T|σ). We stress that this density depends on all edge weights present in the data graph for any given bandwidth σ. The same idea can be applied to derive a data-driven prior for the kernel bandwidth. Equation (17) defines a sample of n possible values for the bandwidth parameter. We use an empirical estimate of the density, π̂(σ), associated with the bandwidths {σ_knn(x_i)} as our data-driven prior for σ. In our experiments of Section 5, we have used a kernel density estimator for both π̂(T|σ) and π̂(σ). The kernel employed was the Epanechnikov kernel with bandwidths set to the standard deviations of the corresponding samples. Figure 2 shows the data-driven priors for some of the datasets used in our experiments of Section 5.
Figure 2. The logarithm of the data-driven priors for the Iris and Yeast cycle datasets. The "*" indicates the value that maximizes the prior.
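The kernel density estimate used for these priors can be written out directly; the sketch below follows the description above (Epanechnikov kernel, bandwidth set to the sample standard deviation), with the sampled critical temperatures or bandwidths passed in as a plain array.

```python
import numpy as np

def epanechnikov_kde(samples, bandwidth=None):
    """Density estimate built from a sample (e.g. the critical temperatures T_c(i, j)).

    The default bandwidth is the sample standard deviation, as described in the text."""
    x = np.asarray(samples, dtype=float)
    h = float(np.std(x)) if bandwidth is None else float(bandwidth)

    def density(t):
        u = (np.atleast_1d(t) - x[:, None]) / h           # shape (n_samples, n_query)
        kernel = 0.75 * np.clip(1.0 - u ** 2, 0.0, None)  # Epanechnikov kernel
        return kernel.mean(axis=0) / h

    return density

# e.g. prior_T = epanechnikov_kde(critical_temperatures); prior_T(np.linspace(0.1, 5.0, 30))
```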
We say that a region in the temperature-bandwidth space is stable if there is high similarity between clusterings yielded by nearby points. Stanberry, Murua, and Cordes (2008) observed that these regions are also stable regions for the number of clusters and may correspond to phase-free regions of the random cluster model (i.e., no giant component appears in these regions; see Section 4.1.1). Very often in our experiments, the informative data-driven prior maximizer (σ p , T p ) lies in the high-density region of the posterior, and the clustering evidence is similar for both choices of the parameters: the MAP estimates and the informative prior maximizer. In what follows we will refer to this way of estimating the clustering as the informed-Prior PMC or iPrior + PMC for short. We believe that the good results achieved by the iPrior + PMC procedure indicate that, in general, our data-driven prior presents a high-density region in a stable region of the temperature-bandwidth space, as signaled by the posterior found through the PMC2 algorithm. Based on our experiments (see next section), we support the use of the iPrior + PMC procedure as an admissible alternative to PMC2 and MAP + PMC whenever the computational cost is an issue, for example, in the case of very large datasets.
EXPERIMENTAL RESULTS
Here, we present a comparison of the clustering performance of the conditional-Potts clustering achieved through PMC2 with that of the MAP + PMC and iPrior + PMC procedures presented above. For completeness in the comparisons, we also show the results associated with the super-paramagnetic clustering (which we will denote as SPMC). As is now customary in the machine learning and clustering literature, we measure the goodness of fit of the resulting clusterings with the adjusted Rand index (ARI). The ARI is a measure of similarity (agreement) between two clusterings (or partitions). It was first suggested by Rand (1971) and then corrected for randomness by Hubert and Arabie (1985). A perfect match is signaled by an ARI score of 1.0 (Yeung et al. 2001): the closer the score is to one, the more similar the clusterings are.
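For reference, the ARI is available in standard libraries; the labels below are made up purely to show the call.

```python
from sklearn.metrics import adjusted_rand_score

# Hypothetical partitions: a reference labeling versus a clustering result.
reference = [0, 0, 0, 1, 1, 1, 2, 2, 2]
clustering = [0, 0, 1, 1, 1, 1, 2, 2, 2]

print(adjusted_rand_score(reference, clustering))  # 1.0 would signal a perfect match
```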
In all the simulations, we set a 12-cell grid of values for the kernel bandwidth and a 30-cell grid of values for the temperature. The grids were uniform grids on a slightly larger interval containing the range of values indicated by the informative data-driven priors. More explicitly, if the informative prior suggested the region (σ_min, σ_max) × (T_min, T_max), we placed the 12 × 30 grid points uniformly in (σ_min/κ, κσ_max) × (T_min/κ, κT_max), for a fixed κ in the range 1 < κ < 1.5. The idea behind this heuristic is not to give all the weight to the prior, so as to safeguard against cases where it has missed some important information. For the datasets in our experiments, the value of κ was not very relevant. We set it to κ = 1.05. We also tried larger values, but obtained similar results. We stress that the data-driven priors were not used as priors for the conditional-Potts clustering model. The priors used were uniform priors. The temperature grid is finer because finding a good temperature is critical. The sampling of the parameters was started after a burn-in period of at least 300,000 iterations. We did not notice much difference between the results yielded by using only 1000 or 10,000 samples after the burn-in period. The final clusterings were constructed by consensus clustering based on clustering evidence as described in Section 3.
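The grid construction just described is straightforward; the helper below is a hypothetical rendering of it with the values used in our experiments (κ = 1.05, 12 × 30 points).

```python
import numpy as np

def parameter_grid(sigma_range, T_range, kappa=1.05, n_sigma=12, n_T=30):
    """Uniform grid on the prior-suggested region, enlarged by a factor kappa."""
    s_min, s_max = sigma_range
    t_min, t_max = T_range
    sigmas = np.linspace(s_min / kappa, kappa * s_max, n_sigma)
    temperatures = np.linspace(t_min / kappa, kappa * t_max, n_T)
    return [(s, t) for s in sigmas for t in temperatures]   # 360 grid points
```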
PERFORMANCE ON REAL AND SIMULATED DATA
Table 2. Clustering results based on the clustering evidence given by PMC2. "Q" stands for a representative adjacency membership probability threshold associated with the clustering evidence, "c(Q)" for the number of clusters, and "ARI" for the adjusted Rand index.
Table 3. Results based on clustering evidence with a fixed set of values for the parameters. "T" stands for temperature, "σ" for bandwidth, "Q" for a representative adjacency membership probability threshold associated with the clustering evidence, "c" for c(Q), and "ARI" for the adjusted Rand index. MAP + PMC and iPrior + PMC stand for the procedures with clustering evidence drawn from the MAP (σ_M, T_M) and the data-driven prior maximizer (σ_p, T_p). SPMC stands for the super-paramagnetic clustering. The best ARI for each dataset has been highlighted in bold.
In this section, we report the results of a simulation carried out to study the performance of the conditional-Potts clustering model on three different artificially generated datasets and three real datasets. For comparison purposes, in addition to our own ARI scores, we also report the best ARI scores published elsewhere for the real datasets below (see Table 1). These are the scores to be compared with the results given in Tables 2 and 3. Based on these scores, the reader may get an idea of how difficult it is to cluster some datasets into the groups selected by some experts (see, e.g., the Yeast cycle data below). The artificial datasets were (a) a 5-clump-3-arc dataset (Murua, Stanberry, and Stuetzle 2008) whose clusters present high variation in shape and distribution and are not very well separated; (b) a three-ring version of the Bull's eye data (Blatt, Domany, and Wiseman 1997), which are a real challenge for most clustering methods; and (c) a 50-Gaussian mixture dataset whose differences in cluster volume may produce difficulties when choosing the appropriate temperature-bandwidth parameters. The data are plotted in Figure 3. The three real datasets were the well-known (a) two-class Iris data (Anderson 1935), which serve as test data to check that the procedures work; (b) the nine-class Olive oil data (Forina and Armanino 1982), which are moderately difficult to cluster; and (c) the 5-phase subset of the Yeast cell cycle data (Cho et al. 1998), which is very difficult to cluster correctly. Further description of the datasets used in the simulation is shown in the online supplementary materials. The results for all six datasets are shown in Table 2. Figure 4 shows the clustering evidence yielded by PMC2 for all datasets, except for the 50-Gaussians and Olive oil datasets. The evidence yielded by PMC2 is clear for the Bull's eye and the Iris datasets. For the Iris data, π(c) presents a unique large peak at c = 2. For the Bull's eye data, there are similar peaks at c = 2 and c = 3. The odds evidence makes us lean toward c = 3, since a large jump in odds is not produced until a much larger number of clusters. For the Yeast cycle data, there is a peak at c = 1, but the odds against a larger number of clusters do not increase until c = 5, where a jump from 3.9 (c = 5) to 11.1 (c = 6) is produced. Also, the probabilities π(c) for c = 4, 5, or 6 are not that small relative to π(1). Therefore, if one believes that there are at least two clusters, then c = 5 is a reasonable estimate following the clustering evidence. For the 5-clump-3-arc dataset, the clustering evidence is not totally clear.
There are several similar-size peaks at c = 5, c = 11, and c = 13. We choose c = 11 because of the large jump in odds against a larger number of clusters between c = 11 and c = 13. The results associated with the MAP + PMC, iPrior + PMC, and SPMC are shown in Table 3.
Linard et al. (2012) created barcodes that convey information on 10 different evolutionary parameters. Each barcode is associated with one human gene and describes this gene when compared with the closest related homolog proteins in 17 early vertebrates. The parameters are the human protein length, the mean length difference between the human reference protein and the corresponding 17 vertebrate species proteins, the mean number of regions shared between the human reference protein and the vertebrate species, the mean sequence identity between the human reference protein and the vertebrate species, the mean number of protein domains in the vertebrate species, the mean domain conservation, the mean hydrophilicity, the mean number of inparalogs with respect to the vertebrates, the mean number of co-orthologs, and the mean synteny between the human genome and the vertebrate genomes. The data consist of 19,778 evolutionary barcodes representing the evolutionary history that led to the complete human proteome since the emergence of vertebrates. Linard et al. (2012) applied our conditional-Potts clustering procedure using its informative data-driven prior version. Barcodes with missing values had been removed for the clustering, leaving a total of 19,465 barcodes. Prior to the clustering, each barcode coordinate had been standardized by its mean and variance. The Euclidean distances between any two barcodes were computed so as to obtain the interactions between vertices in the conditional-Potts clustering model. They found 303 clusters. A Gene Ontology (GO; Ashburner et al. 2000) enrichment analysis was performed to find out if they were biologically relevant: 75% of the clusters had an enrichment p-value < 0.025 (the maximum recommended p-value is 0.05; the lower the p-value, the larger the enrichment). In particular, one of the clusters grouped numerous olfactory receptors that are known to have experienced a vast expansion during chordate evolution. Indeed, the number of olfactory receptors ranges from a dozen in fishes to over a thousand in rodents (Zhang and Firestein 2009). For further details, the reader is referred to the work of Linard et al. (2012). In the present work, we applied our complete methodology to these data and shed light on why Linard et al. (2012) found about 300 clusters. To estimate the posterior membership adjacency probabilities and the posterior joint density of the bandwidth-temperature parameters, we ran our PMC2 algorithm with a burn-in period of 200,000 iterations. Our inference was based on the 5000 samples after the burn-in period. Figure 5 shows the clustering evidence based on PMC2 as well as that associated with the specific critical parameters given by the MAP and iPrior maximizer. The results are discussed in the next section.
DISCUSSION AND CONCLUSIONS
Concerning the evolutionary history of the human proteome data, one can infer from Figure 5 that the clustering evidence for more than about 260 clusters is very weak. Observe that the distribution of π(c) almost vanishes after about c = 260. A closer look at the clustering given by 265 clusters shows that most clusters are of moderate size (the median is 37 barcodes per cluster). It also shows the existence of a large cluster containing about 59% of the barcodes. Similar results are found in the other two clusterings given by the clustering evidence associated with (σ_M, T_M) and (σ_p, T_p). The three clusterings are very similar. Their mutual adjusted Rand indexes are all above 0.60. We note that the 75% highly enriched clusters found in Linard et al. (2012) correspond to about 220 clusters, which is in close agreement with the clustering evidence found here.
Concerning the results in Section 5, one can see that all four clustering procedures applied to the six datasets performed similarly. However, we note that the results given for SPMC are associated with the best performing and admissible large peak T_s in the temperature trajectory associated with the magnetization (variance of the size of the largest cluster). Sometimes, it was difficult to decide which peak to choose among two or three competitive peaks: the wrong peak yielded much poorer results. Therefore, our choice of the peaks was sometimes biased toward peaks yielding better results. In this sense, the performance of SPMC has been overestimated in the comparisons reported in this article. We will denote by (σ_s, T_s) the parameters associated with this procedure. Note that σ_s is simply the square root of the mean of the distances between any two points in the dataset. The procedures based on single values for the parameters, MAP + PMC and iPrior + PMC, perform very well. Note that the values of the temperatures T_M and T_p may be very different. The informative data-driven prior tends to give more weight to smaller temperatures than the posterior. A closer look at the posterior reveals that the iPrior maximum lies in a relatively high-density region of the posterior but not necessarily near the MAP estimate. These observations indicate that there is a vast region in the bandwidth-temperature space for which good clustering results may be found. This has already been observed by Stanberry, Murua, and Cordes (2008) on fMRI data, and has been suggested by Blatt, Domany, and Wiseman (1996) as a result of what is known about the Potts model in ferromagnetism. Note from Table 3 that the optimal temperature for the SPMC is sometimes very low. The reason is that to obtain a good clustering associated with a high threshold of Q = 0.5 (associated with SPMC), one needs a low temperature. At low temperatures, the probability of creating a bond between neighboring points in the data graph is very high (and equal to 1 − exp{−k_ij(σ)/T}). Hence, neighboring points will more often share the same connected component, that is, Q_ij(σ, T) will be large more often.
A comparison between the order of operations required to run PMC2 and the order of operations needed by SPMC reveals that, in general, from a purely computational point of view, PMC2 is more efficient for large datasets. Let M be the number of burn-in iterations used in a run of PMC2. Let m be the number of iterations after the burn-in period. Also, let d be the size of the internal grid representing the grid in the bandwidth-temperature space where the posterior density is to be evaluated. The Swendsen-Wang algorithm requires O(n log n) operations to get the connected components of the graph. The updating of the estimation of the normalizing constant requires O(dn) operations. Therefore, the order of operations during the burn-in period, which corresponds to the Wang-Landau algorithm, is about O(M{dn + n log n}). After the burn-in, we need to sample (σ, T) and interpolate the normalizing constants for parameter values out of the basic grid. This part comprises roughly O(m{n log n + dn + d^2}) operations. It should be noted that in general M is much larger than m. SPMC requires O(d_T (M + m) n log n) operations, where d_T is the size of the grid used for the temperature. Hence, the difference in the order of operations between PMC2 and SPMC is about O((M + m){dn + d^2 m/(M + m) − (d_T − 1) n log n}). Therefore, this difference will be negative for small grids and for large n. For example, for the barcode data, we have n = 19,465, m = 5000, M = 200,000, d = 12 × 30. If we set d_T = 50, the difference is about −5 × 10^11. The left panel of Figure 6 shows, in log-log scale, the real time per grid point spent by PMC2 on the six datasets used in Section 5 against the theoretical order of computations per grid point divided by 3.4 × 10^9. This latter quantity is the number of operations per second claimed by the computer used to run the algorithm. The computer is a desktop PC running Linux GNOME 2.28.2. A simple linear regression of observed times as a function of the theoretical times yields a coefficient of determination larger than 0.99, thus indicating a good theoretical prediction. The regression fitted values are also displayed in Figure 6. For comparison purposes, the right panel of Figure 6 shows the times spent per grid point (i.e., time/(12 × 30) for PMC2) for PMC2 and SPMC when SPMC is run for the same number of iterations per grid point as PMC2 (i.e., M + m = 301,000). It is clearly seen that PMC2 is much more efficient in terms of time spent per grid point than SPMC. However, as a note of caution, we remark that, in general, SPMC may be run for fewer iterations than PMC2 on each temperature. Since the time spent per temperature by SPMC is proportional to (M + m), a simple computation shows that SPMC timing per temperature is comparable to PMC2 run with M + m = 301,000 when only about 21,000 iterations per temperature are run, which for most problems may be appropriate.
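A quick back-of-the-envelope check of the barcode-data figure quoted above (natural logarithm assumed):

```python
import math

# Order-of-operations difference between PMC2 and SPMC for the barcode data.
n, m, M = 19_465, 5_000, 200_000
d, d_T = 12 * 30, 50

diff = (M + m) * (d * n + d ** 2 * m / (M + m) - (d_T - 1) * n * math.log(n))
print(f"{diff:.2e}")   # about -5e11, i.e. PMC2 requires fewer operations here
```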
In summary, our model and experiments show that the search for an optimal clustering reduces to a search for a stable high-density region of the parameter posterior and not to a point search in the parameter space. Procedures that serve to uncover the totality or parts of this region will perform as well as procedures that search for an optimal point in the parameter space. This may explain why just estimating the posterior on a grid of the parameter space or just optimizing our data-driven informative prior does well. As long as the grid covers parts of the stable region (or the informative data-driven prior presents a mode inside the stable region), the clustering obtained will be a good clustering. The prior based on random graph theory gives us a strong hint on the location of the stable region of the Potts model. Since we derived this information from the study of many random cluster models with constant interaction, it is reasonable to expect further improvements in clustering with the advancement of the knowledge of the phase transition region of a random cluster model with variable interactions between the data points.
SUPPLEMENTARY MATERIALS
The supplementary material consists of the proof of the convergence theorem for PMC2 (Section 1), the proof of the probability bound theorem used to construct an informative data-driven prior for the temperature parameter of the conditional-Potts model (Section 2), and a more detailed description of the datasets used in our simulations (Section 3). | 12,933.2 | 2014-06-23T00:00:00.000 | [
"Computer Science",
"Mathematics",
"Physics"
] |
The Effect of Ca on In Vitro Behavior of Biodegradable Zn-Fe Alloy in Simulated Physiological Environments
The growing interest in Zn based alloys as structural materials for biodegradable implants is mainly attributed to the excellent biocompatibility of Zn and its important role in many physiological reactions. In addition, Zn based implants do not tend to produce hydrogen gas in in vivo conditions and hence do not promote the danger of gas embolism. However, Zn based implants can provoke encapsulation processes that, practically, may isolate the implant from its surrounding media, which limits its capability of performing as an acceptable biodegradable material. To overcome this problem, previous research carried out by the authors has paved the way for the development of Zn-Fe based alloys that have a relatively increased corrosion rate compared to pure Zn. The present study aims to evaluate the effect of 0.3–1.6% Ca on the in vitro behavior of Zn-Fe alloys and thus to further address the encapsulation problem. The in vitro assessment included immersion tests and electrochemical analysis in terms of open circuit potential, potentiodynamic polarization, and impedance spectroscopy in phosphate buffered saline (PBS) solution at 37 ◦C. The mechanical properties of the examined alloys were evaluated by tension and hardness tests while cytotoxicity properties were examined using indirect cell metabolic activity analysis. The obtained results indicated that Ca additions increased the corrosion rate of Zn-Fe alloys and in parallel increased their strength and hardness. This was mainly attributed to the formation of a Ca-rich phase in the form CaZn13. Cytotoxicity assessment showed that the cells’ metabolic activity on the tested alloys was adequate at over 90%, which was comparable to the cells’ metabolic activity on an inert reference alloy Ti-6Al-4V.
Introduction
Traditional structural materials for metallic implants in orthopedic applications such as bone plates and screws as well as stents for cardiovascular use are produced from stainless steels, Ti based alloys, Cobalt-chromium alloys, and others [1]. These implants have excellent corrosion resistance in in vivo conditions along with superior mechanical properties [2]. However, in the long run, these permanent implants may cause a variety of problems including premature failure and stress shielding [3]. Hence, an interest in developing metallic biodegradable implants made of Mg, Fe and Zn based alloys [4][5][6][7] is steadily growing. Studies related to Mg based alloys [8][9][10][11][12][13][14][15][16][17][18][19] revealed several major problems, including accelerated corrosion rates, premature degradation of mechanical integrity, and the release of hydrogen gas. The accumulation of hydrogen in in vivo conditions can produce gas bubbles that, in extreme cases, may penetrate the bloodstream [20,21] and promote the danger of gas embolism. As for Fe based implants, their main disadvantages are limited mechanical properties and relatively reduced corrosion rates [22][23][24][25]. In addition, they produce large amounts of harmful iron oxide that repels neighboring tissue, stimulates inflammation and, in certain conditions, can be even toxic [26,27].
In light of the inherent limitations of Mg and Fe based alloys as biodegradable implants, Zn based alloys seem to be an interesting alternative. This can be attributed to the excellent biocompatibility of Zn and its important role in many enzymatic reactions and bone metabolism. Zn is also considered to be an anti-bacterial [28] and anti-viral [29] element which is crucial for preventing inflammation in the vicinity of the implant. In addition, the degradation of Zn does not tend to produce hydrogen gas, as in the case of Mg, and hence reduces the danger of gas embolism. In spite of those relative advantages, pure Zn has a reduced corrosion rate (higher than Fe but lower than Mg) and insufficient mechanical properties [30][31][32]. Furthermore, pure Zn tends to provoke encapsulation processes in in vivo conditions [33], which can isolate the implant from the physiological environment and hence limit its capability to act as a suitable biodegradable material. This encapsulation problem was partly addressed in previous studies of the authors [34][35][36] by developing innovative Zn-Fe based alloys that have relatively increased corrosion rates compared to pure Zn. The present study aims to evaluate the effect of 0.3-1.6% Ca on the in vitro behavior of Zn-Fe alloys in order to further address the encapsulation problem while maintaining adequate mechanical properties. Here, we hypothesize that the encapsulation response is regulated by the corrosion rate of the biodegradable alloy.
Alloys Preparation
Zn based alloys in the form of Zn-2%Fe and Zn-2%Fe with various amounts of Ca (0.3%, 0.6%, 1%, and 1.6%) were prepared by gravity casting. The selected concentration of Ca relates to the fact that this alloying element has a significant embrittlement effect on Zn based alloys and hence should be kept as low as possible. Alloy preparation was carried out in a graphite crucible using pure Zn ingots (99.99%), pure iron (99%) with powder size up to 44 microns (−325 mesh) and pure calcium in the form of granules. The alloying process was performed at 750 • C for 3 h along with active stirring every 30 min. The molten alloy was cast as bars in a rectangular steel die with the following dimensions: 6 cm × 6 cm × 15 cm. The as-cast bars were machined to obtain rods with 13 mm diameter. Later, the rods were extruded using an extrusion ratio of about 1:5. Prior to the extrusion process, the rods were preheated to 350 • C. The final dimension of the obtained rods was 6 mm. The chemical composition of the tested alloys was analyzed using an Inductively Coupled Plasma Optical Emission Spectrometer (ICP-SPECTRO, ARCOS FHS-12, Kelve, Germany) facility.
Mechanical Properties Tests
The mechanical properties of the tested alloys were evaluated in terms of tensile strength and hardness. The tensile tests were performed at room temperature using a CORMET slow strain rate machine (C76, Cormet Testing Systems, Vantaa, Finland) at a rate of 0.5 mm/min. Hardness tests were carried out by Vickers measurements using a hardness tester (Zwick/Roell Indentec (Quantarad Technologies, Selangor, Malaysia)) with an applied load of 3 kg. Several indentations were applied to each test sample and the diagonal lengths were measured using a calibrated micrometer attached to the eyepiece of an optical microscope. The standard deviations related to tensile strength and hardness measurements were based on at least three tests for each alloy.
Immersion Test
The corrosion resistance of the tested alloys was examined by immersion tests in a simulated physiological environment in the form of a phosphate-buffered saline (PBS) solution at 37 • C. The duration of the immersion test was 14 days in line with the ASTM ID: G31-72 standard, and the pH level of the PBS solution was close to 7.4. Standard deviations were based on at least three examinations for each alloy. The corrosion products obtained after the immersion test were removed using a 10% NH 4 Cl solution at 70 • C in accordance with the ASTM ID: G1-03 standard.
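Mass-loss immersion results of this kind are usually converted to a corrosion rate with the standard ASTM G31-type relation; the helper below only illustrates that conversion (the coefficient 8.76 × 10^4 applies for mass loss in g, area in cm², time in h, and density in g/cm³, giving mm/year) and uses an approximate density for Zn, not values measured in this study.

```python
def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3=7.14):
    """Mass-loss corrosion rate: CR = K * W / (A * t * rho), K = 8.76e4 for mm/year."""
    K = 8.76e4
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# e.g. a 10 mg loss over 1 cm^2 during a 14-day immersion:
print(corrosion_rate_mm_per_year(0.010, 1.0, 14 * 24))   # ~0.37 mm/year
```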
Electrochemical Behavior
The electrochemical behavior of the tested samples was evaluated in terms of open circuit potential, potentiodynamic polarization analysis, and impedance spectroscopy (EIS). This was carried out using a Bio-Logic SP-200 potentiostat equipped with EC-Lab software V11.18 [38]. The three-electrode cell method used for the electrochemical analysis included a saturated calomel reference electrode (SCE), a platinum counter electrode, and the tested sample as a working electrode [39,40]. The exposed area of the working electrode was 1 cm 2 and the test solution was PBS at ambient temperature. The duration of the open circuit potential tests was about 70 h in order to obtain a stable potential. The scanning rate of the potentiodynamic polarization analysis was 1 mV/s and the corrosion rates were calculated by Tafel extrapolation. The EIS measurements were performed between 10 kHz and 100 mHz at 10 mV amplitude over the open circuit potential. Prior to the electrochemical testing, the samples were cleaned in an ultrasonic bath for 5 min, washed with alcohol, and dried in hot air.
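Tafel-extrapolated corrosion current densities of the kind mentioned above are commonly converted to a penetration rate through Faraday's law (the ASTM G102 form); the sketch below uses approximate values for pure Zn (equivalent weight about 32.7, density about 7.14 g/cm³) and is not the exact reduction used in this study.

```python
def tafel_corrosion_rate(i_corr_uA_cm2, equivalent_weight=32.7, density_g_cm3=7.14):
    """Corrosion rate (mm/year) from a corrosion current density (uA/cm^2):
    CR = 3.27e-3 * i_corr * EW / rho (Faraday's-law conversion, ASTM G102 form)."""
    return 3.27e-3 * i_corr_uA_cm2 * equivalent_weight / density_g_cm3

print(tafel_corrosion_rate(10.0))   # ~0.15 mm/year for i_corr = 10 uA/cm^2
```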
Cytotoxicity Evaluation
Indirect extract cell metabolic activity assessment was performed in order to evaluate the cytotoxicity characteristics of the tested alloys. Sample preparation and the experimental protocol were carried out in line with the ISO 10993-5/12 standard [41,42], using Mus musculus (mouse) 4T1 cells. The selection of 4T1 cells was attributed to the fact that those cells are relatively more active than primary cells and hence more sensitive to toxic insults [4]. Cylindrical samples (D = 10 mm, h = 2 mm) made from the tested Zn based alloys and a Ti-6Al-4V alloy as the reference material (control group) were prepared using four samples from each alloy. Prior to the experiment, the samples were polished up to 4000 grit, ultrasonically cleaned for 10 min in ethanol and 5 min in acetone, and then air dried, followed by sterilization under ultraviolet radiation for 1 h on each disk side. All the Zn based alloys and Ti-6Al-4V samples were pre-incubated for 24 h in Dulbecco Modified Eagle's Medium (DMEM) supplemented with 4.5 g L−1 D-Glucose, 10% Fetal Bovine Serum (FBS), 4 mM L-Glutamine, 1 mM Sodium Pyruvate, and 1% Penicillin Streptomycin Neomycin (PSN) antibiotic mixture at 37 °C in a humidified atmosphere. The surface area to volume extraction ratio was 1.25 cm2 mL−1. In parallel, the cells were seeded in 96-well tissue culture plates with a density of 5000 cells per well to allow substrate attachment. After 24 h, the liquids from all samples were collected and filtered by a PVDF membrane (0.45 µm), and 100 µL of metal extract was added to the cells. The negative control group contained cells cultured with only DMEM, while the positive control group contained cell cultures with 90% DMEM and 10% DMSO for toxic evaluation. Cell metabolic activity was assessed using a Cell Proliferation Kit (XTT, Biological Industry, Beit Haemek, Israel) and a microplate reader (SYNERGY-Mx, BioTek, Winooski, Vermont, USA) after 24 h and 48 h incubation at 37 °C in a 5% CO2 humidified atmosphere. The testing process included adding 50 µL reagent and 1 µL activator to 100 µL DMEM in each sample well for 2 h incubation. The resulting color formation was measured spectrophotometrically at 490 nm using the microplate reader. As the cell metabolic activity is an indirect measurement of cell viability, the cell viability was calculated according to the following equation: viability (%) = (OD Sample / OD Control) × 100, where OD Sample is the optical density determined by the cells cultured with the tested extracts and OD Control is the optical density measurement of the cells in the control culture media [4]. Subsequent to this experiment, a pH test was performed on the cell medium (ORION PrepHec T ROSS comb. Micro pH 8220BNWP, Thermo Scientific, Waltham, Massachusetts, USA) using at least three measurements. This was followed by a visual examination of the cells, documented by a CoolLED pE-2 collimator fitted to an inverted phase-contrast microscope (Eclipse Ti, Nikon, Tokyo, Japan) that was equipped with a digital camera (DS-Qi1Mc, Nikon, Tokyo, Japan).
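The viability computation itself is a one-liner; the optical-density values below are made up for illustration only.

```python
import numpy as np

def cell_viability_percent(od_sample, od_control):
    """Indirect cytotoxicity readout: viability (%) = OD_sample / OD_control * 100."""
    return 100.0 * np.asarray(od_sample, dtype=float) / float(od_control)

# Hypothetical 490 nm readings for one alloy extract versus the negative control:
print(cell_viability_percent([0.82, 0.79, 0.85], 0.80))   # ~[102.5, 98.8, 106.3] %
```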
Results
The composition of the test alloys (in wt.%) obtained by optical emission spectrometer is shown in Table 1. Phase identification obtained by X-ray diffraction analysis revealed the presence of three major phases: pure Zn, a Fe-rich phase, and a Ca-rich phase, as shown in Figure 1. The Fe-rich phase was identified as Zn 11 Fe (according to ICDD 045-1184), and the Ca-rich phase was identified as CaZn 13 (according to ICDD 028-0258). The intensity of the Ca-rich phase was relatively elevated as the Ca content was increased from 0.6% to 1.6%.
The typical microstructure of the examined alloys obtained by SEM is shown in Figure 2. The microstructure of the Zn-2%Fe alloy revealed a pure Zn matrix with a secondary Fe-rich phase (Zn 11 Fe) that was scattered evenly across the entire bulk material, as shown in Figure 2a. In the cases of the ternary alloys with Ca additions, the microstructure was composed of a pure Zn matrix, a Fe-rich phase, and a Ca-rich phase (CaZn 13 ). The dimensions and structure of the Ca-rich phase varied as the Ca content was increased. At a lower Ca content (0.6%), the Ca-rich phase was relatively fine, while, at a higher content (1.6%), this phase was significantly enlarged with a massive bulky appearance.
Figure 2. Typical microstructure of the tested alloys: (a) Zn-2%Fe.
The embrittlement effect caused by the additions of Ca to the base alloy Zn-2%Fe is clearly illustrated by the results of the hardness and tensile tests shown in Figures 3 and 4, respectively. The significantly increased hardness as the Ca content was increased has limited the possibility of practically extruding the tested alloys (at an extrusion ratio of 1:5) when the Ca content was above 0.6%. The results of the tensile tests related to Zn-2%Fe and Zn-2%Fe-0.6%Ca alloys are summarized in Table 2. This reveals that the addition of 0.6%Ca significantly reduces the elongation of the base alloy (from 13.8% to 7.7%), while having a relatively minor deteriorating effect on the tensile strength (UTS) and yield point (YP).
The corrosion rate of the tested alloys obtained by immersion tests in PBS solution at a temperature of 37 °C after 14 days is shown in Figure 5. This reveals that the corrosion rate of the base alloy tends to increase due to the additions of Ca. However, the corrosion rates of the alloys containing 0.6-1.6% Ca were relatively similar. Electrochemical analysis in terms of open circuit potential EOC is shown in Figure 6. The EOC of all the tested alloys was within a narrow range between −1.03 V and −1.1 V. In addition, it was evident that, after reaching steady state conditions (beyond 50 h of exposure), the potential of the base alloy Zn-2%Fe was relatively elevated. The spike of the potential of Zn-2%Fe just before 10 h can be related to some types of contamination. Altogether, the obtained open circuit potential result comes in line with the outcome of the immersion tests that indicate that the additions of Ca increase the corrosion degradation of the base alloy.
Figure 6. Open circuit potential of tested alloys in PBS solution at 37 °C.
Potentiodynamic polarization curves of the tested alloys are shown in Figure 7. This reveals that the polarization curves of the alloys containing Ca were shifted to relatively higher current densities compared to the base alloy Zn-2%Fe. This can be an indication of relatively higher corrosion degradation characteristics of the alloys containing Ca. This assumption was supported by Tafel extrapolation analysis in terms of corrosion potentials (ECORR), corrosion current densities (ICORR), and corrosion rates, as shown in Table 3. According to the Tafel extrapolation results, the current densities and consequent corrosion rates of the alloys containing Ca were relatively increased compared to base alloy. This again indicates that the addition of Ca clearly reduces the corrosion resistance of the base alloy. The corrosion degradation kinetics of the tested alloys were further analyzed by impedance spectroscopy (EIS), as shown in Figure 8. The Nyquist plots reveal that the radii of curvature of the alloys containing Ca were relatively reduced compared to that of the base alloy Zn-2%Fe. This indicates that the corrosion resistance of the alloys containing Ca was relatively reduced.
This outcome was also supported by the Bode curves that clearly illustrate the differences between the base alloy and the alloys containing Ca. In order to provide detailed information relating to the corrosion process at the electrolyte/electrode interface, electrical equivalent circuit (EEC) fitting was generated based on the Nyquist plots. The EECs fitted to model the EIS spectra are shown in Figure 9, while the relevant outcomes are reported in Table 4. The fitted EEC had the lowest chi-square values and minimum overall errors. The R S is the solution resistance, R dl is the charge transfer resistance attributed to the electrochemical reaction, and Q dl is a component related to the capacitance of the double layer. Q dl is a constant phase element that is governed by the exponent a, where a = 1 indicates an ideal capacitor C. As shown, the solution resistance of all tested samples was equal, and the double layer capacitance has nearly the same order of magnitude.
The cytotoxicity of the tested alloys was evaluated by indirect testing in terms of cell viability using a Ti-6%Al-4%V reference, as shown in Figure 10. As indicated by ISO 10993-5 [41], cell viability reduction of higher than 30% is considered to indicate a cytotoxic effect. The obtained results clearly demonstrated that the viability values of all the tested alloys were between 90-116%. Hence, it can be assumed that all the tested alloys are non-cytotoxic with regard to 4T1 cells. This assessment was also supported by microscopy analysis of the cells, as shown in Figure 11.
According to the obtained images, the general appearance of the cells on all the tested alloys was normal and healthy and their density was quite adequate and comparable with the cells' viability on the Ti-6%Al-4%V alloy. In addition, pH measurements of extracted media post incubation of 4T1 cells on all the Zn based alloys were very similar to the measurement obtained by the reference Ti-6%Al-4%V alloy, as shown in Figure 12. It should be pointed out that independent cytotoxicity tests were carried out twice with very similar outcomes.
Discussion
The present study addresses the inherent disadvantage of Zn in terms of its biodegradation characteristics in physiological environments. This disadvantage mainly relates to the relatively elevated potential of Zn (−0.76 V) compared, for example, to Mg (−2.37 V) [43]. Consequently, according to Guillory et al. [33], pure Zn and Zn based alloys tend to provoke inflammation and fibrous encapsulation. The encapsulation event can practically isolate the implant from the surrounding physiological environment, and subsequently limits its capability to perform as a biodegradable material [44,45]. In order to address this problem, previous research activities carried out by the authors [34,36,46] paved the way for the development of Zn-Fe based alloys that have a relatively increased corrosion rate compared to pure Zn. The additions of various amounts of Ca with a relatively lower potential (−2.87 V) to Zn-Fe based alloys aim to further increase the degradation rate of those alloys in order to overcome the problem of encapsulation.
The results obtained by this study in terms of immersion test and electrochemical analysis (open circuit potential, potentiodynamic polarization, and impedance spectroscopy) clearly indicate that additions of 0.3-1.6% Ca increase the corrosion rate of the base Zn-2%Fe alloy. This was mainly attributed to the formation of a Ca-rich phase in the form of CaZn 13 that, according to Li et al. [47], increases the corrosion of pure Zn, probably due to a micro-galvanic effect. The selected amount of Ca (0.3-1.6%) was related to the processing capabilities of the tested alloys in terms of the extrusion process as well as due to the fact that this element is considered an essential bone constituent and one of the vital elements in the human body [48]. It was evident that alloys containing more than 0.6%Ca could not be practically extruded at a suitable extrusion ratio of 1:5. This can be related to the hardening effect of the CaZn 13 phase, as clearly indicated by the hardness and tensile tests. This assumption was also supported by Shi et al. [49], who showed that the hardness of the CaZn 13 phase was three times higher than for pure Zn. In addition, the FCC structure of the CaZn 13 phase, with a primary growth direction of <111> and a secondary favorable direction <010>, had a morphology of a fine three-petaled flower [49] at a lower Ca content (up to 0.6%), while with higher amounts of Ca this phase was significantly enlarged with a massive bulky appearance. Hence, the inherent embrittlement effect of the CaZn 13 phase in alloys containing more than 0.6% Ca was due to the increased amount of that phase and its morphology transformation.
Regarding the cytotoxicity characteristics of the tested alloys in terms of indirect cell metabolic activity analysis, it was evident that the tested alloys showed over 90% cell viability, which was comparable to the cell viability obtained by the reference Ti-6Al-4V alloy. Hence, according to this result, it can be concluded that the additions of Ca to the base Zn-2%Fe did not impair the adequate biocompatibility characteristics of this alloy. It should be pointed out that the selection of 4T1 cells for the cytotoxicity analysis was related to the fact that these cells are relatively much more active than primary cells and consequently more sensitive to toxic insults [4]. In addition, as the outcome of this study relates only to in vitro analysis, additional evaluation in in vivo conditions is required before the prospects of Zn-Fe-Ca based alloys as structural materials for biodegradable implants can be practically realized.
Conclusions
The obtained results showed that additions of 0.3-1.6% Ca to the Zn-2%Fe alloy increased the corrosion rate of this alloy and hence reduced the possible risk of encapsulation. This is believed to be mainly due to the formation of a CaZn13 phase that creates a detrimental micro-galvanic effect. The processing capabilities of the Ca-containing alloys in terms of an extrusion process indicate that alloys containing more than 0.6%Ca could not be extruded at a suitable extrusion ratio of 1:5. This was mainly related to the hardening effect of the CaZn13 phase. Cytotoxicity analysis showed that indirect cell viability on the Ca-containing Zn-2%Fe alloys was adequate and comparable to the cell viability on a reference Ti-6Al-4V alloy. This indicates that the alloys with Ca can be considered to have acceptable biocompatibility characteristics.
Conflicts of Interest:
The authors declare no conflict of interest. | 8,051.4 | 2020-12-03T00:00:00.000 | [
"Materials Science"
] |
Multiparameter Sensor Array for Gas Composition Monitoring †
In the energy transition from fossil to renewable resources, gas is foreseen to play an important role. However, the composition of the gas is expected to change due to a wider variation of sources. In order to mitigate potential challenges for distributors and end-users, a new low-cost gas composition sensor was developed that will be able to monitor the composition and energy content of these gas sources, ranging from biogas to liquid natural gas (LNG). Together with industrial and academic partners, a gas sensor was realized that can be inserted into an existing gas grid. A first demonstrator was realized that was small enough to be used in low and medium pressure gas pipes (100 mbarg—8 barg). Adding the pressure and temperature data to the chip readings makes it possible to determine the concentrations of methane, ethane, propane, butane, nitrogen and carbon dioxide, including small fluctuations in water vapor pressure, and subsequently to calculate the Calorific Value, Wobbe Index and Methane Number.
Introduction
The Netherlands and other regions are facing major changes during the coming decades in the production and use of natural gas for household heating and industrial processes. Both economic and political changes induce an accelerated reduction of the use of the nationally produced natural gas, and require a shift towards LNG from different sources all over the world and sustainable solutions such as biogas. Both of these gases have a composition that deviates from the traditional sources. This will require more intense monitoring of the composition along the gas grid. The currently available gas quality measuring systems (e.g., GC, Wobbe Index analyzer, etc.) cannot fulfill the need for a cost-effective inline measuring method [1]. The number of biogas feeds into an existing gas grid will increase significantly during the coming years. This calls not only for clear monitoring of the distribution of this gas along the grid, but also for more intense monitoring of the gas feed quality. Biogas may suffer from larger fluctuations in composition and accompanying contaminations. These should be recognized before entering a gas grid. Currently, gas chromatographs are used for these 'gate keeper' activities, but lower-cost solutions may be required in order to lower the hurdles for starting biogas producers. For that reason, TNO started the development of a new type of gas sensor, based on gas-sensitive coatings on an array of electronic chips.
This paper presents the development of a low-cost calorific value sensor for natural gas and biogas, based on the measurement of the composition of the individual components methane, ethane, propane, butane, carbon dioxide and nitrogen. This gas sensor is based on gas-sensitive coatings on an electronic platform (Figure 1). New coating formulations were developed that selectively absorb the target gas and consequently give rise to a change in material behavior (i.e., dielectric constant). This change in material properties is monitored using capacitive comb electrodes. Combining the response of multiple sensor chips makes it possible to simultaneously obtain the concentrations of the individual components of the target gas. Subsequently, the calorific value and Wobbe Index of the gas mixture can be calculated.
Approach
The use of capacitive interdigitated electrodes (IDE) has already been discussed in several papers, ranging from gas sensors to liquid sensors [2][3][4][5]. In general, these electrodes are coated with responsive layers that absorb the target molecules. This concept lends itself to miniaturization, since the electrodes can be made using CMOS-compatible technologies, and the read-out electronics only require a small PCB. The approach disclosed in the current paper is thus a next step in the miniaturization of a gas sensor array. A capacitive platform was therefore chosen, made from an array of interdigitated electrodes (Figure 1), each of which was coated with a polymer-based coating, specifically tuned to one of the target gases [2,3]. The responsive coatings that were applied on the capacitive comb electrodes were based on fluoro, silicone and imide polymers, some having porous additives for the capture of the gas molecules. These porous additives were based on zeolites, cage molecules, and Metal Organic Frameworks (Figure 2). Zeolites have been shown to be a very interesting and versatile group of porous materials that is often used for sensor applications [6]. The cavity size and porosity can be tuned to the chemistry and molecular size of the individual gases. When gas molecules are captured inside a cavity, the dielectric constant changes, giving rise to changing capacitances, measured by the electronics. The capacitive chips were bonded to a sensing PCB (Figure 3) and installed in the gas exposure vessel (Figure 4). The signal processing PCB was kept out of the gas mixtures for safety reasons. First, the gas sensing device was exposed to well-defined gas mixtures of methane, ethane, propane, nitrogen, and carbon dioxide in a laboratory environment. The concentrations in these mixtures were chosen to approach the concentrations in a typical gas (i.e., ~80 vol% methane, ~3 vol% ethane, ~1 vol% propane, ~3 vol% CO2). Two examples are given in Figure 4: two coated chips exposed to small variations in gas concentration (at 1 bara and 25 °C). Combining the response of multiple sensor chips makes it possible to simultaneously obtain the concentrations of the individual components of the target gas mixtures. Subsequently, other gas parameters can be calculated from the composition, such as the calorific value, Wobbe index, and density.
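As an illustration of the last step, the following sketch shows one way the readings of several differently coated chips could be combined into component concentrations, assuming a linear sensitivity matrix obtained from calibration. The sensitivity matrix, the chip readings, and the least-squares inversion are illustrative assumptions, not the data-processing protocol actually used in this work.

```python
import numpy as np

# Sketch: recovering gas composition from an array of coated capacitive chips.
# Assumes each chip's capacitance shift is (approximately) linear in the
# concentrations of the target gases, with sensitivities known from calibration.
# The sensitivity matrix and readings below are illustrative, not measured data.

gases = ["CH4", "C2H6", "C3H8", "CO2", "N2"]

# S[i, j] = capacitance shift of chip i (fF) per vol% of gas j (from calibration).
S = np.array([
    [0.80, 0.10, 0.05, 0.02, 0.01],
    [0.05, 0.70, 0.20, 0.01, 0.01],
    [0.02, 0.15, 0.75, 0.01, 0.01],
    [0.01, 0.02, 0.02, 0.90, 0.01],
    [0.02, 0.02, 0.02, 0.05, 0.30],
    [0.10, 0.10, 0.10, 0.10, 0.10],   # a less selective chip adds redundancy
])

delta_C = np.array([66.1, 6.5, 3.0, 3.7, 5.2, 10.0])  # measured shifts (fF)

# Least-squares inversion; a non-negativity constraint could be added with
# scipy.optimize.nnls if some concentrations come out slightly negative.
x, *_ = np.linalg.lstsq(S, delta_C, rcond=None)
print(dict(zip(gases, x.round(1))))
# vol% estimates, roughly {'CH4': 82.0, 'C2H6': 3.0, 'C3H8': 1.0, 'CO2': 3.0, 'N2': 11.0}
```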
Results of the Field Tests
The final validation of the sensors was done in field tests, where the sensors were installed in several real gas grids that transport natural gas and/or mixtures with biogas to customers. The sensors were operating for several weeks. Two field tests were executed in the Dutch gas grid: first, a test to monitor changes in composition due to the feed of biogas from a sugar processing plant into a local gas grid of the city of Groningen (Figure 5), and secondly, a stability test in the gas grid of the island of Texel, where the sensor was exposed to Dutch natural gas. The sensors that were installed in the gas grid were first calibrated in laboratory conditions. When installed in the gas grid, some shifting of the baseline was observed, and a correction for this shift was introduced in the data processing. For the first few days, the GC data was compared with the sensor data and used for correction of the data processing protocol. For the rest of the time (3-4 weeks), the composition and corresponding energy values were calculated from the raw data using the processing protocol. The calculated methane concentration and calorific value are plotted versus time (in days) for a two-week measurement (Figure 5). The calculated sensor data follow the GC values very closely when changing from a mixture of biogas and natural gas to natural gas and finally to biogas. The differences in methane concentration between natural gas (~82 vol%) and biogas (~88 vol%) can be clearly resolved. Furthermore, although the gas concentrations change significantly when switching from natural gas to biogas (e.g., biogas does not contain any higher hydrocarbons), the calculated calorific value is rather constant (~35 MJ/m3) over time, which is confirmed both in the GC and in the sensor data. The second field test was performed only on natural gas, and resulted in a much more homogeneous gas composition over time, which was confirmed by comparing the GC results with the sensor data.
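For reference, a minimal sketch of the final calculation step, from a measured composition to calorific value and Wobbe index, is given below. The component heating values and relative densities are approximate literature values (and depend on the chosen reference conditions), not numbers taken from this work.

```python
# Sketch: calorific value and Wobbe index from a measured composition.
# Component higher heating values (MJ/m^3) and densities relative to air are
# approximate reference values, not taken from the paper.

HHV = {"CH4": 39.8, "C2H6": 70.3, "C3H8": 101.2, "C4H10": 133.9, "CO2": 0.0, "N2": 0.0}
REL_DENSITY = {"CH4": 0.554, "C2H6": 1.038, "C3H8": 1.522, "C4H10": 2.006,
               "CO2": 1.519, "N2": 0.967}

def calorific_value(x):           # x: mole fractions summing to 1
    return sum(x[g] * HHV[g] for g in x)

def wobbe_index(x):
    d = sum(x[g] * REL_DENSITY[g] for g in x)   # mixture density relative to air
    return calorific_value(x) / d ** 0.5

natural_gas = {"CH4": 0.82, "C2H6": 0.03, "C3H8": 0.01, "C4H10": 0.0,
               "CO2": 0.01, "N2": 0.13}
print(round(calorific_value(natural_gas), 1), round(wobbe_index(natural_gas), 1))
# -> ~35.8 MJ/m3 calorific value and ~44.6 MJ/m3 Wobbe index for this example
```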
This hydrocarbon sensor will also be used for the assessment of the composition of LNG as a fuel for automotive engines. The conversion of the gas composition to a relevant number for the fuel quality (i.e., the methane number) is more complex than for the calorific value; the higher hydrocarbons have a larger influence on the methane number and must therefore be measured with a higher accuracy.
Conclusions
It was found that the new sensor array can very well detect the changes in composition over a period of several weeks. The error in the measured concentration of all gases was well below 1 vol%. Furthermore, the calculated Calorific Value and Wobbe Index were within 0.4% of the actual values. The signal-to-noise ratio of the proposed micro-structured, responsive-coating functionalized electronic device was over 500. When LNG is used as a fuel in transportation, the quality of the fuel is quantified by the methane number, which can also be derived from the composition.
Author Contributions: A.B. designed the IDE structures, responsive coatings and data processing protocol; J.S. synthesized, formulated and applied the coatings to the chips, and performed the gas exposure experiments; H.B. initiated the current research, supervised and discussed the experiments, and edited the content of the paper.
Figure 1. Concept of capacitive detection of gas absorption using interdigitated electrodes, and silicon chip having eight electrodes.
Figure 2. Sensor chips coated with a Metal Organic Framework coating (left), and zeolite coating (right) on the eight electrodes, and close-up of the smallest electrodes (400 × 1000 µm).
Figure 3. Coated capacitive chips mounted to the PCB and inserted into the protective housing (left). The sensor is inserted into the heated pressure vessel for the laboratory validation tests (right).
Figure 5. Comparison of Sensor and GC data for a switch between mixed, natural and Biogas. Calorific Value is calculated from the composition (left). Installed sensor in distribution station (right). | 2,115.4 | 2018-12-03T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Tracer-free laser-induced grating spectroscopy using a pulse burst laser at 100 kHz
: This work shows the first application of a burst laser for Laser-Induced Grating Spectroscopy (LIGS) diagnostics. High repetition rate (100 kHz) LIGS is performed in non-reacting and reacting flows using the fundamental harmonic of a Nd:YAG pulse-burst laser as pump. In the first part of the paper, we demonstrate the first time-resolved, high repetition rate electrostrictive LIGS measurements in a sinusoidally-modulated helium jet, enabled by the highly energetic pulses delivered by the burst laser (around 130 mJ per pulse). In the second part of the paper, we perform thermal LIGS measurements in a premixed laminar methane/air flame. Thermal gratings are generated in the flame products from the water vapour, which weakly absorbs 1064 nm light. Thus, this work demonstrates the potential of seeding-free, high repetition rate LIGS as a technique to detect and time-resolve the instantaneous speed of sound, temperature, and composition in unsteady flow processes.
Introduction
Many fundamental processes in gas dynamics, such as supersonic/hypersonic flow, combustion, aero-acoustics, and plasma physics, occur on time scales of a millisecond or less, posing significant challenges for the accurate measurement of scalars such as temperature or density. To gain a deeper insight into the relevant chemical and physical quantities involved, laser diagnostics play a vital role. However, most of the pulsed lasers currently available, especially those with highly energetic pulses, operate only at low repetition rates (∼10 Hz). The laser pulses are short and thus can measure the instantaneous behaviour, but the low repetition rate means that unsteady phenomena typical of turbulent flows cannot be captured.
Recent efforts have tried to develop high repetition rate, non-invasive techniques to measure temperature variations in reacting flows. In fact, temperature is a key parameter in combustion as it affects engine efficiency, pollutant emission, formation and destruction, and noise. However, the hostile environment of combustion chambers makes these measurements extremely challenging, so that only a few experimental techniques are able to measure unsteady temperature variations without perturbing the flow. These include interferometry, laser absorption spectroscopy [1][2][3], Rayleigh scattering [4], and Coherent Anti-Stokes Raman Spectroscopy (CARS) [5][6][7][8]. Laser-Induced Grating Spectroscopy (LIGS) offers an alternative method for gas dynamic measurements using a simpler optical arrangement, while still providing high precision and good accuracy.
LIGS is a non-linear laser diagnostic technique which measures the local speed of sound using the modulation frequency detected from probing a laser-induced transient grating. LIGS signals are based on opto-acoustic effects which arise from the interaction of the local medium with an interference pattern generated from the overlap of two pulsed laser beams. A brief description of the technique is given here; for more detail see references [9][10][11]. LIGS signals can be generated either by a resonant or a non-resonant process. The non-resonant process arises from an electrostrictive interaction named Laser-Induced Electrostrictive Grating Scattering (LIEGS), whereas the resonant process arises from resonant absorption of the radiation energy, and subsequent collisional quenching, leading to Laser-Induced Thermal Grating Scattering (LITGS). When the excitation is rapid, two counter-propagating acoustic waves are generated, leading to a periodic modulation of the grating scattering efficiency. The frequency of these oscillations depends on the local speed of sound and is detected by the first-order Bragg scattering of a probe beam. The energies required to generate the thermal gratings are normally one order of magnitude lower than those needed to generate the electrostrictive gratings. Although LITGS typically offers a higher signal-to-noise ratio than LIEGS, its main limitation is the need for a sufficient concentration of a resonant species at the available pump laser wavelength. Much of the previous literature has covered LIGS measurements using low repetition rate, high power lasers. Only recent experiments have demonstrated LITGS at frequencies up to 10 kHz [12,13] using high repetition rate PIV lasers. As these pump lasers could only deliver low-energy pulses (1-5 mJ), a strong absorber at the laser wavelength had to be seeded into the flow in order to generate thermal LIGS signals. Such energies were too weak to generate detectable electrostrictive gratings, so there have been no high repetition rate electrostrictive LIGS measurements yet.
In this work, for the first time, a pulse burst Nd:YAG laser [14] is used as the pump to generate LIGS signals. A burst laser generates high repetition rate (10-100 kHz) pulses for a short amount of time (a burst of pulses over a time of 10 ms), followed by a cooling time between bursts of 10-20 s. Using the fundamental harmonic of the Nd:YAG laser (1064 nm), the energy of each pulse, of the order of 100 mJ, is sufficient to generate detectable electrostrictive gratings. High repetition rate, time-resolved electrostrictive LIGS measurements are obtained for the first time in a sinusoidally modulated helium-air jet, highlighting advantages and disadvantages of such an application. Then, as a proof of concept, 100 kHz LIGS is applied to a premixed methane/air flame in a vessel at 4 bar. In this case, a thermal grating is generated by the absorption of light at 1064 nm by weak water vapor lines, aided by the presence of percent levels of water in the reaction product mixture. The 100 kHz data is compared with measurements previously performed at 10 Hz [15], to verify accuracy and precision, and demonstrate that 100 kHz LIGS diagnostics using pulse burst lasers can be successfully applied in flames.
Optical arrangement
The optical layout of the experiment is sketched in Fig. 1 and is briefly described here (for a more detailed description refer to [15]). The λ = 1064 nm pump laser pulses were generated at 100 kHz with a Nd:YAG pulse-burst laser (Quasimodo TM from Spectral Energies). A 10 Hz Nd:YAG laser (Continuum Powerlite DLS9010) was used for comparison. A movable mirror on a magnetic base allowed switching between the two laser sources while using the same optical arrangement. Further on, a 50/50 beam splitter plate optimized for 1064 nm divided the beam into two identical beams. These two parallel beams, separated by 50 mm, were crossed using a 75-mm diameter bi-convex crossing lens (CL) with a 750-mm focal length, resulting in a crossing angle θ ≈ 3.81°. This arrangement produced a grating of Λ = (λ/2)/sin(θ/2) = 16 µm spacing in a probe volume of length and width of approximately 4 mm and 200 µm, respectively. The expected LIGS frequency f is calculated as f = nc/Λ = (n/Λ)√(γRT/W), where c is the local speed of sound, n = 1 for thermal gratings and n = 2 for electrostrictive gratings, γ is the specific heat capacity ratio, R the universal gas constant, T the local temperature, and W the molar mass. The probe beam was generated by a continuous wave solid state laser (Coherent Verdi G) operating at 532 nm with a power output of 2 W and a diameter of ∼2 mm. A guide beam, also at 532 nm, produced by a diode laser (Thorlabs CPS532), was used as a tracer
to identify the direction of the scattered signal and facilitate positioning of the collection optics. These four beams (two pumps, the probe, and the tracer beams) were coplanar, and alignment masks were used to adjust their respective positions. The LIGS signal was collected by a PMT (Hamamatsu H10721-20). Two 550 nm low-pass filters (T: 99.9% at 532 nm, T: 0.08% at 1064 nm) were mounted in front of the PMT to improve the rejection of background scattered light. An infrared photodiode (Thorlabs DET210) detected the pump pulses and triggered acquisition of the LIGS signal. The PMT and photodiode signals were recorded using an oscilloscope (Keysight DSOS804A, 10 Gs/s sampling rate, 8 GHz bandwidth). LIEGS measurements were performed over a modulated jet of helium and air (Fig. 2 (a)). A flow of helium (5 SLPM) was continuously delivered into a plenum, which was then connected to a 3/8" vertical pipe. A loudspeaker driven at 300 Hz modulated the helium flow entering the pipe. At a distance of 200 mm downstream of the exit of the plenum, a steady flow of air from a choked valve (10 SLPM) was added to the helium flow using a T-junction. The air-helium mixture then travelled through the metal pipe for 200 mm before exiting into the room. The probe volume was located in the center of the jet, 3 mm above the exit of the pipe. In addition to electrostrictive LIGS measurements in a non-reactive jet, thermal LIGS measurements were performed at 100 kHz and 10 Hz in laminar premixed methane/air flat flames produced with a McKenna burner. The burner was located in a vertically-oriented pressure vessel operated at 4 bar (Fig. 2 (b)). A detailed description of the vessel with the burner can be found in [15]. Time-resolved, high repetition rate LIEGS measurements were performed in the helium/air jet, where the concentration of helium was periodically modulated at 300 Hz with the loudspeaker, changing, in turn, both the specific heat capacity ratio and the molar mass of the mixture, and therefore the local speed of sound in the flow passing through the probe volume. The pulse burst laser was operated at a repetition rate of 100 kHz, delivering 1000 pulses of 130 mJ each (10 ms burst duration). The cooling time between bursts was set to 20 s.
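A short sketch of the expected signal frequency for this arrangement, using the grating-spacing and frequency relations quoted above with the stated beam-crossing geometry, is given below; the air properties are nominal room-temperature values assumed only for illustration.

```python
import math

# Sketch: grating spacing and expected LIGS frequency for the arrangement in the
# text (lambda = 1064 nm pump, crossing angle ~3.81 deg). Gas properties for air
# at room temperature are assumed nominal values.

lam = 1064e-9                      # pump wavelength (m)
theta = math.radians(3.81)         # beam crossing angle
Lambda = (lam / 2) / math.sin(theta / 2)   # grating spacing (m), ~16 um

def speed_of_sound(gamma, T, W, R=8.314):
    return math.sqrt(gamma * R * T / W)

def ligs_frequency(c, n):          # n = 1 thermal, n = 2 electrostrictive
    return n * c / Lambda

c_air = speed_of_sound(gamma=1.4, T=293.0, W=0.02897)    # ~343 m/s
print(Lambda * 1e6, ligs_frequency(c_air, n=2) / 1e6)    # ~16 um, ~43 MHz
```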
High repetition rate LIEGS measurements in a helium/air jet
As helium and air molecules do not absorb 1064 nm radiation, no thermal gratings were formed in the probe volume, but electrostrictive signals were generated as the energies delivered by the pulses of the burst laser were sufficiently high. Fig. 3 (a) shows two single-shot normalized LIEGS signals obtained in jets with different compositions: the red signal was acquired in pure air (which was used for calibration), while the black signal was acquired in the steady mixture of air and helium, with a helium concentration of 6.3% by mass. The different densities of the two jets (and consequently the different speeds of sound) were reflected in the different oscillation frequencies and the different damping rates of the two signals. In the mixture with helium, both the speed of sound and the damping rate of the signals were higher, due to the lower density of the gas in the probe volume [11].
Measurements were taken in the steady jet of air, in the steady jet of air and helium, and in the modulated jet of air and helium. The loudspeaker was driven with a sinusoidal wave input but, due to the rather large input amplitude used, the modulated flow did not display a perfect sinusoidal shape. Fig. 3 (b) shows the time-resolved (non-averaged) normalized density ρ/ρ_0 derived from the LIEGS signals during a burst using the following procedure. Let f_E,0 be the calibration frequency, which is acquired in pure air. For each LIEGS shot in the mixture, the instantaneous molar concentration of helium in the probe volume X_He is determined from its frequency f_E,i using Eq. (1) [13]. Assuming constant temperature, the knowledge of X_He then allows the determination of the instantaneous density ρ_i from the mixture properties, where c_p is the specific molar heat capacity at constant pressure and the subscripts a and He refer to pure air and pure helium, respectively. In Fig. 3, the red dots correspond to the air-only jet, the black circles to the jet of air and helium (6.3% in mass), and the blue circles to the sinusoidally modulated jet (time-resolved measurements), while the magenta line shows an average in the oscillating jet over 42 bursts.
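The following sketch illustrates one possible implementation of this inversion, assuming ideal-gas mixing rules at constant temperature and pressure; it follows the spirit of the procedure described above but does not reproduce the paper's exact mixture relations, and all numerical inputs are placeholders.

```python
import math
from scipy.optimize import brentq

# Sketch: inverting a measured LIEGS frequency for helium mole fraction and
# density in a He/air jet, assuming ideal-gas mixing rules at constant T and p.
# Not a verbatim reproduction of the paper's equations; inputs are placeholders.

R, T, p = 8.314, 293.0, 101325.0
W_air, W_He = 0.02897, 0.004003          # kg/mol
cp_air, cp_He = 29.1, 20.8               # J/(mol K), molar heat capacities

def speed_of_sound(x_he):
    W = x_he * W_He + (1 - x_he) * W_air
    cp = x_he * cp_He + (1 - x_he) * cp_air
    gamma = cp / (cp - R)
    return math.sqrt(gamma * R * T / W)

def density(x_he):
    return p * (x_he * W_He + (1 - x_he) * W_air) / (R * T)

def helium_fraction_from_frequency(f_i, f_0):
    # Grating spacing and n cancel in the ratio: c_i = c_air * f_i / f_0.
    c_i = speed_of_sound(0.0) * f_i / f_0
    return brentq(lambda x: speed_of_sound(x) - c_i, 0.0, 1.0)

x = helium_fraction_from_frequency(f_i=50e6, f_0=43e6)
print(x, density(x))   # e.g. X_He ~ 0.27, rho ~ 0.92 kg/m^3
```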
Drift in the probe volume
A drift in the inferred density can be observed by looking at the evolution of the density derived from the LIGS signals over the burst (Fig. 3 (b)). The behaviour of the signals during the burst suggests that such a drift was not caused by a physical change in the local conditions owing to e.g. laser-induced thermalisation. For the signal in air (red dots) the derived density shows an apparent positive increase during the burst, which would correspond to a local temperature decrease that cannot be attributed to any external changes caused by the laser. The apparent change in density was instead caused by a drift in the spatial location and crossing angle of the beams forming the probe volume. In fact, images of the laser beam during the burst reveal that the beam is affected by a spatial drift, becoming also somewhat defocused. This effect is attributed to the high radiation energy and the consequent thermal loading that a burst generates on the amplifiers and optics (both inside and outside the laser). As the two beams travel for about 2 m before crossing in the probe volume, even small initial shifts can be highly amplified along the path, and this can substantially affect the crossing angle and consequently the grating spacing, as well as the spatial location of the probe volume.
Images of the beam revealed that its drift is consistent from burst to burst, so that, in principle, the effect of the change in the crossing angle can be removed from the data by using a calibration in steady air. As the speed of sound in steady air c_a can be assumed to be constant, the variation of the LIGS frequency f_a,i at each shot (i) should only reflect the variation of the crossing angle θ_i and consequently of the grating spacing Λ_i, i.e. Λ_i = n c_a / f_a,i (Eq. (4)). This quantity Λ_i can be used to obtain the correct speed of sound c_E,i in each shot during the burst as c_E,i = f_E,i Λ_i / n (Eq. (5)). From Eq. (4), the crossing angle had changed by 5% between the beginning and the end of the burst. The data is also more scattered at the end of the burst: the overlap between the three beams (two pump beams and the probe beam) had deteriorated, making the quality of the signal poorer.
In particular, the biggest variation occurs after t = 5 ms; thus, this effect might be mitigated by reducing the duration of the burst. The correction of Eq. (5) accounts only for the variation of the crossing angle, but not for the spatial change in the position of the probe volume. The beam drift had moved the probe volume away from the centre of the jet, therefore the data at the beginning and at the end of the burst were not acquired at the same location. In this experiment, this movement is particularly evident in the last milliseconds of the burst in the helium-air jet (black dots in Fig. 3 (b)): the probe volume had travelled out of the jet completely until it reached the still air, as evidenced by the acquired density, which corresponds to air. Additionally, for the sinusoidally-modulated flow, in the first two periods of oscillations generated by the speaker, the acquired density variation has a clearer shape, while in the last one it is distorted. Variations in the length and location of the probe volume might also change the overall composition of the mixture inside it (higher or lower percentage of helium). This could explain why the opposite behaviour in the derived density is observed in the jets of pure air and of helium and air: the concentration of helium in the probe volume may vary between the beginning and the end of the burst. This reduces the spatial resolution that can be achieved with the measurements and hinders the acquisition of local phenomena occurring at a specific point. Care must be taken while choosing the experimental target, which has to be large enough to ensure that the probe volume remains inside it during a burst and has uniform properties in the measurement plane (or at least in the area where the probe volume is expected to move).
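A compact sketch of the shot-by-shot correction of Eqs. (4)-(5) is given below; the frequency arrays are placeholders rather than measured data, and the constant speed of sound in air is the stated assumption of the calibration.

```python
import numpy as np

# Sketch of the shot-by-shot drift correction described above: frequencies
# recorded in steady air (known speed of sound c_air) give the instantaneous
# grating spacing Lambda_i, which is then used to convert in-jet frequencies to
# speeds of sound. The arrays below are placeholders, not measured data.

n = 2                     # electrostrictive gratings
c_air = 343.0             # m/s, assumed constant in the steady-air calibration

f_air = np.array([43.0e6, 42.8e6, 42.5e6])   # calibration burst in still air (Hz)
f_jet = np.array([50.1e6, 49.8e6, 49.4e6])   # same shot indices in the jet (Hz)

Lambda_i = n * c_air / f_air                 # instantaneous grating spacing (m), Eq. (4)
c_jet = f_jet * Lambda_i / n                 # corrected speed of sound (m/s), Eq. (5)
print(c_jet)                                 # -> roughly 399-400 m/s for these inputs
```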
This simple experiment shows how the data acquired with a burst laser should be carefully evaluated to ensure that variations due to the misalignment of the system are not erroneously confused with physical changes in the properties of the flow. As a final remark, it has to be clarified that this is not an inherent limitation of the technique, but of the current state of the art of the lasers. Improvements in the instrumentation, such as a better stability of the burst laser, and more robust optical components might help to reduce this issue. Here we compare the speed of sound and water concentration derived from LIGS signals using the 100 kHz pulse burst laser and the 10 Hz laser (the results obtained with the 10 Hz laser are discussed in [15]). The measurements were performed in a steady laminar flame due to a limitation of the pressure vessel, which could not sustain the flow rates required for turbulent/unsteady flames. Water available in the flame products weakly absorbs 1064 nm light, generating a detectable thermal grating at such energies, and a weaker, but non-negligible, electrostrictive grating.
High repetition rate LIGS measurements in flames
LIGS was used to determine the local speed of sound and water concentration in the products of lean-to-rich premixed methane-air flames at 4 bar. The measurements shown here were conducted 5 mm above the surface of the burner, in the product zone. A total of 200 pulses of 130 mJ each were delivered at 100 kHz from the pulse burst laser. This burst duration was chosen to reduce the thermal load on the windows of the vessel and also to make the effect of the beam drift less severe. Fig. 4 (a) shows ensemble-averaged LIGS signals for the same flame (with an equivalence ratio of φ = 0.95) obtained with the high (black) and low (red) repetition rate lasers. The thermal (T) and electrostrictive (E) peaks are marked on the figure. The two signals display a nominally identical oscillation frequency. The small differences in the frequencies are due to the differences in the crossing angles of the pump beams in the low- and high-speed set-ups. These variations are taken into account during the corresponding calibrations in still air. The two signals show the same number of peaks, but they differ in their contrast (e.g. peak-to-valley amplitude ratio), due to the differences in the pulse width of the two lasers [11].
The equivalence ratio in the flames was varied from φ = 0.73 to φ = 1.30 by varying the fuel mass flow rate while keeping the air mass flow rate constant, to verify accuracy and precision. Fig. 4 compares the speed of sound (b) and water concentration (c) obtained from LIGS signals with the 100 kHz and the 10 Hz laser and with predictions from burner-stabilized flame simulations using Chemkin. The speed of sound is extracted from the oscillation frequency of the signals, according to Eq. (1). The water concentration is derived after calibration using the intensity ratio of the thermal to electrostrictive peaks in the signal, as explained in [15]. The inset in Fig. 4 (c) shows the calibration curves for determining the water concentration. The calibration curves obtained for the two laser sources are both linear, but they have a different slope due to the different signal contrasts. Results at high and low repetition rate agree to within 0.2-1.2% for the speed of sound and 0.2-1.7% for the water concentration, and also show good agreement with the Chemkin calculations discussed in [15], suggesting that high repetition rate measurements of temperature and water concentration can be made in reacting flows using a 1064 nm pulse burst laser and water as an absorber.
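The sketch below illustrates the kind of linear calibration described here, mapping the thermal-to-electrostrictive peak ratio to a water mole fraction and then inverting it for an unknown flame; the calibration points are placeholders, and the actual calibration data and procedure are those of [15].

```python
import numpy as np

# Sketch: linear calibration of the thermal-to-electrostrictive peak ratio
# against known water mole fractions, then evaluation for an unknown flame.
# Calibration points are placeholders; the real procedure is described in [15].

ratio_cal = np.array([0.8, 1.6, 2.4, 3.2])      # thermal/electrostrictive peak ratio
x_h2o_cal = np.array([0.05, 0.10, 0.15, 0.20])  # known water mole fractions

slope, intercept = np.polyfit(ratio_cal, x_h2o_cal, 1)

def water_fraction(peak_ratio):
    return slope * peak_ratio + intercept

print(water_fraction(2.9))   # e.g. -> ~0.18 for this placeholder calibration
```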
Conclusions
For the first time, tracer-free LIGS measurements are demonstrated at rates of 100 kHz using a pulse burst laser. The energies delivered at the fundamental harmonic (around 130 mJ per pulse) allow the generation of detectable electrostrictive gratings. Here we present the first time-resolved LIEGS measurements in a sinusoidally-modulated, non-reacting helium-air jet. This simple experiment highlights an issue regarding the stability of the laser: the high energies used to pump the laser amplifier cause a drift in the direction of the laser beam, which manifests itself as a drift in the acquired frequency of LIGS signals, limiting the useful burst duration. In the second part of the paper, to demonstrate the potential of the technique in reacting flows, a comparison between high and low repetition rate LIGS measurements was made in premixed methane-air flames at 4 bar. The water vapour naturally produced by the flame weakly absorbs the 1064 nm laser light, generating a thermal grating. The speed of sound and water concentration measurements at 100 kHz compare well with results at 10 Hz, demonstrating that LIGS can potentially be used as a tool to detect and time-resolve unsteady changes in turbulent reacting flows. In conclusion, this work demonstrates a significant improvement in the time resolution obtainable using LIGS. To the authors' knowledge, energies higher than 100 mJ per pulse (which are typically required to generate electrostrictive signals using folded boxcars arrangements) at 10-100 kHz can be delivered only by burst lasers, which allow repetition rates at least one order of magnitude higher than conventional high-speed PIV lasers.
Fig. 3. Normalized single-shot LIEGS signals recorded at 100 kHz in the steady jets of air (red) and helium and air (black) (a). Density derived from LIGS signals during a 10 ms burst (b): steady jet of pure air (red circles), steady mixture of helium and air (black circles); modulated jet of helium and air (blue circles) and corresponding averaged signal over 42 bursts (magenta line). | 4,839.8 | 2019-10-14T00:00:00.000 | [
"Physics"
] |
IL-1β and IL-6 Upregulation in Children with H1N1 Influenza Virus Infection
The role of cytokines in relation to clinical manifestations, disease severity, and outcome of children with H1N1 virus infection remains thus far unclear. The aim of this study was to evaluate interleukin IL-1β and IL-6 plasma expressions and their association with clinical findings, disease severity, and outcome of children with H1N1 infection. We prospectively evaluated 15 children with H1N1 virus infection and 15 controls with lower respiratory tract infections (LRTI). Interleukin plasma levels were measured using immunoenzymatic assays. Significantly higher levels of IL-1β and IL-6 were detected in all patients with H1N1 virus infection compared to controls. Notably, in H1N1 patients with more severe clinical manifestations of disease, IL-1β and IL-6 expressions were significantly upregulated compared to H1N1 patients with mild clinical manifestations. In particular, IL-6 was significantly correlated with specific clinical findings, such as severity of respiratory compromise and fever. No correlation was found between interleukin expression and final outcome. In conclusion, H1N1 virus infection induces an early and significant upregulation of both interleukin IL-1β and IL-6 plasma expressions. The upregulation of these cytokines is likely to play a proinflammatory role in H1N1 virus infection and may contribute to airway inflammation and bronchial hyperreactivity in these patients.
Introduction
In recent years the world has been facing a new pandemic caused by an H1N1 influenza virus, the so-called H1N1/09 virus, which contains a unique combination of gene segments that had never been identified in humans or animals [1]. This new pandemic strain is of particular concern because of its efficient person-to-person transmission, responsible for increased virulence and morbidity in humans [2,3].
The novel influenza H1N1 virus was identified as a cause of febrile respiratory infections ranging from self-limited to severe illness in both adults and children. Recent data reported that most cases of H1N1 infection with a high rate of hospitalization occurred in children aged 5-14 years. A small percentage of these patients can develop more complicated and severe symptoms, such as elevated fever, violent dry cough, pneumonia, and acute respiratory distress syndrome (ARDS) [4,5], requiring admission to a Pediatric Intensive Care Unit (PICU) and mechanical ventilation [6].
Several hypotheses to explain this particular virulence of H1N1 in children have been advanced, including downregulation of type 1 interferon expression, apoptosis, and hyperinduction of proinflammatory cytokines [7]. Upregulation of inflammatory cytokines, such as TNF-α, IL-1β, IL-6, and IL-10, and a cytokine-mediated inflammatory response have also been documented as responsible for the severity of viral lung infections [8]. Different viruses, such as respiratory syncytial virus (RSV) and adenovirus, enhanced the production of IL-6 by human macrophages, influencing the susceptibility and severity of respiratory infections [9]. In addition, pulmonary and systemic inflammatory stimuli, such as hypoxia and fever, induce the biosynthesis of interleukins (ILs) in most cell types, including respiratory endothelium and mast cells [10,11], thus determining the increase of vascular permeability and leukocyte accumulation in lung tissue [12,13]. In the literature the inflammatory role of IL-6 and IL-1β in both systemic and respiratory disorders such as meningitis, head injury, and ARDS has also been reported [14,15]. Moreover, recent studies demonstrated that influenza virus A elicits an acute inflammatory response characterized by the production of pro-inflammatory cytokines, such as IL-33 and IL-6, in infected lungs, suggesting a key role for these interleukins in the pathogenesis of respiratory epithelial cell damage and lung inflammation [16,17]. However, the role of most cytokines in relation to clinical findings, disease severity, and outcome of children with H1N1 virus infection remains thus far unclear. Attempting to elucidate the immune mechanisms of inflammation and to clarify the role of interleukins IL-1β and IL-6 in children with H1N1 virus infection, we evaluated the plasma levels of these cytokines in 15 children with H1N1 infection and 15 controls with lower respiratory tract infections (LRTI), to determine whether a correlation exists between the expression of these molecular markers and the clinical findings of these patients.
Study Population.
We conducted a prospective observational clinical study among children admitted from October 2009 to December 2010 with the diagnosis of influenza H1N1 virus infection and LRTI to the Pediatric Intensive Care Unit (PICU) and Pediatric Infectious Disease Unit (PIDU) of the "Agostino Gemelli" Hospital, Catholic University Medical School, Rome, Italy. Patients with H1N1 influenza virus infection were grouped according to age, etiology of virus infection, findings of chest radiograph, clinical and laboratory characteristics, respiratory care, and final outcome (Table 1). We also decided to divide the patients with H1N1 virus infection into two groups (severe and mild manifestations of H1N1 infection) based on the severity of the symptoms and on admission to the PICU. We considered as severe manifestations of H1N1 influenza infection the presence of hypoxia at admission (SpO2 less than 82% in room air), ARDS requiring mechanical ventilation or noninvasive ventilation (NIV) by Helmet, oxygen supplementation by Ventimask or CPAP by face mask, severity of fever (more than 39 °C at the moment of admission), presence and duration of cough, presence of specific radiological findings, such as pneumothorax (PNX), pneumopericardium, and pneumomediastinum, and other specific clinical manifestations, such as neurological involvement. Based on these admission parameters, nine patients with severe manifestations of H1N1 influenza virus infection were admitted to the PICU, while the other 6 patients with mild symptoms of H1N1 infection were admitted to the PIDU. Regarding the control group, 8 infants with severe RSV bronchiolitis were admitted to the PICU, while the other 7 children with LRTI were admitted to the PIDU. Six infants with RSV bronchiolitis admitted to the PICU underwent oxygen supplementation and NIV by Helmet, while the other 2 patients required mechanical ventilation. The other 7 infants belonging to the control group required only oxygen supplementation and symptomatic treatment (Table 2).
Oral oseltamivir (60 mg twice daily for 5 days) was administered to all 15 patients with the diagnosis of influenza H1N1 virus infection, and supportive therapy for ARDS was started based on the severity of respiratory failure (Table 1). Fever was treated aggressively with paracetamol, and dry cough with aerosol therapy. Chest X-ray was performed within the first 6 hours of hospital admission. A chest CT scan was performed in all children with H1N1 infection with particularly severe respiratory impairment or with specific findings at standard chest radiography (i.e., PNX, pneumopericardium, or pneumomediastinum). All patients were isolated at the moment of admission based on their clinical symptoms suspected for H1N1 infection or other acute respiratory illness. The throat/nose swabs and blood samples for both laboratory studies and cytokine determination were taken at the moment of admission. All the throat/nose swabs were sent to the microbiology laboratory for influenza virus detection and were analyzed for influenza A, B, subtypes of A by influenza real-time RT-PCR test, and RSV infection. Tables 1 and 2 report the clinical and demographic characteristics of both patients and controls studied.
The outcome of patients was assessed upon discharge from the hospital using the Glasgow Outcome Score (GOS), which assigns a score of 1 to children who died, 2 to persistent vegetative state, 3 to severe neurologic deficits, 4 to mild neurologic deficits, and 5 to completely healthy children [18,19].
Plasma Sample Collection.
In H1N1 patients we collected blood samples using indwelling radial artery catheters in children admitted to the PICU or arterial puncture in children admitted to the PIDU after local analgesic treatment. All samples were obtained in the acute phase of the illness, at the moment of admission of the patients, and before starting any treatment. The plasma samples were submitted for microbiological and biochemical analysis (leukocyte and platelet counts, serum C-reactive protein concentration, procalcitonin, glucose-protein concentration, electrolytes, acid-base study, BUN, etc.).
To measure interleukin levels all blood samples were centrifuged for 10 min at 5,000 rpm, and the supernatants were immediately stored at −70 °C until analysis.
As controls, we used radial artery blood samples collected from children with the diagnosis of LRTI who had undergone blood sample analysis at the moment of their admission to the PICU or PIDU.
The study was approved by the Institutional Review Board, and the parents of participating children were informed about the study and provided written informed consent.
Interleukin Assays.
IL-1β and IL-6 were measured from blood samples using commercial immunoenzymatic kits (Human Quantikine by R&D Systems) following the instructions of the manufacturer. The sensitivity of the assay was typically 0.70 pg/mL for IL-6 and 1 pg/mL for IL-1β; no cross-reactivity or interference with other related interleukins was observed. Results were expressed in pg/mL, and all assays were performed in duplicate. [...] were admitted to the PIDU (4 with a diagnosis of non-RSV bronchiolitis and 3 with a diagnosis of influenza A (H2N3) virus infection). Regarding clinical differences between the two groups, H1N1 patients experienced higher median fever (39.2 °C) compared to controls (37.7 °C) (p < 0.0001). Cough was a common symptom in both groups. However, H1N1 patients more frequently suffered from a dry and longer-lasting cough compared to LRTI patients (median 6 days versus 4 days) (p < 0.0001). The most frequent pulmonary abnormalities at chest X-ray were pneumonia and pulmonary consolidation in the H1N1 patients, while in LRTI children we detected atypical findings, such as hyperinflated lungs and segmental pulmonary atelectases. Two patients with H1N1 infection showed PNX, while another three children showed severe respiratory complications, such as pneumopericardium, pneumomediastinum, and pneumorrhachis at chest CT scan (Table 1). No pulmonary or systemic complications were reported in the LRTI group. No differences in clinical manifestations, such as gastrointestinal and neurological symptoms, were reported between the groups. Regarding laboratory tests (blood cell and platelet counts, serum C-reactive protein, procalcitonin, GOT, GPT, CTN, and urea), no significant differences were detected between H1N1 patients and LRTI controls. All children, both patients and controls, had a good outcome without any significant complications (GOS 5), but H1N1 patients had a significantly longer hospitalization compared to the control group (9 days versus 3 days; p = 0.0013).
Correlation of Interleukin Expression with Disease Severity and Clinical Manifestations in H1N1 Patients.
To elucidate the association between interleukin expression and disease severity, we analyzed their plasma levels both in patients with severe (9 patients) and mild (6 patients) symptoms of H1N1 influenza virus infection. Compared to the mild patients, severe H1N1 patients produced significantly higher levels of IL-1β (22.6 ± 4.7 pg/mL versus 9.1 ± 2.8 pg/mL; p < 0.0001) and IL-6 (124.1 ± 11.8 pg/mL versus 84.0 ± 8.6 pg/mL; p < 0.0001) (Figure 4). Moreover, to verify whether there was a correlation between interleukin upregulation and clinical manifestations in H1N1 patients, we compared the plasma levels of these cytokines with some clinical symptoms of the patients. In particular, we detected a positive correlation between the plasma level of IL-6 and fever, with a coefficient of determination of 0.64 (p = 0.0004) (Figure 5). Finally, we found a negative correlation between IL-6 plasma level and SpO2 at admission in room air, with a coefficient of determination of 0.53 (p = 0.0020) (Figure 6). No significant correlations were found between interleukin expression and other clinical and laboratory parameters, such as biochemical markers of inflammation (C-reactive protein and procalcitonin), respiratory care, systemic complications, and, finally, the outcome of the children with H1N1 virus infection.
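As an illustration of how such correlation statistics can be obtained, the sketch below fits a linear regression of IL-6 level against temperature and reports the coefficient of determination and p-value; the data arrays are placeholders and do not reproduce the patient measurements of this study.

```python
import numpy as np
from scipy import stats

# Sketch: computing a coefficient of determination (R^2) and p-value for the
# relation between IL-6 plasma level and fever, as reported in the text.
# The arrays below are placeholders, not the study's patient data.

temperature = np.array([38.2, 38.6, 39.0, 39.2, 39.5, 39.8, 40.1])  # deg C
il6 = np.array([82.0, 95.0, 101.0, 118.0, 124.0, 131.0, 140.0])     # pg/mL

res = stats.linregress(temperature, il6)
print(f"R^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.4f}")
```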
Discussion
Our study, despite the limited patient sample evaluated so far, provides evidence that H1N1 virus infection induces an early and significant upregulation of interleukin IL-1β and IL-6 plasma levels, suggesting that these cytokines are responsible for different molecular reactions leading to airway inflammation and disease severity. Compared to LRTI controls, H1N1-infected children showed a strongly higher production of both IL-1β and IL-6 soon after virus lung infection, and this overexpression seems to correlate with the severity of clinical compromise assessed upon admission. We also observed that in H1N1 patients with more severe clinical manifestations of disease, plasma levels of IL-1β and IL-6 were significantly upregulated compared to H1N1 mild patients, and that this overexpression was correlated with some specific clinical manifestations and a longer time of hospitalisation. More specifically, IL-6 upregulation was significantly correlated with the severity of respiratory compromise, as evidenced by a lower SpO2 at admission and the higher fever observed in this subset of children, as previously reported in patients with H1N1 virus infection [7]. No association was found between the plasma expression of these factors and the final outcome of patients and controls.
To date it is difficult to explain the exact role of ILs in the mechanisms of the host response to the virus, because both pro-inflammatory and immunoprotective actions have been reported in previous research. H1N1 virus infection causes the activation of host macrophages and lymphocytes, determining the release of pro-inflammatory cytokines. The increased expression of pro-inflammatory cytokines in the lung tissue may lead to higher blood vessel permeability, phagocytic cell recruitment, apoptosis of lung epithelial cells, and release of neutrophil-derived enzymes, such as myeloperoxidase and elastase, responsible for the severity of acute lung injury [20]. Our results are in agreement with these studies, as we showed a significant correlation between IL upregulation and severity of respiratory compromise in children with H1N1 virus infection. There are some possible explanations for this relationship. Upregulation of IL-1β and IL-6 may affect lung function because hypoxia is in turn responsible for endogenous cytokine production after H1N1 lung infections [10]. Cytokine upregulation may cause epithelial cell damage through different mechanisms. ILs have a direct toxic effect by increasing the production of nitric oxide synthase, cyclooxygenase, and free radicals and by favouring the release of excitatory amino acids in experimental models of neurotoxicity and also in patients with severe sleep apnea [21,22], thus determining impaired pulmonary function [23]. Previous studies, in fact, reported a correlation between IL-1β and IL-6 upregulation and some clinical and radiological findings, such as pneumonia and ARDS, both in experimental animal models and in children with naturally acquired seasonal influenza A [24][25][26][27]. In particular, IL-1β and IL-6 have been identified as specific markers of the severity of acute lung injury during H1N1 influenza virus infection [8], and it has also been reported that IL-1β is an early and useful biomarker of the severity and progression of lung inflammation in patients undergoing mechanical ventilation and unresponsive to antimicrobial treatment [28]. Our results are consistent with these previous studies, because children with more severe clinical and radiological manifestations of H1N1 disease, such as ARDS and a longer, dry cough, elicited a more intensive production of IL-1β and IL-6 than mild H1N1 patients, suggesting that this upregulation exerted a key role in the biochemical and molecular processes affecting the lung soon after the infection, leading to the development of airway inflammation and bronchial hyperreactivity [29,30].
Thus far it is difficult to say whether the observed IL upregulation in H1N1 patients represents a protective mechanism for respiratory cell survival or is secondary to a loss of physiological control of IL biosynthesis. Available clinical and experimental data do not permit a definitive clarification of these findings. IL plasma levels increase in several inflammatory conditions, such as allergen provocation and asthma. Recently, lymphocytes, and in particular activated T cells, were shown to express IL receptors in experimental animal models of pulmonary sarcoidosis and chemical lung injury [31,32]. It is therefore possible that IL upregulation is secondary to rapid lymphocyte activation by H1N1 virus infection and that this overexpression represents an important process in the mechanisms of the inflammatory host response after viral lung infections [33,34].
Previous studies, in fact, reported that different viral lung infections are associated with early upregulation of cytokine biosynthesis, suggesting that the changes in IL release may contribute to the development of airway inflammation and bronchial hyperreactivity [34]. In our study, the IL-1β and IL-6 upregulation observed early after H1N1 virus infection was consistent with the timing of cytokine expression in experimental models of virus-infected human alveolar macrophages, suggesting that this overexpression plays a key role in the mechanisms of the inflammatory lung response [35]. The significant correlation between IL upregulation and the severity of H1N1 virus infection observed in our patients might reflect an endogenous response to the molecular mechanisms activated in the epithelial cells of the infected lung, suggesting that IL upregulation acts in different ways to amplify and propagate inflammation in the airways. However, given that the statistical power to find a statistically significant association in a sample of 15 patients is very low, we need to be very cautious in interpreting these data, because only limited information is available on IL expression in children with viral lung infections.
In conclusion, our observations provide new evidence that an immune response is activated at the early stage of pandemic H1N1 influenza virus infection, with upregulated production of plasma interleukins IL-1β and IL-6. These findings are consistent with previous experimental and clinical studies confirming a key role for both of these interleukins in the pathogenesis of airway inflammation and bronchial hyperreactivity during viral lung infections. The increased expression of these cytokines may be the underlying cause of the observed clinical symptoms in severe H1N1 patients, and defining the relationships between IL expression and the pathophysiology and clinical manifestations of H1N1 may help to shed light on the molecular pathogenesis of H1N1 influenza and other human viral lung infections. Further clinical and experimental investigations are necessary to identify the IL target cells in the damaged lung and to discover possible clinical applications of ILs in children with H1N1 influenza virus and other viral lung infections. | 4,067 | 2013-04-29T00:00:00.000 | [
"Medicine",
"Biology"
] |
Exactly marginal deformations from exceptional generalised geometry
We apply exceptional generalised geometry to the study of exactly marginal deformations of N = 1 SCFTs that are dual to generic AdS5 flux backgrounds in type IIB or eleven-dimensional supergravity. In the gauge theory, marginal deformations are parametrised by the space of chiral primary operators of conformal dimension three, while exactly marginal deformations correspond to quotienting this space by the complexified global symmetry group. We show how the supergravity analysis gives a geometric interpretation of the gauge theory results. The marginal deformations arise from deformations of generalised structures that solve moment maps for the generalised diffeomorphism group and have the correct charge under the generalised Reeb vector, generating the R-symmetry. If this is the only symmetry of the background, all marginal deformations are exactly marginal. If the background possesses extra isometries, there are obstructions that come from fixed points of the moment maps. The exactly marginal deformations are then given by a further quotient by these extra isometries. Our analysis holds for any N = 2 AdS5 flux background. Focussing on the particular case of type IIB Sasaki-Einstein backgrounds we recover the result that marginal deformations correspond to perturbing the solution by three-form flux at first order. In various explicit examples, we show that our expression for the three-form flux matches those in the literature and the obstruction conditions match the one-loop beta functions of the dual SCFT.
The AdS/CFT correspondence allows the study of a wide class of superconformal field theories in four dimensions, many of which are realised as the world-volume theories of D3-branes at conical singularities of Calabi-Yau manifolds. Examples are N = 4 super Yang-Mills or the Klebanov-Witten model, which are obtained by putting D3-branes in flat space or at the tip of the cone over T1,1, respectively. An interesting feature of N = 1 SCFTs is that they can admit exactly marginal deformations, namely deformations that preserve supersymmetry and conformal invariance. A given N = 1 SCFT can then be seen as a point on a "conformal manifold" in the space of operator couplings. The existence and dimension of the conformal manifold for a given theory can be determined using N = 1 supersymmetry and renormalisation group arguments [1][2][3][4]. For instance, N = 4 super Yang-Mills admits two exactly marginal deformations, the so-called β- and cubic deformations. What is more difficult to determine is the precise geometry of the conformal manifold.
Using AdS/CFT, the same questions can be asked by studying deformations of the supergravity background dual to the given SCFT. For N = 4 super Yang-Mills, the supergravity dual of the full set of marginal deformations is known only perturbatively. In [5], the first-order perturbation was identified with the three-form fluxes of type IIB, and the corresponding linearised solution was given in [6]. The second-order solution, including the back-reacted dilaton and metric, was constructed in [7], which also identified an obstruction to the third-order solution, corresponding to the vanishing of the gauge theory beta functions. This required considerable effort, and it seems far from promising to reconstruct the full solution from a perturbative analysis. On the other hand, using duality transformations, Lunin and Maldacena were able to build the full analytic supergravity dual of the β-deformation [8]. The same transformation applied to T1,1 or Yp,q manifolds gives the gravity duals of the β-deformation of the Klebanov-Witten theory and more general N = 1 quiver gauge theories [8]. For the other marginal deformations of Yp,q models, the identification of the gravity modes dual to them can be found in [9], but no finite-deformation gravity solutions are known.
The Lunin-Maldacena (LM) solution has a nice interpretation in generalised complex geometry [10,11], a formalism that allows one to geometrise the NS-NS sector of supergravity [12,13]. One considers a generalisation of the tangent bundle of the internal manifold, given by the sum of the tangent and cotangent bundles. The structure group of this generalised tangent bundle is the continuous T-duality group O(d, d). The transformation that generates the LM solution is then identified as a bi-vector deformation inside O(d, d) [10]. However, this is not the case for the other marginal deformation of N = 4. In order to capture all exactly marginal deformations, one is tempted to look at the full U-duality group. This requires considering exceptional, or Ed(d) × R+, generalised geometry [14,15], where the U-duality groups appear as the structure groups of even larger extended tangent bundles.
The main motivation for this paper is to lay the foundations for applying exceptional generalised geometry to the study of exactly marginal deformations of a generic SCFT with a supergravity dual. As the first step of this programme we perform a linearised analysis of the exactly marginal deformations. To do this, we use the description of N = 2 AdS 5 backgrounds in terms of "exceptional Sasaki-Einstein" structures, given in [16]. This is a generalisation of the conventional G-structure formalism where generalised structures are defined by generalised tensors that are invariant under some subgroup of E d(d) × R + . The relevant structures for AdS 5 compactifications are a hypermultiplet (or H) structure J α and a vector-multiplet (or V) structure K. These structures are naturally associated with the hypermultiplet and vector-multiplet degrees of freedom of the five-dimensional gauged supergravity on AdS 5 , hence their names [17]. Together they are invariant under a USp(6) subgroup of E 6(6) × R + and also admit a natural action of the USp(2) local symmetry of N = 2 supergravity in five dimensions. 1 Although our specific examples will focus on type IIB geometries, the same analysis applies equally to generic N = 2 AdS 5 solutions of type IIB or eleven-dimensional supergravity.
This generalised geometric description of the internal geometry translates naturally to quantities in the dual field theory, which is particularly useful when analysing marginal deformations. Indeed, since hypermultiplets and vector multiplets of the gauged supergravity correspond to chiral and vector multiplets of the dual SCFT [18], the deformations of the H and V structures map directly to superpotential and Kähler deformations of the dual SCFT. Using the properties of the N = 1 superconformal algebra, Green et al. [3] showed that marginal deformations can only be chiral operators of (superfield) dimension three and that the set of exactly marginal deformations is obtained by quotienting the space of marginal couplings by the complexified global symmetry group. The main result of this paper will be to reproduce these features from deformations of generic solutions on the supergravity side: the supersymmetric deformations must preserve the V structure but can deform the H structure. In addition, the exactly marginal deformations are a symplectic quotient of the marginal deformations by the isometry group of the internal manifold. This corresponds to the global symmetry group of the dual field theory.
The paper is organized as follows: we begin in section 2 with a discussion of marginal deformations of N = 1 SCFTs focussing on a number of classic examples that are dual to AdS 5 × M type IIB backgrounds, where M is a Sasaki-Einstein manifold. In section 3, we review the reformulation of AdS 5 backgrounds in terms of exceptional generalised geometry [16,19]. We then describe how the moduli space of generalised structures appears and outline how this reproduces the findings of [2][3][4]. For concreteness, in section 4 we specialise to type IIB Sasaki-Einstein backgrounds. We find the explicit linearised supersymmetric deformations corresponding to the operators in the chiral ring, matching the Kaluza-Klein analysis of [20], and recover the result that the supersymmetric deformations give rise to three-form flux perturbations [6]. In section 5, we give the explicit examples of S 5 , T 1,1 and Y p,q , and show that our expression for the three-form flux on S 5 matches the supergravity calculation of Aharony et al. [7], and reproduces the flux of the LM solution for generic Sasaki-Einstein manifolds. We conclude with some directions for future work in section 6.
2 Marginal deformations of N = 1 SCFTs
Conformal field theories can be seen as fixed points of the renormalisation group flow where the beta functions for all couplings vanish. Generically, since there are as many beta functions as there are couplings, CFTs correspond to isolated points in the space of couplings. This is not the case for supersymmetric field theories, where non-renormalisation theorems force the beta functions for the gauge and superpotential couplings to be linear combinations of the anomalous dimensions of the fundamental fields [1]. If global symmetries are present before introducing the marginal deformations, the number of independent anomalous dimensions will be smaller than the number of couplings and not all beta functions will be independent. The theory then admits a manifold of conformal fixed points, M c . This is equivalent to saying that a given SCFT at a point p ∈ M c admits exactly marginal deformations, namely deformations that preserve conformal invariance at the quantum level. The dimension of the conformal manifold is given by the difference between the number of classically marginal couplings and the number of independent beta functions. The two-point functions of the exactly marginal deformations at each point p ∈ M c define a natural metric on M c known as the Zamolodchikov metric.
Recently, developing the argument in [2], the authors of [3] proposed an alternative method to analyse the N = 1 exactly marginal deformations of four-dimensional SCFTs, which does not use explicitly the beta functions for the superpotential couplings, but instead relies on the properties of the N = 1 superconformal algebra. Take a four-dimensional N = 1 SCFT at some point p in the conformal manifold and consider all possible marginal deformations. These are of two types: "Kähler deformations", which are perturbations of the form ∫d 4 θ V where V is a real primary superfield of mass dimension ∆ = 2, and "superpotential deformations", which have the form ∫d 2 θ O where O is a chiral primary superfield with ∆ = 3. 2 The results of [3] are that:
• there are no marginal Kähler deformations, since they correspond to conserved currents;
• there is generically a set of marginal superpotential deformations O i , with the generic deformation W = h i O i parametrised by a set of complex couplings {h i };
• if the undeformed theory has no global symmetries other than the U(1) R R-symmetry, all marginal deformations are exactly marginal;
• however, if the original SCFT has a global symmetry G that is broken by the generic deformation W = h i O i , then the conformal manifold near the original theory is given by the quotient (2.1) of the space of marginal couplings by the complexified broken global symmetry group, where M c is Kähler with the Zamolodchikov metric.
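Schematically, and in notation close to that of [3] (this display is our paraphrase of the quotient (2.1), not the original equation), the last statement reads
$$ \mathcal{M}_c \;\simeq\; \{\, h^i \,\}\,/\,G_{\mathbb{C}} \;\simeq\; \{\, h^i : \mu_G(h) = 0 \,\}\,/\,G , $$
where μ_G denotes the moment map for the action of the broken global symmetry group G on the space of marginal couplings.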
The reduction (2.1) can be viewed as a symplectic quotient for the real group G, where setting the moment maps to zero corresponds to solving the beta function equations for the deformations. Note also that the vector space of couplings h i (modulo G C ) parametrises the tangent space T p M c at the particular SCFT p ∈ M c , and so defines local coordinates on the conformal manifold near p. Thus, as written, (2.1) is only a local definition. More generally, one can also consider operators O = A + θψ + θ 2 F A that are chiral primary superfields of any dimension, modulo the relations imposed by the F-terms of the SCFT. The lowest components A form the chiral ring under multiplication, A'' = A A', subject to the F-term relations, whereas the θ 2 -components satisfy F A'' = A F A' + A' F A and hence transform as a derivation on the ring (specifically like a differential "dA"). In what follows it will be useful to define the infinite-dimensional complex space of couplings {γ i , γ̃ i } corresponding to deforming the Lagrangian by a term ∆L = γ i F A i + γ̃ i A i for generic chiral ring elements A i and θ 2 -components F A i . The γ i terms are supersymmetric, while the γ̃ i terms break supersymmetry, and generically neither is marginal. One of our results is that the supergravity analysis implies that there is a natural hyper-Kähler structure on this space, since the pair (γ i , γ̃ i ) arises from the scalar components of a hypermultiplet in the bulk AdS space. More precisely, if there is a global symmetry G, one naturally considers the space M defined by the hyper-Kähler quotient (2.2) of this coupling space by G. 3 The conformal manifold is then a finite-dimensional complex submanifold of M with the A i couplings γ̃ i set to zero and only the exactly marginal γ i coefficients (denoted h i above) non-zero. We now give three examples of SCFTs whose conformal manifolds have been analysed and whose gravity duals will be discussed in the rest of the paper.
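Before turning to the examples, we record the schematic form of this deformation and quotient (the labels γ i and γ̃ i are our choice of notation for the two sets of couplings):
$$ \Delta\mathcal{L} \;=\; \gamma^i F_{A_i} + \tilde\gamma^i A_i , \qquad \mathcal{M} \;\simeq\; \{\, \gamma^i, \tilde\gamma^i \,\} \,/\!\!/\!\!/\, G , $$
where the triple slash denotes the hyper-Kähler quotient by the global symmetry group G, as in (2.2).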
N = 4 super Yang-Mills
The most studied example of a SCFT in four dimensions is N = 4 super Yang-Mills. The fields of the theory are, besides gauge fields, six scalars and four fermions, all in the adjoint representation of the gauge group SU(N ) and transforming non-trivially under the SU(4) R-symmetry. In N = 1 notation, these fields arrange into a vector multiplet and three chiral superfields Φ i . The theory has a superpotential which is antisymmetric in the fields, and the coupling is fixed by N = 4 supersymmetry to be equal to the gauge coupling, h = τ . In this notation, only the SU(3) × U(1) subgroup of the R-symmetry is manifest.
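For orientation, the superpotential referred to here takes the standard form (a sketch in N = 1 notation; overall normalisation conventions vary):
$$ W \;=\; h\, \mathrm{tr}\big( \Phi_1 [\Phi_2, \Phi_3] \big) , \qquad h = \tau . $$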
The marginal deformations compatible with N = 1 supersymmetry are given by the chiral operators in (2.5), built from a complex tensor f ijk that is symmetric in its SU(3) indices, together with a coupling h that is a priori different from the gauge coupling τ . In all there are eleven complex marginal deformations. The superpotential (2.5) breaks the global SU(3) symmetry, leaving the U(1) R symmetry of N = 1 theories. Therefore, the conformal manifold (2.6) is obtained by quotienting these couplings by the complexified SU(3), and has complex dimension dim(M c ) = 11 − 8 = 3 = 1 + 2. The first deformation is an SU(4) singlet corresponding to changing both τ and h; the other two are true superpotential deformations.
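As a sketch of the deformation in (2.5) (our display; index placements follow standard conventions), the eleven couplings are the ten components of f ijk plus the singlet combination of (h, τ ):
$$ \delta W \;=\; f^{ijk}\, \mathrm{tr}\big( \Phi_i \Phi_j \Phi_k \big) , \qquad \dim_{\mathbb{C}} \mathcal{M}_c \;=\; \underbrace{10 + 1}_{\text{couplings}} \;-\; \underbrace{8}_{\dim \mathrm{SU}(3)_{\mathbb{C}}} \;=\; 3 . $$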
The same conclusions can be reached by studying the beta functions of the deformed theory [1,7]. One can show that the beta function equations for the gauge coupling and the superpotential deformations are proportional to the matrix of anomalous dimensions. At one loop, this matrix (or more precisely its traceless part) gives the condition (2.7), corresponding to the SU(3) moment maps when we view (2.6) as a symplectic quotient. This equation imposes eight real conditions on f ijk . One can remove another eight real degrees of freedom using an SU(3) rotation of the fields Φ i . Together, these reduce the superpotential deformation to the form (2.8) found in [1]. The coupling f β is the so-called β-deformation, 4 and f λ is often called the cubic deformation. As mentioned above, the first term in this expression is to be interpreted as changing h and τ together. One can go beyond the one-loop analysis. The deformed theory has a discrete Z 3 × Z 3 symmetry, which forces the matrix of anomalous dimensions of the Φ i to be proportional to the identity. One can then show that the beta function condition (at all loops) reduces to just one equation, thus again giving a three-dimensional manifold of exactly marginal deformations. Since this will be relevant for the gravity dual, we stress that the only obstruction to having exactly marginal deformations is the one-loop constraint (2.7).
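For concreteness, the one-loop constraint and the reduced deformation referred to above can be sketched as follows (our display, with coefficients and normalisations left schematic):
$$ f^{ikl}\bar f_{jkl} - \tfrac{1}{3}\,\delta^i_j\, f^{klm}\bar f_{klm} \;=\; 0 , \qquad \delta W \;\sim\; \delta h\, \mathrm{tr}\big(\Phi_1[\Phi_2,\Phi_3]\big) + f_\beta\, \mathrm{tr}\big(\Phi_1\Phi_2\Phi_3 + \Phi_1\Phi_3\Phi_2\big) + f_\lambda\, \mathrm{tr}\big(\Phi_1^3+\Phi_2^3+\Phi_3^3\big) . $$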
Klebanov-Witten theory
The Klebanov-Witten theory is the four-dimensional SCFT that corresponds to the world-volume theory of N D3-branes at the conifold singularity [21]. This is an N = 1 SU(N ) × SU(N ) gauge theory with two sets of bi-fundamental chiral fields A i and B i (i = 1, 2), transforming in the (N, N̄) and (N̄, N) respectively. The superpotential is W = λ ε ij ε kl tr(A i B k A j B l )
and preserves an SU(2) × SU(2) × U(1) R global symmetry, under which the chiral fields transform as (2, 1, 1/2) and (1, 2, 1/2) respectively. The R-charges of the fields A i and B i are such that the superpotential has the standard charge +2. The superpotential is not renormalisable, suggesting that the theory corresponds to an IR fixed point of an RG flow. Indeed, one can show that this theory appears as the IR fixed point of the RG flow generated by giving mass to the adjoint chiral multiplet in the Z 2 orbifold of N = 4 super Yang-Mills [21]. Classically, the marginal deformations of the KW theory are given by the chiral operators in (2.10), where the tensor f αβ,α̇β̇ is symmetric in the indices αβ and α̇β̇, and therefore transforms in the (3, 3) of the SU(2) × SU(2) global symmetry group. The deformation τ does not break the global symmetry of the theory and corresponds to a shift in the difference of the gauge couplings (1/g 1 ² − 1/g 2 ²). The exactly marginal deformations of the KW theory were found in [22]. Only three components of the f αβ,α̇β̇ term are exactly marginal, so we have five exactly marginal deformations in total. This is in agreement with the dimension of the conformal manifold. One reaches the same conclusions by studying the beta functions of the deformed theory [21]. These are equivalent to the SU(2) × SU(2) moment maps, which take the form (2.12); these remove six real degrees of freedom. We can also redefine the couplings using the SU(2) × SU(2) symmetry to remove another six real degrees of freedom, leaving three complex parameters. The exactly marginal deformations are then given in (2.13). The deformation parametrised by f β is the β-deformation for the KW theory, since it is the deformation that preserves the Cartan subgroup of the global symmetry group (U(1) × U(1) in this case).
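As a quick check of this counting (a worked sketch, not taken from the original): the classically marginal couplings are h, τ and the nine components of f αβ,α̇β̇, while the complexified broken global symmetry SU(2) × SU(2) has complex dimension six, so
$$ \dim_{\mathbb{C}} \mathcal{M}_c \;=\; (1 + 1 + 9) - 6 \;=\; 5 . $$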
Y p,q gauge theories
The KW theory is the simplest example of an N = 1 quiver gauge theory in four dimensions.
A particularly interesting class of these theories arises as world-volume theories of D3-branes
probing a Calabi-Yau three-fold with a toric singularity, where the singular Calabi-Yau spaces are cones over the infinite family of Sasaki-Einstein Y p,q manifolds [23,24]. 5 These theories have rather unusual properties, such as the possibility of irrational R-charges. The field theories dual to the infinite family of geometries were constructed in [25], which we review quickly. The properties of the dual field theories can be read off from the associated quiver. The field theories have 2p gauge groups with 4p + 2q bi-fundamental fields. Besides the U(1) R , they have an SU(2) × U(1) F global symmetry. The 4p + 2q fields split into doublets and singlets under SU(2): p doublets labelled U , q doublets labelled V , p − q singlets labelled Z and p + q singlets labelled Y . The general superpotential is built from SU(2)-invariant combinations of these fields, where the α and β indices label the global SU(2). The R-charges of the four types of fields are determined by a-maximisation and are generically irrational, while their charges under the additional U(1) F symmetry are respectively 0, 1, −1 and 1.
The marginal deformations of these theories are given in (2.16) [22], where f αβ is symmetric and O gauge is an operator involving differences of gauge couplings. Note that this deformation preserves U(1) F , but the f αβ terms break the SU(2) to a U(1). The SU(2) moment maps giving the beta functions are the analogue of (2.12), where f αβ = f a (σ a ) αβ ; they have the solution f a = r a e iφ . Modding out by the SU(2) action leaves a single deformation that is exactly marginal, namely the analogue of the β-deformation for the Y p,q theories. As mentioned previously, the β-deformation breaks the global symmetry to its Cartan generators. Thus one can take f 3 non-zero. Note that the counting is in agreement with the dimension of the conformal manifold.
Naively the quotient gives the wrong counting. However f αβ does not completely break SU(2) but instead preserves a U(1), meaning that the quotient removes only two complex degrees of freedom.
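Explicitly (a worked sketch of the statement above): f αβ has three complex components, and the quotient is by the complexified SU(2) modulo the complexified U(1) it preserves, so
$$ 3 \;-\; \big( \dim_{\mathbb{C}} \mathrm{SU}(2)_{\mathbb{C}} - \dim_{\mathbb{C}} \mathrm{U}(1)_{\mathbb{C}} \big) \;=\; 3 - (3 - 1) \;=\; 1 , $$
leaving the single β-like exactly marginal deformation, in addition to the deformations that do not break the global symmetry.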
Deformations from exceptional generalised geometry
According to AdS/CFT, the supergravity dual of a given conformal field theory in four dimensions is a geometry of the form AdS 5 ×M , where the AdS 5 factor reflects the conformal invariance of the theory. The duals of exactly marginal deformations that preserve N = 1 supersymmetry are expected to be of the same form, but with a different geometry on the internal manifold. Generically, the solution will also have non-trivial fluxes and dilaton, if present. These solutions should be parametrically connected to the undeformed solution, so that the moduli space of exactly marginal deformations of the gauge theory is mapped to the moduli space of AdS 5 vacua.
Finding the full supergravity duals of exactly marginal deformations is not an easy task; few exact solutions are known, and those that are were found using solution-generating techniques based on dualities [8]. The idea of this paper is to exploit as much of the symmetry structure of the supergravity as possible to look for the generic exactly marginal deformations. This is most naturally done in the context of exceptional generalised geometry, where by considering an extended tangent bundle that includes vectors, one-forms and higher-rank forms, one finds an enhanced E d(d) × R + structure group and the bosonic fields are unified into a generalised metric.
In this section, we outline the general results applicable to arbitrary AdS 5 supergravity backgrounds, whether constructed from type II or eleven-dimensional supergravity. In particular, we find the supergravity dual of the field theory results of [3]. In the following section, we discuss the specific case of type IIB compactifications on Sasaki-Einstein manifolds giving considerably more detail.
Generalised structures and deformations
Consider a generic supersymmetric solution of the form AdS 5 × M , where M can be either five-or six-dimensional depending on whether we are compactifying type II or eleven-dimensional supergravity. We allow all fluxes that preserve the symmetry of AdS 5 .
We are looking for the duals of N = 1 SCFTs in four dimensions and so the dual supergravity backgrounds preserve eight supercharges, that is N = 2 in five dimensions. A background preserving eight supercharges is completely determined by specifying a pair of generalised structures [17]: a "vector-multiplet structure" K and a "hypermultiplet structure" J α , a triplet of objects labelled by α = 1, 2, 3. Each structure is constructed as a combination of tensors on M built from bilinears of Killing spinors [19], but for the moment the details are irrelevant. One should think of them as defining a generalisation of the Sasaki-Einstein structure in type IIB to a generic AdS 5 flux background in type II or M-theory.
Supersymmetry implies that the structures K and J α satisfy three differential conditions [16,19]. The two of particular relevance to us are the moment map conditions (3.1), defined in terms of a triplet of functions µ α (V ), and the condition (3.2) on the generalised Lie derivative of J α along K; the third condition, (3.4), involves only K. The constants λ α are related to the AdS 5 cosmological constant and can always be fixed to the values in (3.5). Again the details are not important here, but for completeness note that c(K, K, V ) is the E 6(6) cubic invariant (see (A.9)) while the symbol L̂ denotes the Dorfman or generalised Lie derivative (see (A.22)), which generates the group of generalised diffeomorphisms GDiff, namely the combination of diffeomorphisms and gauge transformations of all the flux potentials. In particular one can show that K is a "generalised Killing vector", that is, L̂ K generates a generalised diffeomorphism that leaves the solution invariant, and this symmetry corresponds to the R-symmetry of the SCFT. In analogy to the Sasaki-Einstein case, we sometimes refer to K as the "generalised Reeb vector". In addition, the functions µ α have an interpretation as a triplet of moment maps for the group of generalised diffeomorphisms acting on the space of J α structures. As such we will often refer to (3.1) as the moment map conditions. To find the marginal deformations of the N = 1 SCFT we need to consider perturbations of the structures K and J α that satisfy the supersymmetry conditions, expanded to first order in the perturbation. These are of two types, 6 which correspond to the two types of deformation in the SCFT:
δK ≠ 0 , δJ α = 0 : Kähler deformations,
δK = 0 , δJ α ≠ 0 : superpotential deformations.
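For later reference, the three conditions can be sketched as follows (our schematic rendering of (3.1)-(3.4), with normalisations suppressed; see [16, 19] for the precise statements):
$$ \mu_\alpha(V) \;=\; \lambda_\alpha \int_M c(K, K, V) \quad \forall\, V , \qquad \hat L_K J_\alpha \;=\; \epsilon_{\alpha\beta\gamma}\, \lambda_\beta J_\gamma , \qquad \hat L_K K \;=\; 0 , $$
where, schematically, $\mu_\alpha(V) \propto \epsilon_{\alpha\beta\gamma} \int_M \mathrm{tr}\big( J_\beta\, \hat L_V J_\gamma \big)$.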
The easiest way to justify this identification is to note that, from the point of view of five-dimensional supergravity, fluctuations of K live in vector multiplets and those of J α live in hypermultiplets. According to the AdS/CFT dictionary, vector multiplets and hypermultiplets correspond to real primary superfields and chiral primary superfields in the SCFT [18].
Let us first consider the Kähler deformations, where we hold J α fixed and deform K. Looking at the moment maps (3.1), we see the left-hand side depends only on J α and so does not change, but the right-hand side can vary; thus the variation ∫ M c(δK, K, V ) must vanish for all generalised vectors V . The K tensor is invariant under an F 4(4) ⊂ E 6(6) subgroup. Decomposing into F 4(4) representations, we find 27 = 1 + 26, and there is a singlet in the tensor product 26 × 26 = 1 + . . ..
Writing the variation as δK = a K + K 26 and the generalised vector as V = b K + V 26 , the terms that form a singlet in the cubic invariant are proportional to ab c(K, K, K) and c(K, K 26 , V 26 ). This must vanish for all b and V 26 . The first term is generically non-vanishing, so we must take a = 0, implying there is no singlet component in δK: we cannot simply rescale K. For the second term, the singlet in 26 × 26 appears in the symmetric product and so is generically non-zero in c(K, K 26 , V 26 ). 7 Given that it must vanish for any V 26 , K 26 must itself vanish. Together these mean δK = 0, so there are no deformations of K that satisfy the moment maps. This matches the field theory analysis that there are no deformations of Kähler type.
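In formulae (a sketch; factors of two from varying the symmetric invariant are not tracked):
$$ \delta \int_M c(K, K, V) \;\propto\; \int_M c(\delta K, K, V) \;\supset\; a\,b \int_M c(K, K, K) \;+\; \int_M c(K_{26}, K, V_{26}) \;=\; 0 \quad \forall\, b,\; V_{26} , $$
and demanding this for arbitrary b and V 26 forces a = 0 and K 26 = 0, that is δK = 0.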
For the superpotential deformations we can solve (3.1) and (3.2) to first order in δJ α . We can do this in two steps. First we solve the linearised moment map conditions (3.1). This gives an infinite number of solutions which correspond to θ 2 -components and fields in the chiral ring of the dual gauge theory; generically these are not marginal. Imposing the first-order generalised Lie derivative condition (3.2) will select a finite number of these modes that are massless in AdS 5 and correspond to the actual marginal deformations.
Exactly marginal deformations and fixed points
We now turn to how the supergravity structure encodes the SCFT result that all marginal deformations are exactly marginal unless there is an additional global symmetry group G.
The key point, as we will see, is that the differential conditions (3.1) appear as moment maps for the generalised diffeomorphisms.
A priori, to see if the marginal deformations are exactly marginal one needs to satisfy the equations (3.1) and (3.2) not just to first order, but to all orders in the deformation. In general this is a complicated problem: typically there can be obstructions at higher order that mean not all marginal deformations are actually exactly marginal. For example, a detailed discussion of deformations of N = 4 up to third order is given in [7].
However, viewing the conditions (3.1) as a triplet of moment maps provides an elegant supergravity dual of the field theory result that does not require detailed case-by-case calculations. 7 The F4 Dynkin diagram has no symmetries, so the fundamental representation is equivalent to its dual.
This means the singlet in 26 × 26 appears either in the symmetric or in the antisymmetric product. For F4 the singlet appears in the symmetric product [26]. The real form F 4(4) has the same complexification as F4, so the singlet again appears in the symmetric product.
The generic situation is discussed in some detail in [16], which we now review. Moment maps arise when there is a group action preserving a symplectic or hyper-Kähler structure. Here the µ α correspond to the action of generalised diffeomorphisms, that is conventional diffeomorphisms and/or form-field gauge transformations, acting on the structure J α . Thus to get physically distinct solutions we need to satisfy the moment map conditions (3.1) and then identify solutions that are related by a generalised diffeomorphism. Formally this defines the subspace of hypermultiplet structures (3.9), where γ is the function (3.10) and GDiff K is the subgroup of generalised diffeomorphisms that leave K invariant. (We are considering the moduli space of solutions for J α for fixed K.) By construction (3.9) defines a hyper-Kähler quotient and hence M is hyper-Kähler. The condition (3.2) then defines a Kähler subspace M c ⊂ M (see [16]). We can also consider first imposing (3.2) and then the moment maps (3.1).
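Schematically, the quotient just described takes the form (our rendering of (3.9)-(3.10); see [16] for the precise definitions):
$$ \mathcal{M} \;=\; \big\{\, J_\alpha \;:\; \mu_\alpha(V) = \lambda_\alpha\, \gamma(V) \ \ \forall\, V \,\big\} \,\big/\, \mathrm{GDiff}_K , \qquad \gamma(V) \;=\; \int_M c(K, K, V) , $$
which, being a hyper-Kähler quotient, is itself hyper-Kähler; condition (3.2) then selects the Kähler subspace M c ⊂ M.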
The moment map conditions then take a symplectic quotient of N H rather than a hyper-Kähler quotient. We then have the picture summarised in (3.11) [16]. A nice property of moment map constructions is that generically there are no obstructions to the linearised problem: every first-order deformation around a given point p ∈ M in the hyper-Kähler quotient (or alternatively p ∈ M c for the symplectic quotient) can be extended to an all-order solution. The only way this fails is if the symmetry group at p defining the moment map has fixed points. In our context this means there are generalised diffeomorphisms that leave the particular J α and K structures invariant, that is one can find V such that the generalised Lie derivatives vanish, L̂ V J α = L̂ V K = 0. These V generate isometries of the background (beyond the U(1) R R-symmetry), corresponding to the global symmetry group G of the dual field theory. In other words, the vector component of V is a Killing vector. 8 Thus we directly derive the result that every marginal deformation is exactly marginal in the absence of global symmetries.
Suppose now that the global symmetry group G is non-trivial. By construction, those V that generate G drop out of the linearised moment map conditions: they trivially solve the moment maps since L̂ V J α = 0. Thus to solve the full non-linear problem, one must somehow impose these additional conditions. It is a standard result in symplectic (or hyper-Kähler) quotients that the missing equations correspond to a quotient by the global group G on the space of linearised solutions. Suppose {γ i , γ̃ i } are coordinates on the space of linearised deformations, corresponding to couplings of operators F A i and A i . Imposing (3.2) then restricts to the marginal operators {h i } ⊂ {γ i , γ̃ i }. By construction, there is a flat hyper-Kähler metric on {γ i , γ̃ i } and a flat Kähler metric on {h i }. In addition there is a linear action of G on each space that preserves these structures. The origin is a fixed point of G corresponding to the fact that we are expanding about a solution with a global symmetry. The moduli space of finite deformations then corresponds to a quotient of each space by G (at least in the neighbourhood of the original solution). Thus we have the quotients in (3.13). This structure is discussed in a little more detail in section 4.4. We see that we directly recover the field theory result (2.1), that the conformal manifold is given by the quotient of the marginal couplings by the complexified global symmetry group. Note that interpreting the supersymmetry conditions in terms of moment maps nicely mirrors the field theory analysis of the moduli space of marginal deformations. Indeed imposing (3.2) and solving the linearised moment maps (3.1) is equivalent to restricting to chiral operators of dimension three that satisfy the F-term conditions. The further symplectic quotient by the isometry group G then corresponds to imposing the D-term constraints and modding out by gauge transformations.
The case of D3-branes at conical singularities
The results summarized in the previous section are completely general and apply to any AdS 5 flux background. To make the discussion more concrete we will focus on deformations of N = 1 SCFTs that are realised on the world-volume of D3-branes at the tip of a Calabi-Yau cone over a Sasaki-Einstein (SE) manifold M .
Before turning to the generalized geometric description of the supergravity duals, we present their description in terms of "conventional" geometry.
The undeformed Sasaki-Einstein solution
In the ten-dimensional type IIB solution dual to the undeformed SCFT, the metric takes a warped product form, 9 where the radial direction of the cone together with the four-dimensional warped space form AdS 5 . In writing it we have used the explicit form of the warp factor for AdS 5 , e ∆ = r. The solution has constant dilaton, e φ = 1, and the five-form flux given in (4.2), where vol 5 is the volume form on M . The metric on the SE manifold locally takes the standard fibred form, where σ is called the contact form and the four-dimensional base metric is Kähler-Einstein (KE), with symplectic two-form ω. There is also a holomorphic (2,0)-form Ω compatible with ω, and the five-dimensional volume form is built from σ and ω. The forms σ, Ω and ω define an SU(2) structure on the Sasaki-Einstein manifold. It will be useful in what follows to introduce a (2, 0) bi-vector α that is obtained from Ω̄ by raising its indices with the metric; the complex structure I for the Kähler-Einstein metric can then be written in terms of ω and the metric. The R-symmetry of the field theory is realised in the dual geometry by the Reeb vector field ξ, satisfying σ(ξ) = 1 and i ξ dσ = 0. Locally we can introduce a coordinate ψ such that ξ = 3∂ ψ .
If a tensor X satisfies L ξ X = iqX, we say it has charge q under the action of the Reeb vector. The objects defining the SU(2) structure on M have definite charge. The R-charge r is related to q by q = 3r/2. For example, Ω has charge +3 under the Reeb vector and so has R-charge +2. The contact and Kähler structures allow a decomposition of the exterior derivative in terms of the tangential Cauchy-Riemann operator ∂̄ [27]. For calculations, it is useful to introduce a frame in which the complex, symplectic and contact structures take the canonical form (4.14); the bi-vector α is then given in terms of the dual frame. If the SE manifold is "regular", the Reeb vector defines a U(1) fibration over a Kähler-Einstein base. This is the case for S 5 and T 1,1 , dual to N = 4 SYM and the N = 1 KW theory, where the base manifolds are respectively CP 2 and CP 1 × CP 1 . The Y p,q spaces generically do not define such a fibration.
Embedding in exceptional generalised geometry
In this section we review the description of supersymmetric AdS 5 ×M solutions in E 6(6) ×R + generalised geometry following [16,17]. Although we will focus on type IIB for definiteness, we stress that the construction is equally applicable to solutions of eleven-dimensional supergravity. In particular, details of the embedding of the generic M-theory AdS 5 solution of [28] into E 6(6) × R + generalised geometry are given in [16].
The idea of exceptional generalised geometry is to build a generalised tangent bundle E over M , which encodes the bosonic symmetries of supergravity. The structure group of E is E 6(6) × R + , mirroring the U-duality group of five-dimensional toroidal compactifications. The generalised tangent bundle E can be written as a direct sum of tensor bundles on M (see the sketch below). The different components correspond physically to the charges of type IIB supergravity: momentum, winding, D1-, D3-, D5- and NS5-brane charge. One can combine the T * M and ∧ 5 T * M factors into SL(2; R) doublets. This way of writing the generalised tangent bundle corresponds to the decomposition of the 27 representation of E 6(6) according to a GL(5; R) × SL(2; R) subgroup, where GL(5; R) acts on the five-dimensional space M , and SL(2; R) corresponds to S-duality. A generalised vector V is a section of E. We will use the notation of (4.17) for the components of a generalised vector, where m, n = 1, . . . , 5 and i = 1, 2.
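For type IIB on a five-manifold M the decomposition referred to above takes the schematic form (our display, with the R + weights suppressed):
$$ E \;\simeq\; TM \,\oplus\, \big(T^*M \oplus T^*M\big) \,\oplus\, \Lambda^3 T^*M \,\oplus\, \big(\Lambda^5 T^*M \oplus \Lambda^5 T^*M\big) , $$
with dimensions 5 + 10 + 10 + 2 = 27, matching the 27 of E 6(6); the two T * M factors and the two ∧ 5 T * M factors are the SL(2; R) doublets mentioned above.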
We will also need the adjoint representation of E 6(6) × R + , which decomposes as in (4.18): r m n is the adjoint of GL(5; R), a i j is the adjoint of SL(2; R), B i mn is an SL(2; R) doublet of two-forms, and C mnpq is a four-form. In addition, there is also a doublet of bi-vectors β imn and a four-vector γ mnpq . The B i mn and C mnpq fields above can be thought of as accommodating the NS-NS and R-R two-form potentials, and the R-R four-form potential of the supergravity theory.
The generalised structures K and J α transform under E 6(6) × R + as an element of the 27 and a triplet of elements in the 78 [17]. 10 The J α form an SU(2) triplet under the E 6(6) adjoint action, corresponding to the R-symmetry of the N = 2 supergravity: they satisfy an su(2) algebra, [J α , J β ] = 2κ ε αβγ J γ , where κ 2 is the volume form on M . The normalisations of K and J α are fixed in terms of κ using the cubic invariant c of E 6(6) and the trace tr in the adjoint representation (see (A.9) and (A.10)). The two structures are compatible, which means they satisfy J α · K = 0, where · is the adjoint action on a generalised vector: 78 × 27 → 27 (see (A.7)). The generalised structures K and J α are combinations of the geometric structures on M built from bilinears of the N = 2 Killing spinors [19]. For Sasaki-Einstein manifolds, these are the Reeb vector ξ, the symplectic form ω and the holomorphic two-form Ω. In this case, K and J α take the simple form (4.22) in terms of the GL(5; R) × SL(2; R) decompositions given in (4.17) and (4.18) [17],
where J + = J 1 + iJ 2 , σ 2 is the second Pauli matrix and the SL(2; R) vector u i is given in (4.23). Here we are using a compact notation where we suppress all tensor indices. Thus K has only vector and three-form components, given by K m = ξ m and K mnp = (σ ∧ ω) mnp , whereas J + has only two-form and bi-vector components, proportional to Ω mn and α mn , while the remaining components of the J α include a four-form (κ/8)(Ω ∧ Ω̄) mnpq and a four-vector (κ/8)(ᾱ ∧ α) mnpq . Note that K depends only on the Reeb vector and the contact structure, whereas J α depends only on the complex structure of the Kähler-Einstein metric.
Supersymmetry conditions
For a supersymmetric compactification to AdS 5 , the structures K and J α must satisfy the differential conditions (3.1)-(3.4). Let us explain the form of these conditions in a little more detail.
The key ingredient is the generalised Lie derivative L. This encodes the differential geometry of the background, unifying the diffeomorphisms and gauge symmetries of the supergravity. Given two generalised vectors V and V ′ as in (4.17), the generalised Lie derivative is given by (4.24), where L is the ordinary Lie derivative. This can be extended to an action on any generalised tensor. For example, the action on the adjoint representation is given in (A.16). One always has the choice to include the supergravity fluxes in the structures K and J α or as a modification of the generalised Lie derivative. Here the latter option turns out to be more convenient. This defines a "twisted generalised Lie derivative" L̂, which takes the same form as (4.24) but with the exterior derivatives of the form components shifted by the background fluxes. Although we will not discuss the details, there is actually a natural hyper-Kähler geometry on the space of J α structures [17]. There is also an action of generalised diffeomorphisms taking one J α into another. This action preserves the hyper-Kähler structure. The conditions (3.1) can then be viewed as moment maps for the action of the generalised diffeomorphisms. By construction the space M of solutions to this condition in (3.9) is also hyper-Kähler. The generalised Lie derivative condition (3.2) takes a Kähler slice of this space. Note that for the SE structure (4.22) and five-form flux given in (4.2), we have L̂ K = L ξ , and thus L̂ K generates the U(1) R symmetry. We have shown this for the particular case of SE, but this is actually a general result. Thus the slice taken by condition (3.2) essentially fixes the charge of J + under L̂ K to be +3, and that of J 3 to be zero.
The supersymmetry conditions can also be viewed as the internal counterpart of the supersymmetry conditions in five-dimensional gauged supergravity [29]: (3.1) comes from the gravitino and gaugino variations (as does (3.4)), while (3.2) is related to the hyperino variation (recall K is associated to the vector multiplets, while J α is associated to the hypermultiplets).
As discussed in [16], it is easy to show that the structures in (4.22) defined for Sasaki-Einstein manifolds satisfy the supersymmetry conditions (3.1)-(3.4). The first two reduce to (4.4), (4.6) and (4.11), thus fixing the constants λ α as in (3.5), while condition (3.4) gives no extra equations. Note that since the deformations we are after leave the structure K invariant, the latter condition will play no role in the following.
Linearised deformations
The structures K and J α lie in orbits of the E 6(6) action. The linearised deformations are therefore elements in the adjoint of E 6(6) , which take us from a given point in these orbits corresponding to the original solution (in the case of Sasaki-Einstein, this is (4.22)), to another point in the orbit corresponding to the structures of the deformed geometry. We have seen that from the gauge theory we expect the marginal deformations A to leave the structure K invariant, while deforming J α . This implies A · K = 0. As we will discuss in more detail in appendix B, the deformations A are doublets under the SU(2) generated by the J α . 11 The signs ± denote the charge under J 3 , [J 3 , A ± ] = ±iA ± , and r is the charge under the action of L̂ K , corresponding to their R-charge. The difference in the R-charge of the two components follows from (3.2), (4.29) and the definition of the deformed structures. We now need to find pairs of solutions for A ± satisfying the linearised supersymmetry conditions and, for definiteness, R-charge r ≥ 0. In the next subsection, we start by first finding solutions to the linearised moment maps. We then have to mod out by the symmetry, identifying deformations that are related by diffeomorphisms or form-field gauge transformations as corresponding to the same physical deformation. This process corresponds to finding the bulk modes dual to the bosonic components of all chiral superfields: namely the chiral ring operators A i (associated to A − ) and the related supersymmetric deformations of the Lagrangian F A i (associated to A + ). Then in the following subsection, we turn to finding the subset of marginal deformations. The technical details are discussed in appendix B. Here we outline the procedure and present the results.
The chiral ring
The linearised moment map equations are given in (4.30), 12 where we are using the fact that the deformation leaves K invariant. We start by looking for A + that solve (4.30). The A + deformations can be distinguished by which components of the E 6(6) × R + adjoint (4.18) are non-zero. They fall into two classes, which we denote Ǎ + and Â + , where the first contains only two-forms and the corresponding bi-vectors, and the second contains only sl 2 entries. As shown in appendix B.3.1, the two-form part of the Ǎ + solutions to (4.30) consists of two independent terms, given in (4.31), where Ω and σ are the holomorphic two-form and the contact form on the SE manifold, and the SL(2; R) vector u i is defined in (4.23). The bi-vector part of the solution is obtained by raising indices with the SE metric. The term in the brackets is completely determined by a function f on the SE manifold satisfying ∂̄f = 0. Imposing that the deformation Ǎ + has fixed R-charge r − 2, and using (4.11), gives L ξ f = (3ir/2) f, so that f is a homogeneous function on the Calabi-Yau cone of degree 3r/2. Let us now consider Â + . Its only non-zero components are a i j ∈ sl 2 , which are again determined by a function f̃ on the manifold, where ū i = ε ij ū j and the function f̃ is holomorphic (4.37). 12 As we discuss in appendix B.3, the actual deformation is by A = Re A + , so that the deformed structures are real. This does not affect the discussion that follows.
The deformations of fixed R-charge r − 2 satisfy L ξ f̃ = (3i/2)(r − 2) f̃, so that f̃ is a homogeneous function on the Calabi-Yau cone of degree 3(r − 2)/2. For each solution A + , one can generate an independent solution A − by acting with J + . Indeed, any deformation of the form A − = [J + , A + ] is automatically a solution of the moment maps, provided A + is. The explicit form of these deformations for Ǎ − and Â − is given in (B.15) and (B.17). Thus the solutions of the linearised moment maps consist of an infinite set of deformations A + labelled by their R-charge r, which are generated by the two holomorphic functions f and f̃ and a (1,1)-form δ, and another independent set of deformations A − generated by f ′, f̃ ′ and δ ′. Together these give the general solution to the deformation problem. Arranging these deformations as in (4.28), we find three types of multiplets, schematically (f, f ′), (f̃, f̃ ′) and (δ, δ ′), with charge r given respectively by r > 0, r ≥ 2 and r = 2.
Let us now identify what these solutions correspond to physically. For this it is convenient to compute the action of the linearised deformations on the bosonic fields of type II supergravity and then interpret the multiplets (4.39) in terms of Kaluza-Klein modes on the Sasaki-Einstein manifold. One way to read off the bosonic background is from the generalised metric G. This is defined in (B.43) and encodes the metric, dilaton, the NS-NS field B 2 and the R-R fields C 0 , C 2 and C 4 . As discussed in appendix B.4, the two-form and bi-vector deformations f and their partners f ′ at leading order generate NS-NS and R-R two-form potentials, and a combination of internal four-form potential and metric. 13
Similarly one can show that the holomorphic function f̃ and its partner f̃ ′ correspond to the axion-dilaton, and NS-NS and R-R two-form potentials. Finally the two-form and bi-vector deformations δ and its partner δ ′ generate NS-NS and R-R two-form potentials and a component of the internal metric (4.42). 13 The full form of the four-form potential and metric is given by (B.15) with ν = i/(2q) ∂f Ω and ω = 1/(4q(q−1)) ∂(∂f Ω).
The KK spectrum for a generic Sasaki-Einstein background was analysed in [20] by solving for eigenmodes of the Laplacian on the manifold. The states arrange into long and short multiplets of N = 2 supergravity in five dimensions. Our multiplets (4.40), (4.41) and (4.42) are indeed the short multiplets of [20].
In terms of the bulk five-dimensional supergravity, each (A + , A − ) pair corresponds to a hypermultiplet, whose A − piece corresponds to the lowest component of the dual chiral superfield [18]. We then have the following mapping between supergravity and field theory multiplets (4.43). For S 5 the first two sets of multiplets correspond to the operators tr(Φ k ) and tr(W α W α Φ k ), where Φ denotes any of the three adjoint chiral superfields of N = 4 SYM, and the last multiplet is not present. For T 1,1 , one has operators of the schematic form tr(AB) k , where A and B denote the two doublets of bi-fundamental chiral superfields. In analogy with the T 1,1 case, for a generic SE the operators O f and O f̃ are products of chiral bi-fundamental superfields of the theory, while O gauge corresponds to changing the relative couplings of the gauge groups.
The tower of deformations gives the space M defined in (2.2). In particular, the A − = (f ′, f̃ ′, δ ′) ∼ A i deformations parametrise the chiral ring, while the A + = (f, f̃, δ) ∼ F A i deformations parametrise the superpotential deformations.
Marginal deformations
The marginal deformations are a subspace of solutions in M that also satisfy the second differential condition (3.2). At first order in the deformation, this becomes a condition on the commutators of the deformation with the J α , where we have used again the fact that the deformations leave K invariant. Since the commutators with J α are non-zero, this condition amounts to the requirement (4.46) that the deformations have R-charge r = 2: f is a degree-three holomorphic function, f̃ is a constant and δ is a closed primitive (1,1)-form, corresponding precisely to superpotential deformations with ∆ = 3, a change in the original superpotential (and at the same time of the sum of coupling constants), and a change in the relative gauge couplings respectively.
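As a quick consistency check (using the charge relation q = 3r/2 from section 4.1 and the standard relation ∆ = 3r/2 between the dimension and R-charge of a chiral primary):
$$ \mathcal{L}_\xi f = i q f , \qquad q = \tfrac{3r}{2} = \Delta \;\;\Longrightarrow\;\; \big(\Delta = 3\big) \;\Longleftrightarrow\; \big(r = 2\big) \;\Longleftrightarrow\; \big(f \ \text{a degree-three holomorphic function}\big) . $$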
Linearised supergravity solution
We now want to compute the supergravity solutions at linear order. As discussed in detail in appendix B.4, this can be done by looking at the action of the marginal deformations Ǎ + and Â + on the generalised metric, which encodes the bosonic fields of type IIB supergravity. We first consider the effect of a Ǎ + deformation to linear order. As already mentioned, such a deformation generates NS-NS and R-R two-form potentials. Taking an exterior derivative, the complexified flux G 3 = d(C 2 − iB 2 ) is given to leading order by the expression (4.48). The (1,1)-form δ ∈ H 2 (M ) is closed and therefore does not contribute to the flux. On the Calabi-Yau cone, it is well known that superpotential deformations correspond to imaginary anti-self-dual (IASD) flux [6]. The G 3 here is the component of the IASD flux restricted to the Sasaki-Einstein space. Now consider the effect of a marginal Â + deformation to linear order. As we show in appendix B.5, such a deformation allows for non-zero, constant values of the axion and dilaton, determined by the constant f̃ as in (4.49). We stress that this calculation and the expressions for the leading-order corrections to the solution, namely (4.48) for the NS-NS and R-R three-form flux and (4.49) for the axion-dilaton, are valid for any Sasaki-Einstein background. One simply needs to plug in the expressions for the holomorphic form and contact structure of the given Sasaki-Einstein space. These objects are given in terms of a frame in (4.14). We will give the explicit form of the frame for the examples of S 5 , T 1,1 and the Y p,q manifolds, and compare the flux with some known results in section 5.
Moment maps, fixed points and obstructions
The linearised analysis above has identified the supergravity perturbations dual to marginal chiral operators in the SCFT. However, this is not the end of the story. Really we would like to find the exactly marginal operators. In the gravity dual this means solving the supersymmetry equations not just to first order but to all orders. In general there are obstructions to solving the supersymmetry conditions to higher orders, and not all marginal deformations are exactly marginal [7]. As we saw in section 2, in the field theory these obstructions are related to global symmetries [3].
As we discussed in section 3.2, the fact that the supergravity conditions in exceptional generalised geometry appear as moment maps gives an elegant interpretation of the field theory result. This analysis was completely generic, equally applicable to type II and eleven-dimensional supergravity backgrounds. We will now give a few more details, using the Sasaki-Einstein case as a particular example.
The key point is that generically there are no obstructions to extending the linearised solution of a moment map to an all-orders solution. The only case when this fails is when one is expanding around a point where some of the symmetries defining the moment map have fixed points (see for instance [30]). Since here the moment maps are for the generalised diffeomorphisms, we see that there are obstructions only when the background is invariant under some subgroup G of diffeomorphisms and gauge transformations, called the stabiliser group. Such transformations correspond to additional global symmetries in the SCFT. Furthermore, one can use a linear analysis around the fixed point to show that the obstruction appears as a further symplectic quotient by the symmetry group G. This mirrors the field theory result that all marginal deformations are exactly marginal unless there is an enhanced global symmetry group and that the space of exactly marginal operators is a symplectic quotient of the space of marginal operators.
To see this in a little more detail let us start by reviewing how the conditions (3.1) appear as moment maps [16,17] and how the obstruction appears. We will first consider M, the space of chiral ring elements and θ 2 -components, and then at the end turn to the actual marginal deformations. As we stressed above, this discussion is completely generic and not restricted to Sasaki-Einstein spaces. One first considers the space A K H of all possible hypermultiplet structures compatible with a fixed K, in other words the space of structures obtained by acting on a fixed reference structure, J α (x) = A(x) J α (0) A(x) −1 , where A(x) is some E 6(6) × R + element. The hyper-Kähler structure is characterised by a triplet of closed symplectic forms, Ω α . These symplectic structures Ω α are defined such that, given a pair of tangent vectors v, v′ ∈ T p A K H , the three symplectic products Ω α (v, v′) are given in (4.52). The generalised diffeomorphism group acts on J α (x) and hence on A K H . Furthermore its action leaves the symplectic forms Ω α invariant. Infinitesimally, generalised diffeomorphisms
are generated by the generalised Lie derivative, so that δJ α = L̂ V J α ∈ T p A K H . Thus, just as vector fields parametrise the Lie algebra of conventional diffeomorphisms via the Lie derivative, one can view the generalised vectors V as parametrising the Lie algebra gdiff of the generalised diffeomorphism group. 15 One can then show that the µ α (V ) in (3.1) are precisely the moment maps for the action of the generalised diffeomorphism group on A K H . As written they are three functions on A K H × gdiff, where J α gives the point in A K H and V parametrises the element of gdiff, but they can equally well be viewed as a single map µ : A K H → gdiff * × R 3 , where gdiff * is the dual of the Lie algebra. Solving the moment map conditions (3.1) and modding out by the generalised diffeomorphisms to obtain M as in (3.9) is a hyper-Kähler quotient. As discussed in [17], one subtlety is that, in order to define a quotient, the right-hand side of the conditions, λ α γ, given in (3.10) and which depends on K, must be invariant under the action of the group. Thus the quotient is really defined not for the full generalised diffeomorphism group, but rather the subgroup GDiff K that leaves K invariant. Infinitesimally V parametrises an element of the corresponding Lie algebra gdiff K if L̂ V K = 0. Thus we have the quotient (3.9).
The linearised analysis of the last section first fixes a point p ∈ A K H corresponding to the Sasaki-Einstein background satisfying the moment map conditions, and then finds deformations of the structure δJ α ∈ T p A K H for which the variations of the moment maps δµ α (V ) vanish for all V . If we view δµ α as a single map δµ : T p A K H → gdiff * K × R 3 , the linearised solutions live in the kernel. Suppose now that p is fixed under some subset of generalised diffeomorphisms, that is we have a stabiliser group G ⊂ GDiff K . The corresponding Lie subalgebra g ⊂ gdiff K is given in (4.53). At a generic point in A K H satisfying the moment map conditions, all elements of GDiff K act non-trivially and so the stabiliser group is trivial. Thus solving δµ α (V ) = 0 we get a constraint for every V ∈ gdiff K . In contrast, at the point p, we miss those constraints corresponding to V ∈ g. Thus we see that the obstruction to extending the first-order deformation to all orders lies precisely in g * × R 3 , that is, it is the missing constraints. Put more formally, 16 the embedding i : g → gdiff K induces a map i * : gdiff * K → g * on the dual spaces and, at p, we have the exact sequence (4.54). The map δµ is not onto and the obstruction is its cokernel g * × R 3 .
The standard argument for moment maps at fixed points actually goes further. Let U be the vector space of linearised solutions δµ α (V ) = 0 at p, up to gauge equivalence. For the Sasaki-Einstein case it is the space of solutions, dual to the couplings of the operators (A i , F A i ), given in (4.43). Formally U is defined as follows. Recall that the space of solutions 15 Note from (4.24) that shifting the form components λ i and ρ of V by exact terms does not change L̂ V ; furthermore, it is independent of σ i . Thus different generalised vectors can parametrise the same Lie algebra element. 16 See for example the note in section 5 of [30].
is ker δµ ⊂ T p A K H . The action of GDiff K on p ∈ A K H defines an orbit O ⊂ A K H , and modding out by the tangent space to the orbit T p O at p corresponds to removing gauge equivalence, so that U = ker δµ/T p O . (4.55) The moment map construction means that the hyper-Kähler structure on T p A K H descends to U . By definition, the stabiliser group G acts linearly on T p A K H and this also descends to U . Furthermore it preserves the hyper-Kähler structure. Thus we can actually define moment maps μ̃ α for the action of G on U . The standard argument is then that the space of unobstructed linear solutions can be identified with the hyper-Kähler quotient of U by G, so near p we have M ≃ U /// G, just as in (2.2). The idea here is that if we move slightly away from p we are no longer at a fixed point and there are no missing constraints. Thus we really want to take the hyper-Kähler quotient in a small neighbourhood of A K H near p. However we can use the tangent space T p A K H to approximate the neighbourhood. The moment map on T p A K H can be thought of in two steps: first we impose δµ α = 0 at the origin and mod out by the corresponding gauge symmetries, reducing T p A K H to the space U . However this misses the conditions coming from the stabiliser group G, which leaves the origin invariant. Imposing these conditions takes a further hyper-Kähler quotient of U by G. Finally, note that since G acts linearly on U , the obstruction moment maps μ̃ α are quadratic in the deformation A. This exactly matches the analysis in [7], where in solving the deformation to third order the authors found a quadratic obstruction. What is striking is that we have been able to show how the obstructions appear for completely generic supersymmetric backgrounds. This discussion has been somewhat abstract. Let us now focus on the simple case of S 5 to see how it works concretely. The full isometry group is SO(6) ≅ SU(4). However, only an SU(3) subgroup preserves J α and K, hence for S 5 the stabiliser group is G = SU(3).
Rather than consider the full space of linearised solutions (4.43), for simplicity we will just focus on f and f ′, and furthermore assume both functions are degree three: L ξ f = 3if and L ξ f ′ = 3if ′. In terms of holomorphic functions on the cone C 3 , this implies both functions are cubic, f = f ijk z i z j z k and f ′ = f ′ ijk z i z j z k . The coefficients (f ijk , f ′ ijk ) parametrise a subspace in the space of linearised gauge-fixed solutions U . Using the expressions (4.31) and (4.52) one can calculate the hyper-Kähler metric on the (f ijk , f ′ ijk ) subspace. Alternatively, one notes that the hyper-Kähler structure on A K H descends to a flat hyper-Kähler structure on the subspace, parametrised by f ijk and f ′ ijk as quaternionic coordinates. We then find the three symplectic forms Ω α on this subspace,
where Ω + = Ω 1 + iΩ 2 and indices are raised and lowered using δ ij . The SU(3) group acts infinitesimally through matrices a with tr a = 0 and a † = −a. This action is generated by vector fields ρ(a) built from ∂ ijk = ∂/∂f ijk and ∂′ ijk = ∂/∂f ′ ijk . It is then easy to solve for the (equivariant) moment maps μ̃ α (a) satisfying i ρ(a) Ω α = dμ̃ α (a), to find the expressions in (4.61). Solving the moment maps μ̃ α (a) = 0 for all a i j gives a set of quadratic constraints on the couplings. We are interested in the subset of marginal deformations, namely those satisfying (4.46). Let us denote this subspace by U c ⊂ U . Since L̂ K J α is a holomorphic vector field on M with respect to one of the complex structures [16], U c is a Kähler subspace. Furthermore, taking the hyper-Kähler quotient by G and then restricting to the marginal deformations is the same as restricting to the marginal deformations and then taking a symplectic quotient by G using only the moment map λ α μ̃ α . In other words the diagram commutes. This is because the action of L̂ K which enters the generalised Lie derivative condition (3.2) commutes with the action of the L̂ V generating G. 17 Given U c // G = U c /G C , we see that we reproduce the field theory result (2.2).
It is simple to see how this works in the case of S 5 . The marginal modes correspond to f ′ = f̃ ′ = 0, while f is restricted to be degree three and f̃ constant (recall δ and δ ′ are absent on S 5 ). Since a constant f̃ is invariant under SU(3), the moment map conditions μ̃ α = 0 on the marginal modes reduce to a single condition that comes from μ̃ 3 (given λ 1 = λ 2 = 0), namely the analogue of the one-loop constraint (2.7), since the μ̃ + moment map is satisfied identically once f ′ = f̃ ′ = 0. Comparing with section 2, we see that we indeed reproduce the field theory result that the exactly marginal deformations are a symplectic quotient of the marginal deformations by the global symmetry group G.
Examples
In the previous section we derived the first-order supergravity solution dual to exactly marginal deformations on any Sasaki-Einstein background. We now apply this to the explicit examples of the supergravity backgrounds dual to N = 4 super Yang-Mills, the N = 1 Klebanov-Witten theory and N = 1 Y p,q gauge theories.
N = 4 super Yang-Mills
The Sasaki-Einstein manifold that appears in the dual to N = 4 SYM is S 5 , whose four-dimensional Kähler-Einstein base is CP 2 . The metric on S 5 can be written in terms of angular coordinates (α, θ, φ 1 , φ 2 , φ 3 ), 18 where the coordinates are related to the usual complex coordinates on C 3 , pulled back to S 5 , by z 1 = c α e iφ 1 , z 2 = s α c θ e iφ 2 , z 3 = s α s θ e iφ 3 .
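A minimal sketch of the resulting round metric, assuming unit radius and the standard embedding $\sum_i |z_i|^2 = 1$ (this display is ours, obtained by pulling back $\sum_i |dz_i|^2$):
$$ ds^2(S^5) \;=\; d\alpha^2 \;+\; s_\alpha^2\, d\theta^2 \;+\; c_\alpha^2\, d\phi_1^2 \;+\; s_\alpha^2 c_\theta^2\, d\phi_2^2 \;+\; s_\alpha^2 s_\theta^2\, d\phi_3^2 . $$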
The marginal deformations are given in terms of a function f which is of charge three under the Reeb vector and the restriction of a holomorphic function on C 3 . In our parametrisation the Reeb vector field is ξ = ∂ φ 1 + ∂ φ 2 + ∂ φ 3 (5.4), and the coordinates z i have charge +1. 18 Here s α and c α are shorthand for sin α and cos α, and similarly for θ.
Thus, f must be a cubic function of the z i . An arbitrary cubic holomorphic function on C 3 has ten complex degrees of freedom and can be written as f = f ijk z i z j z k , where f ijk is a complex symmetric tensor of SU(3) with ten complex components. This is the same structure as the superpotential deformation (2.5). As mentioned before, not all components of f correspond to exactly marginal deformations because we still need to take into account the broken SU(3) global symmetry. This imposes the further constraint that the SU(3) moment maps vanish (the one-loop condition (2.7)), which removes eight real degrees of freedom. We can also redefine the couplings using the SU(3) symmetry to remove another eight real degrees of freedom, leaving a two-complex dimensional space of exactly marginal deformations. Thus, there are two independent solutions, f β ∝ z 1 z 2 z 3 and f λ ∝ z 1 3 + z 2 3 + z 3 3 ,
The supergravity dual of the β-deformation was worked out in [8]. One can check that, using our frame for S 5 and taking f β with a real coefficient γ, our expression (4.48) for the three-form fluxes reproduces those in the first-order β-deformed solution [8]. To generate the complex deformation of Lunin and Maldacena (LM), we promote γ to γ − iσ, where both γ and σ are real. This reproduces the LM fluxes with τ = i. The full complex deformation with general τ can be obtained using the SL(2; R) frame from [31]. Unlike the β-deformation, the supergravity dual of the cubic deformation is known only perturbatively. Aharony et al. have given an expression for the three-form flux for both the β and cubic deformations to first order [6]. Again, one can check that our expression reproduces this flux for both f β and f λ .
We saw that the marginal deformations (4.46) also allow for closed primitive (1,1)-forms that do not contribute to the flux. If such terms are not exact, that is, if they are non-trivial in cohomology, they give additional marginal deformations. On CP 2 , the base of S 5 , there are no closed primitive (1,1)-forms that are not exact, and so the marginal deformations are completely determined by the function f .
Klebanov-Witten theory
The complex, symplectic and contact structures are defined in terms of the frame in (4.14). One can check they satisfy the correct algebraic and differential relations (4.9)-(4.11).
The function f defining the marginal deformations is of weight three under the Reeb vector and a restriction of a holomorphic function on the conifold. Thus f must be a quadratic function of the z a , namely f = f ab z a z b ,
where f ab is symmetric and traceless (by condition (5.13)), or analogously f αβ,αβ is symmetric in αβ andαβ. These deformations are the SU(2) × SU(2)-breaking deformations in (2.10) and generically give nine complex parameters. We remove six real degrees of freedom when solving the moment maps to account for the broken SU(2) × SU(2) symmetry. The moment maps are precisely the beta function conditions given in (2.12). We can also redefine the couplings using SU(2) × SU(2) rotations to remove another six real degrees of freedom, leaving a three-complex dimensional space of exactly marginal deformations labelled f β , f 2 and f 3 in (2.13). We have (5.20) The first of these is the β-deformation for the KW theory. The supergravity dual of the β-deformation was worked out in [8]. One can check that using our frame for T 1,1 and taking our expression (4.48) reproduces the three-form fluxes that appear in the first-order βdeformed solution [8]. To our knowledge, the fluxes for the other deformations were not known before. Unlike CP 2 , CP 1 × CP 1 admits a primitive, closed (1, 1)-form δ that is not exact (specifically the difference of the Kähler forms on each CP 1 ), giving one more exactly marginal deformation, corresponding to a shift of the B-field on the S 2 . On the gauge theory side, this corresponds to the SU(2) × SU(2)-invariant shift in the difference of the gauge couplings in (2.10). Together with h, coming from the superpotential itself, one finds a five-dimensional conformal manifold.
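The analogous counting for the Klebanov-Witten case, again using only the numbers quoted above, is

18 real (the nine complex components of f ab) − 6 (moment maps) − 6 (SU(2) × SU(2) redefinitions) = 6 real = 3 complex,

which, together with the closed primitive (1,1)-form mode δ and the superpotential coupling h, gives the five-dimensional conformal manifold.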
Y p,q gauge theories
The Reeb vector field is ξ = 3∂ ψ . (5.25) As with S 5 , a holomorphic function on the cone over Y p,q determines the marginal deformations. The complex coordinates that define the cone for a generic Y p,q are known but rather complicated [33]. However, we need only the coordinates that can contribute to a holomorphic function with charge +3 under the Reeb vector -fortunately there are only three such coordinates The y i are the roots of a certain cubic equation and are given in terms of p and q as The coordinates b a actually have charge +3 under the Reeb vector and so the holomorphic function that encodes the marginal deformations will be a linear function of the b a . We can take the following frame for any Y p,q e 1 = 1 3 dψ − cos θdφ + y(dβ + cos θdφ) , e 2 + ie 5 = e iψ/2 1 − y 6 1/2 (dθ + i sin θdφ) , e 4 + ie 3 = e iψ/2 w(y) −1/2 q(y) −1/2 dy + 1 6 iw(y)q(y)(dβ + cos θdφ) .
(5.29)
The complex, symplectic and contact structures are defined in terms of the frame in (4.14). One can check they satisfy the correct algebraic and differential relations (4.9)-(4.11). The function f defining the marginal deformations is of weight three under the Reeb vector and a restriction of a holomorphic function on the cone. Thus f must be a linear combination of the b a , namely f = f a b a . These deformations are the SU(2)-breaking deformations in (2.16) and generically give three complex parameters. We remove two real degrees of freedom when solving the moment maps to account for the broken SU(2) symmetry (leaving a U(1) unbroken). The moment maps are precisely the beta function conditions given in (2.17). We can also redefine the couplings using SU(2) rotations to remove another two real degrees of freedom, leaving a one-complex-dimensional space of exactly marginal deformations. The single independent solution is given in (5.31); this is the β-deformation for the quiver gauge theory. The supergravity dual of the β-deformation for Y p,q was worked out in [8]. One can check that using the frame for Y p,q given in (5.29) and taking (5.31), our expression (4.48) reproduces the three-form fluxes that appear in the first-order β-deformed solution [8]. Together with h and τ (dual respectively to the axion-dilaton and the B-field on the S 2 ), one finds a three-dimensional conformal manifold.
Discussion
In this paper we have used exceptional generalised geometry to analyse exactly marginal deformations of N = 1 SCFTs that are dual to AdS 5 backgrounds in type II or eleven-dimensional supergravity. In the gauge theory, marginal deformations are determined by imposing F-term conditions on operators of conformal dimension three and then quotienting by the complexified global symmetry group. We have shown that the supergravity analysis gives a geometric interpretation of the gauge theory results. The marginal deformations are obtained as solutions of moment maps for the generalised diffeomorphism group that have the correct charge under the Reeb vector, which generates the U(1) R symmetry. If this is the only symmetry of the background, all marginal deformations are exactly marginal. If the background possesses extra isometries, there are obstructions that come from fixed points of the moment maps. The exactly marginal deformations are then given by a further quotient by these extra isometries.
For the specific case of Sasaki-Einstein backgrounds in type IIB we showed how supersymmetric deformations can be understood as deformations of generalised structures which give rise to three-form flux perturbations at first order. Using explicit examples, we showed that our expression for the three-form flux matches those in the literature and the obstruction conditions match the one-loop beta functions of the dual SCFT.
Our analysis holds for any N = 2 AdS 5 background. It would be interesting to apply it to one of the few examples of non-Sasaki-Einstein backgrounds, such as the Pilch-Warner solution [34]. This is dual to a superconformal fixed point of N = 4 super Yang-Mills deformed by a mass for one of the chiral superfields. Another natural direction would be to apply our analysis to backgrounds dual to SCFTs in other dimensions. For example, one can study AdS 4 backgrounds in M-theory, such as AdS 4 × S 7 , where the solution-generating technique of Lunin and Maldacena to find the β-deformation also applies.
It would be interesting to see whether our approach can be used to go beyond the linearised analysis and find the all-order supergravity backgrounds dual to the deformations; so far only the dual of the β-deformation has been obtained. With these in hand, one would be able to perform many non-trivial checks of the AdS/CFT correspondence, including calculating the metric on the conformal manifold.
Our formalism has applications other than AdS/CFT. Supersymmetric deformations of the geometry give rise to moduli fields in the low-energy effective action obtained after compactifying on the internal manifold. Determining the number and nature of moduli fields that arise in flux compactifications is difficult in general as we lose many of the mathematical tools used in Calabi-Yau compactifications. In our formalism, fluxes and geometry are both encoded by the generalised structure whose deformations will give all the moduli of the low-energy theory. The generalised geometry points to a new set of tools to understand these deformations, such as generalisations of cohomology and special holonomy.
We hope to make progress on these points in the near future.
In this section we provide details of the construction of E 6(6) × R + generalised geometry for type IIB supergravity compactified on a five-dimensional manifold M . (For more details and the corresponding construction in eleven-dimensional supergravity see [17,35].) We decompose the relevant E 6(6) representations according to a GL(5; R) × SL(2; R) subgroup, where SL(2; R) is the S-duality group and GL(5; R) acts on M . The generalised tangent bundle is where S transforms as a doublet of SL(2; R). We write sections of this bundle as where v ∈ Γ(T M ), λ i ∈ Γ(T * M ⊗ S), ρ ∈ Γ(∧ 3 T * M ) and σ i ∈ Γ(∧ 5 T * M ⊗ S). The adjoint bundle is
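The two displayed bundle decompositions in the paragraph above did not survive extraction. Based on the component fields listed for their sections (v, λ i , ρ, σ i for a generalised vector and l, r, a, B i , β i , C, γ for the adjoint), they should read as follows; this is a reconstruction consistent with those components, not a verbatim quotation of the original equations:

E ≅ TM ⊕ (T*M ⊗ S) ⊕ ∧3 T*M ⊕ (∧5 T*M ⊗ S),
ad F ≅ R ⊕ (TM ⊗ T*M) ⊕ (S ⊗ S*)0 ⊕ (∧2 T*M ⊗ S) ⊕ (∧2 TM ⊗ S) ⊕ ∧4 T*M ⊕ ∧4 TM,

whose fibre dimensions, 5 + 10 + 10 + 2 = 27 and 1 + 25 + 3 + 20 + 20 + 5 + 5 = 79 = 78 + 1, match the 27 and the adjoint of E 6(6) × R + .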
We write sections of the adjoint bundle as where l ∈ Γ(R), r ∈ Γ(End T M ), etc. The e 6(6) subalgebra is generated by setting l = r a a /3. We take {ê a } to be a basis for T M with a dual basis {e a } on T * M so there is a natural gl 5 action on tensors. For example, the actions on a vector and a three-form are Our notation follows [36]. Wedge products and contractions are given by (A.6) We define the adjoint action of A ∈ Γ(adF ) on a generalised vector V ∈ Γ(E) to be V = A · V , where the components of V are v = lv + r · v + γ ρ + ij β i λ j , We define the adjoint action of A on A to be A = [A, A ], with components The cubic invariant for E 6(6) is
The e 6(6) Killing form or trace in the adjoint is tr(A, A ) = 1 2 1 3 tr(r) tr(r ) + tr(rr ) + tr(aa ) We now define the generalisation of the Lie derivative. We introduce the dual generalised tangent bundle E * and define a projection Our choice of projection is The generalised Lie derivative is then defined as This can be extended to act on tensors using the adjoint action of ∂ × ad V ∈ Γ(adF ) in the second term. We will need explicit expressions for the generalised Lie derivative of sections of E and adF . The generalised Lie derivative acting on a generalised vector is The generalised Lie derivative acting on a section of the adjoint bundle is The generalised tangent bundle E is patched such that on overlapping neighbourhoods, where Λ i (ij) andΛ (ij) are locally a pair of one-forms and a three-form respectively, and the action of e where the "untwisted" generalised vectorṼ is a section of T M ⊕ (T * M ⊗ S) ⊕ . . ., and B i and C are two-and four-form gauge potentials, with gauge transformations given by We identify the fields B i with the NS-NS and R-R two-form potentials, and C with the R-R four-form potential The gauge-invariant field strengths are then For calculations, it often proves simpler to work with sections of T M ⊕ (T * M ⊗ S) ⊕ . . . and include the connection in the definition of the generalised Lie derivative. Following this convention, throughout this paper the generalised tensors we write down will be "untwisted". When B i and C are included, the generalised Lie derivative simplifies in a manner analogous to the H-twisted exterior derivative of generalised complex geometry, d H = d − H∧. We define the twisted generalised Lie derivativeL V of a generalised tensor µ bŷ We find thatL V has the same form as L V but includes correction terms involving the fluxes.
The net effects of this in (A.15) and (A.16) are the substitutions
B Supersymmetry conditions and deformations
In this appendix we give a detailed discussion of the deformations of the Sasaki-Einstein structure and of the derivation of the constraints from supersymmetry. We start with a brief description of the generalised structures and then move to their deformations and the conditions that supersymmetry imposes on them.
B.1 The generalised structures
In studying backgrounds with non-trivial fluxes it is often convenient to rewrite the supersymmetry conditions of ten-dimensional supergravity as equations on differential forms. In this paper we use the reformulation of the supersymmetry variations proposed in [17], which recasts the supersymmetric background as an integrable exceptional structure in E 6(6) × R + generalised geometry. The structure is defined by the generalised tensors K and J α introduced in section 4.2. These are globally defined objects that reduce the structure group of the generalised frame bundle so that there is N = 2 supersymmetry in five dimensions. The latter amounts to the existence of a pair of Killing spinors on the internal manifold. Since the internal spinors transform in the 8 representation of the local group USp(8), a pair of Killing spinors is invariant under a reduced USp(6) group.
Computing the tensor product of the Killing spinors, one finds the structures K and J α , transforming in the 27 and 78 of E 6(6) . One can show that K is left invariant by an F 4(4) subgroup of E 6(6) . At a point on the internal manifold, K parametrises the coset E 6(6) × R + /F 4(4) , equivalent to picking an element in the 27 of E 6(6) such that c(K, K, K) > 0. A choice of K for the whole manifold then defines an F 4(4) structure.
Similarly the triplet J α , at a point on M , parametrises the coset E 6(6) × R + /SU * (6), equivalent to picking three elements in the 78 of E 6(6) that form a highest weight su 2 subalgebra of e 6(6) . A choice of J α for the whole manifold then defines an SU * (6) structure. The space of SU * (6) structures is the infinite-dimensional space of sections of J α . This space inherits the hyper-Kähler structure from the coset at a point.
The normalisations of K and J α are fixed by the E 6(6) cubic invariant (A.9) and the trace in the adjoint representation (A.10) The form of the E 6(6) -invariant volume κ 2 depends on the compactification ansatz. For type II compactifications in the string frame of the form the invariant volume includes a dilaton dependence and is given by For the Sasaki-Einstein backgrounds we consider, this is simply κ 2 = vol 5 . The two structures are compatible and together define a USp(6) structure if they satisfy where · is the adjoint action (A.7) on a generalised vector.
B.2 Embedding of the linearised deformations in generalised geometry
In this section we will justify the choice of (4.31) for the linearised deformation. As already mentioned, K is left invariant by an F 4(4) subgroup of E 6(6) while the triplet J α is left invariant by SU * (6). Together J α and K are invariant under a common USp (6) subgroup. We argued in section 3.1 that the dual of marginal deformations should leave K invariant, but modify the J α . This means that at a point on the internal manifold they must be elements of the coset F 4(4) × R + /USp (6). The 52 (adjoint) representation of F 4(4) decomposes under USp(6) × SU(2) as The first term corresponds to the triplet J α and its action simply rotates the J α among themselves. The second term is the adjoint of USp(6), which leaves both K and J α invariant. Therefore, the deformations are in the (14, 2) and form a doublet under the SU(2) defined by J α . We can choose them to be eigenstates of J 3
The non-trivial eigenstates correspond to λ = 0, 1, 2. From the SU(2) algebra (4. 19) we see that the eigenstates with λ = 2 are J ± themselves. The eigenstates with eigenvalue zero are in USp (6), or in other words they leave J α and K invariant, and we will therefore not consider them. To simplify notation we will call the λ = ±1 eigenstates A ± . We note that we can generate an eigenstate with eigenvalue −iκ from A + by acting with J + , as the Jacobi identity implies We also note that complex conjugation also gives the eigenstate with opposite eigenvalue.
Since L K commutes with the action of J 3 we can also label states by their R-charge as in (4.29), so that we have doublets We have chosen r ≥ 0 for definiteness. Those doublets with r ≤ 0 will be related by complex conjugation. (Note this convention leads to a slight over-counting for 0 ≤ r ≤ 2, since the doublets with charge r have complex conjugates with charge −r + 2. However, it is the most convenient form to adopt for out purposes.) To compute the eigenstates with λ = 1 it helps to note that the E 6(6) action of J 3 acts separately on {B i , β i }, a i j and {r, C, γ, l} (see (A.8)). Using this we can organise the eigenstates asǍ As complex conjugation gives the eigenstate with opposite eigenvalue, using this basis, the modes {Ǎ + ,Ǎ * − , + , * − } fill out the possible +iκ eigenstates. In fact we will find that, with this basis, imposing r ≥ 0 actual restricts to onlyǍ + and + .
One can use the forms defining the SU(2) structure on a SE manifold, Ω, ω and σ, and the corresponding vectors to decompose the eigenstates. It is straightforward to verify that the eigenstateǍ + is given by where the vector u i is defined in (4.23),ν is a (0,1)-form,ῡ is a (1,0)-vector on the base, ω is a primitive (1,1)-form on the base, and p and f are arbitrary complex functions on the SE manifold. The ω andω terms in the bi-vector are obtained from the two-forms by raising indices with the metric g mn . The requirement that the deformation leaves K invariant (Ǎ + · K = 0) translates to constraints on the components ofǍ + , namely which impose p = 0 andῡ =ν . Thus theǍ + deformation that leaves K invariant iš
where we have omitted the vector symbols and it is understood that all terms in the bi-vector part are obtained by raising the GL(5) indices of the corresponding forms with the metric g mn . Note that the two-form and bi-vector components are related by where we lower the indices of the bi-vector with the undeformed metric g. TheǍ − mode in the same multiplet asǍ + is given byǍ − = κ −1 [J + ,Ǎ + ] and has the following form where we should regard f as distinct from f . Similarly, we can construct the + deformation that leaves K invariant. It has only a i j components, given by The − mode in the same multiplet as + is given by − = κ −1 [J + , + ] and has the following form where again we should regardf as distinct fromf . We see this is of the form B i + β i as expected from (B.10).
B.3 Supersymmetry conditions
We are interested in deformations of the Sasaki-Einstein background that preserve supersymmetry. This is equivalent to requiring that the deformed structures are integrable, that is, the new J α and K must satisfy (3.1) and (3.2). At linear order in the deformation these conditions reduce to linear constraints on the deformation. As we want the deformed structures to be real, we take the deformation to be A = Re A + , where Re A + = (1/2)(A + + A * + ). In this section we give the derivation of the constraints that these equations impose on the deformations Ǎ + . For the other deformations we give only the final results for the constraints, which can be derived in a similar fashion.
B.3.1 Moment map conditions
Let us first consider the deformationǍ + and the conditions from δµ 3 = 0. Given the form of J 3 (4.22), only the a i j , r m n , C mnpq and γ mnpq components of the generalised Lie derivative contribute. The relevant terms arê where B i and β i are the two-form and bi-vector components ofǍ + . We use this and rearrange the trace to give with a similar expression forǍ * + . Using thatǍ + is an eigenstate of J 3 with eigenvalue +iκ and the form of the trace (A.10), this simplifies to where we have used vol 5 (β i dλ j ) ∝ (β i vol 5 ) ∧ dλ j . When combined with the contribution fromL VǍ * + , this should hold for arbitrary λ j and so we require Using the explicit form ofǍ + (B.13), this condition gives
Note that we have simplified some expressions using wherev is an arbitrary (0,1)-form with respect to I. We want to solve the system (B.27)-(B.33) of differential equations to derive the form of the deformation. From (B.27) we knowν Ω is closed under ∂, and so it may be written as the sum of a ∂-closed term and a ∂-exact term. However, we also have H 1,0 ∂ (M ) = 0 for a five-dimensional Sasaki-Einstein space M , and so only a ∂-exact term is needed. We make an ansatzν where f has a well-defined scaling under ξ, L ξ f = iqf , and q is non-zero. 19 Next (B.31) gives∂ We can solve this by taking f to be holomorphic, which also solves (B. 19 If q = 0 and f is holomorphic, f is necessarily constant. But from (B.32), a constant f requiresΩ to bē ∂-exact, which is not true. The only solution to the differential conditions for constant f is f = 0, and so we do not need to consider the case of q = 0. 20 In general one has ∂(∂f Ω) = 1 2 (q 2 + 4q − ∆0)f Ω and∂(∂f Ω ) = 1 2 (q 2 − 4q − ∆0)fΩ for a function satisfying ∆f = ∆0f and L ξ f = iqf [20].
Taken together, these determine theǍ + solutions of the moment map equations. For example, the two-form component ofǍ + is where f is holomorphic with respect to ∂ (and hence has charge q ≥ 0 under the Reeb vector) and δ is ∂-and∂-closed (and hence has charge zero). The bi-vector component is determined from this using (B.14) . Notice that f -dependent terms and δ are independent of each other, so we really have two eigenmodes within this expression. In fact, this solution to the moment map equations corresponds to the A ]. Naively, one might think we should solve the moment maps from scratch for an A (r) − deformation. For example, the deformation would be calculated using the generic form ofǍ − , given by (B.15), and would then lead to differential conditions on the components ofǍ + from whichǍ − is generated. Fortunately, given a solution A + to the deformed moment maps (B.18), one can show that A − = κ −1 [J + , A + ] is automatically a solution too. The components of A − are determined by A + and the differential conditions on the components of A − reduce to the differential conditions on A + that we have already given. For example, we have seen thatǍ + is completely determined by a holomorphic function f and a ∂-and ∂-closed (1,1)-form δ. AsǍ − = κ −1 [J + ,Ǎ + ] is automatically a solution, it too is determined by a holomorphic function f and a ∂-and∂-closed (1,1)-form δ . Similarly + will be determined by holomorphic functionf . Here, we should note, however, because of our slight over-counting, the r = 2 case with constant f is actually the complex conjugate of the r = 0 case ofǍ + .
B.3.2 Lie derivative along K
At first order in a generic deformation A ∈ 78 of E 6(6) , the generalised Lie derivative condition is given by (4.44). It is straightforward to check that the commutators are non-zero for both J + and J 3 , and so the condition reduces to L K A = 0. From (4.26), we know that the generalised Lie derivative along K reduces to the conventional Lie derivative along ξ, and so the deformation condition is simply L ξ A = 0. We see that the deformation must have scaling dimension zero under the Reeb vector field. Using the explicit form of Ǎ + and Â + , we find f is charge +3 and f̃ is charge zero (which, together with ∂̄ f̃ = 0, implies f̃ is constant). We also have that δ is charge zero, which is consistent with it being ∂- and ∂̄-closed. This agrees with (4.46). These are precisely the conditions for the deformations to be marginal.
B.4 Generalised metric
We have deformed the geometry by two-forms and bi-vectors, but the bosonic fields of type II supergravity do not include bi-vectors. As is typical in generalised complex geometry, acting on the bosonic fields, the bi-vector deformation can be traded for deformations by a gauge potential. We first construct the generalised metric and then give the dictionary for translating a bi-vector deformation into a two-form deformation.
A generalised metric defines a USp(8) structure. K and J α together define a USp(6) structure and so also define a generalised metric, though reconstructing the metric from them may be complicated. 21 For this reason it proves simpler to construct the generalised metric from scratch. For a generalised vector V decomposed as in (4.17), the generalised metric, in the untwisted basis, is where h ij is the standard metric on SL(2)/SO(2) and we have raised/lowered indices using the metric g mn . 22 The generalised metric defines a USp(8) structure and so should be left invariant by a USp(8) subgroup of E 6(6) × R + . Using the adjoint action on V ∈ 27 , one can show that USp(8) is generated by elements of the E 6(6) × R + adjoint satisfying l = 0 , a ij = −a ji , r mn = −r nm , C mnpq = −γ mnpq , B 1 mn = β 2 mn , B 2 mn = −β 1 mn .
(B.44)
One can read off the new bosonic background by constructing the deformed generalised metric. The metric, axion-dilaton and four-form R-R potential receive corrections starting at second order. At first order, only the two-form potentials, B 2 and C 2 , are corrected. If we consider a deformation by a two-form B i and a bi-vector β i , at first order the resulting two-form deformation is We see that the bi-vector can be traded for a two-form contribution. This will become more complicated at higher orders in the deformation due to terms from contractions of the bi-vector with the two-form. As previously mentioned, this procedure is analogous to what is done when trading β-deformations in generalised complex geometry for metric and B-field deformations (see for example equations (3.3) and (3.4) in [37]). 21 For example, the conventional metric can be recovered from the three-and four-forms defining a G2 structure, but the relation between the two is not trivial. 22 We have chosen C0 = φ = 0 for the backgrounds we consider, so hij is simply δij.
B.4.1 Flux induced by deformation
Using (B.45) we have that our two-form deformation ReǍ + = B i + β i will induce NS-NS and R-R two-form potentials given by The complexified potential is Using the explicit form ofǍ + that solves the deformed moment maps (B.40), this is where L ξ f = iqf . From (B.42), this deformation will correspond to a marginal deformation if q = 3 and δ is d-closed. The complexified potential then simplifies to Taking an exterior derivative, the resulting complexified flux G 3 = d(C 2 − iB 2 ) is where we have used dδ = 0, ω ∧ (∂f Ω ) = i∂f ∧Ω and∂(∂f Ω ) = −12fΩ. We stress once more that this flux is valid for marginal deformations of any Sasaki-Einstein structure and reproduces the first-order fluxes of the β-deformation of Lunin and Maldacena [8].
B.5 Marginal deformations and the axion-dilaton
Let us now consider the effect of an Â + deformation. Such a deformation is marginal if f̃ is charge zero under ξ, which, when combined with ∂̄ f̃ = 0, implies f̃ is simply a constant complex number. The physical effect of such a marginal deformation can be found from its action on the SL(2; R) doublets that appear in the generalised metric. For example, the undeformed generalised metric contains terms of the form G(λ, λ) = δ ij λ i λ j + . . . .
(B.51)
To first order, the deformed generalised metric will then be G(λ + δλ, λ + δλ) = δ ij (λ i + δλ i )(λ j + δλ j ) + . . . , which is simply the real part of (B.16). We now want to compare this to the form of the generalised metric when the axion-dilaton is included. From [31], we see this is G(λ, λ) = h ij λ i λ j + . . . , (B.54) where

h_{ij} = e^{\phi} \begin{pmatrix} C_0^{2} + e^{-2\phi} & -C_0 \\ -C_0 & 1 \end{pmatrix} .  (B.55)

Expanding the fields to linear order and comparing this expression with the deformed metric m ij , we see that we can encode a first-order change in the axion-dilaton by taking f̃ = C 0 − iφ.
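As a check of this identification, one can expand the matrix h ij in (B.55) to first order in C 0 and φ (a short computation, not taken verbatim from the source):

h_{ij} \simeq \delta_{ij} + \begin{pmatrix} -\phi & -C_0 \\ -C_0 & \phi \end{pmatrix} + \ldots ,

so the linearised fluctuation of the SL(2)/SO(2) metric is parametrised by the two real functions C 0 and φ, that is, by the single complex combination f̃ = C 0 − iφ.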
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
"Mathematics"
] |
Defective RNA Particles of Plant Viruses—Origin, Structure and Role in Pathogenesis
The genomes of RNA viruses may be monopartite or multipartite, and sub-genomic particles such as defective RNAs (D RNAs) or satellite RNAs (satRNAs) can be associated with some of them. D RNAs are small deletion mutants of a virus that have lost essential functions for independent replication, encapsidation and/or movement. D RNAs are common elements associated with human and animal viruses, and they have been described for numerous plant viruses so far. Over 30 years of studies on D RNAs allow for some general conclusions to be drawn. First, the essential condition for D RNA formation is prolonged passaging of the virus at a high cellular multiplicity of infection (MOI) in one host. Second, recombination plays a crucial role in D RNA formation. Moreover, during virus propagation, D RNAs evolve, and the composition of the D RNA population depends on, e.g., host plant, virus isolate or number of passages. Defective RNAs are often engaged in transient interactions with full-length viruses: they can modulate accumulation, infection dynamics and virulence, and are widely used, e.g., as a tool for research on cis-acting elements crucial for viral replication. Nevertheless, many questions regarding the generation and role of D RNAs in pathogenesis remain open. In this review, we summarise the knowledge about D RNAs of plant viruses obtained so far.
Introduction
Viruses constitute the most abundant biological entities in the biosphere. They exhibit highly heterogeneous genome structures, sizes and replication strategies.
High diversity of viral nucleotide sequences is a characteristic feature of many viruses, in particular the RNA viruses. RNA viruses typically have mutation rates between 10 −6 and 10 −4 substitutions per nucleotide per cell, whereas these rates range from 10 −8 to 10 −6 in DNA viruses [1]. Plant viruses are dominated by RNA viruses [2], which tend to have a great potential for genetic variation, rapid evolution and adaptation. Different mechanisms that generate diversity in viral genomes have been investigated: mutation, recombination and reassortment [3]. These mechanisms can all contribute to the formation and evolution of defective RNAs (D RNAs). D RNAs are non-infectious virus particles harbouring defective RNAs that are derived from the full-length viral genome (termed the "helper virus", HV) and that cannot replicate autonomously. D RNAs are trans-replicated by the HV, and only those that interfere with HV accumulation should be referred to as defective interfering particles or RNAs (DIPs or DI RNAs). DI RNAs are, therefore, a subclass of D RNAs, and for accuracy, we will distinguish between these two classes throughout this review.
Generation of D RNAs and Mechanism of Induction
Two different stages can be distinguished during D RNA generation: (i) particle formation during the replication process through recombination events, and (ii) selection of newly formed D RNAs, resulting in their accumulation [54]. D RNAs of plant viruses typically arise when a virus is serially passaged under conditions of high cellular MOI. Hypotheses concerning D RNA formation have been investigated for many years, and a number of conclusions can be drawn. Firstly, although a positive effect of high MOI on D RNA formation has been confirmed, for some virus species, high MOI is indispensable (e.g., TBSV) [25], whereas for others, it is only considered an advantage [55]. High MOI favours coinfection of plant cells with both D RNAs and HV [56], allowing the D RNAs to be maintained and selected. D RNA formation is contingent on virus species (or even viral isolates), and for several viruses, low MOI at the start of serial passaging does not impede D RNA formation (e.g., CNV) [19]. However, this situation seems to be uncommon, and it may explain the paucity of D RNA formation in natural environments. It is generally believed that D RNAs do not accumulate to easily detectable levels during natural virus infections. Low MOIs in plants in vivo can prevent the formation, accumulation or spread of D RNAs in nature [57]. Secondly, a single virus isolate can generate more than one species of D RNA de novo [25,47,50,58]. During continuous passaging, the D RNA population can evolve from a heterogeneous to a homogeneous one [27], and different electrophoretic patterns of D RNAs can be observed during different stages of prolonged passaging [28]. It is not only possible that new D RNAs are continually generated from HVs in a process with a degree of repeatability; theoretical work also suggests that D RNAs evolve continuously [59]. The number of passages necessary for D RNA detection can vary for different virus species and isolates [25,58]. For instance, the occurrence of TBSV DI RNAs during prolonged passaging of the HV in tobacco was observed as early as after the third passage for some lines, and as late as after the 11th passage for others [25]. As genetic drift affects the maintenance of beneficial mutations in a population [60], bottlenecks that occur during within-host or between-host transmission could purge DI RNAs and have a strong stochastic effect on their accumulation over time. By contrast, the structure of D RNAs that arise independently from the same virus species, or even from different species of the same genus, is often conservative to a certain extent, and newly formed molecules are composed of the same regions of the viral genomic sequences (e.g., the Tombusvirus genus) [61]. Finally, the mechanisms of D RNA generation and maintenance appear to be host-specific, as has been confirmed for many D RNA species. Passaging of TBSV in Nicotiana benthamiana resulted in DI RNA generation and accumulation, whereas prolonged passaging of the same virus isolate in pepper did not result in generation and accumulation of DI RNAs, even in the case of inoculated leaves [56]. This situation can be explained by interactions between viral elements and host-specific determinants, or by changes in the demography of virus populations (i.e., different MOI).
To date, the propensity for D RNA induction has been confirmed for plant viruses from different genera, e.g., Tombusvirus, Orthotospovirus, Closterovirus, Tobravirus, Potexvirus, Bromovirus, Cucumovirus, Crinivirus, Comovirus, Nepovirus or Pomovirus (Table 1). Hopping or template switching of the viral RdRp (also called the copy-choice mechanism) is considered to be the most probable mechanism of D RNA formation [62]. D RNAs are mainly derived from the genome of the HV; premature dissociation of the viral RNA polymerase and nascent strand from the RNA template is followed by reinitiation of replication after binding to the same or a corresponding template at a different site, resulting in newly synthesized, incomplete strands [62,63]. Hence, D RNAs are derived from the HV genomic RNAs through a copy-choice mechanism resulting in sequence deletion(s).
Analysis of regions near the apparent junction sites in D RNA sequences revealed that homologous recombination plays a major role in producing D RNAs. These recombination events are, therefore, not entirely stochastic, but rather tend to occur between similar sequences (or stretches) with weak secondary structure [28]. Recombination-prone sites (i.e., "hotspots") are thought to be surrounded by secondary structures with negative free energy, which are difficult for the polymerase to process and promote dissociation of the RNA polymerase-nascent strand complex [28]. This hypothesis seems to be adequate for many D RNA species (e.g., CymRSV, TBRV) [28,49-51]. Regions near the junction sites that can potentially destabilise the polymerase complex were described for TBSV. Knorr et al. [25] linked D RNA formation in TBSV with the presence of the hexanucleotide motif 5′-APuAGAA-3′, several "strong stop" signals and the presence of inverted repeats. Hernandez et al. (1996) [37] noted that junction sites in RNA2 of TRV DI RNAs are flanked by short nucleotide repeats or sequences resembling the 5′ end of genomic and sub-genomic RNAs of TRV. Sequence motifs (i.e., AGAAAAG in the RdRp coding region), together with complementary inverted repeats (i.e., CUUUUCU in the 5′ UTR sequence), were also found in the genomic sequences of TBRV isolates [50,64].
Reinitiation of the replication process requires the presence of replicated sequence initiation signals. It was confirmed that, in the case of CymRSV DI RNAs, many junction sites start with G followed by A, which is part of the motif recognized in all replicating molecules of the virus [65]. For the same virus species, it was suggested that DI RNA monomers arose from head-to-tail dimers [66,67]. Co-inoculation of tobacco plants with transcripts of HV and short DI RNAs resulted in accumulation of de novo generated DI RNA. Moreover, those results suggest that DI RNA dimers are preferred over monomers in movement from cell to cell [67].
A replicase-driven template switching mechanism was insightfully investigated for DI RNAs derived from PMTV [52]. The mechanism of DI RNA biogenesis proposed by the authors assumes that during HV replication, base pairing occurs between the TGB1 and 8K ORF coding regions on the minus strand. As a consequence, a stem-loop structure is formed, and the DI RNAs are generated during positive-strand RNA synthesis. The minimum free energy of the structure conditioning the occurrence of the process is −13.7 kcal/mol. The high predicted stability of this structure may explain the high repeatability of the junction site's location during the de novo generation of PMTV DI RNAs. Moreover, other secondary RNA structures in close proximity to the 8K coding region and within the DI RNAs may be required for DI RNA biogenesis [52]. Quito-Avila et al. [68] indicated that 5′ to 3′ pairing can act as a mechanism to generate new variants of raspberry bushy dwarf virus (RBDV), leading to the formation of new, large-scale genomic rearrangements. Therefore, this mechanism may also contribute to D/DI RNA formation.
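As an illustration of how such junction-flanking structures can be screened, the sketch below folds a candidate minus-strand segment and flags it if the predicted minimum free energy falls below a threshold comparable to the −13.7 kcal/mol value reported for PMTV. It is a minimal sketch that assumes the ViennaRNA Python bindings are installed; the sequence shown is a placeholder, not the actual PMTV region.

```python
# Minimal sketch: screen candidate regions for stable stem-loops that could
# promote replicase template switching. Requires the ViennaRNA Python bindings.
import RNA

MFE_THRESHOLD = -13.0  # kcal/mol; roughly the stability reported for the PMTV structure

def is_stable_stem_loop(sequence: str, threshold: float = MFE_THRESHOLD) -> bool:
    """Fold the RNA segment and report whether its MFE is below the threshold."""
    structure, mfe = RNA.fold(sequence)  # dot-bracket structure and MFE in kcal/mol
    print(f"{sequence[:20]}...  {structure}  {mfe:.1f} kcal/mol")
    return mfe <= threshold

# Placeholder sequence standing in for a junction-flanking minus-strand segment.
candidate = "GGGAAACUCCCGUUUAGGGAGUUUCCC"
if is_stable_stem_loop(candidate):
    print("Candidate forms a stable structure; possible template-switching site.")
```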
It has been shown that viral proteins can be involved in D RNA formation. The importance of the viral helicase (1a) and polymerase (2a) in the generation of D RNAs was confirmed for BMV. Introduction of mutations within the corresponding coding regions resulted in changes in the fidelity of replication and in the position of recombination sites in comparison with the wild-type virus [63,69].
Structure of D RNAs and Their Population
D RNAs may represent either homogeneous or heterogeneous subpopulations [50,65]. Moreover, D RNAs can also constitute a mosaic of fragments originating from different segments of the HV genome (e.g., TRV) [37], or even a mosaic of plant and virus sequences [16] (Figure 1).

Despite having the same overall structure, the D RNA subpopulation generated during the same experiment from a single ancestral variant can be composed of different variants carrying additional small deletions and nucleotide substitutions [25,58]. The smaller D RNA particles probably arose from larger D RNA precursors [58], and their accumulation seems to be favoured over that of longer ones [28]. Frequently, DI RNAs of the same virus isolate (generated by passaging an isolate in different plants) are formed from the same fragments of the HV genome, and differences in the length of the D RNAs result from shifts of the recombination sites in the genomic RNA [50]. The existence of a certain conservatism in the composition of D RNAs derived from the genomic RNA of different isolates of the same virus species has been confirmed. For instance, the TBRV DI RNAs observed to date are distributed across three essential types, and two of them include DI RNAs associated with TBRV isolates originating from distant host plants, such as zucchini, tomato and black elderberry [49] (Figure 2). The evolutionary repeatability of D RNA emergence was also confirmed for D RNAs of different virus species belonging to the same genus. Comparisons of DI RNA sequences associated with representatives of the Tombusvirus genus (CSV and TBSV) demonstrate that both DI RNAs contain similar regions of the HV, which are essential for effective amplification and accumulation of the DI RNAs [27]. Tombusvirus DI RNAs show a common structural organization and contain three conserved sequence blocks, referred to as A, B and C, derived from the 5′ terminus, an internal region of the replicase, and the 3′ terminus of the viral genome, respectively [58,61,72]. DI RNAs associated with CymRSV and the cherry strain of TBSV share the same basic structure, and the nucleotide identity between them ranges from 80% to 90% [65]. Frequently, the invariant segments correspond to regulatory sequences that are important or crucial for D RNA viability.
The host effect on the population structure of D RNAs was also analysed. It was confirmed that passaging a viral population containing D RNAs through different hosts promotes changes in the nature of the dominant subpopulation of D RNAs [27].
Cis-and Trans-Regulation of DI RNA
Formation and accumulation of D RNAs are controlled both by cis-acting elements present in the HV and its cognate D RNAs and by trans-acting components such as non-structural proteins of the HV [30]. Cis-acting elements are responsible for recruitment to the site of replication and for assembly of the viral replicase, and constitute signal sequences necessary for movement; such elements have been described for many plant viruses [73-76]. Therefore, numerous studies were performed to establish their placement in D RNAs and the corresponding sequences in HV genomes. Research performed with infectious clones of D RNAs has enabled the identification of cis-acting elements required for defective particle accumulation [72,77]. Efficient replication and accumulation of tombusvirus DI RNAs are regulated by four conserved genomic RNA segments essential for replication, accumulation, competitiveness and suppressing activity [27,73,77,78]. The potential cis-acting elements responsible for encapsidation were also described for BMV D RNAs. Using artificial D RNAs (with deletions of the same size as the naturally occurring D RNAs), Damayanti et al. (2002) [43] revealed that deletion of a fragment of ~500 nt in the proximal 5′ and 3′ regions of the BMV 3a ORF (RNA3) suppresses encapsidation of artificial D RNAs and affects competition with RNA3 in the artificial D RNAs' amplification and encapsidation. Such elements essential for DI RNA replication were also mapped for CymRSV [30]. Research performed with artificial D RNAs of TMV led to the conclusion that the replication signals for the HV and D RNA may differ [79].
It was confirmed that CymRSV D RNAs are trans-regulated by the viral p22 and p92 proteins (replication regulation) [80], whereas CMV D RNAs are trans-regulated by the 3a (MP, cell-to-cell movement) [81] and 3b (CP, symptoms on infected plant in the presence of D RNAs) viral proteins [82].
Interference with Virus Accumulation
D RNAs that interfere with replication and accumulation of the HV are termed defective interfering RNAs (DI RNAs). Although "absence of evidence is not evidence of absence", the presence of non-interfering D RNAs appears to have few implications for virus infection and evolution. It has been shown that suppression of HV accumulation is quite frequent for many DI RNA species [27,66,83]; however, interference does not always correlate with high DI RNA accumulation [57]. The impact of DI RNAs on HV RNA replication was investigated with two TBRV isolates originating from different hosts (greenhouse tomato and lettuce). The accumulation of HV was analysed in the presence and absence of DI RNAs in the following plants: tomato, lettuce, quinoa, and tobacco. Results confirmed the hypothesis that the extent of DI RNA interference with the HV depends on virus isolate and host plant. The most spectacular effect was observed for quinoa, where the average reduction of TBRV accumulation was 26% [64].
The interference with HV replication and accumulation by DI RNAs can be explained by competition with the HV for viral and host resources, the mechanism of posttranscriptional gene silencing (PTGS), and modification of the function of viral factors [52,70]. Jones et al. (1990) [84] inoculated N. benthamiana protoplasts with different DI RNA: HV ratios. TBSV HV accumulation was reduced by 50% and 65% when plants were infected with 1:4 and 1:1 DI RNA:HV ratios, respectively. The authors claimed that the increased accumulation of DI RNAs and significant reduction of HV accumulation indicated that inhibition of HV occurred as a result of direct competition for viral resources, and not due to HV RNA degradation. Moreover, it was suggested that for TBSV-derived DI RNAs, the interference is mediated by the down-regulation of TBSV's p19 RNA silencing suppressor (RSS); coinfection of HV and DI RNAs resulted in decreased accumulation of this protein and its sub-genomic RNA [76].
PTGS is a common sequence-specific RNA degradation process used by plants as an antiviral strategy [85]. During the process, double-stranded (ds) RNAs are converted into 21-25 nt RNA fragments (siRNA), and subsequently used to direct ribonucleases to target cognate RNA [86]. For plant viruses, the resulting decreases in the amount of target RNA may lead to the attenuation of infection symptoms. This effect was noticed in plants infected with a mixture of HV and DI RNAs [64,87], suggesting the role of PTGS as a possible mechanism of DI RNA-induced interference. It was confirmed that in CymRSVinfected N. benthamiana plants, accumulation of the virus triggered PTGS, which resulted in generation of siRNAs corresponding to the viral genome. The three CymRSV-derived DI RNAs used in the study were targeted differently by helper virus-induced PTGS: the larger precursor form of DI RNAs (679 nt) was targeted successfully, whereas the shorter (mature) form was not. The higher suppression of the longer DI RNA by PTGS resulted from the presence of specific sequences/structures rather than the length of DI RNA. Moreover, the efficient generation of siRNAs from shorter DI RNAs was confirmed [87]. According to these results, for tombusviruses, the model of PTGS-mediated DI RNA evolution (by selective accumulation of DI RNAs without sequences that are targeted by PTGS) and symptom attenuation (as an effect of DI RNA-induced PTGS) was proposed [87].
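The selective-accumulation model sketched above can be illustrated with a toy calculation: if HV-derived siRNAs are approximated as the set of 21-mers of the helper genome, then the fraction of a D RNA that remains targetable is simply the fraction of its own 21-mers shared with the helper. This is only a conceptual sketch (real PTGS targeting depends on strand, RISC loading and accessibility), and the sequences below are placeholders.

```python
# Toy model: estimate how much of a D RNA remains a target of helper-virus-derived
# siRNAs, approximated here as all 21-nt windows of the helper genome.
def kmers(seq: str, k: int = 21) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def targetable_fraction(d_rna: str, helper: str, k: int = 21) -> float:
    """Fraction of the D RNA's k-mers that also occur in the helper genome."""
    d_kmers = kmers(d_rna, k)
    if not d_kmers:
        return 0.0
    shared = d_kmers & kmers(helper, k)
    return len(shared) / len(d_kmers)

# Placeholder sequences: a "helper genome" and a D RNA built from two helper blocks
# joined at a deletion junction (junction-spanning k-mers are absent from the helper).
helper = "AUGGCUAGCUAGGCUAUCGAUCGUAGCUAGGAUCCGAUCGAUUAGCGCUAGCAUGGCAUCGAUCGAAGCU" * 3
d_rna = helper[:60] + helper[-60:]
print(f"targetable fraction: {targetable_fraction(d_rna, helper):.2f}")
```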
Attenuation of Infection Symptoms
The attenuation of infection symptoms in the presence of DI RNAs has been confirmed for many virus species. The mechanism seems to be quite common and relies on lower levels of HV replication due to competition for replication factors or on RNA-mediated enhancement of host resistance [88]. Previous studies showed that the attenuation of infection symptoms was not uniform for all DI RNAs accompanying viruses of the same species, and differences occurred both in the intensity of symptom attenuation and in its development over time as different virus isolates were passaged. For example, Knorr et al. [25] showed that when TBSV isolates were serially passaged in tobacco plants, symptom attenuation was observed for some lineages as early as the third passage and as late as the 11th passage. Attenuation of symptoms can be strictly related to the appearance of DI RNAs; however, in some cases, a decrease in symptoms may be observed before DI RNAs can be detected. This observation emphasizes that the underlying processes that lead to symptom development are complex, depending both on the properties of individual DI RNAs and on the plant's condition. Moreover, many other factors, including but not limited to interactions between the HV and host, could cause attenuation of symptoms. Symptom attenuation also appears to be host-dependent. In the case of CymRSV, the presence of DI RNAs in the virus inoculum, or introduction of the particles by transgenic Nicotiana plants, prevents the appearance of the typical lethal necrotic symptoms of CymRSV infection [29]. Based on research performed with genomic and DI RNAs of CymRSV, ClRV and TBSV, Havelda et al. (1998) suggested that DI RNA-mediated attenuation of infection symptoms depends on the ability of the DI RNAs to prevent a direct or indirect interaction between p19 and p33 [89]; p19 and p33 were identified as viral symptom determinants responsible for necrotic symptoms on tombusvirus-infected N. benthamiana plants [90].
Interesting and unexpected observations were made for the DI RNA associated with TCV. Although the presence of DI RNAs reduced HV accumulation, symptoms were more severe in the presence of the DI RNAs [18].
Other Effects of D RNAs
The impact of D RNAs on HV, host or vectors seems to be a multi-level process. It was shown that the presence of DI RNAs may affect seed transmission of the HV. Pospieszny et al. (2020) [91] showed that TBRV DI RNAs can be vertically transmitted through seeds. In the experiment, quinoa plants were infected with TBRV isolate (originated from tomato) with and without DI RNA. The plants were grown until seeds could be collected, and development of disease symptoms was observed. Five to six weeks after sowing of the collected seeds, plants were tested for the presence of the TBRV using double antibody sandwich enzyme-linked immunosorbent assay (DAS-ELISA). Overall, over 4000 plants were tested, and it was confirmed that the presence of DI RNAs together with HV at initial infection made the seed transmission of the HV 45% more efficient than in the case of infection without DI RNAs. These results challenge the established framework for considering DI RNAs, as it is possible to have a negative effect on the within-host fitness of the HV (interference with replication) while bolstering between-host fitness (vertical transmission). Moreover, quinoa plants from the second generation were verified for the presence of DI RNAs. The obtained results suggested that DI RNAs are transferred (through the seeds) from generation to generation.
Research performed with transgenic nicotiana plants expressing CymRSV satRNA sequences showed that DI RNA accumulation is supressed by satRNAs. These transgenic plants were still susceptible to infection by CymRSV HV, but the presence of the satRNA did suppress the DI RNAs and also blocked its attenuation of disease symptoms [92].
Diversity and Evolution of D RNAs
Evolution of the D RNAs seems to proceed similarly to that of the HV, as both share the same replication system based on the viral RNA polymerase. However, as D RNAs have to compete with the HV for the viral RdRp, they are continuously under strong selection pressure and can evolve faster than the HV [55]. The faster rate of evolution of D RNAs in comparison with the HV can be attributed to the higher genetic plasticity of D RNA genomes, resulting from reduced purifying selection pressure eliminating non-functional viral sequences [63]. D RNAs might also undergo mutation and recombination resulting in insertions, deletions or sequence rearrangements. Recombination was widely studied in terms of the generation of primary D RNAs (derived directly from HV genomes in a single mutational event) as well as their descendants, i.e., forms of the D RNAs that have been further shortened. Several studies indicated that primary D RNAs evolve to shortened forms upon serial passaging. Progressive deletions seem to be the mechanism generating D RNAs of many viruses, e.g., tombusviruses. It was shown that CymRSV is able to induce different types of DI RNAs during passaging in N. clevelandii. The largest DI RNA sequence (DI-13) was 679 nt in length and consisted of three fragments derived from the HV genome. Sequence analysis of the smaller DI RNAs (from 673 to 404 nt in length) indicated that they are composed of the same three fragments, with a continuing reduction in size through further deletions inside particular blocks of DI-13 (referred to as A, B and C) [28]. Junction sites in DI RNA sequences are certainly not random. These sites were located mostly within blocks A and C in the case of CymRSV [28]. Research performed by Havelda et al. (1997) [93] suggested that in the case of CymRSV DI RNAs, the generation of shorter variants of the particle was associated with intramolecular secondary structures driving the mechanism of deletion [94]. Shorter D RNAs seem to be favoured, as their accumulation is often higher than that of longer variants [66,93]; however, their accumulation is not necessarily dependent on the DI RNA length. For example, shortened forms of TBSV-derived DI RNAs showed poor targeting by PTGS due to the lack of PTGS target-specific sequences/structures [63] (see Section 5.1). The role of rearrangements and/or recombination events in DI RNA evolution was confirmed for CNV. Passaging of this virus resulted initially in the formation of DI RNAs, but upon further passaging, larger DI RNAs were found [27]. The newly obtained variants contained repeats of three regions found in shorter DI RNA variants, leading the authors to suggest that these larger variants arose as a result of rearrangements between two DI RNAs, rather than being directly derived from the HV genome. The role of mutations as a force driving D RNA evolution also seems to be prevalent. Numerous studies confirmed that D RNAs, despite the fact that they originate from the HV genome, show single nucleotide changes in their sequences in comparison to the genomic RNA. The comparison showed that the identity between the corresponding regions of D RNAs of TBRV and its HV sequence ranged from 98% to 99.5% [49,50].
D RNA Detection Tools
In classic work on D RNAs, the most popular assay used for their detection was Northern blotting combined with the separation and visualisation of D RNAs on sucrose gradients. In later studies, D RNAs were separated on low-melting agarose gels, purified by RNA extraction, and then amplified with different variants of the polymerase chain reaction (PCR).
A game changer in virus discovery, identification and sequence analysis has been high-throughput sequencing (HTS) [95]. HTS is a rapidly developing technique that allows the massively parallel sequencing of millions or even billions of nucleotides in a single sequencing run [96]. It is widely used in viral metagenomics studies to identify known viruses and discover novel species, even in samples without disease symptoms [95]. This approach has been successfully applied for the detection of viral infection in many agricultural crops, as well as weeds and wild plants [95][96][97][98]. Unlike previously used diagnostic methods, HTS gives a more complete perspective on the virome and provides insights into virus population structure, ecology and evolution [99].
The identification of D RNAs from HTS data can be challenging; D RNAs typically occur together with the HV and share high sequence identity with it, often containing no unique sequences. In recent years, HTS has been successfully adapted to D RNA detection. To date, several tools for the identification of different D RNA types in HTS outputs have been published, for example, ViReMa (the first dedicated tool) [100], DI-Tector [101], DVG-profiler [102] and the newly published DVGfinder [103]. These tools re-examine unmapped reads, or mapped reads with mismatches and insertions, which potentially include D RNA sequences. Current challenges in D RNA identification from HTS data include improving the sensitivity and precision of the algorithms, reducing false-positive D RNA detections, quantifying the relative abundance of given D RNAs and the HV, and normalizing between samples and sequencing runs [10]. It is worth mentioning that these tools do not provide any functional information about the role of the detected D RNA candidates in the replication process.
Concluding Remarks
Infection by RNA viruses can be accompanied by subviral particles such as D RNAs, DI RNAs and satellite RNAs. These particles are relatively short, non-infectious entities, and their replication, encapsidation and spread depend on the HV. DI RNAs are particularly interesting due to their ability to interfere with virus replication and, therefore, to modulate symptoms in infected plants. Previous research suggested that D RNAs are not abundant in plant viral infections in natural ecosystems. This conclusion could have resulted from a low concentration of D RNAs in natural infections and/or a lack of effective tools for their detection. HTS and dedicated bioinformatic tools for D RNA detection will improve the identification of D RNAs in infected plants in vivo, while also providing stronger evidence of their absence. Despite the recent progress in studies of D RNAs of plant viruses, many questions about D RNAs remain unanswered. First, are D RNAs just a result of errors occurring during virus replication? Or could plants have evolved features that promote their occurrence in order to weaken the virus? Second, do DI RNAs directly interfere with the replication of the HV, or is this process more complex and, for example, mediated by the host? More detailed studies looking for the subtle impacts of D RNAs on infection, transmission or evolution would also be of great interest. Third, is D RNA replication host-specific? Although the de novo occurrence of D RNAs depends on the host, to what extent can existing D RNAs be maintained in different hosts? Knowledge regarding DI RNA formation and its impact on HV replication is still limited, whereas a detailed analysis of those mechanisms might be a step toward new, innovative tools to protect plants against viruses. DI RNA particles represent a major controlling element of virus replication. The more we learn about viral pathogenesis and the interaction and competition between DI RNAs and the HV, the better we can focus our research on dissecting DI RNA-mediated attenuation of infection.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,701.4 | 2022-12-01T00:00:00.000 | [
"Biology"
] |
Hybrid Workload Enabled and Secure Healthcare Monitoring Sensing Framework in Distributed Fog-Cloud Network
: The Internet of Medical Things (IoMT) workflow applications have been rapidly growing in practice. These internet-based applications can run on a distributed healthcare sensing system that combines mobile computing, edge computing and cloud computing. Offloading and scheduling are required methods in the distributed network. However, security issues exist, and it is hard to run different types of tasks (e.g., security, delay-sensitive, and delay-tolerant tasks) of IoMT applications on heterogeneous computing nodes. This work proposes a new healthcare architecture for workflow applications based on three layers of heterogeneous computing nodes: an application layer, a management layer, and a resource layer. The goal is to minimize the makespan of all applications. Based on these layers, the work proposes a secure offloading-efficient task scheduling (SEOS) algorithm framework, which includes a deadline division method, task sequencing rules, a homomorphic security scheme, initial scheduling, and a variable neighbourhood searching method. The performance evaluation results show that the proposed plans outperform all existing baseline approaches for healthcare applications in terms of makespan.
Introduction
Nowadays, the usage of medical devices based on the Internet of Medical Things (IoMT) network to deal with healthcare issues has been growing progressively [1]. The IoMT is a network composed of medical sensors, wireless technology and distributed cloud computing technologies [2]. Therefore, the combination of IoMT and healthcare devices can improve the quality of human life, provide better care services and create a more cost-effective system [3]. Recently, many IoMT-based applications have been developed; however, issues at the node level are widely ignored. Due to this, the failure ratio of tasks rises and application deadlines are missed over a wide range. This paper proposes a novel scheduling system for mixing fine-grained and workflow IoMT tasks in distributed and virtual machine-based mobile edge cloud networks to cope with the issues mentioned earlier. This work considers both workflow and fine-grained tasks simultaneously and proposes a serverless function and virtual machine aware service environment in a distributed mobile edge cloud network. Function as a service (FaaS) is a cost-efficient model for fine-grained tasks, which pays for the execution of tasks rather than provisioning monthly or yearly regardless of use. For the workflow tasks, a virtual machine-based solution has been proposed in the system. Each application is divided into three types of tasks: security tasks, delay-sensitive tasks, and delay-tolerant tasks. The proposed IoMT system consists of different paradigms such as mobile computing, fog computing, and cloud computing based on task types. The study's goal is to minimize both the makespan of applications and the cost of the system during problem formulation. In summary, this paper makes the following contributions to solve the scheduling problem.
• Initially, the study devises the mathematical model of the hybrid workloads of IoMT applications with multiple objectives. The hybrid workloads consist of fine-grained and workflow tasks, and the multi-objective functions are makespan, cost, and energy consumption. Each objective function has a different weight to optimize the IoMT for each application; • The study devises three-phase level scheduling methods, namely deadline-efficient, cost-efficient and energy-efficient scheduling, in the IoMT system to optimize the overall system in the network; • To maintain the security requirements of fine-grained and workflow workloads, fully homomorphic encryption (FHE)-enabled security is suggested to ensure security in the IoMT for all applications; • To optimize all objectives together, the study devises a deep graph convolutional network-enabled weighting scheme to boost and optimize the study's overall nonlinear objective functions in different convolutional networks.
The sections are organized as follows. Section 2 presents existing studies related to the considered problem. Section 3 defines all steps of the problem formulation. Section 4 outlines the proposed methods and their solutions. Section 5 illustrates the performance evaluation of the techniques on different workflow benchmarks. Section 6 presents the conclusion of the paper.
Related Work
For many years, Internet of Medical Things (IoMT) frameworks or systems have gained significant traction in various medical sectors. Many different sorts of healthcare workloads are taken into account in the IoMT to tackle various scheduling and offloading issues. Workflows, applications, services, and fine-grained and coarse-grained models are examples of such workloads. The IoMT consists of different heterogeneous computing nodes, where a connection between computers or computer programmes is known as an application programming interface (API). It is a form of software interface that provides a service to other programmes. An API specification is a document or standard that defines how to create such a connection or interface. An API is implemented or exposed by a computer system that meets this standard. The term API can refer to either the specification or the implementation. As a result, as indicated in Table 1, there are different multi-objective techniques for each workload, each with its own set of restrictions and security requirements. The study of [1] suggested a workflow application, a deadline constraint, a weighting method, and remote cloud VMs along with an RSA security mechanism aware IoMT system. The objective is to offload healthcare data with their deadlines to the cloud servers. The study of [2] suggested IoMT-based coarse-grained healthcare workloads, with resource constraints and a programming-aware constraint method on latency-optimal edge nodes. The study implemented a DES security mechanism for offloaded workloads in the system. The goal is to minimize end-to-end latency. The study of [3] suggested an IoMT based on independent healthcare workloads, a budget objective, a goal-programming multi-objective method, latency-optimal cloudlet-deployed virtual machines and a CRC32 security mechanism. The aim is to minimize end-to-end latency. Refs. [4,5] suggested an IoMT system based on workflows that is fine-grained and quality of service (QoS) aware, as well as a min-max multi-objective method, distributed edge-implemented virtual machines, and DES and RSA security for healthcare applications. The goal is to minimize the resource consumption, energy and delay objectives of the study. The vector evaluated genetic algorithm (VEGA) allows dealing with multiple objectives. However, the min-max algorithm only achieved good results with single constraints in nondominant solutions on the Pareto frontier.
The studies [6][7][8] devised dynamic and secure IoMT systems based on different primitives such as workflow applications, deadlines, and a Genetic Algorithm (GA) on virtual machines (VMs) in cloud data centers, with RSA-based networks. The purpose of these studies is to gain dynamic results for healthcare applications in distributed cloud data centers. The studies [9][10][11][12][13][14] proposed IoMT with a tuple of implementations, such as coarse-grained workloads, and optimized the lateness and energy objectives with particle swarm optimization schemes in distributed RSA-enabled fog virtual machines. Particle swarm optimization (PSO) is a computational method for solving problems by iteratively improving a potential solution against a set of quality criteria. Message digest (MD5)-enabled secure distributed cloudlets and fog node aware IoMT systems were suggested by [15][16][17][18][19][20]. The coarse-grained workload is solved with a multi-objective approach, and the ant-colony variant with a dynamic approach. The objective is to minimize the service cost, latency and delay of applications in the IoMT system. The deadline, resources and lateness are considered during offloading and resource allocation in the system.
The applications, services, application programming interface (API) and model-based workloads have been implemented in [21][22][23][24][25][26][27]. Virtual machine, container and serverless aware resources are offered during workload execution in the system. The min-max, Multi-objective Evolutionary Genetic Algorithm (MOGA) and NSGA-II-enabled multi-objective-based techniques were suggested to solve the healthcare problems in distributed fog cloud nodes. The goal is to optimize different objectives with nondominance and dominance schemes using the Pareto frontier tool for different healthcare workloads. These studies considered a single constraint during decisions in the IoMT. A deep convolutional neural network-enabled healthcare system is suggested in [28][29][30][31]. The goal is to handle multiple objectives such as the energy, makespan, and cost of coarse-grained applications in the distributed IoT fog cloud network. These studies suggested a dynamic heuristic based on reinforcement learning, where the considered workload is only coarse-grained in the system for offloading and resource allocation.
To the best of our knowledge, a hybrid workload-enabled and secure healthcare monitoring sensing framework in a distributed fog cloud network has not been studied yet. The considered problem and system in the present study differ from existing works [1, 2,18,22,[28][29][30][31] in the following way. The proposed work considers hybrid workloads, i.e., the workflow and fine-grained models, in the proposed mathematical model, and the study devises a function and virtual machine aware fog cloud network, which was not considered in the existing works. The main reason for this is that the research focuses on cost-efficient scheduling and resource-optimal allocation of workloads in the distributed fog cloud network. Therefore, in the considered problem, the study has three different conflicting objectives, makespan, lateness, and energy consumption, with cost, deadline, and lateness constraints in the IoMT system. The existing multi-objective approaches cannot be applied to hybrid workloads in the IoMT because all existing objectives require a lot of decision time and resources to find optimal solutions for all objectives in the IoMT. Therefore, the study considers a deep graph-based convolutional network-enabled algorithm framework to solve the considered problem in the IoMT.
Proposed Architecture
The study proposes a new secure mobile edge cloud architecture to run IoMT workflow applications in a distributed environment. The proposed architecture consists of three main layers, the application layer, management layer and resource layer, as shown in Figure 1. The IoMT workflow application layer consists of multiple applications, where each application is composed of three different types of functions. The blue nodes show security tasks, the light nodes show delay-sensitive tasks, and the red nodes display delay-tolerant functions. The architecture initially takes the inputs of all applications into the management layer. The workflow tasks are annotated at design time into different types, such as security tasks, delay-sensitive tasks, and delay-tolerant tasks. The execution time and energy consumption are anticipated in advance, before scheduling tasks to any node, by exploiting the energy profiler and workload execution profiler at the design time of applications. These mechanisms of application partitioning and time estimation were already published in our previous work [7]. Therefore, this work only focuses on scheduling, not application partitioning and offloading, in the current model. The IoMT agent is an administrator in the management layer that processes the requested IoMT workflow applications P = {a1, . . . , aP}. The IoMT agent consists of the following components: deadline division, task sequencing, a homomorphic security scheme, initial scheduling and Variable Neighborhood Searching (VNS). The deadline division divides the deadline d_a of each IoMT application into task deadlines based on the tasks' execution times on different computing nodes. All the tasks are sorted based on their requirements by the sequencing rules; the rules are earliest due date and cost of resources, and priorities are assigned to tasks accordingly (see Equations (20) and (21) below). The homomorphic security method encrypts and decrypts security tasks locally on the devices. The Denial of Service (DoS) and surfing profiling handles and identifies attacks in the network. This way, we can save resources and time before offloading and scheduling in the system. Initial scheduling maps all tasks, based on sorting and security requirements, onto heterogeneous mobile edge cloud computing efficiently. After that, a VNS-based searching method improves the initial solution from candidate solutions.
The resource layer consists of mobile computing, edge computing and cloud computing. Resource-constrained mobile computing only executes security-annotated tasks with private keys. Furthermore, delay-sensitive tasks are carried out using edge computing, which is located at the edge of the network with ultra-low latency. Finally, all delay-tolerant tasks are carried out using cloud computing. Table 2 describes the notation of the mathematical model of the considered problem: the heterogeneous computing nodes (e.g., mobile, edge, cloud); j, the jth computing node; ζ_j, the processing speed of computing node j; R, the resources of the computing nodes; r, a particular resource; N, the total number of tasks; T, the set of all tasks; and t_i, the ith task of T. This study considered two types of application workloads with different processes, namely fine-grained tasks and workflow tasks.
Fine-Grained Tasks
The healthcare fine-grained tasks have their own data, deadlines, and required CPU per execution during processing. Each fine-grained task is isolated; it needs a separate function to run its operation. The fine-grained tasks shown in Figure 2 are of three types: secure tasks, delay-sensitive tasks, and delay-tolerant tasks. The study formulates the problem of workflow tasks in the following way. The paper investigates P IoMT workflow applications, i.e., {a1, a2, . . . , aP}. A directed acyclic graph, i.e., a(V, E), illustrates the constraint rules of an application, where i denotes a particular task, and e(i, j) ∈ E represents the communication edges among different tasks. There are certain rules for IoMT tasks: (i) a task i should finish before task j starts. Furthermore, some tasks use original data and some of them have generated data. The notation w_i denotes the original data of a particular task, and w_{i,z} the data generated by its predecessors during execution. Each IoMT application a categorizes tasks into three lists: (i) a security list S = {s_i : i = 1, . . . , S} ⊆ V_a, (ii) delay-sensitive tasks L = {l_i : i = 1, . . . , L} ⊆ V_a, and (iii) delay-tolerant tasks DR = {dr_i : i = 1, . . . , DR} ⊆ V_a.
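The task and application model described above lends itself to a simple data-structure sketch. The following Python fragment is not the authors' code; the field names and types are assumptions chosen only to mirror the annotations (task type, workload, deadline, precedence edges) introduced in this section.

from dataclasses import dataclass, field
from enum import Enum

class TaskType(Enum):
    SECURITY = "security"          # must run locally on the IoMT device
    DELAY_SENSITIVE = "sensitive"  # offloaded to an edge node
    DELAY_TOLERANT = "tolerant"    # offloaded to the public cloud

@dataclass
class Task:
    tid: int
    workload: float                # w_i, e.g., in million instructions
    ttype: TaskType
    deadline: float = 0.0          # filled in later by the deadline-division step

@dataclass
class Application:
    name: str
    tasks: dict                    # tid -> Task
    edges: list = field(default_factory=list)  # (i, j, data_kb) precedence edges
    deadline: float = 0.0          # d_a, the application deadline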
The present paper discusses a scenario of real-life healthcare IoMT applications, as shown in Figure 3. In the system model, there are three types of task lists: (1) secure tasks, i.e., S = {s_i : i = 1, . . . , S} ⊆ V_a, which must be encrypted and decrypted locally by IoMT devices; (2) delay-sensitive tasks, i.e., L = {l_i : i = 1, . . . , L} ⊆ V_a, which must be performed on edge nodes because of their latency requirements; (3) delay-tolerant tasks, i.e., DR = {dr_i : i = 1, . . . , DR} ⊆ V_a, which are offloaded to the public cloud for execution. There are three layers in the system, starting with the edge layer, where organizations such as hospitals, clinics, and any medical centre use different IoMT devices to run IoMT applications. These applications secure data locally on their own devices, and delay-sensitive tasks are offloaded via an access point (e.g., Wi-Fi) to the edge layer for execution. Furthermore, via the internet, all applications offload their delay-tolerant tasks to the public cloud. The categorized tasks were already defined in detail above. We denote the speed factors of all computing nodes as ζ_j ∈ {ζ_1, . . . , ζ_M}, whereas the resources of the computing nodes are denoted as R = {r_1, . . . , r_R}. We determine the execution time of a particular task in the following way.
In Equation (1), the placement vector y_i indicates where a task runs: y_i = 0 means the task is executed on the local machine, y_i = 1 means it is executed on the edge, and y_i = 2 means it is executed in the cloud.
IoMT workflow applications have precedence relationships and communication time requirements due to the transfer of data between tasks.
Equation (2) determines the communication time between precedence-constrained tasks in a workflow while they share their data for execution.
If two tasks i and j are carried out on the local machine, there is no communication time between them, i.e., z_i = 0. If two tasks i and j are running on an edge LAN network, then there is a fixed communication time between them, i.e., z_i = 1. Finally, if two tasks i and j are carried out on a cloud WAN network, then there is a fixed communication time between them, i.e., z_i = 2. Constraint (3) calculates the communication time between tasks i and j. We determine the finish time of a general task in the following way.
Equation (4) calculates the finish time of a task. The makespan of all IoMT workflow applications, denoted MW, is obtained in Equation (5). The paper then states the problem mathematically as min MW (Equation (6)), subject to the constraints below; Equation (6) is the objective function of each application.
Equation (7) denotes the deadlines for completion of tasks of all applications.
Constraint (8) shows that the requested workloads of applications must not exceed the resource limits during execution.
Constraints (9), (10) and (11) show that each task is assigned to exactly one node, and each node can execute only one task at a time once it is successfully assigned to any particular node.
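As a concrete illustration of Equations (1)-(6), the sketch below computes execution times as workload over node speed, adds a transfer time when precedence-constrained tasks run on different tiers, and takes the makespan as the largest finish time. It is a hedged reading of the formulation, not the paper's implementation; the speed and bandwidth figures are illustrative assumptions.

SPEED = {"local": 1.0, "edge": 4.0, "cloud": 10.0}                   # zeta_j
BANDWIDTH = {"local": float("inf"), "edge": 500.0, "cloud": 100.0}   # kb/s, assumed

def exec_time(workload, placement):
    return workload / SPEED[placement]

def comm_time(data_kb, place_i, place_j):
    if place_i == place_j:                     # same tier: no transfer cost (z = 0 case)
        return 0.0
    return data_kb / min(BANDWIDTH[place_i], BANDWIDTH[place_j])

def makespan(tasks, edges, placement):
    """tasks: tid -> workload; edges: (i, j, data_kb); tids listed in topological order."""
    finish = {}
    preds = {t: [] for t in tasks}
    for i, j, data in edges:
        preds[j].append((i, data))
    for t in tasks:
        ready = max((finish[i] + comm_time(d, placement[i], placement[t])
                     for i, d in preds[t]), default=0.0)
        finish[t] = ready + exec_time(tasks[t], placement[t])
    return max(finish.values())

tasks = {1: 40.0, 2: 80.0, 3: 20.0}
edges = [(1, 2, 200.0), (1, 3, 50.0)]
placement = {1: "local", 2: "edge", 3: "cloud"}
print(makespan(tasks, edges, placement))       # makespan MW of this small DAG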
Problem Formulation of Fine-Grained Tasks
This study considers T fine-grained tasks, i.e., {t = 1, . . . , T}. Each task has a workload W_t and a deadline t_d. The set of fog cloud functions is represented by F = {f_1, f_2, . . . , f_F}. Each function has a memory size f_m. The execution time of fine-grained tasks is determined in the following way: Equation (12) calculates the execution time of a task on a function at node j. The execution cost of all tasks is then determined in the following way.
The functions can run only on the computing nodes j = 1, . . . , M. Therefore, the cost of a function is determined by its memory size and execution time, as given in Equation (13).
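A minimal sketch of Equation (13)'s idea follows: the function cost grows with the memory allocated to the function and its execution time. The price per GB-second used here is an assumed illustrative constant (a typical FaaS tariff), not a value taken from the paper.

PRICE_PER_GB_SECOND = 0.0000166667   # assumption: an illustrative FaaS price

def function_cost(memory_gb, exec_seconds, invocations=1):
    # cost ~ f_m (memory) x execution time x number of invocations
    return memory_gb * exec_seconds * PRICE_PER_GB_SECOND * invocations

print(function_cost(memory_gb=0.5, exec_seconds=2.0, invocations=1000))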
Energy Consumption Computing Nodes
This study determines the energy consumption due to virtual machines and particular function nodes. Therefore, w_j is the power consumption (in watts) of node j when running virtual machines and functions. The power consumption of the nodes is determined in the following way.
Equation (14), which sums over all tasks t = 1, . . . , T, virtual machines v, nodes j and functions f, determines the energy consumption due to both the workflow and fine-grained tasks on the computing nodes.
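A hedged reading of Equation (14) is sketched below: total energy is the per-node power w_j multiplied by the time each node spends executing workflow (VM) and fine-grained (function) tasks. The power figures are assumptions used only for illustration.

NODE_POWER = {"local": 2.0, "edge": 65.0, "cloud": 250.0}   # watts, illustrative

def total_energy(assignments):
    """assignments: list of (node, exec_seconds) for every scheduled task."""
    return sum(NODE_POWER[node] * seconds for node, seconds in assignments)

print(total_energy([("local", 40.0), ("edge", 20.4), ("cloud", 2.5)]))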
The study examines multi-objective problems such as the energy, makespan, and lateness of both workflow and fine-grained jobs based on the suggested mathematical formulation. Multi-objective optimization is a branch of multiple-criteria decision making that deals with mathematical optimization problems requiring the simultaneous optimization of many objective functions. The Pareto frontier is used to construct the multi-objective problem. The Pareto frontier is a suitable technique for handling the problem restrictions, since the study has conflicting aims in the suggested system with various resources. No single solution simultaneously optimizes every objective in a nontrivial multi-objective optimization problem. The objective functions are incompatible in this instance, and there is a (potentially infinite) number of Pareto optimal solutions. If no single objective value can be improved without deteriorating some of the other objective values, the solution is nondominated, Pareto optimal, Pareto-efficient, or noninferior. All Pareto optimal solutions are deemed equally desirable without additional subjective preference information. Many existing multi-objective optimization techniques have been suggested for formulating and solving such problems. The goal could be to locate a representative group of Pareto optimal solutions, quantify the trade-offs in achieving several objectives, or identify a single solution that satisfies a human decision maker's (DM) subjective preferences. Equation (15) states the minimization of the objective functions of both workflow and fine-grained tasks.
Proposed Security-Efficient Optimal Solution (SEOS) Algorithm Framework
This work considers IoMT workflow applications, where each application has three types of tasks: security tasks, delay-sensitive tasks, and delay-tolerant tasks. We analyze heterogeneous computing nodes (e.g., mobile node, edge node and remote cloud node) that are distinguished by their speeds and resources. The addressed problem is secure offloading and scheduling for IoMT workflow applications on heterogeneous computing nodes. This section proposes the Security-Efficient Offloading and Scheduling (SEOS) algorithm framework, which consists of different components to solve the considered problem. Initially, we divide the application deadlines into task deadlines. In the second part, we sort all tasks into topological order based on the proposed three sequencing rules. In the third part, the third-party offloading-based homomorphic encryption method encrypts and decrypts security tasks locally on the devices. Due to precedence constraint requirements, the ciphertext data of tasks are offloaded to the edge cloud for delay-sensitive tasks. The edge node applies computation on the ciphertext instead of converting it into plaintext. The final part is local searching-based task scheduling, where all tasks are scheduled on different computing nodes. We explain the SEOS framework steps in Algorithm 1.
Deadline Division
The deadline division is a way to divide the application deadline into task deadlines; in this way, we can maintain the quality of tasks based on their deadlines. For example, we split the application deadlines in the following form.
Initially, we obtain the ratio of each application based on Equation (16), which divides the deadline of the application by the makespan of the application. In this way, we assign a deadline to each task based on its execution and communication times (Equations (17)-(19)).
Algorithm 2 divides the deadlines of all applications into task deadlines to obtain the optimal makespan of each application on heterogeneous computing nodes.
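A minimal sketch of the deadline-division idea in Equations (16)-(19) follows: the application deadline d_a is distributed over tasks in proportion to each task's share of the application's estimated completion time. The proportional rule and the input format are assumptions consistent with the ratio described in Equation (16), not the exact algorithm.

def divide_deadline(app_deadline, estimated_times):
    """estimated_times: tid -> estimated execution + communication time of the task."""
    total = sum(estimated_times.values())
    ratio = app_deadline / total                 # Equation (16)-style ratio
    return {tid: t * ratio for tid, t in estimated_times.items()}

print(divide_deadline(100.0, {1: 10.0, 2: 30.0, 3: 10.0}))
# each task receives a deadline proportional to its estimated time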
Task Sequencing
In this section, we introduce task sequencing rules based on the following methods. Earliest Due Date (EDD) is exploited to order the tasks by deadline; each task is prioritized via Equations (20) and (21). Smallest Process First (SPF) is exploited so that the task with the smallest processing time is assigned the highest rank and scheduled before longer tasks. The Smallest Slack Time First (SSTF) method requires that the remaining time between the finish time and the actual deadline be smallest when a task i is scheduled on the same paradigm. We assign the priority to each task in the following way: we assume that w_i is equal to either the original data or the generated data of task i during priority assignment. Both Equations (20) and (21) define the priority of all tasks from the entry task to the exit task of V by considering all predecessors and successors of the given application. Initially, we sort the topological priority of tasks in the following way.
• All workflow tasks are sorted in descending order by their deadlines;
• All fine-grained tasks are sorted by their deadlines.
We tried all sequences during initial task scheduling until the submitted tasks satisfied the given requirements.
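A hedged sketch of the three sequencing rules (EDD, SPF and SSTF) is given below. The task fields and the slack definition (deadline minus estimated finish time) are assumptions chosen to mirror the rules described above, not the paper's exact priority equations.

def edd(tasks):
    # Earliest Due Date: smallest deadline first
    return sorted(tasks, key=lambda t: t["deadline"])

def spf(tasks):
    # Smallest Process First: shortest processing time first
    return sorted(tasks, key=lambda t: t["proc_time"])

def sstf(tasks, now=0.0):
    # Smallest Slack Time First: smallest (deadline - estimated finish) first
    return sorted(tasks, key=lambda t: t["deadline"] - (now + t["proc_time"]))

tasks = [
    {"tid": 1, "deadline": 50.0, "proc_time": 10.0},
    {"tid": 2, "deadline": 30.0, "proc_time": 25.0},
    {"tid": 3, "deadline": 40.0, "proc_time": 5.0},
]
print([t["tid"] for t in edd(tasks)])    # [2, 3, 1]
print([t["tid"] for t in spf(tasks)])    # [3, 1, 2]
print([t["tid"] for t in sstf(tasks)])   # [2, 3, 1]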
Security-Aware Offloading Method
Homomorphic encryption [32] is a tool that allows computation on the encrypted data of tasks. In this way, the data of tasks remain private and confidential during offloading and scheduling on heterogeneous mobile edge cloud networks. FHE enables security-sensitive applications to work with sensitive data in untrusted environments. With geographically distributed computation and heterogeneous mobile edge cloud networking, secure communication is an important requirement for such applications.
When data are transferred to the cloud, standard encryption methods are used to secure the operations and the data storage. The basic concept is to encrypt the data before sending them to the cloud provider. However, with standard encryption, the data must be decrypted at every transaction. Therefore, the client would need to provide the server (cloud provider) with the private key to decrypt the data before executing the required calculations, affecting the confidentiality and privacy of the data stored in the cloud.
The secure homomorphic Algorithm 3 takes as input a list of security tasks S, which is annotated at design time. Algorithm 3 has the following steps. Firstly, it encrypts all security tasks of all applications locally and offloads them for assistance to external computing nodes. Secondly, the computing nodes operate on the ciphertext instead of the plaintext and then return the results to the corresponding end-user devices. Finally, the results of the encrypted tasks are decrypted locally on the devices. In outline, for each security task i in S, the algorithm checks for a Denial of Service attack (DoS), chooses large integers p and q, and decrypts the returned results locally as d_c(i_y) = i_y^a mod n; if an attack is detected, it sets DoS = 1 and waits before offloading. In this paper, we suggest implementing a system for performing operations on encrypted data without decrypting them, which will produce the same results after the calculations as if we were operating directly on the raw data. Homomorphic encryption systems are exploited to execute operations on encrypted data without knowledge of the private key (i.e., without decryption); the client is the sole owner of the secret key.
In the considered problem, the study uses homomorphic encryption under the following condition: Enc(a) and Enc(b) can be used to compute Enc(function(a, b)), where the function can be + or ×, without ever using the private key. Moreover, an additive homomorphic encryption scheme, which supports additions on the raw data, is the Paillier scheme.
In contrast, Equation (23) describes multiplicative homomorphic encryption. An algorithm is fully homomorphic if both properties are satisfied simultaneously.
For the multiplicative homomorphic encryption, let us assume that n = pq, where p and q are prime integers. Then, we choose keys a and b such that ab ≡ 1 (mod φ(n)). Here, b and n represent the public key, and p, q, and a denote the private key. We encrypt sensitive tasks in the following way.
Equations (24) and (25) show the encryption and decryption of a task. Supposing that s_i and s_j are the plaintexts of tasks i and j, we have e_c(s_i) · e_c(s_j) = s_i^b · s_j^b mod n = (s_i · s_j)^b mod n = e_c(s_i · s_j).
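The multiplicative homomorphic property above can be illustrated with textbook (unpadded) RSA, as sketched below. The primes and keys are tiny illustrative choices; real deployments need far larger keys and a vetted cryptographic library, and textbook RSA is not semantically secure, so this is only a demonstration of the algebra in Equations (24) and (25).

from math import gcd

p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
b = 17                         # public exponent, coprime with phi
assert gcd(b, phi) == 1
a = pow(b, -1, phi)            # private exponent, a*b = 1 (mod phi)

def enc(m):                    # e_c(m) = m^b mod n
    return pow(m, b, n)

def dec(c):                    # d_c(c) = c^a mod n
    return pow(c, a, n)

s_i, s_j = 12, 25
product_of_ciphertexts = (enc(s_i) * enc(s_j)) % n
assert product_of_ciphertexts == enc((s_i * s_j) % n)    # homomorphic property
assert dec(product_of_ciphertexts) == (s_i * s_j) % n
print(dec(product_of_ciphertexts))                        # 300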
We define the FHE security scheme in Algorithm 3 as follows: • Let us assume the algorithm takes the security tasks s_i and s_j as inputs, and they require encryption locally on the IoT devices; • p and q are long integers exploited during the encryption round, and n is their cross multiplication during the block switching performed in lines 2 to 6; • e is a small positive number employed for variation in the original 64-bit block of encryption, while the gcd and mod functions perform the fully homomorphic operation; • The algorithm performs encryption on the security tasks in lines 7 to 15; tasks are added to the list after all have been encrypted by the security mechanism. The offloader engine is a method inside the devices which offloads the ciphertext of tasks to the system for further computations. Once the computation has been performed on the ciphertext, the result is sent back to the devices, where it is decrypted with their private keys; • DoS is the profiling that identifies denial of service in the system; if it is 1, there is a risk of attack; otherwise, it remains zero.
Initial Task Scheduling
The initial scheduling is not the final scheduling of all tasks, and they can be rescheduled on the heterogeneous mobile edge cloud networks (e.g., heterogeneous computing nodes). The initial scheduling depends upon the deadline division component, the task sequencing and the security scheme. We propose the iterative scheduling algorithm, Algorithm 4, which shows the process of scheduling tasks under their requirements.
Algorithm 4 performs the scheduling in the following way: • Initially, the algorithm conducts deadline division, which yields the deadline of each task; • All tasks are scheduled in the order given by the sequencing rules; • All local tasks are encrypted and decrypted by the homomorphic security method and executed locally on the devices; • The delay-sensitive tasks are scheduled on the edge nodes; for all nodes, the requested workload must be less than their resources during processing; • All delay-tolerant tasks are scheduled on the public cloud for execution; • The algorithm iteratively allocates all tasks to heterogeneous computing nodes and calculates the makespan of each IoMT workflow application at initial scheduling: the execution times of delay-sensitive and local tasks are calculated based on Equation (4), and the objective is optimized based on Equation (6).
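A hedged sketch of this initial-scheduling flow follows: tasks are taken in deadline order and routed by type (security to the local device, delay-sensitive to the edge, delay-tolerant to the cloud), subject to a simple resource check. The capacities and the policy of deferring rejected tasks to the later refinement step are assumptions, not the paper's exact Algorithm 4.

CAPACITY = {"local": 100.0, "edge": 400.0, "cloud": float("inf")}   # assumed limits
ROUTE = {"security": "local", "sensitive": "edge", "tolerant": "cloud"}

def initial_schedule(tasks):
    """tasks: list of dicts with 'tid', 'ttype', 'workload', 'deadline'."""
    load = {node: 0.0 for node in CAPACITY}
    plan, deferred = {}, []
    for t in sorted(tasks, key=lambda t: t["deadline"]):   # deadline order (EDD)
        node = ROUTE[t["ttype"]]
        if load[node] + t["workload"] <= CAPACITY[node]:    # resource check
            load[node] += t["workload"]
            plan[t["tid"]] = node
        else:
            deferred.append(t["tid"])    # left for the VNS refinement step
    return plan, deferred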
Searching Optimal Solution-Based VNS
The Variable Neighborhood Search (VNS) improves the initial scheduling once tasks have been distributed and allocated to different computing networks. It traverses distant neighborhoods of the current incumbent solution, i.e., Z, and moves to a new solution if any improvement is made. Algorithm 5 is a global search iterative algorithm that replaces the current solution with a new one via a variable temperature. As the temperature decreases, the makespan of the applications is reduced by replacing the initial schedule with the new solution.
Algorithm 5 has the following steps to reach the optimal solution: • The algorithm takes the initial cost of each application with the initial solution C; • The temperature tmp is a variable whose initial value is 100; it is reduced towards zero, and as tmp decreases, the cost of each application is minimized; • N is the set of candidate solutions, and C' is a new solution whose cost is compared with that of the initial solution C; • The Boltzmann-style acceptance rule, rand(0, 1) ≤ e^(−ΔC/tmp), where ΔC is the cost increase, allows the original solution to be replaced with a new one with a probability governed by the exponential rate and the temperature tmp; the rate of change Δtmp of the temperature can be decreased or increased depending upon the situation; • If the solution has reached the maximum level and no further improvement is made, the algorithm accepts C* as the final solution.
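A minimal sketch of this search loop is given below: starting from the initial schedule, neighbouring solutions are sampled and a worse candidate is accepted with a probability that shrinks as the temperature tmp decreases, following the Boltzmann-style rule quoted above. The neighbour generator, the cooling rate and the stopping rule are illustrative assumptions. A neighbour generator could, for instance, swap the node assignment of a single task.

import math
import random

def vns_search(initial_solution, cost, neighbours, tmp=100.0, cooling=0.95,
               min_tmp=1e-3):
    best = current = initial_solution
    while tmp > min_tmp:
        candidate = random.choice(neighbours(current))
        delta = cost(candidate) - cost(current)
        # accept improvements always, worse candidates with shrinking probability
        if delta < 0 or random.random() <= math.exp(-delta / tmp):
            current = candidate
        if cost(current) < cost(best):
            best = current
        tmp *= cooling                    # lower the temperature each iteration
    return best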
Energy-Efficient Scheduling
All the nodes are ordered according to their power consumption in the network. In the first step, both fine-grained and workflow applications are scheduled based on their deadlines. In the second step, all the tasks are rescheduled based on their execution costs. Finally, in the third scheduling step, all nodes are reordered according to their power consumption to minimize power consumption without violating the service quality of applications. Algorithm 6 reschedules all tasks based on the computing nodes' energy; it does not matter whether the energy of node j is consumed by virtual machines or by functions executing the workflow and fine-grained workloads. Algorithm 6 ensures energy-efficient scheduling without violating the deadlines and cost of applications in the system. In outline, it applies the Dynamic Voltage Frequency Scaling method to rearrange the nodes according to their power consumption, calculates the power consumption of the nodes based on Equation (14), and schedules all workloads based on Algorithms 4 and 5, repeating the assignment until all workloads have been checked with respect to node energy consumption.
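A hedged sketch of Algorithm 6's intent follows: nodes are ordered by power draw and each task is moved to the lowest-power node that still satisfies its deadline; otherwise the original assignment is kept. The feasibility test used here (workload divided by speed within the deadline) and the node figures are simplifying assumptions, not the paper's exact rescheduling rule.

NODES = [("local", 2.0, 1.0), ("edge", 65.0, 4.0), ("cloud", 250.0, 10.0)]
# (name, power in watts, speed) -- illustrative figures

def energy_aware_reschedule(tasks, plan):
    """tasks: tid -> {'workload': ..., 'deadline': ...}; plan: tid -> node name."""
    by_power = sorted(NODES, key=lambda n: n[1])          # greenest node first
    for tid, info in tasks.items():
        for name, _power, speed in by_power:
            if info["workload"] / speed <= info["deadline"]:
                plan[tid] = name                          # lowest-power feasible node
                break                                     # otherwise keep the original plan
    return plan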
Multi-Objective Deep Graph Convolutional Network-Based Scheme
These days, the usage of deep convolutional networks for graph-structured applications has become extremely popular. As a result, multi-objective decisions based on heterogeneous resources and application parameters can be made efficiently. However, early neural networks could only be applied to regular or Euclidean data, even though much real-world data has a non-Euclidean graph structure. The non-regularity of data structures has driven recent advances in graph neural networks. As a result, graph neural networks have developed different variants in recent years, with Graph Convolutional Networks (GCNs) being one of them; GCNs are also one of the most fundamental graph neural network variants.
The study devises the weighting multi-objective nondominance scheme based on the deep graph convolutional network. Algorithm 7 shows the process of the proposed method with its different steps. The algorithm has three layers: the input layer, the deep convolutional layer, and the output layer. In outline, the model takes the application graph as input with the feature variables x_j for each individual node j; each deep convolutional network layer is a nonlinear function; the workload and function optimization is calculated based on Algorithms 4-6; and the weighted sum of all objectives is required to be better than the existing weights in the different convolutional layers. According to the given scenario in Figure 4, the algorithm performs the following operations. • In the first step, the workload of all applications after initial scheduling is taken as the input; • All objectives have their weights with respect to workloads and resources; • The resources are virtual machines and functions, which are assigned based on their cost function; • The deep convolutional network chooses the best optimal weight for each objective and sums them together; if the new weighted sum is better than the existing one, the multi-objective weight of all objectives is optimal, e.g., Z*; • All types of tasks, such as delay-sensitive, delay-tolerant and security tasks, and their quality of service must be satisfied as defined in Figure 4; • Every 10 min, the multi-objective method is called to optimize each objective function based on the available weights in the network; • If the algorithm finds no further improvement, it terminates.
Performance Evaluation
This section shows the efficiency and effectiveness of the proposed work via simulation results; the simulation results are close to what a real-practice experiment would yield. The performance evaluation part consists of several sub-parts: parameter settings, system implementation, component calibration and discussion of results. The paper explains each sub-part in detail to ensure an easy understanding of the experiment.
Parameter Settings
This subsection shows the experimental setup of the program configuration, languages and computing nodes, as shown in Table 3. All parameters are included in the implementation part, such as programs and algorithms, in the JAVA, Python and YAML languages. Three computing nodes are configured for the proposed architecture: a mobile node (e.g., HTC G17 and Samsung 1997), an edge node (e.g., Intel 5 laptop, AndroidX86 runtime), and a cloud node (e.g., AndroidX86 Amazon). We repeated all experiments 50 times with different parameters. Table 3 describes the simulation parameters of the experiment. Furthermore, we extended the computing node resource specifications into a separate table, Table 4. The main goal of this is to show the computing capability and resource availability of each node in the system. There are three types of resources, namely the mobile node, edge node and cloud node. All nodes are distinguished by their speeds and resource specifications. All resources of the different computing nodes are fixed, and they cannot scale up or down during runtime in the implemented system. Table 4. Heterogeneous node resource specification.
Component Calibration
There are three main layers in the proposed architecture, as shown in Figure 1. However, the application layer and system layer components are included in the calibration to evaluate the performance of the entire system. The features are secure offloading, task sequencing, and task scheduling. In addition, the Relative Percentage Deviation (RPD) was adopted to measure the performance of the aforementioned components when running many types of IoMT workflow tasks in the system. The RPD is measured as in Equation (27), which reflects the overall performance of all applications using distributed computing (e.g., mobile, edge and cloud nodes). Z is the initial scheduling in the system; however, due to the roaming features of applications, the initial scheduling solution can be replaced by the optimal scheduling Z* while searching the solution space. As mentioned above, all solutions are obtained via candidate solutions during global searching with a limited number of iterations. The RPD% is the difference between the initial and best solutions during the entire process.
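The exact form of Equation (27) is not reproduced above; a common Relative Percentage Deviation definition consistent with the description (the difference between the initial solution Z and the best-found solution Z*, expressed as a percentage of the best) is sketched here as an assumption.

def rpd_percent(z_initial, z_best):
    # relative percentage deviation of the initial schedule from the best one
    return 100.0 * (z_initial - z_best) / z_best

print(rpd_percent(z_initial=120.0, z_best=100.0))   # 20.0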
IoMT Workflow Tasks and Fine-Grained Tasks
The study implemented both types of workloads, workflow and fine-grained, in the simulation configuration file. Figure 5 shows the interfaces of the system with the results of the workflow DAG task graph during execution in the system. All tasks are workflow tasks; some have original data, and some share their data for processing. All tasks are constrained by their predecessors and successors in the system.
Workflow Tasks Generator
In this paper, we consider only three types of tasks. All workflow applications are real IoMT applications, which are open source and available at GitHub: https://github.com/OpenIoMeT/Iomet-wiki (accessed on 1 July 2021). Initially, we analyzed all applications as DAG graphs with different types of tasks. The initial application is annotated with notations (e.g., all types of tasks are annotated at design time). After that, we converted the IoMT workflow into a DAG graph, where blue nodes are security tasks (e.g., local tasks), light yellow nodes are edge tasks, and red nodes are remote tasks, and they have their execution times and communication times (e.g., ms and kb) due to precedence constraints.
Discussion of Results
This subsection compares the results of IoMT workflow tasks under the proposed framework and its components with those of existing offloading and scheduling frameworks. The discussion of the component results is given in the following subsections.
Secure Offloading Performance
After the deadline division for each task, the security-aware offloading applies security to the list of security tasks locally on the devices. We implemented fully homomorphic encryption and decryption methods that convert the plaintext of security tasks into ciphertext in the application layer. Then, the offloader engine offloads those tasks to the system to be carried out further. In other words, the ciphertext data of tasks become the inputs of different tasks in the system. Therefore, we measured the behaviour of the offloading method in two environments. The first environment is stable, where there is no risk of hacking or Denial of Service (DoS) attacks; the other environment is unstable, where some chance of DoS exists in the network during offloading. In this case, we compared our proposed secure offloading schemes with the existing best security-aware offloading schemes, i.e., baseline 1 and baseline 2. In baseline 1, an RSA-based encryption method is implemented, which offloads tasks with encrypted data to the server; the server then decrypts the tasks with the key and performs the computations. After the calculation, the server encrypts the tasks again and sends them back to the devices, and the devices then interpret all tasks in their original form. This entire process is risky, since the untrusted cloud must be trusted, and it is not good practice to leave essential data on the server. Figure 6a,b show that the proposed component (e.g., secure offloading) of the SEOS framework outperforms the existing secure offloading techniques in any environment with respect to resources and performance. The main reason behind this is that all existing baseline approaches only consider security and the required resources; however, the proposed secure offloading method encrypts and decrypts all tasks based on their deadlines and the availability of resources. Furthermore, before offloading to any node, we anticipate whether the available network is secure or not. Our approach can handle both stable and unstable conditions because we consider resource utilization, the tasks' QoS and network stability before sending data to the surrogate edge or remote servers. A denial of service (DoS) outbreak happens whenever verifiable applications cannot access their edge node or remote node resources for further execution due to either a cyber attack or a network attack in the system. These nodes may be affected by an attack and unable to respond. A denial of service attack may cost both resources and time even though tasks are encrypted. With this consideration, the proposed secure offloading method, including encryption, decryption and deadlines, detects and anticipates any attack before offloading via network monitoring and surfing profiling at the local device. This may save resources and time during offloading in all kinds of environments. Therefore, Figure 7a-d show that this component of SEOS outperforms all existing approaches, which considered only encryption, decryption and resources without deadlines or DoS awareness, in terms of resource utilization, the deadlines of tasks, and identifying DoS in advance. Figure 8a,b show that the proposed task sequencing rules adopt initial sorting and dynamic sorting to maintain the deadlines of tasks at runtime. Therefore, it is necessary to execute all tasks under their deadlines with a minimum loss of generality.
Task Scheduling
Based on security-efficient offloading and sorting with different rules, task scheduling is the final phase, where all tasks must be completed under precedence and deadline constraints. We set up four flows of IoMT tasks with different numbers of tasks for scheduling. These tasks have different types, as discussed above. The goal of the study is to minimize the makespan of all applications. We consider four various applications with different numbers of tasks. Each application has three different types of tasks and deadlines with constraint rules. Some tasks are executed in parallel and some are performed in sequential order, depending on the application structure. We implemented the Heterogeneous Earliest Finish Time (HEFT) and genetic algorithm (GA) as the baseline 1 framework, and the Dynamic Heterogeneous Earliest Finish Time (DHEFT) and Particle Swarm Optimization framework as baseline 2. These frameworks have been widely investigated for traditional and mobile workflow applications in the literature. These frameworks offer different components to run mobile workflow applications in additional steps, such as task sequencing and scheduling. We ran all applications with the different frameworks (e.g., SEOS, baseline 1 and baseline 2); the results of all applications with their objectives can be seen in Figure 9a-d. Each application has different requirements, such as security, latency, and resources, to run its tasks. However, SEOS outperforms all existing frameworks in terms of the makespans and requirements of all applications. The main reason for this is that all existing algorithm frameworks have shortcomings in the encryption and decryption process. They consume many resources and much time to run different types of tasks (Figure 10): (i) All tasks are encrypted locally with a shared key and offloaded to the surrogate server for further execution. The server decrypts all tasks with the shared key and applies the computation on the plaintext instead of the ciphertext. After the calculation, the server again encrypts the tasks into ciphertext and sends back the results. Furthermore, the local devices decrypt the result into plaintext with the key. In this way, authentication, time and resources are challenging and are used to an excessive extent. (ii) All existing studies partition the application into different types of tasks at runtime based on various parameters (e.g., deadline, availability of resources, network contexts). However, due to the dynamic environment and load-balancing situation in computing, these techniques incur extra running time and waste of resources. (iii) The deadline misses and failure ratio of tasks in the system become very high. Therefore, the proposed SEOS partitions the application at the design level according to the security, latency and resource requirements of all applications efficiently and runs them on the heterogeneous computing nodes during execution.
Conclusions
This work proposed a new healthcare architecture for workflow applications based on heterogeneous computing nodes, consisting of different layers: an application layer, a management layer, and a resource layer. The goal is to minimize the makespan of all applications. Based on these layers, the work proposed the secure offloading-efficient task scheduling (SEOS) algorithm framework, which includes the deadline division method, task sequencing rules, a homomorphic security scheme, initial scheduling, and a variable neighborhood searching method. The performance evaluation results show that the proposed plans outperform all existing baseline approaches for healthcare applications in terms of makespan. The discussion of the results showed that the proposed idea and the SEOS framework outperformed existing methods for all IoMT applications on heterogeneous computing nodes. The discussion of results and the comparison were made via different components based on the well-known HSD and ANOVA techniques. However, there are a few things to be improved in the future.
This work did not consider mobility-aware offloading and scheduling for IoMT workflows in a heterogeneous computing node environment. Runtime uncertainty in the network contexts, load balancing, and task failure situations will be future work of our study. We will design a deep reinforcement learning architecture and framework, which will include policy methods, deep Q-learning, and different methods. | 10,201.8 | 2021-08-17T00:00:00.000 | [
"Medicine",
"Engineering",
"Computer Science",
"Environmental Science"
] |
Current Progress and Future Perspectives in Contact and Releasing-Type Antimicrobial Coatings of Orthopaedic Implants: A Systematic Review Analysis Emanated from In Vitro and In Vivo Models
Background: Despite the expanding use of orthopedic devices and the application of strict pre- and postoperative protocols, the elimination of postoperative implant-related infections remains a challenge. Objectives: To identify and assess the in vitro and in vivo properties of antimicrobial-, silver- and iodine-based implants, as well as to present novel approaches to surface modifications of orthopedic implants. Methods: A systematic computer-based review on the development of these implants, on PubMed and Web of Science databases, was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Results: Overall, 31 in vitro and 40 in vivo entries were evaluated. Regarding the in vitro studies, antimicrobial-based coatings were assessed in 12 entries, silver-based coatings in 10, iodine-based in 1, and novel-applied coating technologies in 8 entries. Regarding the in vivo studies, antimicrobial coatings were evaluated in 23 entries, silver-coated implants in 12, and iodine-coated in 1 entry, respectively. The application of novel coatings was studied in the rest of the cases (4). Antimicrobial efficacy was examined using different bacterial strains, and osseointegration ability and biocompatibility were examined in eukaryotic cells and different animal models, including rats, rabbits, and sheep. Conclusions: Assessment of both in vivo and in vitro studies revealed a wide antimicrobial spectrum of the coated implants, related to reduced bacterial growth, inhibition of biofilm formation, and unaffected or enhanced osseointegration, emphasizing the importance of the application of surface modification techniques as an alternative for the treatment of orthopedic implant infections in the clinical settings.
Introduction
Postoperative implant-related infection, following bone defect and primary or revision total joint arthroplasties, is a reality and remains a challenge in orthopedics with devastating clinical consequences despite the application of strict protocols of aseptic techniques and perioperative antibiotics [1,2]. As surgery techniques and orthopedic implants are constantly being optimized, so is the need for these implants by patients, and hence, so is the possibility of infection occurrence. Implant devices are, therefore, not a panacea; their use is not devoid of issues and their introduction bears the risk of them being colonized by bacteria, which will ultimately lead to the development of implant-related infection [3]. Implant removal for the elimination of infection not only impacts patients' health and quality of life but also poses a huge financial burden, which relates to the repetition of surgical procedures, long-term hospitalization and medication costs, visits to physicians, as well as time off work [2,[4][5][6]. Implant-related infection is immensely difficult to avoid or treat; it may impede the healing process and result in implant failure, chronic osteomyelitis, sepsis, and even death [4,7].
For the most part, difficulty in the treatment of infection rests on several factors, such as bacterial affinity for the implants' surface, adhesion, and biofilm formation, which is a crucial step, and the development of antimicrobial resistance [1]. The process of biofilm formation, which helps many bacterial species to adapt to various stresses, comprises cellular attachment (reversible and irreversible) to surfaces, microcolony formation, maturation, and dispersion of single cells from the biofilm. Biofilm formation decreases sensitivity to host immune defenses, circumvents systemic antimicrobial regimens, and increases resistance to antimicrobials [5,8,9]. Staphylococcus aureus, Staphylococcus epidermidis, Pseudomonas aeruginosa, as well as methicillin-resistant S. aureus (MRSA) are notorious for the formation of biofilms, which are implicated in implant failure.
Systemic application of antimicrobials, as the first-line treatment strategy, is associated with poor site accessibility and increased toxicity [7]. The preclinical use of antimicrobials for the prophylaxis of dreaded implant-associated infections has been reported for a long time [1,5,7,10,11]. A plethora of surface modifications by coating the implants' surfaces with appropriate molecules have been developed, and a subset will be reviewed here.
The antimicrobial activity of these implants is mainly based on drug-release or non-release methods. Non-release methods refer to materials that can resist the adhesion of microbes and prevent access to the coated material and biofilm formation [12]. To this end, the ideal coating would need to eradicate bacterial growth, inhibit adhesion and biofilm formation, and then facilitate bone formation. It would, therefore, have to achieve a balance between cytotoxicity and antimicrobial efficacy and hence support the adhesion of bone-related cells (e.g., osteoblasts) while inhibiting bacterial adhesion. In the case of the Intraosseous Amputation Prosthesis, achievement of tissue integration requires eukaryotic rather than bacterial cells to win the "race for the surface", keeping in mind that bacteria may as well reside in the surrounding tissue, a bit further away from the implant surface [11,13]. Another important aspect of the delivery system is release kinetics, both in vitro and in vivo. These aspects are reflected in the in vitro and in vivo assessments of the reported categories of implants in this review.
Development and/or identification of biomaterials that combine both antimicrobial and osteogenic activities is a promising approach for infected bone repair, with a focus on the interface of the implant and the surrounding tissue. Notwithstanding the nonspecific effects, poor release kinetics, and toxicity profiles, a lot of effort is now concentrated on modifying the implants' properties, rendering them less susceptible to infections [1,7]. Amongst the coatings developed during the last years, antimicrobial-coated and silver-coated biomaterials have been extensively studied [2,10,14]. In addition, iodine and a variety of other coatings (including metal-, vitamin E (VE)-, and antimicrobial peptide (AMPep)-based coatings) have gained attention. Coatings can be classified as active or passive [10], based on whether or not they allow release of the antimicrobial agents, with the majority of the ones reviewed here being passive.
Although clinical translation has been relatively limited, there are antimicrobial implant coatings available in clinics nowadays [15], and they have so far shown promise and fewer drawbacks. Nonetheless, there is a pressing need for more knowledge regarding the in vitro and in vivo performance of orthopedic-related coatings. The aim of this study is the qualitative, systematic review of the in vitro and in vivo properties of antimicrobial-, silver-, iodine-based, and novel-technology released and contact-type orthopedic implant coatings in order to point out the possible use of these materials in clinical settings and the need to validate any promising new tools to be introduced in the shield against infection.
Protocol
The protocol of the present systematic review was registered in the PROSPERO international register of systematic reviews (registration number: CRD42023444527).
Research Strategy
A systematic computer-based literature review search with predefined criteria was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [16] in the following databases: PubMed (1947 to 10 August 2023) and Web of Science (1900 to 10 August 2023). The search strategy used a combination of the following terms: "coated implant infection" [All Fields] AND "bone" [All Fields] AND "orthopaedics" [All Fields] AND "in vitro and in vivo" [All Fields] AND "surface modification" [All Fields].
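For reproducibility, a search of this kind can also be scripted against the PubMed API. The sketch below is a minimal illustration using Biopython's Entrez module; the contact e-mail, retmax value, and exact field tags are placeholders and may differ from the interface actually used by the reviewers.

```python
from Bio import Entrez  # Biopython's interface to NCBI E-utilities

# NCBI requires a contact e-mail; this address is a placeholder.
Entrez.email = "reviewer@example.org"

# Combination of search terms reported in the review (field tags are illustrative).
query = ('"coated implant infection"[All Fields] AND "bone"[All Fields] '
         'AND "orthopaedics"[All Fields] AND "in vitro and in vivo"[All Fields] '
         'AND "surface modification"[All Fields]')

# Restrict to records entered up to the review's search cut-off date.
handle = Entrez.esearch(db="pubmed", term=query, retmax=5000,
                        mindate="1947", maxdate="2023/08/10", datetype="pdat")
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PubMed IDs
```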
The entire electronic literature search was conducted independently by two authors (A.K. and E.P.) and an experienced librarian. Moreover, the above authors independently screened the titles and abstracts to identify relevant studies of outcomes and periprosthetic infection complications after the application of antimicrobial coatings. If there was a disagreement between them, the final decision was made by the senior authors (P.J.P. and O.D.S.).
Inclusion Criteria and Study Selection
Studies that examined the outcome of modification in prosthetic surfaces for prophylactic effects against infection in preclinical settings were included in our systematic review. The eligibility criteria were defined according to the acronym PICOS (Population, Intervention, Comparison, Outcome and Study design) such that (P): animals from all species and sexes; (I): application of contact and releasing-type antimicrobial-coated implants; (C): control group without application of coating techniques; (O): studies where the outcome was convincingly and clearly presented; (S): studies that examined the efficacy of the coating techniques in specific micro-organisms compared to the control group. Additional inclusion criteria included (a) studies written in the English language and (b) experimental studies concerning the effectiveness of contact and releasing-type antimicrobial-coated implants in vitro. Contact and releasing-type implant coatings for joint or long-bone applications by any biological or chemical agent were selected. Only full-text articles were eligible for our study. There were no publication date limitations set.
Research that did not include comparative results or was written in a language other than English was excluded. Case reports, reviews, letters to the editor, expert opinion articles, or book chapters with insufficient details about the type of surface modification, the experimental outcome regarding infection rates, osseointegration, biocompatibility, and toxicity effects, or studies with non-obtainable data were excluded. Entries with spinal-related implants, referred to as composites, bars, cones, discs, or cylinder plugs, were excluded. All clinical studies were also excluded.
Data Extraction
Two reviewers (A.K. and E.P.) examined all the identified studies and extracted information using a predetermined form. Data from each study were assembled in a Microsoft Excel spreadsheet and classified per orthopedic implant, type of coated prosthesis, cell lines, species of the animal, bacterial strains, and animal model characteristics. The presence of duplicate studies was examined using Endnote 20 software (Clarivate Analytics, Philadelphia, PA, USA).
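A minimal sketch of such an extraction form is shown below, assuming pandas for the spreadsheet handling; the column names mirror the categories listed above, but the example record and file name are invented for illustration (the review itself used Microsoft Excel and EndNote 20).

```python
import pandas as pd

# Illustrative extraction form; headers follow the categories described in the text.
columns = ["study_id", "orthopedic_implant", "coated_prosthesis_type",
           "cell_lines", "animal_species", "bacterial_strains", "animal_model"]

records = pd.DataFrame([
    {"study_id": "Smith2020", "orthopedic_implant": "Ti nail",
     "coated_prosthesis_type": "Ag nanoparticle", "cell_lines": "MC3T3-E1",
     "animal_species": "rat", "bacterial_strains": "S. aureus",
     "animal_model": "femoral osteomyelitis"},
], columns=columns)

# Flag potential duplicate entries (the review handled this step in EndNote).
duplicates = records[records.duplicated(subset=["study_id"], keep=False)]

# Writing to .xlsx requires the openpyxl package to be installed.
records.to_excel("extraction_form.xlsx", index=False)
```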
Quality Assessment
Three reviewers (A.K., E.P., and E.V.) independently evaluated the quality of the included studies. Since different types of studies were included, the 10-scale CAMARADES (The Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies) [17] and 12-score QUIN [18] quality assessment tools for in vivo and in vitro studies were applied, respectively. CAMARADES and QUIN scores greater than 5 and 12, respectively, were considered to indicate good quality. The CAMARADES score, which is an updated score based on the STAIR score, assesses the quality of animal studies using the following criteria: (a) peer-reviewed publication, (b) statement of control of temperature, (c) random allocation to treatment or control, (d) allocation concealment, (e) blinded evaluation of the published outcome, (f) use of anesthetic without significant alteration of results, (g) appropriate animal model, such as the assessment of the antimicrobial efficacy, osteointegration ability and biocompatibility, (h) sample size calculation, (i) compliance with animal welfare rules and regulations, and (j) statement of potential conflict of interests. The QUIN score is a tool to assess the risk of bias of in vitro studies; it originates from surveys of medical experts who identified the key points of a properly structured study, which were then verified by other colleagues. The QUIN criteria were (a) clarified aims/objectives, (b) explained sample size calculation, (c) detailed explanation of the sampling method, (d) details of the comparison group, (e) detailed explanation of the methodology, (f) operator details, (g) randomization, (h) methods of outcome measurement, (i) blinding, (j) statistical analysis, and (k) presentation of results.
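As a rough illustration of how such a checklist translates into a score, the sketch below tallies the CAMARADES criteria and applies the >5 threshold stated above; the criterion labels and example answers are invented for illustration, not taken from any included study.

```python
# Minimal sketch of the CAMARADES tally described above (one point per criterion,
# 10-point scale, >5 regarded as good quality in this review).
CAMARADES_ITEMS = ["peer_reviewed", "temperature_control", "randomisation",
                   "allocation_concealment", "blinded_outcome", "anaesthetic",
                   "appropriate_model", "sample_size_calc", "welfare_compliance",
                   "conflict_statement"]

def camarades_score(checklist: dict) -> int:
    """Count how many criteria a study satisfies."""
    return sum(1 for item in CAMARADES_ITEMS if checklist.get(item, False))

example = {item: True for item in CAMARADES_ITEMS[:7]}  # hypothetical study meeting 7 of 10
score = camarades_score(example)
print(score, "-> good quality" if score > 5 else "-> low quality")
```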
Search Results
There were 2423 studies identified from the initial search. After evaluation of the titles and abstracts, we excluded 1679 studies and reviewed the full texts of the remaining 144 studies. Fifteen studies were excluded based on the review of the full text. After reviewing the 129 remaining studies and their bibliographies, 71 entries were included in this systematic review (Figure 1). Entries refer to in vitro and in vivo observations; one study (publication) could have two entries corresponding to in vitro and in vivo results.
Bacterial Strains and Antimicrobial Effectiveness
Assessment of the antimicrobial efficacy of the reported coatings was performed against a variety of bacteria (Tables 1-3), with more prominence observed for S. aureus (in 24 entries), followed by S. epidermidis (in eight studies), P. aeruginosa (in five studies), MRSA (in four studies), E. coli (in five studies), and B. subtilis, doxycycline (doxy)-susceptible MSSA, and methicillin-resistant S. epidermidis in one study each. In the majority of studies, one pathogen was used as a means of infection, while there were seven studies where two different bacteria were used and five studies where three or more were used in different assays (Tables 1-3). As with the in vitro cases, the majority of the in vivo studies examined the antimicrobial effects on S. aureus (26 of them). Other Gram-positive bacteria under investigation included MRSA (eight cases) and S. epidermidis (three cases), while Gram-negative bacteria, including P. aeruginosa (four cases) and E. coli (five cases), were also used. Finally, a combination of two or more bacteria has been reported in 12 studies. Infections during surgery or postoperatively are characterized by bacterial adhesion, subsequent colonization, and, ultimately, the formation of biofilms. Antibacterial efficacy (effect on bacterial growth, adhesion, biofilm formation, and even on the occurrence of resistance) of these coatings was assessed in vitro with standard microbiological assays, such as the colony-counting method, inhibition zone assay, and microscopic techniques. In many cases, antibacterial activity was measured after the release of the agent in question from the coating or following the adhesion of bacteria onto the implant. The antimicrobial activity of the majority of these coatings has also been evaluated in vivo (Tables 4 and 5); the studies that are accompanied by in vivo observations are marked by an asterisk in Tables 1-3.
Osteointegration Ability and Biocompatibility
For implants that are intended for long-term use, besides antimicrobial efficacy, osseointegration ability is highly desired. Interestingly, a well-osseointegrated implant is less susceptible to bacterial infection [11]. To assess biocompatibility, a plethora of relevant bone-related cell lines were used (Tables 1-3), with the most prominent being the murine osteoblast cell line MC3T3-E1 (in eight studies), the human bone osteosarcoma cell lines MG-63 (in three studies) and Saos-2 (in two studies), as well as other cell lines, including primary osteoblasts, human microvascular endothelial cells (HMVEC), fibroblasts, and mesenchymal stem cells (MSCs). No adverse effect on biocompatibility was observed in any of the in vivo studies, as summarized in Table 1.
Viability and adhesion of human cells are important for osseointegration and bone repair. The effect of coatings on the morphology, viability, and adhesion of cells was assessed (Tables 1-3) via proliferation/cytotoxicity assays and microscopic evaluation of the cells. Osteogenic differentiation and osteogenesis were also widely estimated by assessing the activity of alkaline phosphatase (ALP), quantifying osteogenesis-related genes, and using the mineralization assay and deposition of calcium nodules. Among the genes that are central to bone turnover are ALP, a marker for early bone differentiation/maturation; osteocalcin (OC), a marker of late-phase osteogenic differentiation and bone mineralization; collagen-I (Col-1), an abundant component of the extracellular matrix; and runt-related transcription factor 2 (runx2), an early stage osteogenetic transcription factor. In vivo, osseointegration and osteogenesis have been summarized in Tables 4 and 5.
Quality Assessment
The CAMARADES and QUIN assessment tools used to evaluate the included experimental studies indicated that they were of good overall quality.
Discussion
Although several strategies, such as aseptic techniques and the use of antibiotics, have predominated in the current prophylaxis of infection in orthopedic interventions, the prevalence of periprosthetic infections in orthopedic surgery remains high. Given the promising clinical results of coating techniques in the prevention of implant infections, further research building on novel in vitro and in vivo findings may provide not only an increased understanding of the currently applied techniques but also novel therapeutic approaches for biofilm reduction [12]. To the best of our knowledge, this is the first systematic review of released and contact-type coating techniques that analyzed the results of both in vitro and in vivo studies, providing robust evidence about the antimicrobial and osteoinductive activities along with the biocompatibility of these materials.
Gentamicin has a broad bactericidal spectrum and appears to be non-toxic and biocompatible [7]. Emerging resistance to gentamicin, however, poses a serious problem [21]. VA is a glycopeptide with a broad antimicrobial spectrum, which extends to methicillin-resistant strains [19]. Doxy is a broad-spectrum antibiotic, and its low rate of resistance (even for MRSA) is documented. It is less nephrotoxic and enters host cells more efficiently than gentamicin [20]. Linezolid, a synthetic antibiotic, has a low potential for developing intrinsic resistance and does not show cross-resistance to other systemically administered antibiotics. Linezolid has 100% oral bioavailability, good pharmacokinetics, and good osteo-articular tissue penetration [22,65-69]. The selected MR-5 lytic phage, which is a broad-spectrum bacteriophage, represents a simple, inexpensive, and safe tool. As transduction of virulence or resistance genes is minimal, phages can self-multiply in the tissue surrounding the implants for as long as the bacteria are present, without having adverse effects or causing tissue toxicity [8]. The benefits of the combination of phage and linezolid were supported by the biocompatibility of hydroxypropyl methylcellulose (HPMC). Co-release of the antibiotic amikacin and the biofilm inhibitor C2DA has been suggested to have synergistic effects against a variety of pathogens. The added value of the presence of C2DA is that it lowers the amount of antibiotic that needs to be loaded onto the coating [20]. An envisaged sequence of action would have amikacin act first by killing bacteria, while C2DA would work afterward by delaying/preventing bacterial adhesion and biofilm formation, allowing time for the antibiotics or the immune system to respond [20]. In terms of the phosphatidylcholine (PC) coating, it is envisaged that PC liposomes containing amikacin and C2DA can be formed following erosion of PC from the coating; these liposomes will extend the elution period [20].
RFP and fusidic acid both have broad spectra of effect, including against biofilm-producing bacteria. They are complementary to each other in terms of bactericidal and bacteriostatic actions and together can minimize the risk of occurrence of resistance. They can also penetrate the tissue and exert their effect around the bone and in the surrounding tissue. Octenidine and Irgasan also have broad spectra of action. The antiseptics act faster than the antibiotics because they directly attack the bacterial cell membrane, in contrast to the inhibition of bacterial DNA-dependent RNA polymerase and of protein synthesis caused by RFP and fusidic acid, respectively. The poly-L-lactide (PLLA) matrix ensured mechanical stability, while its gradual degradation was essential for the release of the antimicrobial substances incorporated in it [21]. Moxifloxacin, as used in sol-gel coatings, provides anti-infective activity both in vivo and in vitro on Ti implants [27]. This activity encompasses inhibition of biofilm formation and treatment of mature biofilm. Chlorhexidine (CHX), one of the frequently used antiseptics, has a broad spectrum of activity. The inclusion of dopamine increases adhesion to metallic substrates [23]. Finally, the use of fosfomycin seems not to be effective regarding bacterial eradication and the prophylaxis of biofilm formation [57]. However, an important drawback of antimicrobial-based delivery systems is the continuous decrease in the antimicrobial's concentration. In addition, as the development of bacterial resistance is a complication of antibiotic therapy, different coatings exhibiting antimicrobial properties have been developed and are presented in Tables 1 and 2.
Controlled delivery of antimicrobial-based coatings through transfer systems with high encapsulation ability has already been used to enhance the antimicrobial capacity. Mesoporous silica nanoparticles (MSNs) have been tested in order to encapsulate CHX, and the combination was then incorporated in polydimethylsiloxane (PDMS), yielding a thin coating film investigated for possible medical and dental applications. The results of the in vitro study revealed high biocompatibility and antibacterial activity of the combined coating substance, without accompanying toxicity [70]. Finally, the results of this in vitro study and other similar studies seem to be promising regarding the development of new coating systems combining encapsulation technology to achieve synergistic antibacterial properties [71].
Evaluation of Ag-Based Coatings
All silver (Ag)-based coatings, irrespective of whether they were bare or as nanoparticles (NPs), exhibited antimicrobial activity, with the majority reporting inhibition of bacterial growth, adhesion, and biofilm formation [8,11,30,31,34,35]. All but one study examined biocompatibility and reported a lack of any negative effect on cell morphology and adhesion, as well as viability/proliferation. Cytotoxicity was observed only at 10 mM and >11.36% silver [34] and in the case of AgNTs/HA lacking chitosan [23], which will be discussed later. In terms of osseointegration, three studies confirmed the promotion of osteogenic differentiation and osteogenesis [4,31,35]. Ag is widely used because it exerts broad-spectrum antimicrobial activity against Gram-positive and -negative bacteria, including antibiotic-resistant strains, fungi, protozoa, and certain viruses [33]. In fact, the bactericidal properties of Ag are well-established [41]. Ag is inert but ionizes to Ag+ in the presence of body fluids. AgNPs also release Ag+. Ag+ is known to confer effective antibacterial activity in vitro and in vivo without allowing the development of resistance. Of importance is its ability to prevent biofilm formation. The antibacterial mechanism of Ag+ consists of structural changes to the bacterial cell wall, increased permeability, damage to bacterial proteins, DNA, and RNA, disruption of metabolism, inhibition of the bacterial respiratory chain, and, ultimately, cell death [31,34]. Nanolayer Ag has the advantage of preventing the release of potentially very toxic quantities of silver whilst retaining its antimicrobial activity [30,31]. When AgNPs are combined with polydopamine (PDA), which itself has antimicrobial activity, better antibacterial efficacy is envisaged [4,68,69,72,73]. In addition, PDA biocompatibility and adhesive properties render it a useful coating [31]. Chitosan (CS), a biopolymer with complexing and chelating properties, allows the sustainable release of Ag+ from the coating [31]. The absence of CS from AgNTs/HA could be responsible for the reported cytotoxicity [33]. Immobilization of Ag+ via IP6 chelation retains antibacterial efficacy [27]. In Svensson et al. (2013), the antibacterial mode of action was not fully elucidated; it did not seem to be dependent on release but rather on the nanostructure of the coating itself [11]. However, according to several in vivo and limited clinical studies [15], the application of Ag-coated implants (Ag+) has proven to be well-tolerated without toxicity or related side effects [23,33,41].
Evaluation of Iodine-Based and Other Novel Coatings
The paper on the iodine-based coating examined solely the antibacterial effect of iodine and reported inhibition of adhesion and biofilm formation. Povidone-iodine is a broad-spectrum (including viruses and fungi) antimicrobial agent with a low propensity for developing resistance or causing toxicity [37].
In the diverse category of novel coatings, almost all showed considerable antibacterial activity; reports on Cu [9], Zn [42], VE [40], and RP [39] demonstrated inhibition of adhesion and biofilm formation. Exceptions were reported for the blended VE implants, where intra-species differences were noticed [40], and for TiCuN + BONIT® [9]. Considering biocompatibility, most coatings were shown not to have any effect on cell viability/proliferation (in some cases, it was even found to be increased) [39,43]. Interestingly, moderate compatibility and decreased viability/proliferation were found for TiCuN and TiCuN + BONIT® [9], 4xCu-TiO2 [27], NT-Zn3h [30], and HHC36 AMPep at >200 µg/mL [44]. As far as osseointegration was concerned, three studies reported enhanced osteogenic differentiation and osteogenesis [33,34,36]. The antibacterial efficacy and osteoinductive ability of the aforementioned coatings have also been evaluated in vivo (Table 5), complementing and strengthening the in vitro findings and suggesting that these properties are due to and not impaired by the respective coatings. Specifically, reduced bacterial growth, biofilm formation, and inflammation have been noted by several studies, while osseointegration has either been unaffected [5,7,10,60] or enhanced [31,43,46,53,62]. Moreover, satisfying or excellent biocompatibility has been reported, too [28,29,39].
Selenium NPs damage the bacterial membrane of MRSA, thus inducing rapid cell lysis. They are stable due to their inorganic nature and can easily be immobilized on implant surfaces whilst retaining their activity [38]. The advantages of the RP-IR780-RGDC titanium implant are as follows: RP and its degradation products are non-toxic, and the small amount of singlet O2 produced following irradiation with an 808 nm laser seems to enhance the susceptibility of the bacteria to heat, increase the bacterial membrane's permeability, and eliminate biofilm. In addition, RGDC seems to improve adhesion and proliferation [39]. VE and its antioxidant properties may be a key point affecting bacterial adhesive ability and biofilm formation. Modification of the properties of UHMWPE by VE showcased a reduction in adherence of some bacterial strains, with the intra-species differences, however, suggesting the need for more research in order to fully appreciate the added advantage of VE [40]. A more recent in vivo study showcased that VE phosphate could enhance bone stimulation and deposition [48]. There is increasing interest in determining the antimicrobial and osseointegrative properties of VE as a coating for orthopedic or dental implants. Heavy metal ions, such as Cu ions, can become toxic. Cu can have a bacteriolytic effect and stop bacteria from replicating [9]. The low release of Cu observed for TiCuN + BONIT® could be responsible for the lack of antibacterial properties reported [9]. Cu seems to be effective on planktonic bacteria and on bacteria forming a biofilm while presenting low toxicity; this activity is based on the inhibition of biofilm formation by favouring the osteoblasts in the competition for the implants' surface [9]. The higher affinity of antimicrobial peptides (AMPeps) for bacterial membranes renders them suitable as antimicrobial agents with low toxicity: the peptides form electrostatic interactions with the anionic phospholipid groups of the bacterial membrane, then disrupt the membrane and cause bacterial death [43,44,73]. Covalent immobilization of MBD-14 on SP is believed to prevent its rapid degradation and ensure its stability. This association, in combination with the porous matrix, might be responsible for the antibacterial activity observed [44]. Similarly, the antimicrobial effect of zinc (Zn) is mainly expressed by Zn complexes and ZnO NPs [74]. Zinc complexes express antifungal activity, whereas ZnO NPs exhibit antimicrobial activity through two different mechanisms: the release of reactive oxygen species (a photocatalytic process) or of ZnO nanoparticles, which lead to the production of intracellular ROS, inducing damage to the cells.
One of the most important limitations of all the nanoparticles used is the lack of available data from in vivo studies with long-term results summarizing the use of these types of implants in animal models. The use of a variety of implants in different animal models provides heterogeneous results, which need to be further specified in future studies.
The presented in vitro and in vivo results of the included studies strongly support the application of conventional and novel antimicrobial surface modifications of implants by orthopedic physicians in the management of postoperative implant infections. However, our systematic review has several limitations. Although 73 entries of high quality were included in this review, the studies' designs and methods were heterogeneous, as different animal models were used and no standardized methods were applied in order to evaluate the reproducibility of the outcomes. Additionally, there are some novel coating techniques that have not been tested in vivo. The lack of experience in clinical settings raises concerns about the long-term results of these implants and the growth of multidrug-resistant micro-organisms as a result of their clinical use. Finally, a language bias could be present, as only studies written in English were reviewed.
Conclusions
Assessment of both in vivo and in vitro studies revealed a wide antimicrobial spectrum of the coated implants under investigation, related to inhibition of biofilm formation and unaffected or enhanced osteointegration, as expressed through the attachment and proliferation of various cells on the implant surface. Moreover, the use of these implants was often not related to elevated toxicity levels. Taking into account the known limitations associated with the use of different types of coated implants, they can be regarded as promising candidates for the efficient treatment of implant-related infections. Results from in vitro studies involving both novel coatings and the use of encapsulation technology could aid in the design of effective antibacterial coating materials with high biocompatibility and non-toxicity. Finally, these outcomes should be further studied and validated through clinical trials before use in clinical practice in the future.
Figure 1 .
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart for seeking and identifying included studies.
Figure 2 .
Figure 2. Titanium nitride (TiN)-coated implants for total knee arthroplasty (A) composed of the femoral (B), tibial (C) components, and the polyethylene insert (D) displaying significant anti-infective activity and excellent biocompatibility linked to controlled ion release and long-term chemical stability.
Figure 3 .
Figure 3. Silver-coated femoral stem (A), titanium nitride (TiN) (B), and vitamin E-coated (C) femoral heads applied in patients after a two-stage revision for infected total hip replacement.(D) custom-made titanium nitride (TiN)-coated implant fabricated with 3D printing technique for the replacement of the calcaneus after complex osteomyelitis.
Table 1 .
In vitro studies with antimicrobial-based coatings.
Table 2 .
In vitro studies with silver and iodine-based coatings.
Table 3 .
In vitro studies with novel coating techniques.
Table 4 .
In vivo research data of antibiotic-coated internal fixation and prostheses implants.
Table 5 .
In vivo research data of internal fixation and prostheses implants coated with silver and novel modifications.
| 6,920.2 | 2024-03-26T00:00:00.000 | ["Medicine", "Materials Science"] |
Mechanical Behavior Probing multi-scale mechanics of peripheral nerve collagen and myelin by X-ray diffraction
Peripheral nerves are continuously subjected to mechanical forces, both during everyday movement and as a result of traumatic events. Current mechanical models focus on explaining the macroscopic behaviour of the tissue, but do not investigate how tissue strain translates to deformations at the microstructural level. Predicting the effect of macro-scale loading can help explain changes in nerve function and suggest new strategies for prevention and therapy. The aim of this study was to determine the relationship between macroscopic tensile loading and micro-scale deformation in structures thought to be mechanically active in peripheral nerves: the myelin sheath enveloping axons, and axially aligned epineurial collagen fibrils. The microstructure was probed using X-ray diffraction during in situ tensile loading, measuring the micro-scale deformation in collagen and myelin, combined with high-definition macroscopic video extensiometry. At a tissue level, tensile loading elongates nerves axially, whilst simultaneously compressing circumferentially. The non-linear behaviour observed in both directions is evidence, circumferentially, that the nerve core components have the ability to rearrange before bearing load and, axially, of a recruitment process in epineurial collagen. At the molecular level, axially aligned epineurial collagen fibrils are strained, whilst the myelin sheath enveloping axons is compressed circumferentially. During induced compression, the myelin sheath shows high circumferential stiffness, indicating a possible role in mechanical protection of axons. The myelin sheath is deformed from low loads, despite the non-linearity of whole tissue compression, indicating more than one mechanism contributing to myelin compression. Epineurial collagen shows similar load-bearing characteristics to those of other collagenous connective tissues. This new microstructural knowledge is key to understand peripheral nerve mechanical behaviour, and will support new regenerative strategies for traumatic and repetitive injury.
Introduction
Peripheral nerves are continuously subjected to mechanical forces, elongation, and compression during everyday movement, without suffering functional losses and damage. Traumatic events and inappropriate continuous mechanical loading, however, are associated with common disabling and painful entrapment, overstretch, or compression neuropathies. Carpal tunnel syndrome, for example, has a prevalence in the United Kingdom of 5-16% annually, varying by age and gender group, with decompression surgery provided for 43-74 people per 100,000, annually (Aroori and Spence, 2008;Burke, 2000). Erb's palsy, caused by excessive stretching of infant heads and arms during birth, induces loss of sensation and abnormal motor function in 0.1% of births in the US (Gilbert et al., 1999). In sciatic nerves and brachial plexus, stretch due to trauma and abnormal limb positioning during operations are widespread causes of debilitating iatrogenic injury, both temporary and chronic (Lalkhen and Bhatia, 2012). Mechanical tension has also been proposed as a regenerative method, with mild stretch inducing axonal and whole-nerve elongation and growth, although the multi-scale effects of this elongation have not been studied (Pfister et al., 2004;Chuang et al., 2013;Saijilafu et al., 2008). A better understanding of the multi-scale link between macroscopic loading and loss of function in peripheral nerve is required for effective prevention and treatment of neuropathies, and to explore tension as a strategy for injury prevention and regeneration (Bueno and Shah, 2008).
Multiple concurrent factors cause functional alterations in peripheral nerves. Occlusion of nerve blood supply, leading to hypoxia, occurs at macroscopic strains above 15% (Ogata and Naito, 1986). A reduction in electrical conduction has also been observed as a result of stretch, with Compound Action Potential magnitude decreasing with increasing strain (Li and Shi, 2007;Wall et al., 1992), and completely subsiding at strains of 5-20% (Li and Shi, 2007;Takai et al., 2002). At a cellular level, strains above 10% have been shown to affect axonal transport, responsible for motility of cytoskeletal elements, energy production, and growth factor trafficking (Ikeda et al., 2000;Aomura et al., 2016), and AFM studies on single axons showed that the compression required to block axonal transport was variable, between 65 and 540 Pa (Magdesian et al., 2012). Furthermore, voltage-gated sodium channels clustered at Nodes of Ranvier, fundamental for saltatory impulse conduction, have been shown to disperse with applied nerve elongation (Ichimura et al., 2005). Axonal tensile strains of 5% can cause growth cone collapse (Yap et al., 2017), membrane permeabilisation (Geddes et al., 2003), and morphological changes in neurons (Kilinc et al., 2008), also potentially leading to loss of function. The mechanical properties of peripheral nerve tissue derive from its complex structural organisation. Substructures in peripheral nerves include the endoneurium, a loose structure of collagenous channels in which myelinated and unmyelinated axons are embedded, surrounded by the perineurium, made up of multiple layers of transversely aligned lamellar collagen. The epineurium, a thick layer of densely packed collagen fibres, envelopes the whole nerve in an axially aligned pattern, showing fibril waviness and crimp (Fig. 1) (Ushiki and Ide, 1990). The properties of these collagenous substructures have been shown to be similar to those of tendon (Mason and Phillips, 2011) and either or both the epineurium and perineurium have been described as the main load bearing structures (Sunderland and Bradley, 1961;Haftek, 1970;Rydevik et al., 1990;Tillett et al., 2004). Here, we analyse the partitioning of strain between whole tissue and collagen molecules, to compare the mechanical properties of axially aligned epineurial collagen to those of other load-bearing collagenous tissues such as tendon.
Mechanically, peripheral nerves have been modelled as concentric, two-layer composites: a more compliant, water-rich core representing endoneurium, the inner part of the perineurium, and myelinated and unmyelinated axons, and a stiffer outer sheath representing the epineurium and the outer perineurium, connected by a sliding interface layer (Tillett et al., 2004;Georgeu et al., 2005;Walbeehm et al., 2004). Independent mechanical testing characterises the core as significantly softer and more compliant than the sheath, and indicates that, during tensile loading of the nerve, the outer sheath applies a compressive force on the core, which is resisted by a positive endoneurial pressure (Walbeehm et al., 2004;Georgeu et al., 2005). The loose organisation of endoneurial tissue suggests that, at low loads, components within the core rearrange, rather than compress, during whole tissue compression (Millesi et al., 1995). The transverse compression induced by tensile loading has been observed in vitro, but its effect on the microstructural elements within the core has not been studied (Topp and Boyd, 2006;Millesi et al., 1995).
Within the nerve core, chemical demyelination has been shown to reduce nerve axial stiffness, implying that myelin contributes to tissue mechanical properties (Shreiber et al., 2009). Additionally, AFM studies have shown that digestion of Schwann cell basal lamina reduces nerve fibre circumferential stiffness and resilience significantly (Rosso et al., 2014), suggesting a protective role for myelin during nerve deformation. However, the mechanical role of myelin is not yet clear, as it has mostly been studied in isolation, rather than within the multi-scale mechanical environment of the whole nerve. Here, we aim to characterise the mechanical properties of the myelin sheath during circumferential compression induced by whole-nerve elongation.
Linking macroscopic loading of peripheral nerves with changes in cell function and damage requires knowledge of the multi-scale mechanical behaviour of nerves. Current knowledge of the micro-scale effects of macroscopic loading is limited, and a better multi-scale understanding is required to successfully prevent and treat mechanically induced peripheral neuropathy and functionality loss (Bueno and Shah, 2008). X-ray diffraction is an ideal modality for in situ strain measurements of micro-scale quasi-crystalline structures. In nerve, it has been shown to effectively probe collagen fibres (Inouye and Worthington, 1983) as well as the intra-lamellar spacing in the myelin sheath (Finean, 1960;Inouye et al., 2014;Kurg et al., 1982), but it has not been applied to studying the mechanical properties of these structures during in situ loading. Here we investigate the multi-scale mechanical properties of peripheral nerve during tensile loading, by probing the microstructure of peripheral nerve collagen and myelin by X-ray diffraction during macroscopic tensile loading.
Sciatic nerve harvesting and imaging
Sciatic nerves were harvested from 300 to 350 g (10-12 week old), male Sprague-Dawley rats, sacrificed by cervical dislocation for an unrelated study. Briefly, following skin removal, the sciatic nerves were exposed by dorsal incision of the gluteus muscle. Nerves were excised proximally close to the spinal cord, and distally at the tibial-peroneal branching. Nerve samples were stored in Phosphate Buffered Saline (PBS, Gibco, UK), immediately frozen at −20 °C until use. This has been previously shown not to alter the mechanical properties of collagenous tissues (Bruneau et al., 2010;Fessel et al., 2014), as well as retraction properties of sciatic nerves (Walbeehm et al., 2004). Furthermore, Wallerian degeneration of axons and the myelin sheath has been shown to be delayed by low temperatures (Sea et al., 1995). Together, these suggest that freezing nerves upon excision and thawing prior to experiments should retain the tissue mechanical properties. After thawing and mounting, ink marks were made on samples, for optical measurement of whole tissue strain as previously used for spinal cord and tendon strain measurements (Shreiber et al., 2009;Bianchi et al., 2016). To highlight the crimped epineurial collagen structure, nerves were imaged using multi-photon second harmonic generation (SHG) imaging using a Zeiss LSM780 microscope (20x lens), with excitation set at 880 nm.
In situ loading
Sciatic nerves were loaded in situ to failure (defined as complete tissue disconnection) using a Deben Microtest (Deben, UK) tensile loading stage, equipped with a 2 N load cell. Samples were glued to a rectangular plastic frame using cyanoacrylate glue (Loctite, UK). Frames were mounted in the Deben stage by clamps (Fig. 2). Before experiments, the sides of the frames were cut, leaving the nerve as the only load bearing element between the clamps. Two rectangular sheets of Kapton film (Goodfellows, UK) were placed on either side of the nerves, and PBS was added to the samples to maintain physiological hydration, and to hold the Kapton film together by capillary action. The loading rig was mounted vertically, and rotated by 45° to avoid blocking detectors. Two cameras were placed, facing the sample, at 45° relative to each other (Fig. 2, inset).
For 6 sciatic nerve samples, increasing tensile loads were applied, and diffraction patterns recorded at approximately 0.1 N intervals. In order to avoid creep-like effects, X-ray diffraction images were taken during continuous loading, at the instant when the load cell indicated the required load. Loading was assumed to be constant during acquisition, as a quasi-static strain rate of 0.1 s⁻¹ was used, much slower than the acquisition time.
X-ray diffractometry
Diffraction experiments were carried out at beamline I22, Diamond Light Source, UK. The X-ray beam dimensions were approximately 300 × 100 μm² FWHM (horizontal × vertical), with the largest dimension parallel to the sample axis (z-direction in Fig. 2, inset). A photon energy of 12.4012 ± 0.0014 keV was selected (dE/E = 1.12 × 10⁻⁴), corresponding to an X-ray wavelength of 0.0997 ± 1.0066 × 10⁻⁵ nm. SAXS data was collected over a q range of 0.0026–0.13 Å⁻¹, and over an azimuthal range of 360°, using a 2D Pilatus P3-2M detector (DECTRIS, Switzerland) placed 6.539 m downstream of the sample and with a typical exposure time of 0.15 s. Beam centre on the detector was determined using the Debye-Scherrer rings from Silver Behenate, and the sample-detector distance was determined using the diffraction from chicken tendon collagen standard reference samples.
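As a quick consistency check on these beam parameters, the selected photon energy can be converted to a wavelength via λ [nm] ≈ 1.23984 / E [keV]; the short sketch below (values taken from the text) recovers a wavelength close to the reported 0.0997 nm, with the small residual presumably absorbed by the beamline calibration.

```python
# Photon energy to wavelength: lambda [nm] = h*c / E ≈ 1.23984 / E[keV].
H_C_KEV_NM = 1.23984      # Planck constant x speed of light, in keV*nm

energy_kev = 12.4012      # selected photon energy (from the text)
d_e_over_e = 1.12e-4      # reported relative energy bandwidth dE/E

wavelength_nm = H_C_KEV_NM / energy_kev
d_lambda_nm = wavelength_nm * d_e_over_e     # bandwidth propagated to wavelength

print(f"lambda = {wavelength_nm:.5f} +/- {d_lambda_nm:.2e} nm")
# ~0.09998 +/- 1.1e-05 nm, close to the reported 0.0997 +/- 1.0066e-05 nm
```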
Pilot experiments showed small angle X-ray scattering (SAXS) of peripheral nerve with a highly textured, axially aligned collagen diffraction pattern, with peaks corresponding to multiple diffraction orders of the 67 nm axial spacing (D-spacing) between molecules within fibrils (Fig. 1b), whilst myelin diffraction produced equatorial peaks at 90° orientation to the collagen, measuring the 18 nm average spacing between lamellae (Fig. 1C).
X-ray diffraction pattern analysis
SAXS diffraction patterns were converted from Cartesian detector coordinates to polar coordinates, showing diffraction data as a function of azimuthal angle, χ , and scattering vector magnitude, q (Fig. 3). The geometrical parameters of the experimental setup, essential for this remapping, were determined by calibration SAXS measurement of chicken tendon collagen standard reference samples. Gaussian profiles were fitted to the 1st, 3rd, 5th and 9th order meridional collagen peaks, and their maxima used to determine peak position. Similarly, myelin diffraction patterns were extracted by fitting Gaussians to two equatorial peaks from the same images.
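To make the remapping and peak-fitting steps concrete, the sketch below converts detector pixel coordinates to (q, χ) and fits a Gaussian to a one-dimensional profile around a collagen meridional peak; the pixel size, beam-centre handling, and synthetic peak are illustrative assumptions rather than the calibrated I22 geometry or measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

PIXEL_SIZE = 172e-6        # m, Pilatus pixel pitch (assumed)
SAMPLE_DETECTOR = 6.539    # m, sample-detector distance from the text
WAVELENGTH = 0.0997e-9     # m

def detector_to_q_chi(x_pix, y_pix, x0_pix, y0_pix):
    """Map detector pixels to scattering vector magnitude q (1/nm) and azimuth chi (rad)."""
    dx = (x_pix - x0_pix) * PIXEL_SIZE
    dy = (y_pix - y0_pix) * PIXEL_SIZE
    r = np.hypot(dx, dy)
    two_theta = np.arctan2(r, SAMPLE_DETECTOR)
    q = 4 * np.pi * np.sin(two_theta / 2) / (WAVELENGTH * 1e9)   # 1/nm
    chi = np.arctan2(dy, dx)
    return q, chi

def gaussian(q, amp, q0, sigma, background):
    return amp * np.exp(-0.5 * ((q - q0) / sigma) ** 2) + background

# Fit a synthetic third-order collagen peak (expected near q = 2*pi*3/67 ≈ 0.281 1/nm).
q_axis = np.linspace(0.25, 0.32, 200)
profile = gaussian(q_axis, 1.0, 0.281, 0.004, 0.05)
popt, _ = curve_fit(gaussian, q_axis, profile, p0=[1.0, 0.28, 0.005, 0.0])
print(f"fitted peak centre: {popt[1]:.4f} 1/nm")
```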
Q-scale peak positions were then divided by the diffraction peak order number, converted to D-spacing measurements, and averaged over the number of fitted peaks. Nominal strain was measured by the displacement of the average peak position relative to an unloaded state, using Eq. (1), where X can refer to any spacing parameter observed and X0 is the value of the parameter in the unloaded state:
ε = (X − X0) / X0 (1)
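A minimal sketch of the D-spacing and nominal strain calculation, using illustrative peak positions rather than measured values, could look as follows.

```python
import numpy as np

def d_spacing_from_peaks(q_positions_nm, orders):
    """Average real-space periodicity from several diffraction orders: d = 2*pi*n/q_n."""
    d_values = [2 * np.pi * n / q for q, n in zip(q_positions_nm, orders)]
    return np.mean(d_values)

# Collagen meridional peaks of orders 1, 3, 5 and 9 (as in the text); positions are synthetic.
orders = [1, 3, 5, 9]
q_unloaded = [2 * np.pi * n / 67.0 for n in orders]     # consistent with D0 = 67 nm
q_loaded = [q / 1.01 for q in q_unloaded]               # peaks shift to lower q on stretch

d0 = d_spacing_from_peaks(q_unloaded, orders)
d = d_spacing_from_peaks(q_loaded, orders)
strain = (d - d0) / d0                                  # nominal strain, Eq. (1)
print(f"D0 = {d0:.1f} nm, D = {d:.2f} nm, strain = {strain:.2%}")   # ~1% strain
```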
Fig. 2. Experimental setup and data acquisition. X-ray beam is diffracted by nerve sample loaded in tension by Deben rig. Kapton film and PBS keep the samples under physiological hydration conditions. Small angle scattering of X-rays is detected, showing peaks corresponding to intra-molecular spacing in epineurial collagen fibrils, and peaks corresponding to the intra-lamellar spacing in the myelin sheath. Inset shows diagrammatic view of camera setup, as seen from the X-ray source.
Stress-strain data was calculated only for axial tissue and collagen deformation. Stress values for collagen were calculated assuming that the epineurium constituted 50% of nerve cross-sectional area (Flores et al., 2000). For all other measurements where area could not be approximated, data was presented as load against strain plots, showing material stiffness. Load, being the controlled variable, is plotted on the horizontal axis, unlike in stress-strain curves, as previously used to characterise the mechanical properties of other collagenous tissues (Oxlund et al., 2010). Linear and non-linear regression and curve fitting were carried out using GraphPad PRISM. Non-linear fitting was used for macroscopic tissue strains, where non-linear relationships between applied force and elongation have been previously reported (Millesi et al., 1995;Walbeehm et al., 2004).
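The stress estimate itself reduces to a one-line calculation; the sketch below uses illustrative load and diameter values, with only the 50% epineurial area fraction taken from the text.

```python
import numpy as np

# Collagen stress = applied load / (epineurial fraction of nerve cross-section).
load_n = 0.5                      # applied tensile load, N (illustrative)
nerve_diameter_m = 1.0e-3         # nerve diameter, m (illustrative)
epineurium_fraction = 0.5         # assumption from Flores et al. (2000)

nerve_area = np.pi * (nerve_diameter_m / 2) ** 2
collagen_stress_mpa = load_n / (epineurium_fraction * nerve_area) / 1e6
print(f"collagen stress ≈ {collagen_stress_mpa:.2f} MPa")   # ≈ 1.27 MPa for these inputs
```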
Whole tissue strain calculations
Strain in the whole tissue was measured axially and transversely from images taken by a HD camera (AlliedVision, Germany). Axial tissue strain was estimated using marker tracking of the black surface ink marks. A custom MATLAB code was used, which estimates strain by extracting centroid coordinates of the ink marks and calculating the change in distance between marks. This was validated by manual measurement of the distance between marks. In the transverse direction, contraction with applied load was measured by extracting nerve diameter from images, by measuring full width at half maximum of line profiles across the nerve. The nerve was split into 50 axial segments, and the diameter calculated as the average of the diameter value measured in each section. Tissue strain measurements and relation to molecular strains assume circular geometry and symmetry.
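A minimal sketch of the two optical measurements, with the image-processing details (ink-mark segmentation, thresholding) omitted and the inputs purely illustrative, is given below.

```python
import numpy as np

def axial_strain(centroids_t0, centroids_t):
    """Strain from the change in distance between two ink-mark centroids (x, y in pixels)."""
    l0 = np.linalg.norm(np.subtract(centroids_t0[1], centroids_t0[0]))
    l = np.linalg.norm(np.subtract(centroids_t[1], centroids_t[0]))
    return (l - l0) / l0

def diameter_fwhm(intensity_profile):
    """Full width at half maximum of a transverse line profile across the nerve."""
    profile = np.asarray(intensity_profile, dtype=float)
    half_max = profile.max() / 2.0
    above = np.where(profile >= half_max)[0]
    return above[-1] - above[0]              # width in pixels (approximate)

def mean_diameter(image):
    """Average FWHM diameter over 50 axial segments, as described in the text."""
    segments = np.array_split(image, 50, axis=1)          # split along the nerve axis
    return np.mean([diameter_fwhm(seg.mean(axis=1)) for seg in segments])

# Synthetic 200 x 500 image with a bright horizontal nerve band ~40 pixels wide.
img = np.zeros((200, 500))
img[80:120, :] = 1.0
print(axial_strain([(10, 100), (210, 100)], [(10, 100), (222, 100)]))   # 0.06 (6% strain)
print(mean_diameter(img))                                               # ≈ 40 pixels
```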
Tissue and molecular strains
With increased tensile load, nerve tissue elongates axially, with simultaneous circumferential compression (Fig. 4a and b). In the axial direction, the tissue extends at a faster rate at lower loads, a non-linear behaviour (average R² > 0.87) consistent with previously published results (Millesi et al., 1995;Topp and Boyd, 2006). Circumferential compression is also non-linear, with a higher rate of compression at lower loads, and with strain reaching a limiting value at higher loads (average R² > 0.9), consistent with the hypothesis that internal core components rearrange before being loaded during induced compression (Topp and Boyd, 2006;Millesi et al., 1995).
Loading modulus (calculated from the linear section of the loading curves) for whole nerves was 5.31 ± 2.55 MPa, in accordance with previously reported values for sciatic nerve stiffness (Topp and Boyd, 2006). Assuming epineurial thickness to be 50% of nerve volume (Flores et al., 2000), loading modulus of epineurial collagen fibrils was 92.34 ± 35.01 MPa, lower than that of rat tail tendon collagen fibrils measured by X-ray diffraction (Bianchi et al., 2016), but in line with other reported moduli for collagen fibres (Sasaki and Odajima, 1996).
With increasing applied load, the average molecular D-spacing measured in collagen increased linearly in all samples (average R² > 0.9) up to strains of 2% (Fig. 5a). Concurrently, average spacing between lamellae in the myelin sheath decreased, reaching strains of up to −0.6% (Fig. 5b).
Strain partitioning between length scales
Partitioning of strain can be measured in both collagen and myelin (Fig. 6). Tissue level axial strain exceeds collagen fibril strain, indicating the existence of multiple mechanisms for tissue extension in addition to collagen fibril elongation, as reported for other collagenous connective tissues (Fratzl et al., 1998;Bianchi et al., 2016). Tissue circumferential compressive strain is also larger than the myelin layer compression, suggesting that the myelin sheath is significantly stiffer than its surroundings. Average myelin stiffness is higher than circumferential tissue stiffness, with a ratio of the slopes of linear portions of load-strain curves of 100 (Fig. 6b). For collagen, the same slope ratio is 10 (Fig. 6a), indicating that collagen bears a higher proportion of axial load than the proportion of circumferential compression borne by myelin.
The difference between tissue and molecular strain at each loading point represents straining due to mechanisms other than molecular deformation, such as molecular sliding and structural rearrangements. Axially, rearrangements in collagen structure account for more than 90% of strain at lower loads, and molecular elongation increases at higher loads. Circumferentially, myelin compression accounts for less than 2%, on average, of tissue compression.
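The partitioning statement amounts to simple arithmetic; the sketch below uses illustrative strain values rather than the measured data points.

```python
# Fraction of tissue strain not explained by molecular strain, attributed to
# rearrangement/sliding mechanisms. Values are illustrative.
tissue_strain = 0.060        # axial tissue strain at a given load
collagen_strain = 0.005      # collagen D-spacing strain at the same load

rearrangement_fraction = 1.0 - collagen_strain / tissue_strain
print(f"{rearrangement_fraction:.0%} of axial strain from non-molecular mechanisms")  # ~92%
```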
Macroscopically, axial tissue strain correlates linearly (average R² = 0.87) with collagen strain (Fig. 7a), indicating a direct mechanical relation between the two length scales. Variations in slope of tissue strain to collagen strain relations are due to the higher variability of tissue strain measurement, as previously shown for whole tendon tissue strains measured using a similar technique (Bianchi et al., 2016). Circumferentially, no linear correlation is observed between molecular and tissue strain (Fig. 7b), suggesting that induced macroscopic compression is not the only mechanism contributing to myelin deformation.
Discussion
Using X-ray diffraction and HD video extensiometry during in situ tensile loading, we have investigated the multi-scale mechanical properties of peripheral nerves. By using X-ray diffraction, molecular-level mechanical behaviour was observed in both axially-aligned epineurial collagen, and in the myelin sheath enveloping axons. Results presented show a pronounced circumferential compression in rat sciatic nerves during axial elongation. At a molecular level, the spacing between myelin lamellae decreases, with indication of high compressive stiffness of myelin, and epineurial collagen elongates, showing similar properties to those of load-bearing connective tissues.
A pronounced macroscopic circumferential compression is observed during tensile loading of sciatic nerves. This supports previously proposed models, where a compression of the nerve trunk occurs during tensile loading (Georgeu et al., 2005;Topp and Boyd, 2006;Millesi et al., 1995). Evidence of epineurial thickness remaining constant during elongation (Islam et al., 2012) indicates that compressive strain is localised in the core. The loose organisation of perineurial and endoneurial tissue suggests that components within the core can rearrange, rather than compress, with decreasing nerve diameter (Millesi et al., 1995). The non-linear behaviour observed here at tissue level (Fig. 4b) confirms the presence of softer structures that allow rearrangement under small loads, and stiffer structural elements contributing to the behaviour at higher loads.
Our results show that, when a peripheral nerve is elongated axially, the average spacing between lamellae in the myelin sheath decreases linearly (Fig. 5b). This suggests that myelin is being deformed throughout the loading curve, both at lower loads, during rearrangement of softer structures, and at higher loads, where the nerve exhibits higher circumferential stiffness. No direct correlation between induced tissue compression and myelin compression is observed (Fig. 7b), indicating that the circumferential compression is not the only mechanism causing myelin compression.
Myelin has been shown to have significant compressive stiffness (Rosso et al., 2014;Shreiber et al., 2009). The observed behaviour can be explained if myelin compression is not solely caused by tissue compression. Previously, it has been shown that axial elongation of myelinated nerve fibres is both due to a lengthening of the internodal region (Maxwell, 1996;Yokota et al., 2003), as well as a more prominent widening of the Nodes of Ranvier (Kerns et al., 2001;Ikeda et al., 2000;Ichimura et al., 2005). This suggests that axons are being loaded in tension, and that the myelin sheath, despite contributing to tensile stiffness, is being elongated with the axons. The myelin sheath is tethered to the axons it surrounds through contact proteins located at the paranodal region. If axons are also loaded in tension, the spacing between myelin lamellae would decrease, as it cannot slide along the axons (Fig. 8), explaining behaviour at low loads (Inouye et al., 2014). At higher loads, the stiff myelin sheath is further compressed with induced circumferential strain, but acts as a stiff protective layer to reduce potentially damaging axonal compression.
When peripheral nerves are loaded in tension, axially aligned collagen fibrils present in the epineurial outer sheath also extend, showing a linear relation between axial tissue strain and collagen molecular strain (Fig. 7a). In tendon collagen, a load-bearing, energy-storing structure of similar fibrillar morphology and density (Ushiki and Ide, 1990) subjected to continuous physiological tensile loading, accepted deformation mechanisms accounting for differences in molecular and tissue axial strain include both geometric rearrangement of collagen fibrils (uncrimping) and relative sliding of discontinuous fibrils (Bianchi et al., 2016;Fratzl et al., 1998). Similarly, when a peripheral nerve is stretched, part of this elongation is due to extension of the axially aligned epineurial collagen fibres, and part is due to molecular rearrangements and deformation of other components. We have shown that strain due to factors other than collagen molecular straining is more dominant at lower loads, and varies non-linearly with applied load. This is in accordance with models of collagenous materials which include fibril recruitment, where wavy fibrils are first recruited, and start bearing a higher proportion of load once they have been fully straightened (Thompson, 2013). This also agrees with mechanical testing performed on separated nerve cores and sheaths, where the outer layer has been shown to bear load only after an initial toe region (Tillett et al., 2004). The loading modulus of epineurial collagen measured here is smaller than that reported from X-ray diffraction studies on rat tail tendon collagen (Bianchi et al., 2016), suggesting that other elements of the peripheral nerve, such as perineurial collagen and axons, bear a fraction of the applied tensile load, but confirming that the epineurium has a vital mechanical role during tensile loading. The large inter-species variability, as well as the differences in mechanical properties between functionally specialised tendons, may also contribute to the difference measured (LaCroix et al., 2013).
Limitations of this study
The study presented here is limited by the small sample number, and by the elements that are observable using X-ray diffraction. Direct information about the deformation of axons, blood vessels and other structures within the nerve would provide further explanation for the mechanisms of deformation, and could lead to a better quantification of induced compressive strain partitioning. Previous work on tracking axonal strain using surface protein markers suggests a technique which could be employed (Singh et al., 2017). This study is further limited by the lack of information about volume fraction of materials present in peripheral nerves. This information would allow stiffness of nerve collagen and myelin to be better estimated. Another limitation is the absence of information about myelin properties in tension, which would further refine the whole-nerve model. The loading methods should be considered as a factor further limiting these results. In situ tensile loading does not precisely reproduce in vivo nerve loading conditions, where the nerve is not clamped at both ends. This could affect the way in which nerve core elements are strained. Furthermore, a thorough characterisation of the differences between freshly excised tissues and frozen tissues would provide stronger evidence that sample preservation did not affect its mechanical properties.
This study could be the base for further studies to observe how larger deformations translate to the micro-scale, and damage peripheral nerves, as well as to study the effect of demyelinating disease models on the mechanical behaviours described here.
Conclusions
In this study, we probe the micro-mechanical behaviour of peripheral nerve collagen and myelin during in situ tensile loading, by X-ray diffraction. Results show a non-linear compression of the nerve induced by tensile loading, confirming previous hypotheses that soft nerve core elements rearrange before being loaded. We show that at the microscopic level, the spacing between lamellae of the myelin sheath decreases with applied whole-nerve tensile load, but myelin exhibits a much higher stiffness than the whole nerve. This suggests that myelin is mechanically significant in compression as well as in tension, protecting underlying axons from compressive damage.
Axially, we confirm strain partitioning in epineurial nerve collagen, similar to that observed in tendons, suggesting similar load-bearing properties.
These results have implications for understanding the mechanical behaviour of peripheral nerve tissues, as well as the mechanisms of cellular damage caused by macroscopic loading. Furthermore, understanding the microscopic nerve mechanical environment during mechanical loading is fundamental to devising appropriate injury prevention and regeneration strategies (Bueno and Shah, 2008).
Fig. 6. Strain partitioning between molecular (red) and tissue (blue) levels, for collagen (a) and myelin (b). In both cases, straining of the observed molecular structure only accounts for a fraction of the total tissue strain in the same direction. Lines correspond to non-linear fit as shown in Fig. 4. Dotted lines = 95% confidence interval. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 7. Molecular strain increases linearly with tissue strain in axially aligned collagen (a), but shows no relationship axially during myelin compression (b) in rat sciatic nerves (N = 6).
Further investigation, including analysis of axon deformation during whole nerve loading, is required to fully understand how the functional properties of nerve tissues are altered during mechanical loading.
Conflicts of interest
The authors declare that they have no conflict of interest.
Funding
This work was supported by the Rosetrees Trust (award M186-F1) and China Regenerative Medicine International Limited (CRMI) for materials, and by the EPSRC for F.B.'s funding through DTP award 1514540. We acknowledge Diamond Light Source for time on beamline I22 under proposal SM12518.
Fig. 8. Proposed mechanism of myelin compression as a result of nerve tensile loading. Epineurial collagen elongates axially, taking most of the load, with some load being taken by the axons and myelin sheath. The myelin sheath, which is tethered at paranodes, elongates with the axons, and is compressed circumferentially. The combined effect of induced tissue compression and axonal elongation induces a compression of myelin layers.
"Materials Science",
"Medicine",
"Physics"
] |
Natural Language Generation for Effective Knowledge Distillation
Knowledge distillation can effectively transfer knowledge from BERT, a deep language representation model, to traditional, shallow word embedding-based neural networks, helping them approach or exceed the quality of other heavyweight language representation models. As shown in previous work, critical to this distillation procedure is the construction of an unlabeled transfer dataset, which enables effective knowledge transfer. To create transfer set examples, we propose to sample from pretrained language models fine-tuned on task-specific text. Unlike previous techniques, this directly captures the purpose of the transfer set. We hypothesize that this principled, general approach outperforms rule-based techniques. On four datasets in sentiment classification, sentence similarity, and linguistic acceptability, we show that our approach improves upon previous methods. We outperform OpenAI GPT, a deep pretrained transformer, on three of the datasets, while using a single-layer bidirectional LSTM that runs at least ten times faster.
Introduction
That bigger neural networks plus more data equals higher quality is a tried-and-true formula. In the natural language processing (NLP) literature, the recent darling of this mantra is the deep, pretrained language representation model. After pretraining hundreds of millions of parameters on vast amounts of text, models such as BERT (Bidirectional Encoder Representations from Transformers; Devlin et al., 2018) achieve remarkable state-of-the-art results in question answering, sentiment analysis, and sentence similarity tasks, to list a few.
Does this progress mean, then, that classic, shallow word embedding-based neural networks are noncompetitive? Not quite. Recently, Tang et al. (2019) demonstrate that knowledge distillation (Ba and Caruana, 2014;Hinton et al., 2015) can transfer knowledge from BERT to small, traditional neural networks, helping them approach or exceed the quality of much larger pretrained long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) language models, such as ELMo (Embeddings from Language Models; Peters et al., 2018).
As shown in Tang et al. (2019), crucial to knowledge distillation is constructing a transfer dataset of unlabeled examples. In this paper, we explore how to construct such an effective transfer set. Previous approaches comprise manual data curation, a meticulous method where the end user manually selects a corpus similar enough to the present task, and rule-based techniques, where a transfer set is fabricated from the training set using a set of data augmentation rules. However, these rules only indirectly model the purpose of the transfer set, which is to provide more input drawn from the task-specific data distribution. Hence, we instead propose to construct the transfer set by generating text with pretrained language models fine-tuned on task-specific text. We validate our approach on four small- to mid-sized datasets in sentiment classification, sentence similarity, and linguistic acceptability.
We claim two contributions: first, we elucidate a novel approach for constructing the transfer set in knowledge distillation. Second, we are the first to outperform OpenAI GPT (Radford et al., 2018) in sentiment classification and sentence similarity with a single-layer bidirectional LSTM (Bi-LSTM) that runs more than ten times faster, without pretraining or domain-specific data curation. We make our datasets and codebase public in a GitHub repository: https://github.com/castorini/d-bert
Background and Related Work
Ba and Caruana (2014) propose knowledge distillation, a method for improving the quality of a smaller student model by encouraging it to match the outputs of a larger, higher-quality teacher network. Concretely, suppose h_S(·) and h_T(·) respectively denote the untrained student and trained teacher models, and we are given a training set of inputs S = {x_1, . . . , x_N}. On classification tasks, the model outputs are log probabilities; on regression tasks, the outputs are as-is. Then, the distillation objective L_KD is the mean-squared error between the teacher's and student's outputs over S (Equation (1)). Hinton et al. (2015) alternatively use Kullback-Leibler divergence for classification, along with additional hyperparameters. For simplicity and generality, we stick with the original mean-squared error (MSE) formulation. We minimize L_KD end-to-end with backpropagation, updating the student's parameters and fixing the teacher's. L_KD can optionally be combined with the original, supervised cross-entropy or MSE loss; following Tang et al. (2019) and Shi et al. (2019), we optimize only L_KD for training the student. Using only the given training set for S, however, is often insufficient. Thus, Ba and Caruana (2014) augment S with a transfer set comprising unlabeled input, providing the student with more examples to distill from the teacher. Techniques for constructing this transfer set consist of either manual data curation or unprincipled data synthesis rules. Ba and Caruana (2014) choose images from the 80 million tiny images dataset, which is a superset of their dataset. In the NLP domain, Tang et al. (2019) propose text perturbation rules for creating a transfer set from the training set, achieving results comparable to ELMo using a BiLSTM with 100 times fewer parameters.
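The distillation objective can be written compactly in code. The following is a minimal sketch of the MSE formulation described above, assuming a standard PyTorch setup; the student and teacher forward passes are placeholders rather than the authors' actual models.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out):
    # L_KD: mean-squared error between the frozen teacher's outputs and the
    # student's outputs (logits for classification, raw scores for regression).
    return F.mse_loss(student_out, teacher_out)

# Illustrative training step: only the student's parameters are updated.
# student_out = student(x)
# with torch.no_grad():
#     teacher_out = teacher(x)
# loss = distillation_loss(student_out, teacher_out)
# loss.backward(); optimizer.step()
```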
We wish to avoid these previous approaches. Manual data curation requires the researcher to select an unlabeled set similar enough to the target dataset, a difficult-to-impossible task for many datasets in, for example, linguistic acceptability and sentence similarity. Rule-based techniques, while general, unfortunately deviate from the true purpose of modeling the input distribution; hence, we hypothesize that they are less effective than a principled approach, which we detail below.
Our Approach
In knowledge distillation, the student perceives the oracular teacher to be the true p(Y|X), where X and Y respectively denote the input sentence and label. This is reasonable, since the student treats the teacher output y as ground truth, given some sentence x comprising words {w_1, . . . , w_n}. The purpose of the transfer set is, then, to provide additional input sentences for querying the teacher. To construct such a set, we propose the following: first, we parameterize p(X) directly as a language model p(w_1, . . . , w_n) = ∏_{i=1}^{n} p(w_i | w_1, . . . , w_{i−1}) trained on the given sentences {x_1, . . . , x_N}. Then, to generate unlabeled examples, we sample from the language model, i.e., the i-th word of a sentence is drawn from p(w_i | w_1, . . . , w_{i−1}). We stop upon generating the special end-of-sentence token [EOS], which we append to each sentence while fine-tuning the language model (LM).
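As an illustration of this sampling procedure, the sketch below draws transfer-set sentences token by token from a fine-tuned GPT-2 via the HuggingFace transformers library. The checkpoint path, maximum length, and use of the model's default end-of-text token as [EOS] are assumptions for the example, not the authors' exact configuration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("path/to/task-finetuned-gpt2")  # placeholder path
model.eval()

def sample_sentence(max_len=64):
    # Draw w_i from p(w_i | w_1, ..., w_{i-1}) until the end-of-sentence token appears.
    ids = [tokenizer.bos_token_id]
    for _ in range(max_len):
        with torch.no_grad():
            out = model(torch.tensor([ids]))
        logits = out[0][0, -1]  # first output of the LM head is the next-token logits
        next_id = torch.multinomial(torch.softmax(logits, dim=-1), 1).item()
        if next_id == tokenizer.eos_token_id:
            break
        ids.append(next_id)
    return tokenizer.decode(ids[1:])

# e.g. build an 800K-sentence transfer set, as used in the paper
transfer_set = [sample_sentence() for _ in range(800_000)]
```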
Unlike previous methods, our approach directly parameterizes p(X) to provide unlabeled examples. We hypothesize that this approach outperforms ad hoc rule-based methods, which only indirectly model the input distribution p(X).
Sentence-pair modeling. To language model sentence pairs, we follow Devlin et al. (2018) and join both sentences with a special separator token [SEP] between, treating the resulting sequence as a single contiguous sentence.
Model Architecture
For simplicity and efficient inference, our student models use the same single-layer BiLSTM architecture as Tang et al. (2019). First, we map an input sequence of words to their corresponding word2vec embeddings, trained on Google News. Next, for single-sentence tasks, these embeddings are fed into a single-layer BiLSTM encoder to yield concatenated forward and backward states h = [h_f; h_b]. For sentence-pair tasks, we encode each sentence separately using a BiLSTM to yield h_1 and h_2, and combine h_1, h_2, h_1 · h_2, and δ(h_1, h_2) to produce a single vector h, following Wang et al. (2018), where · denotes elementwise multiplication and δ denotes elementwise absolute difference. Finally, for both single- and paired-sentence tasks, h is passed through a multilayer perceptron (MLP) with one hidden layer that uses a rectified linear unit (ReLU) activation. For classification, the final output is interpreted as the logits of each class; for real-valued sentence similarity, the final output is a single score. Our teacher model is the large variant of BERT, a deep pretrained language representation model that achieves close to state of the art (SOTA) on our tasks. Extremely recent, improved pretrained models like XLNet (Yang et al., 2019) and RoBERTa likely offer greater benefits to the student model, but BERT is widely used and sufficient for the point of this paper. We follow the same experimental procedure as Devlin et al. (2018) and fine-tune BERT end-to-end for each task, varying only the final classifier layer for the desired number of classes.
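The following PyTorch sketch illustrates one way the student architecture described above could be realized; layer sizes, names, and the pair-combination features are our reading of the text, not the authors' released code.

```python
import torch
import torch.nn as nn

class BiLSTMStudent(nn.Module):
    def __init__(self, embeddings, hidden=300, mlp_hidden=400, num_outputs=2, pair=False):
        super().__init__()
        self.pair = pair
        self.embed = nn.Embedding.from_pretrained(embeddings, freeze=False)  # word2vec vectors
        self.encoder = nn.LSTM(embeddings.size(1), hidden,
                               batch_first=True, bidirectional=True)
        feat_dim = 2 * hidden * (4 if pair else 1)
        self.mlp = nn.Sequential(nn.Linear(feat_dim, mlp_hidden), nn.ReLU(),
                                 nn.Linear(mlp_hidden, num_outputs))

    def encode(self, ids):
        _, (h_n, _) = self.encoder(self.embed(ids))
        return torch.cat([h_n[0], h_n[1]], dim=-1)  # h = [h_f; h_b]

    def forward(self, ids1, ids2=None):
        h1 = self.encode(ids1)
        if not self.pair:
            return self.mlp(h1)
        h2 = self.encode(ids2)
        # concatenate h1, h2, their elementwise product, and absolute difference
        feats = torch.cat([h1, h2, h1 * h2, (h1 - h2).abs()], dim=-1)
        return self.mlp(feats)
```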
Language modeling. For creating the transfer set, we apply two public, state-of-the-art language models: the word-level Transformer-XL (TXL; Dai et al., 2019) pretrained on WikiText-103 (Merity et al., 2017), which is derived from Wikipedia, and the subword-level GPT-2 (345M version; Radford et al., 2019) pretrained on WebText, a large web corpus that excludes Wikipedia. Other models exist, but we choose these two since they represent the state of the art. We name the GPT-2- and TXL-constructed transfer sets TS GPT-2 and TS TXL, respectively.
Experimental Setup
We validate our approach on four datasets in sentiment classification, linguistic acceptability, sentence similarity, and paraphrase detection: SST-2, CoLA, STS-B, and MRPC (Dolan and Brockett, 2005). SST-2 is a binary polarity dataset of single-sentence movie reviews. CoLA is a single-sentence grammaticality task, with expertly annotated binary judgements. STS-B comprises sentence pairs labeled with real-valued similarity between 1 and 5. Lastly, MRPC has sentence pairs with binary labels denoting semantic equivalence. We pick these four tasks from the General Language Understanding Evaluation (GLUE; Wang et al., 2018) benchmark, and submit results to their public evaluation server.
Baselines
As a sanity check, we attempt knowledge distillation without a transfer set, as well as training our BiLSTM from scratch on the original labels. We compare to the best official GLUE test results reported for single-and multi-task ELMo models, OpenAI GPT, single-and multi-task single-layer BiLSTMs, and the SOTA before GPT. ELMo and GPT are pretrained language representation models with around a hundred million parameters. We name our distilled model BiLSTM KD .
Transfer set construction baselines. For our rule-based baseline, we use the masking and part-of-speech (POS)-guided word swapping rules originally suggested by Tang et al. (2019), which consist of the following: iterating through a dataset's sentences, we replace 10% of the words with the masking token [MASK]. We swap another, mutually exclusive 10% of the words with others of the same POS tag from the vocabulary, randomly sampling by unigram probability. For sentence-pair tasks, we apply the rules to the first sentence only, then the second only, and, finally, both. Discarding any duplicates, we repeat this entire process until meeting the target number of transfer set sentences. Tang et al. (2019) also suggest sampling n-grams; however, we omit this rule, since our preliminary experiments find that it hurts accuracy. We call this method TS MP. For our unlabeled dataset baseline, we choose the document-level IMDb movie reviews dataset (Diao et al., 2014) as our transfer set for SST-2. To match the single-sentence SST-2, we break paragraphs into individual linguistic sentences and, hence, multiple transfer set examples. To confirm that this approach is domain-sensitive, we also apply it to the out-of-domain CoLA task in linguistic acceptability. We are unable to find a suitable unlabeled set for our other tasks; by construction, most sentence-pair datasets require manual balancing to prevent an overabundance of a single class, e.g., dissimilar examples in sentence similarity. We call this method TS IMDb.
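A rough sketch of these masking and POS-guided swapping rules is shown below; the tagged-input format and helper names are illustrative assumptions rather than the original implementation.

```python
import random
from collections import defaultdict

def build_pos_vocab(tagged_sentences):
    # tagged_sentences: list of sentences, each a list of (word, pos_tag) pairs
    vocab = defaultdict(list)
    for sent in tagged_sentences:
        for word, tag in sent:
            vocab[tag].append(word)  # keep duplicates so sampling follows unigram frequency
    return vocab

def perturb(tagged_sentence, pos_vocab, p_mask=0.1, p_swap=0.1):
    out = []
    for word, tag in tagged_sentence:
        r = random.random()
        if r < p_mask:
            out.append("[MASK]")                       # mask 10% of words
        elif r < p_mask + p_swap:
            out.append(random.choice(pos_vocab[tag]))  # swap a mutually exclusive 10% by POS
        else:
            out.append(word)
    return " ".join(out)
```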
Training and Hyperparameters
We fine-tune our pretrained language models using largely the same procedure as Devlin et al. (2018). For fair comparison, we use 800K sentences for all transfer sets, including TS IMDb. For our BiLSTM student models, we follow Tang et al. (2019) and use ADADELTA (Zeiler, 2012) with its default LR of 1.0 and ρ = 0.95. We train our models for 30 epochs, choosing the best performing on the standard development set. As is standard, for classification tasks, we minimize the negative log-likelihood; for regression, the mean-squared error. Depending on the loss on the development set, we choose either 150 or 300 LSTM units, and 200 or 400 hidden MLP units. This results in a model size between 1 and 3 million parameters. We use the 300-dimensional word2vec vectors trained on Google News, initializing out-of-vocabulary (OOV) vectors from UNIFORM[−0.25, 0.25], following Kim (2014), along with multichannel embeddings.
To fine-tune our pretrained language models, we use Adam (Kingma and Ba, 2014) with a learning rate (LR) linear warmup proportion of 0.1, linearly decaying the LR afterwards. We choose a batch size of eight and one fine-tuning epoch, which is sufficient for convergence. We tune the LR from {1, 5} × 10 −5 based on word-level perplexity on the development set.
Results and Discussion
We present our results in Table 1. As an initial sanity check, we confirm that our BiLSTM (row 11) is acceptably similar to the previous best reported BiLSTM (row 5). We also verify that a transfer set is necessary-see rows 10 and 11, where using only the training dataset for distillation is insufficient. We further confirm that TS IMDb works poorly for the out-of-domain CoLA dataset (row 8). Note that the absolute best result on SST-2 before BERT is 93.2, from Radford et al. (2017), but that approach demands copious amounts of domain-specific data from the practitioner.
Quality and Efficiency
Of the transfer set construction approaches, our principled generation methods consistently achieve the highest results (see Table 1, rows 6 and 7), followed by the rule-based TS MP and the manually curated TS IMDb (rows 8 and 9). TS GPT-2 is especially effective for CoLA, yielding respective 12.5- and 30-point increases in Matthews correlation coefficient (MCC) over TS MP and training from scratch.
Interestingly, on SST-2, the synthetic GPT-2 samples outperform handwritten movie reviews from IMDb. Unlike the rule-based TS MP, our LM-driven approaches outperform ELMo on all four tasks. TS GPT-2, our best method, reaches GPT parity on all but CoLA, establishing domain-agnostic, pre-BERT SOTA on SST-2 and STS-B.
Our models use between one and three million parameters, which is at least 30 and 40 times smaller than ELMo and GPT, respectively. This represents an improvement over the previous SOTA-see the official GLUE leaderboard and Devlin et al. (2018) for specifics.
It should be emphasized that using fewer model parameters does not necessarily reduce the total disk usage. All traditional, word embedding-based models require storing the word vectors, which obviously precludes many on-device applications. Instead, the main benefit is that these shallow Bi-LSTMs perform inference an order of magnitude faster than GPT, which is mostly important for server-based, in-production NLP systems.
Language Generation Analysis
To characterize the transfer sets, we present diversity statistics in Table 2. U3 % denotes the average percentage of unique trigrams (Fedus et al., 2018) across sequential dataset chunks of size M, where M matches the original dataset size for fairness. Specifically, the dataset {x_1, . . . , x_N} is split into K = N/M sequential chunks, the percentage of unique trigrams is computed for each chunk, and the values are averaged over the K chunks. We find that TS GPT-2 and TS TXL (rows 1 and 2) contain more unique trigrams than TS MP, the original training set, and, surprisingly, handwritten movie reviews from IMDb (see rows 3-5). To examine whether the class distribution of the transfer sets matches the original, we compute p/n, the positive-to-negative label ratio. Based on the statistics, we conclude that p/n varies wildly among the methods and datasets, with our LM-generated transfer sets differing substantially on MRPC, e.g., TS GPT-2's 0.41 versus the original's 2.07. This suggests that similar examples are more difficult to generate than dissimilar ones.
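One plausible reading of the U3 % statistic described above is sketched below: the per-chunk fraction of unique trigrams, averaged over sequential chunks of size M. The exact normalization used by the authors may differ.

```python
def trigrams(sentence):
    toks = sentence.split()
    return [tuple(toks[i:i + 3]) for i in range(len(toks) - 2)]

def u3_percent(sentences, M):
    # Assumes len(sentences) >= M; trailing partial chunks are dropped.
    chunks = [sentences[i:i + M] for i in range(0, len(sentences) - M + 1, M)]
    ratios = []
    for chunk in chunks:
        grams = [g for s in chunk for g in trigrams(s)]
        if grams:
            ratios.append(len(set(grams)) / len(grams))
    return 100.0 * sum(ratios) / len(ratios)
```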
Finally, to characterize the LMs, we report GPT-2's and TXL's word-level perplexity (PPL) and bits per character (BPC) on the development sets, as well as the percentage of OOV tokens on the dataset-see Table 3, where lower scores are better. GPT-2 has practically no OOV for English, due to its byte-pair encoding scheme. In spite of using half as many parameters, GPT-2 is better at character-level language modeling than TXL is on all datasets, and its word-level PPL is similar, except on CoLA. As a rough analysis, BPC is a stronger predictor of improved quality than PPL is. Across the datasets, distillation quality strictly increases with decreasing BPC, unlike PPL, suggesting that character-level modeling is more important for constructing an effective transfer set. Generation examples. We present a random example from each transfer set in Table 4 for SST-2. The generated samples ostensibly consist of movie reviews and contain acceptable linguistic structure, despite only one epoch of fine-tuning. Due to space limitations, we show only SST-2; however, the other transfer sets are public for examination in our GitHub repository.
Conclusions and Future Work
We propose using text generation for constructing the transfer set in knowledge distillation. We validate our hypothesis that generating text using pretrained LMs outperforms manual data curation and rule-based techniques: the former in generality, and the latter in efficacy. Across multiple datasets, we achieve OpenAI GPT-level quality using a single-layer BiLSTM.
The presented techniques can be readily extended to sequence-to-sequence-level knowledge distillation for applications in neural machine translation and logical form induction. Another line of future work involves applying the techniques to knowledge distillation for traditional, in-production NLP systems.
"Computer Science"
] |
Physical-Chemical Evaluation of Active Food Packaging Material Based on Thermoplastic Starch Loaded with Grape cane Extract
The aim of this paper is to evaluate the physicochemical and microbiological properties of active thermoplastic starch-based materials. The extract obtained from grape cane waste was used as a source of stilbene bioactive components to enhance the functional properties of thermoplastic starch (TPS). The biomaterials were prepared by the compression molding technique and subjected to mechanical, thermal, antioxidant, and microbiological tests. The results showed that the addition of grape cane extract up to 15 wt% (TPS/WE15) did not significantly influence the thermal stability of the obtained biomaterials, whereas mechanical resistance decreased. On the other hand, among all tested pathogens, the thermoplastic starch-based materials showed antifungal activity toward Botrytis cinerea and antimicrobial activity toward Staphylococcus aureus, suggesting potential application in food packaging as an active biomaterial layer.
Introduction
Environmental pollution and the management of accumulated waste have become one of the major global problems of contemporary society. World production of plastics grew to 335 million tons in 2016, and 70% of the total amount of plastic packaging ended up in landfills [1]. Every year, millions of tons of food packaging waste are generated in landfills, which presents a serious environmental problem. Therefore, the use of biopolymers to produce environmentally sustainable packaging can be a promising solution and an alternative to plastic packaging. The use of biopolymers can reduce the problem of plastic waste accumulation, as well as the quantity of biomass and agro-industrial waste, from which biopolymers are mostly derived.
In addition to biobased food packaging, there is an increased expansion of active packaging on the market [2]. The active packaging material is designed to release active ingredients into food
FTIR Analysis
FTIR spectroscopy is a technique that can give insight into starch granule transformation during thermal processing and detect possible interactions between components in the material. Hence, in order to confirm the plasticization of starch and to evaluate interactions between grape cane extract and starch, FTIR analysis of neat powder starch, compression-molded starch material (TPS) and thermoplastic starch/grape cane extract material (TPS/WE) was performed and presented in Figure 1. The broad band between 3600 and 3000 cm −1 in the spectrum of neat starch is related to stretching vibrations of the OH groups. The band located at 998 cm −1 is attributed to the C-O stretching vibrations of the C-O-C group, whereas bands located at 1152 and 1082 cm −1 correspond to the C-O stretching vibrations of the C-O-H group. The band at 1645 cm −1 is ascribed to δ(O-H) bending of water. It has been reported in the literature that the absorbance band around 1050 cm −1 represents the amount of crystalline structure, and the bands around 1020 and 995 cm −1 are characteristic of amorphous starch [32]. Thermal processing of starch causes a shift of the bands located around 1020 and 995 cm −1 to higher frequencies, whereas the band located around 1050 cm −1 completely disappears in the case of the TPS sample. A new peak located at 1105 cm −1 has been detected in the spectrum of the S sample, confirming the presence of glycerol [33]. Moreover, the band in the region of 3600-3000 cm −1 becomes broader, with higher intensity, and shifts to higher frequencies after thermal processing of starch, indicating the formation of new hydrogen bonds between water, glycerol, and starch. Regarding the spectra of TPS/WE samples, additional peaks have been detected, i.e., at 1608 cm −1 (stretching vibrations of the aromatic C=C double bond) and 1508 cm −1 (in-plane bending vibrations of phenyl C−H bonds) [34,35]. It was reported in the literature that starch could interact with phenolic compounds through the formation of a V-type inclusion complex, where the phenolic compound is tightly complexed inside the cavity of amylose helices, or through the formation of a complex with much weaker binding, mostly through hydrogen bonds [36]. The incorporation of grape cane extract into the starch matrix causes shifts of bands associated with aromatic moieties and (OH) groups, suggesting interactions between starch and bioactive components from the extract via hydrogen bonding and inclusion complexation.
SEM Analysis
The morphology of thermoplastic starch-based materials is shown in Figure 2. All tested materials demonstrate a rough and dense surface. Moreover, it can be seen that the addition of extract into the starch matrix results in a denser structure. In fact, a higher concentration of extract leads to its agglomeration in the starch matrix. It is important to note that the compression molding process initiated the loss of structural order in native starch granules and their constitutive crystals. Still, complete melting of the crystals is not achieved, which is evidenced by random half-melted crystals on the SEM micrograph (SEM of control TPS). Hence, it was possible to convert starch into thermoplastic starch in a compression molding machine, but full conversion did not occur. In order to obtain full conversion, an extrusion step before compression molding is required for better homogenization of gelatinized starch with water and glycerol, and/or a longer processing time in the compression molding machine. However, both steps can promote degradation of the active components from the extract during processing, giving, as a final result, a material with less bioactive potential. Hence, in this work, a compromise was made in favour of the extract activity, keeping the processing of the material as simple as possible and with the shortest processing time required to obtain the material.
Mechanical Analysis
Mechanical resistance is a key parameter for food packaging because the package should maintain its integrity during packaging, transport and storage of food products. The mechanical properties of starch materials are presented in Figure 3. The results demonstrate that the addition of grape cane extract up to 15 wt% causes a decrease in tensile strength and Young's modulus. On the other hand, elongation at break increases for the samples that contain up to 10 wt% of the extract. Further increase of the extract content in the thermoplastic starch matrix leads to a decrease in the elongation at break value. The increase of elongation at break of TPS/WE samples up to a specific content can be explained by weakening of the intermolecular bonds between the starch chains. Hence, the segmental mobility of starch chains increases, thus leading to improved flexibility and reduced tensile strength and Young's modulus of the TPS/WE5 and TPS/WE10 samples. According to these results, it can be concluded that grape cane extract has an additional plasticizing effect on starch, which was expected since the extract is rich in polyphenols. The obtained results are in agreement with data from the literature, where the presence of extracts rich in polyphenols (thymol [37], blackberry pulp [38], carvacrol [39], grape pomace waste [30]) increased elongation at break and decreased the tensile strength of starch films. Moreover, Silva et al. showed that the addition of resveratrol into a cellulose matrix led to reduced tensile strength and enhanced elasticity [40]. However, it is important to note that the concentration of extracts in the starch matrix from the above-mentioned literature did not exceed 10 wt%. In the case of the sample TPS/WE15, tensile strength and elongation at break decrease, probably due to higher agglomeration of grape cane extract particles and their non-homogeneous distribution within the starch matrix.
Thermal Analysis
Thermogravimetric analysis was carried out in order to evaluate the influence of the extract on the thermal decomposition of starch-based films. As can be seen from Figure 4 and Table 1, thermoplastic starch and its composites decompose in three weight-loss steps: (a) weight loss in the range of 50 °C to 100 °C, associated with evaporation of free water, (b) weight loss between 100 °C and 180 °C, associated with release of bound water in the system, and (c) weight loss between 280 °C and 380 °C, associated with degradation of glycerol and starch chains. Grape cane extract shows one degradation step in the range of 120 to 220 °C, related to the decomposition of polyphenols, and also a wide degradation peak in the range of 240-390 °C, which is related to degradation of the active components trans-resveratrol and trans-viniferin [41,42]. The initial degradation temperature (Tonset) of control thermoplastic starch is detected at 280 °C, whereas the maximum degradation rate temperature (Tmax) appears at 312 °C. The Tonset values decrease upon the incorporation of grape cane extract, and this is most pronounced in the case of TPS/WE15. In fact, the DTG curve of TPS/WE15 displays a wide shoulder degradation peak, suggesting decomposition of bioactive components from the extract, glycerol and TPS chains all together. These data are in agreement with research published by Agustin-Salazar et al. [34] and Ortiz-Vazquez et al. [43], where a decrease in thermal stability was observed for PLA and butylated hydroxytoluene films, respectively, with the addition of resveratrol as a bioactive component. Although the thermal stability of the films that contain the grape cane extract is slightly reduced, Tonset is above 270 °C, which is significantly above the processing temperature of TPS in the compression molding machine (140 °C), thus confirming that these formulations can be processed without risk of significant thermal degradation of the neat components.
Antioxidant Capacity
The grapevine cane extract used in the present study is mainly composed of the following stilbenoids: (E)-ε-viniferin and (E)-resveratrol. On the other hand, the solubility of trans-resveratrol (a stilbene model compound) in different alcohol solvents and in water has been measured at different temperatures [44]. The authors found that its solubility increases with temperature but decreases with the number of carbons in the alcohol solvent, and that the solubility in alcohol is higher than in water. This is why we selected water, pure methanol and a methanol/water mixture as stilbene release media for the antioxidant capacity measurements.
The radical-scavenging activity of the neat TPS and TPS/WE samples was assessed by DPPH assay, using three extraction media for the bioactive components from the starch matrix: water, methanol/water 80/20 v/v and methanol (Figure 5). Neat TPS does not show any antioxidant activity, whereas all TPS/WE samples exhibit the highest antioxidant activity when methanol/water 80/20 v/v is used as the extraction medium. This result is expected because grapevine cane extract is not soluble in water, whereas it has moderate solubility in pure alcohol and complete dissolution in alcohol/water 70/30 or 80/20 v/v mixture. The antioxidant capacity of TPS/WE samples increases with an increase of the extract content in the materials. The radical-scavenger capacity ranged from 15% (TPS/WE5) to 39% (TPS/WE15) using water as a solvent, from 38 to 87% in methanol/water 80/20 v/v, and from 31 to 86% in absolute methanol, for 1000 µL of DPPH solution. The high antioxidant capacity of the obtained grape cane extract has already been proven in our previous paper [26] and supported in the literature by other authors [45,46], owing to the presence of phenolic groups in the extract itself. The IC 50 (concentration required to scavenge 50% of DPPH radicals) values of the oligostilbenes caraphenol A and α-viniferin A were determined and compared with the Trolox antioxidant standard by Li et al. [47]. All compounds showed antioxidant activity in a dose-dependent manner, which agrees with the results from this work. The authors reported that the antioxidant reaction could proceed by redox-mediated mechanisms (especially electron transfer and H+ transfer) as well as non-redox-mediated mechanisms. In another study, the scavenging activity of 10 new stilbenoids isolated from the roots of Caragana sinica was measured. Only three of these compounds showed moderate DPPH scavenging activity and lipid peroxidation inhibitory activities, with IC 50 values ranging from 34.7 to 89.1 µM [48]. Regarding starch-based films, it was shown that the incorporation of orange peel oil/zein nanocapsules provided DPPH radical scavenging activity of 30% [49]. Yun et al. obtained 60% DPPH scavenging activity when 4 wt% of Chinese bayberry was added into starch films [50]. On the other hand, when a higher concentration of extract is included in the starch matrix (between 10 and 20%), high antioxidative activity can be obtained. For example, Pineros-Hernandez et al. obtained an antioxidant activity of starch/rosemary extract films (20 wt% of rosemary extract) similar to that obtained in this work [51].
Microbiological Assay
The inhibition growth (IG) of Botrytis cinerea, Mucor indicus, Aspergillus niger, Rhizopus stolonifer, and Geotrichum candidum on the control TPS, grape cane extract and TPS/WE samples was determined. The TPS control sample does not show antifungal activity toward any of the tested pathogens. On the other hand, grape cane extract shows complete growth inhibition only of Botrytis cinerea. TPS/WE samples show moderate antifungal activity toward Botrytis cinerea by reducing the growth rate of the fungus. The growth inhibition rate of Botrytis cinerea at the contact surface is in the range between 29% (TPS/WE5) and 43% (TPS/WE15) (see Figure 6). As the concentration of extract in the thermoplastic starch matrix increases, the growth inhibition of the fungus is higher. Moreover, spore germination was not detected, which is important in the control of phytopathogens, because a lack of spore germination inhibits reproduction and dissemination of the fungus. This result implies that TPS/WE material can be used as a supportive layer in food packaging, but only placed directly in the contact zone with food products (fruits), thus preventing their further contamination or spoilage during storage and transport.
The antifungal activity of TPS/WE samples is mainly attributed to the presence of bioactive stilbenoids in the extract. In fact, the high antifungal activity of resveratrol and the moderate activity of viniferin toward Botrytis cinerea have already been proved by several authors [28,52,53]. On the other hand, TPS/WE samples do not show any antifungal effect against the other tested fungal pathogens. Regarding the antifungal activity of films containing resveratrol or viniferin, there are not enough literature data to explain the selective antifungal behavior obtained in this work. Pastor et al. pointed out that chitosan-methylcellulose/resveratrol films did not have antifungal activity toward Botrytis cinerea and Penicillium italicum, due to the low release of resveratrol from the biopolymer matrix into the environment [54]. However, Lozano-Navarro et al. obtained moderate antifungal activity toward Penicillium notatum, Aspergillus niger and Aspergillus fumigatus [55].
Antimicrobial activity tests of TPS/WE samples toward E. coli, S. aureus and Salmonella typhimurium were also performed. The control TPS film does not show any antimicrobial activity, as expected. TPS/WE samples show negligible antimicrobial activity in the contact zone toward E. coli and moderate antimicrobial activity toward S. aureus. On the other hand, the samples do not show any antimicrobial activity toward Salmonella typhimurium. It is interesting to note that all samples tested against S. aureus give two halo zones, the first related to 100% growth inhibition in an area of 13 × 13 mm (TPS/WE15) and the second to moderate growth inhibition in an area of 35 × 35 mm (see Figure 7). These results are in agreement with the data in the literature. Li et al. pointed out that resveratrol was most efficient for the growth inhibition of S. aureus and less active toward E. coli and C. albicans [56]. Moreover, Paulo et al. observed higher antimicrobial activity of resveratrol toward Gram-positive bacteria (Bacillus and S. aureus) than Gram-negative bacteria (E. coli, Salmonella and Klebsiella), suggesting that the antimicrobial mechanism of resveratrol disrupts the microbial cell cycle, i.e., microbial growth, as evidenced by changes in cell morphology and DNA contents [57].
Materials and Methods
Corn starch with a molecular weight of 50,000 g/mol was obtained from Corn Products Chile Inducorn S.A. Glycerol was purchased from OCN company (China).
Extraction Method from Grape cane Waste
The detailed extraction procedure and characterization of active components from grape vine (Vitis vinifera L.) canes were briefly described in a Chilean Patent [58]. Namely, Pinot Noir grape canes pruned in the winter of 2014 at De Neira Vineyard, Bio-Bio region, Chile, were used as a source of bioactive compounds. After storage for over 3 months at 19 °C ± 5 and 70% relative humidity, the grape canes were chopped in a Retsch grinder (model SM) at 300-2000 rpm and immersed in a reactor that contained ethanol/water solution (80:20 v/v) at 80 °C for 100 min. After solvent evaporation, the extract was collected and spray-dried using a BHS Büttner-Schilde-Haas AG dryer at a rate of 15 The antifungal activity of TPS/WE samples is mainly attributed to the presence of bioactive stilbenoids in the extract. In fact, high antifungal activity of resveratrol and moderate activity of viniferin toward Botrytis cinerea has already been proved by several authors [28,52,53]. On the other side, TPS/WE samples do not show any antifungal effect against other tested fungi pathogens. Regarding the antifungal activity of films containing resveratrol or viniferin, there is not enough data literature to be able to explain such selective antifungal behavior obtained in this work. Pastor et al. pointed out that chitosan-methylcellulose/resveratrol films did not have antifungal activity toward Botrytis cinerea and Penicillium italicum, due to the low release of resveratrol from the biopolymer matrix into the environment [54]. However, Lozano-Navarro et al. obtained moderate antifungal activity toward Penicillum notatum, Aspergillus niger and Aspergillus fumigatus [55].
Antimicrobial activity tests of TPS/WE samples toward E. coli, S. aureus and Salmonella typhimurium were also performed. The control TPS film does not show any antimicrobial activity, as it is expected. TPS/WE samples show neglected antimicrobial activity in the contact zone toward E. coli and moderate antimicrobial activity toward S. aureus. On the other side, samples do not show any antimicrobial activity toward Salmonella typhimurium. It is interesting to note that all samples tested against S. aureus give two halo zones, first related to 100% of growth inhibition in area of 13 × 13 mm (TPS/WE15) and moderate growth inhibition in area 35 × 35 mm (see Figure 7). These results are in agreement with the data in the literature. Li et al. pointed out that resveratrol was most efficient for the growth inhibition of S. aureus and less active toward E.coli and C. albicans [56]. Moreover, Paulo et al. observed higher antimicrobial activity of resveratrol toward Gram positive bacteria (Bacillus and S. aureus) than Gram negative bacteria (E. coli, Salmonella and Klebsiella), suggesting that the antimicrobial mechanism of resveratrol disrupts the microbial cell cycle, i.e., microbial growth, evidenced by changes in cell morphology and DNA contents [57]. Antimicrobial activity tests of TPS/WE samples toward E. coli, S. aureus and Salmonella typhimurium were also performed. The control TPS film does not show any antimicrobial activity, as it is expected. TPS/WE samples show neglected antimicrobial activity in the contact zone toward E. coli and moderate antimicrobial activity toward S. aureus. On the other side, samples do not show any antimicrobial activity toward Salmonella typhimurium. It is interesting to note that all samples tested against S. aureus give two halo zones, first related to 100% of growth inhibition in area of 13 × 13 mm (TPS/WE15) and moderate growth inhibition in area 35 × 35 mm (see Figure 7). These results are in agreement with the data in the literature. Li et al. pointed out that resveratrol was most efficient for the growth inhibition of S. aureus and less active toward E.coli and C. albicans [56]. Moreover, Paulo et al. observed higher antimicrobial activity of resveratrol toward Gram positive bacteria (Bacillus and S. aureus) than Gram negative bacteria (E. coli, Salmonella and Klebsiella), suggesting that the antimicrobial mechanism of resveratrol disrupts the microbial cell cycle, i.e., microbial growth, evidenced by changes in cell morphology and DNA contents [57].
Materials and Methods
Corn starch with a molecular weight of 50,000 g/mol was obtained from Corn Products Chile Inducorn S.A. Glycerol was purchased from OCN company (China).
Extraction Method from Grape cane Waste
The detailed extraction procedure and characterization of active components from grape vine
Materials and Methods
Corn starch with a molecular weight of 50,000 g/mol was obtained from Corn Products Chile Inducorn S.A. Glycerol was purchased from OCN company (China).
Extraction Method from Grape cane Waste
The detailed extraction procedure and characterization of active components from grape vine (Vitis vinifera L.) canes were briefly described in a Chilean patent [58]. Namely, Pinot Noir grape canes pruned in the winter of 2014 at De Neira Vineyard, Bio-Bio region, Chile, were used as a source of bioactive compounds. After storage for over 3 months at 19 ± 5 °C and 70% relative humidity, the grape canes were chopped in a Retsch grinder (model SM) at 300-2000 rpm and immersed in a reactor that contained ethanol/water solution (80:20 v/v) at 80 °C for 100 min. After solvent evaporation, the extract was collected and spray-dried using a BHS Büttner-Schilde-Haas AG dryer at a rate of 15 mL/min, operated with an inlet temperature of 160 ± 5 °C, an outlet temperature of 60 ± 5 °C, and injected compressed air at 40 MPa. The spray-dried grape cane extract (WE) was stored at room temperature in aluminum containers. According to HPLC analysis, the main bioactive components of the obtained extract are trans-resveratrol (14.3 mg/L) and trans-ε-viniferin (29.0 mg/L).
Preparation of Starch-Material
In order to obtain control thermoplastic starch, 500 g of corn starch was homogenized with 150 g of glycerol and 25 g of water at 45 °C and a speed rate of 2800 rpm in a high-speed blade mixer (Cool Mixer, Labtech model LCM-24). Afterward, 25 g of homogenized starch was placed between two stainless steel plates that were covered with a Teflon sheet. The dimension of the mold was 100 × 100 × 0.5 mm³. The starch samples were pressed in a Labtech LP-20B hydraulic press at an applied pressure of 70 bar for 3 min at 140 °C. The resulting material was cooled for 1 min before being unmolded. This material is coded as TPS. TPS/WE materials were prepared by homogenization of corn starch, water, glycerol, and WE at different concentrations (5, 10 and 15 wt% per mass of starch) in a high-speed blade mixer, following the same procedure as for control TPS. The concentration of glycerol and water was kept constant in all formulations. The code formulations were TPS/WE5, TPS/WE10 and TPS/WE15 for samples containing 5, 10 and 15 wt% of WE, respectively.
FTIR Analysis
FTIR spectra of the thermoplastic starch-based materials were obtained at room temperature with a Jasco FT/IR 400 spectrometer in the range of 4000-400 cm⁻¹ at a resolution of 4 cm⁻¹.
SEM Analysis
The morphological analysis was performed by an ETEC autoscan SEM (Model U-1, University of Massachusetts; Worcester, MA, USA). The samples were fixed in a sample holder and covered with a gold layer for 3 min using an Edwards S150 sputter coater (BOC Edwards, São Paulo, Brazil).
Mechanical Analysis
The tensile test was performed on an Instron model 1185 dynamometer equipped with a 1 kN load cell, according to the procedure described in the ASTM D638 standard. The crosshead speed was 10 mm min⁻¹. All measurements were carried out at room temperature and 50% relative humidity. The reported data are the average values of six determinations. The obtained values of the tensile strength, elongation at break and Young's modulus were within ±10%. The thermal stability of the TPS and TPS/WE sheets was monitored with a NETZSCH TG 209 F3 Tarsus® thermal analyzer. The measurements were carried out at a heating rate of 10 °C/min under nitrogen atmosphere from ambient temperature to 500 °C. For each composition, the thermogravimetric tests were performed in duplicate.
Antioxidant Capacity
For the determination of the antioxidant capacity of the biomaterials, the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical was used, according to the methodology previously described by Ventura-Aguilar et al. [59]. Briefly, 1 cm² of each TPS/WE material was macerated with three solvents: distilled water, methanol/water (80:20), and methanol, and centrifuged at 8500 rpm for 10 min. Twenty microliters of the supernatant were recovered, mixed with 750 µL of DPPH radical solution (133.33 µM) and incubated for 30 min at room temperature. The absorbance was measured at 517 nm using a Genesys 10s UV-VIS spectrophotometer. The results were expressed as a percentage of DPPH radical scavenging according to Equation (1), where Ab and As represent the absorbance of the blank and the sample, respectively. % DPPH reduction = (Ab − As)/Ab × 100 (1)
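As a quick check of Equation (1), a minimal Python sketch is given below; the absorbance values in it are illustrative placeholders, not measured data from this study.

```python
def dpph_reduction(ab_blank: float, ab_sample: float) -> float:
    """Percentage of DPPH radical scavenging, Equation (1):
    %DPPH = (Ab - As) / Ab * 100."""
    if ab_blank <= 0:
        raise ValueError("blank absorbance must be positive")
    return (ab_blank - ab_sample) / ab_blank * 100.0

# Illustrative absorbances at 517 nm (hypothetical values)
print(f"{dpph_reduction(0.812, 0.534):.1f} % DPPH reduction")
```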
In vitro Antimicrobial and Antifungal Activity
The antimicrobial assays were performed against three ATCC bacterial strains: E. coli ATCC 25922, Salmonella typhimurium ATCC 14028 and S. aureus ATCC 25923. The inoculum was prepared using a direct colony suspension method in nutritive broth. The bacterial growth turbidity was established by the McFarland 0.5 method (1 × 10⁸ CFU/mL). A cotton swab moistened with the bacterial suspension was used to inoculate the nutritive agar, distributing it over the entire surface of the Petri plates. The plates were left to dry for 10 min, and afterward the corresponding starch samples were placed on them. The inhibition zones were determined after 24 h of incubation at 35 ± 2 °C.
For the antifungal assays, the microorganisms Botrytis cinerea, Mucor indicus, Aspergillus niger, Rhizopus stolonifera, and Geotrichum candidum were grown separately on Potato Dextrose Agar (PDA) Petri plates for a period of 3 weeks at 25 °C. Each starch sample (1 cm × 1 cm) was placed on a PDA plate surface seeded with a 5 mm fungal spore disc. The fungal plates were incubated at 25 °C for 7 days, and mycelial growth was measured daily using a Vernier caliper in order to evaluate the diameter reached by the mycelium over time. Analyses were carried out in triplicate.
Conclusions
Grape cane extract, obtained from viticulture residues and possessing antifungal/antimicrobial properties, was included in different ratios in thermoplastic starch materials by a compression molding technique. These materials were characterized by various techniques in order to evaluate their physical-chemical properties and potential usage in the food packaging sector. Materials containing the highest ratio of grape cane extract (15 wt%) showed sufficient thermal stability, moderate mechanical resistance and the highest antifungal and antimicrobial activity, confirming that viticulture waste could be a good source of natural, non-toxic active components in comparison to commonly used synthetic fungicides and could be efficiently incorporated into thermoplastic biopolymers, acting as a bioactive food packaging layer. | 8,878.8 | 2020-03-01T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
DOES CORPORATE TAXATION AFFECT ECONOMIC GROWTH? – A DYNAMIC APPROACH FOR OECD COUNTRIES
This contribution deals with issues of corporate taxation in relation to economic growth. Its main objective is to quantify and analyse the relation between corporate taxation and economic growth using data for OECD countries. The corporate tax rate is approximated by effective corporate tax measures such as the corporate tax quota, marginal and average effective tax rates determined by the micro-forward-looking approach, and the alternative World Tax Index. The relation between taxation and economic growth is verified using an econometric model based on panel regression methods and tests using a dynamic panel. The model has shown a negative impact on economic growth for all six of the selected corporate tax approximators at the assumed significance level. A quantitatively higher negative impact has been verified in the case of labour taxation.
INTRODUCTION
The global economy experienced sharp growth followed by a decrease caused by the economic (financial) crisis in the last decade, and national economic policy-makers are still trying to handle its consequences. Currently, individual countries face mainly debt issues, caused especially by fiscal policy, over-indebtedness of the private sector and a decrease in economic activity. The crucial question is how to set up fiscal systems in a way that supports economic growth and simultaneously maintains budget discipline, with a focus on decreasing current budget deficits.
The existence of the public sector requires the immediate need for tax collection, but until now the issue of optimal taxation and the composition of the tax mix remains in the hands of individual countries. The consolidation of public budgets is realized mainly on the income side (tax policy); the main reason is the high share of mandatory expenditures in total government expenditure, which limits active expenditure-side economic policy. The effort to find the optimal level of taxation appears to be inevitable. The above must also be realized with respect to the dynamic side of the economy. Taxation needs to be set up in a way that allows governments to fulfil their targets without any deformation of the economy. Correctly designed systems can lead to an optimal allocation of resources and to higher economic growth.
From a global point of view, the possibility of adequately approximating tax systems provides a source for subsequent economic analysis, and conclusions about the use of suitable tax rates can provide a basis for further studies. The possibility to compare tax systems and their implementation clearly provides a new view of the tax system as a whole. The tax system, which also includes the taxation of corporations, is part of the economic policy of a country, and the choice of a correct indicator of the tax burden enables a suitable evaluation of the economic environment of the country. Corporate tax is mainly related to capital.
Capital is considered to be a highly mobile factor of production. Investors need to carefully consider individual taxation systems, as these are usually very complex, and to allocate capital to the country with the most convenient tax system. The taxation of corporations influences not only revenues but also the distribution of profit. At the same time, capital (physical capital) is an elementary source of economic growth; hence the taxation of corporations has a direct impact on both capital accumulation and economic growth. The remaining question is how to correctly approximate taxation so that the final indicator reflects economic reality in the best possible way.
Studies which focus on taxation and economic growth usually use different variables approximating the tax burden (tax quota, implicit tax rate). This approach reflects the elementary tax burden, but it basically represents the share of tax revenues in a base value. It also omits the dynamics of the economic process, as it uses only cross-sectional data, and can therefore lead to biased conclusions. The presented paper utilizes not only the above-mentioned variables (tax quota, implicit tax rate) but also effective marginal and average tax rates. The paper also uses the World Tax Index. All variables are incorporated by dynamic panel regression.
Taxation, on the one hand, presents a burden on economic subjects, on the other hand it also represents significant income for government expenditures.
Studies focusing on taxation and economic growth very often neglect the complexity of tax systems. Denaux (2007) and Izák (2011) note that it is very important to include government expenditures in models analysing the impact of taxation on economic growth, as they represent one of the aspects of taxation. With regard to the modern approach to taxation and economic growth (e.g. Kotlán and Machová, 2014a), it is suitable to also include other fiscal variables, i.e., other kinds of taxation. Only then can the evaluation of the effect of taxes on economic growth be considered comprehensive.
The aim of the paper is to evaluate the relation between corporate taxation and economic growth. We expect to confirm the negative impact of corporate taxation on economic growth on the sample of OECD member countries. The analysis is based on the neoclassical growth model extended by the human capital. The model also takes into consideration all the main types of taxation and government expenditures.
THE IMPACT OF TAXATION ON ECONOMIC GROWTH -CURRENT STATE OF KNOWLEDGE
There are many factors which affect the speed and size of economic growth. These can include climate, education, property rights, savings, access to ports etc. Generally, the sources of economic growth can be divided into human and capital sources. As Frait and Červenka (2002) state, human sources are characterised by the growth of labour productivity and an increase in work effort. Similarly, this holds for capital, which is influenced by the stock of real capital and the technical level of capital goods. The accumulation of those determinants is derived from the motivation of individuals to save and invest, which then leads to changes in economic growth. The relation of taxes to economic growth can be considered from many aspects. Taxation can be perceived as a feature which burdens economic subjects and their behaviour and therefore influences their willingness to save or invest, or their work effort. It can also be viewed as an instrument which ensures resources for government expenditures, which can be directed to areas supporting economic growth (productive government expenditures, see below). With respect to the aforementioned, it is necessary to see taxation in its wider context.
Among the first studies which noted the possible relation between taxation and long-term economic growth were, e.g., Barro (1999) and King and Rebelo (1990). The impact of taxation on total economic growth was studied by Judd (1985), Chamley (1986), Rebelo (1991) and Devereux and Love (1994); their papers are based either on the neoclassical growth model with physical capital or on the two-sector growth model with human and physical capital. Their common conclusion supports the idea that the three most commonly used taxes (consumption, corporate, and labour taxation) have a negative impact on economic growth within the OECD member countries. They consider corporate taxation, followed by income taxes and consumption taxes, as the most damaging for economic growth. Similar results for corporate taxation were also obtained by Lee and Gordon (2005). On the other hand, there are also analyses that did not confirm this conclusion, although these are more an exception than the rule. For example, Forbin (2011) analysed the Swedish economy for the 1951-2010 period and did not confirm any significant relation between corporate taxation and long-term economic growth; he also admits that if he had used marginal effective tax rates, the conclusions could have been different.
In the case of property and consumption taxes, there are countless studies showing their low distortionary effect and nearly no impact on economic growth (e.g. Arnold, 2010; Johansson et al., 2008; or Widmalm, 2001). To support economic growth, Myles (2009) recommends shifting taxation from income to consumption. He also adds that taxation of capital is ineffective in the long term. A newer study by Gemmell et al. (2014) explores the merits of macro- and micro-based tax rate measures within an open economy; their conclusion is that, in general, tax effects on GDP operate largely via factor productivity rather than factor accumulation. Engen and Skinner (1996) define five main channels through which corporate taxation influences economic growth: (i) discouragement of investment, (ii) impact on the labour supply, (iii) a decrease of the productivity growth of corporations, (iv) a decreasing marginal productivity of capital, and (v) effects on the effective utilization of human capital. All the above-mentioned channels are usually connected to corporate and labour taxation. This fact is also confirmed by the knowledge of the distortionary effect of taxation, which influences the behaviour of economic subjects. Cullen and Gordon (2002) conclude that tax policy is a key factor influencing business activity in the sense of movement between employment and self-employment. Kotlikoff and Summers (1987) support the opinion that the taxation of corporations leads to a lower return on capital, which as a result tends to move out of the country. Kotlán et al. (2011) state that the integration of taxation into growth theories can be divided into two main streams. The first focuses on the impact on the level of savings, investment and capital accumulation; the pro-growth effect is notable mainly in countries which have not yet reached a steady state. The second stream analyses integration via technological progress and the accumulation of human capital; the final effect concerns mainly countries which have already reached the steady state.
The relationship between economic growth, corporate taxation and the economic activity of corporations is probably the most important and also the most commonly discussed in empirical studies. Many published papers also study the impact of taxation on corporate decision making and its influence not only on investment decisions but also on dividend policy, organizational structure etc. (e.g. Scholes and Wolfson, 1992; Auerbach and Slemrod, 1997; Shackelford and Shevlin, 2001). The results unambiguously confirm the impact of corporate taxation on corporate policy. Tax policy has a significant impact on how corporations finance themselves. The capital for new investments can be obtained through own equity, debt or undistributed profit. High tax rates lower the income of corporations and therefore the possibility of subsequent reinvestment. Simultaneously, the international movement of capital allows an easy choice of investment allocation. For small open economies, which are usually recipients of investment, high taxation represents a competitiveness problem. The inflow of foreign investment has its positives, e.g. for the employment level. Harberger (1962) believes that high corporate tax rates discourage investment activity. The relation between foreign direct investment and corporate taxation was confirmed by, e.g., Simmons (2003), whose study presented an index evaluating the attractiveness of a country based on corporate taxation. The impact of tax rate changes on intensive investment is also studied by Devereux (2007) and De Mooij and Ederveen (2003); they conclude that this kind of investment is more sensitive to tax law changes and to the average tax rate than standard investments. The analyses of Buettner and Ruf (2007) and Buettner and Wamser (2009) point out that corporate taxation influences both the extent and the allocation of investments. Keuschnigg (2008) created a model of a monopolistically competitive industry with extensive and intensive investments and showed how marginal changes in those investments react to changes in the average and marginal corporate tax rates. Lanaspa et al. (2008) note that governments have the ability to influence the location decisions of corporations (in the case of FDI) through the tax rate on capital income; they confirm the general conclusion that countries with a lower tax burden are net receivers of FDI. Mutti and Grubert (2004) study the impact of these types of taxes on horizontally integrated international organizations which consider investing in another country; they conclude that investments abroad are very sensitive to tax rates, and this sensitivity is higher in developing countries than in developed countries and also grows over time. Peretto (2007) provides a different view of corporate tax; his work is based on modern Schumpeterian growth theory. He concludes that higher dividend taxation has a positive impact on economic growth as it balances the deficit of the government budget.
The investment activities of companies can be influenced by other taxes as well; it is easier to verify the impact of direct taxes. Brett and Weymark (2008) believe that individual pension taxes also have an immediate effect on capital accumulation and the creation of savings: a lower pension reduces intended savings, and the effect also works via a lower yield on savings. Lubian and Zarri (2011) mention both negative and positive impacts of pension taxation. The negative impact is represented by (i) the decrease of disposable income and savings and (ii) tax evasion in the case of capital incomes. The positive impact is based on the idea of growing work effort with the aim of achieving a particular value of the pension before taxation. The pressure on wage growth resulting from growing labour taxation makes labour supply rigid and therefore creates pressure on the decrease of corporate profits and, later, on the decrease of investment. As a result, the structure of capital accumulation is disrupted.
Taxation of dividends represents another approach to the investment activity of economic subjects. On a theoretical level there are three approaches. The traditional one views new equity as the marginal source of investment, with investment yields used for dividend payments. The new one sees undistributed profit as the marginal source of investment. It can be noted that whereas the traditional approach attributes an impact of dividend taxation on investment activity, the new approach holds the opposite opinion (e.g. Bradford, 1981; King, 1977; Poterba and Summers, 1985). The third approach applies the theory of tax irrelevance; its supporters claim that investors do not face different taxation of dividends and capital gains (e.g. Miller and Scholes, 1982; Miller and Modigliani, 1961). Under the assumption that this theory is valid, a change of dividend taxation does not influence investment decision making and the taxation is considered non-distortionary.
Savings represent the most important factor determining long-term economic growth, and based on the above it is obvious that corporate taxation is, together with labour taxation, a key factor influencing capital accumulation.
In the case of endogenous models of economic growth, it is also necessary to mention approaches to the impact of taxation on technological progress and on investment in human capital. The number of studies handling this issue is not large. Some papers support the idea of an immediate impact of taxation on the accumulation of both physical and human capital (Leibfritz et al., 1997; King and Rebelo, 1990). At the level of corporate taxation the conclusions vary, and a clear impact has not been confirmed empirically. For example, Tremblay (2010) highlights that the relation between corporate taxation and investment in human capital is not neutral: he shows a negative impact in the case that both employees and corporations are engaged in investment in human capital, whereas if only corporations are involved, the impact is positive. On the other hand, if we analyse the issue from the side of public finances (tax revenues), there is a positive correlation between economic growth and taxation (Lin, 2001); this relation exists mainly if the tax revenues are used for the accumulation of human capital. Myles (2007) and Erosa and Koreshkova (2007) state that mainly personal income tax has an essential impact on the return on investment in human capital and on decisions about future education. Tremblay (2010) adds that if the investment in human capital is made by both the employee and the corporation, the level of investment in human capital will increase in the case of higher taxation of personal income; conversely, the effect of corporate taxes is the opposite. Zeng and Zhang (2001) study the growth effect of taxes within Howitt's (1999) growth model, where the main source of growth is innovation. They conclude that the taxation of capital income is harmful to growth, as it discourages the creation of savings and capital investment. In the case of technologically advanced countries, where innovation is key for long-term growth, they recommend focusing on consumption and labour taxes instead of taxing investment. The impact of taxes on economic growth is also studied in the sense of tax incentives aimed at research and development. The economic literature confirms that short-term responses to incentives for research and development are relatively inelastic, while in the long term their elasticity is close to one, and there is a positive relation between economic growth and tax incentives (Bloom et al., 2002; Hall and Van Reenen, 2000).
For government expenditures, two aspects are important: their productivity and their efficiency. To evaluate the impact of government expenditures on economic growth properly, it is necessary to take the above-mentioned aspects and the connection between taxation and government expenditures into account. It can be assumed that a growth-supporting effect belongs to government expenditures which are financed by non-distortionary taxes; on the other hand, non-productive government expenditures which are financed by distortionary taxes have an anti-growth effect (for more details see, e.g., Afonso and Furceri, 2008; Agénor, 2010). Devarajan et al. (1996) point out the significance of the difference between productive and non-productive government expenditures. They support the opinion that there is a positive relationship between economic growth and public investment expenditures, while the relation between consumption-related public expenditures and economic growth is negative. Mainly investment expenditures and expenditures on education are considered productive government expenditures; non-productive expenditures are represented by mandatory expenditures (mainly social expenditures). Drobiszová and Machová (2015) add that government expenditures also indirectly support economic growth by creating suitable institutional conditions for private investment; if private investment were absent or not realized in the economy, its functioning would be disturbed.
From the above literature review it is obvious that the impact of corporate taxation on economic growth is realised through the saving and investment channel, and its impact is negative. The impact on economic growth through human capital is rather negative, and the impact through technological progress is not clear. For government expenditures, their composition is crucial: in the case of productive expenditures the impact is positive, in the case of non-productive expenditures negative.
METHODOLOGY AND DATA
The presented paper is based on the Mankiw et al. (1992) growth model, which represents the basic neoclassical model of economic growth extended by human capital. The model also includes other fiscal variables which, together with the lagged explained variable characterizing the dynamics of the economic relation, modify the whole model.
Economic variables can be perceived as dynamic processes in time. It can therefore be expected that the current growth rate is determined, among other things, by its lagged value. The integration of taxation into the model needs to be performed in a comprehensive way; because of that, the model also includes the other taxes which exist in the tax systems of the chosen countries. This approach is consistent with the modern approach to economic agents as defined by, e.g., Kotlán and Machová (2014a). Judd (1987) claims that it is desirable to estimate the impact of all taxes on economic growth. Denaux (2007) and Izák (2011) add that it is also necessary to quantify the impact of other fiscal variables, mainly government expenditures. Because of that, the model is extended by control tax variables and government expenditures.
Analysis of the relation between corporate taxation and economic growth is based on dynamic panel regression. Panel regression as a statistical-econometric method investigates relations in a two-dimensional space. Panel data enable the combination of the time and cross-section dimensions of the data, and at the same time the statistics are more reliable and robust. With respect to the data used, the estimation is performed using the Generalized Method of Moments (GMM), specifically the Arellano-Bond estimator (Arellano and Bond, 1991), which uses instrumental variables. To obtain a consistent estimate and to remove unobserved individual heterogeneity, first differences are used, so the first-differenced form of GMM with instrumental variables is applied (details in Baltagi, 2010). Baltagi (2010) states that dynamic relations are usually characterized by a lagged dependent variable, so the model can be defined as follows (1):

y_it = δ y_i,t−1 + x′_it β + u_it, (1)

where i = 1, 2, . . . , N, t = 1, 2, . . . , T, δ is a scalar, x′_it is a (1 × K) vector of explanatory variables, β is a (K × 1) vector of regression coefficients, and u_it is the error term given by equation (2):

u_it = µ_i + ν_it, (2)

where µ_i represents individual effects and ν_it is the idiosyncratic error; µ_i and ν_it are independent of each other. The model presented above is a model with fixed effects, which are commonly used in macroeconomics, as the individual effects capture omitted variables. It is possible that the characteristics of individual entities are correlated with other regressors.
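To make the estimation logic concrete, the following minimal Python/numpy sketch (synthetic data, not the paper's OECD dataset) simulates model (1)-(2), removes the individual effects µ_i by first differencing, and estimates δ with a simple instrumental-variable (2SLS) step using y_{i,t−2} as an instrument for Δy_{i,t−1}. This captures the core idea behind the Arellano-Bond estimator; the full estimator uses the whole set of lagged instruments and GMM weighting.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, delta, beta = 200, 8, 0.5, 1.0

# Simulate y_it = delta*y_{i,t-1} + beta*x_it + mu_i + nu_it
mu = rng.normal(size=N)                       # individual effects
x = rng.normal(size=(N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = delta * y[:, t - 1] + beta * x[:, t] + mu + rng.normal(scale=0.5, size=N)

# First differences remove mu_i:  dy_t = delta*dy_{t-1} + beta*dx_t + dnu_t
dy = np.diff(y, axis=1)                       # column j corresponds to t = j + 1
dx = np.diff(x, axis=1)

dep = dy[:, 2:].ravel()                       # dy_t for t = 3..T-1
lag = dy[:, 1:-1].ravel()                     # dy_{t-1}, endogenous regressor
dxs = dx[:, 2:].ravel()                       # dx_t, exogenous regressor
inst = y[:, 1:T - 2].ravel()                  # y_{t-2} in levels, instrument for dy_{t-1}

# Simple 2SLS: project the regressors on the instruments, then run OLS
Z = np.column_stack([inst, dxs])              # instrument set
X = np.column_stack([lag, dxs])               # regressors
fitted = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
coef = np.linalg.lstsq(fitted, dep, rcond=None)[0]
print(f"estimated delta = {coef[0]:.3f}, beta = {coef[1]:.3f}")
```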
The individual variables are defined below in Tab. 1; the last column states the source of the data. All data used are quantitative and secondary, and their collection was performed in a way that ensures their consistency and comparability. A review of the descriptive statistics of the input data is provided in the appendix.
As Kotlán (2010) states, in accordance with Barro and Sala-i-Martin (2004), for the sample of chosen countries it is appropriate to apply a homogeneity criterion; this requirement is fulfilled by the membership of all chosen countries in the OECD 1. The time period of the analysis is 2000-2014. Four models are created, and these models reflect the impact of corporate taxation on economic growth. In the first model, taxation (TAX) is approximated by the part of the tax quota representing the tax burden of corporations (TQ1200), with control tax variables being taxation of personal income (TQ1100), social insurance (TQ2000), property taxes (TQ4000), general consumption taxes (TQ5110) and special consumption taxes (TQ5120). Based on the micro-forward-looking approach, corporate taxation is approximated by the effective average tax rate (EATR) and the effective marginal tax rate (EMTR), which represent the second and third model. For these rates there is no equivalent measure directly considering labour, property or consumption taxation that would be based on the same methodology; taxation of labour and property is already considered within the indicators (in detail, Spengel et al., 2014), and consumption taxation is reflected by the partial tax quotas (TQ5110 and TQ5120). The fourth model applies an alternative way of approximating the tax burden, the World Tax Index and its sub-index Corporate Income Tax (CIT); the control variables are represented by the sub-indexes Personal Income Tax (PIT), Value Added Tax (VAT), Individual Property Taxes (PRO) and Other Taxes on Consumption (OTC).
Kotlán and Machová (2014b) point out that the fiscal policy horizon and its delay are important for economic policy efficiency, the economic cycle and long-term growth. Therefore it is desirable to reflect the dynamics of the model with a focus on the possibilities of quantitative methods. Kotlán and Machová (2014b) also note that the effect of tax policy is most visible with a 2-3 year delay. The aim of the following analysis is to reflect the fiscal (tax) policy delay, and therefore the individual fiscal variables will be lagged by 1-4 years. The analysis is performed in E-Views (8).
RESULTS AND DISCUSSION
The following part describes the results of the dynamic panel model. To obtain robust estimates of the individual models it is necessary to adjust the data. All time series apart from EATR and EMTR were transformed to their logarithmic form (LOG). It is not possible to transform EATR and EMTR because, owing to the micro-forward-looking approach, some of their values are negative. Lammersen and Schwager (2005) explain that negative values are the result of the cost of capital being lower than the real interest rate. This suggests that there is indirect tax support of investments, which increases the rate of profit after taxation compared to its value before taxation.

[Notes to Tab. 1: a The index was created from Penn World Table 9.0 and is based on the study by Feenstra et al. (2015). b The methodology is based on Devereux and Griffith (1998).]

This paper applies the Arellano-Bond estimator, which ensures the elimination of the endogeneity issue, as it transforms the variables into their first differences, and the transformed variables do not contain a unit root (so they are stationary). It is convenient to obtain data that are stationary mainly in first differences. Stationarity testing for panel data can be performed with panel unit root tests (Levin et al., 2002; Im et al., 2003) and the ADF and PP tests (Maddala and Wu, 1999). All these tests have the same null hypothesis, namely the existence of a unit root, while the alternative hypothesis varies. In the case of the Levin, Lin and Chu test the alternative hypothesis states that there are no unit roots; the alternative hypotheses of the other tests state that some objects have unit roots (in detail, Novák, 2007, or Baltagi, 2010). The existence of a unit root was tested both in levels and in first differences. All variables apart from human capital were stationary in first differences, so given the applied methodology it was not necessary to adjust those time series. Therefore, to obtain valid results only HUM was adjusted: its stochastic instability was removed by transforming the variable to its first difference. The adjusted variable was again tested for unit roots, and the results show that the variable is stationary in its second difference. The above follows the studies of Xiao et al. (2010) and Kitamura and Phillips (1997), who state that even though a dependent variable is non-stationary, the GMM method provides consistent estimates.

[Source of Tab. 2: E-Views (8). Note: *, **, *** represent significance levels of 10%, 5% and 1%.]
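As an illustration of the unit-root screening step, the sketch below (Python, statsmodels; the data frame, variable name and lag cap are hypothetical placeholders, not the paper's dataset) runs an Augmented Dickey-Fuller test on each country's series in levels and in first differences. The panel tests cited above (Levin-Lin-Chu, Im-Pesaran-Shin, Maddala-Wu) pool such individual statistics across countries.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical long-format panel: one row per country-year
df = pd.DataFrame({
    "country": np.repeat(["AUT", "CZE", "DEU"], 15),
    "year": np.tile(np.arange(2000, 2015), 3),
    "hum": np.random.default_rng(1).normal(size=45).cumsum(),  # illustrative series
})

def adf_pvalues(series: pd.Series) -> tuple:
    """ADF p-values for the series in levels and in first differences.
    maxlag is capped because the annual series here are short."""
    p_level = adfuller(series.dropna(), maxlag=2, autolag="AIC")[1]
    p_diff = adfuller(series.diff().dropna(), maxlag=2, autolag="AIC")[1]
    return p_level, p_diff

for country, grp in df.groupby("country"):
    p_lvl, p_dif = adf_pvalues(grp.sort_values("year")["hum"])
    print(f"{country}: ADF p-value levels={p_lvl:.3f}, first differences={p_dif:.3f}")
```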
It has been empirically proved (e.g. Kotlán and Machová, 2014b; Matsumoto, 2008; De Cesare and Sportelli, 2012) that tax policy has an impact on economic growth with a time delay. This delay varies based on the type of tax and its distortionary effects. Different delays also follow from the way taxation is calculated and from the length of the time series. In summary, individual taxes can have a quantitative effect on economic growth with different delays, so working with different delays within the individual models and different tax approximations is relevant and reasonable. As mentioned before, Kotlán and Machová (2014b) state that the effect of tax policy is most visible in the case of a 2-3 year delay. The aim of the following analysis is to reflect the delayed effect of tax policy, and because of that the individual fiscal variables are lagged by 1-4 years, with respect to relevance from both the econometric and the economic point of view. For the individual approximations of the tax burden, the results which best reflect the economic and econometric considerations with respect to the time delay are presented.
As is usual, Tab. 2 presents the values of the estimated regression coefficients of the individual independent variables and the t-statistics; the Sargan-Hansen test, which verifies the validity of the model's instruments, and the Arellano-Bond test of serial correlation (AB corr. test), which tests the model for the presence of second-order autocorrelation, are also presented.
The results of the Sargan-Hansen test for all four models show that the number of instruments is higher than the J-statistic and the null hypothesis is not rejected. This means that the instruments of the models are not correlated with the residuals, which confirms the correct specification of the models; the instrumental variables were chosen correctly and removed endogeneity from the models. Based on the results of the Arellano-Bond test of serial correlation, there is no significant evidence of serial correlation in the first-differenced errors. It is also obvious that all four models are dynamically stable. The stability is supported by the high statistical significance of the lagged explained variables (at the 1% significance level). It can therefore be stated that the use of a dynamic model with the GMM method and first differences is reasonable.
The relation between economic growth and the exogenous variable CAP (physical capital accumulation) confirmed the theoretical assumptions. This variable was estimated with the expected positive impact on economic growth (at the 1% significance level). Contradictory results were obtained in the case of HUM (human capital). Within the first model, which uses the TQ, and the fourth model, which uses the WTI, HUM is significant at the 1% significance level with a positive impact on economic growth. However, in model 2 using EATR this variable is insignificant, and in model 3 with EMTR the variable is significant on the border of the 10% significance level with a negative estimated impact. Human capital represents a variable for which a positive impact on economic growth has been confirmed both theoretically and empirically (e.g. Barro, 1999). Its approximation seems to be problematic, but as this variable serves as a control variable, it was decided to leave it in the model to preserve the model's completeness.
In the case of the fiscal variable representing government expenditures, there is conformity between the theoretical expectations and the obtained results, as a positive impact on economic growth (at the 1% significance level) is found in all four models; in all cases the variable was lagged by 1 year. At the general level it is expected that government expenditures support economic growth. Some studies (e.g. Devarajan et al., 1996; Afonso et al., 2005) doubt this statement and point out that it is important to distinguish between productive and non-productive government expenditures, with non-productive government expenditures having the opposite impact on economic growth. Due to the lack of available data, only aggregate government expenditures are used. At the theoretical level a prevailing positive impact of government expenditures is expected; this assumption was confirmed.
Taxation of labour in the first model (TQ1100) was verified as significant at the 1% significance level with a negative impact on economic growth; the variable is lagged by 2 years. Corporate tax (TQ1200) was also verified at the 1% significance level, and the impact of the labour tax is higher than the impact of corporate taxation. The impact of social insurance, including social insurance paid by employees, was verified at the 10% significance level with no lag, so it can be stated that social insurance has an immediate impact on economic growth. The explanation may lie in the fact that social insurance is a tax in the wider sense, and in the case of such quasi-taxes there are only very limited possibilities to reallocate them, mainly in the sense of the substitution effect, as is the case with income taxes. It is necessary to consider that the tax system is an interconnected system whose parts influence each other, and in the case of a change in corporate taxes, tax incidence occurs: a tax burden in the form of higher corporate taxes will not only affect corporations but will also be shifted onto employees. Fullerton et al. (1980) state that corporations obviously shift the tax burden, but it is very difficult to evaluate the real impacts of this phenomenon.
The first model did not confirm a negative impact of property taxes (TQ4000) on economic growth (at the 5% significance level). The same result was obtained for other consumption taxes (TQ5120) at the 1% significance level. In this case the results confirm the conclusions of other empirical studies, which show the low distortionary effect of those taxes and their negligible impact on economic growth (e.g. Arnold, 2010; Johansson et al., 2008; Widmalm, 2001). On the other hand, the negative impact of the general consumption tax VAT (TQ5110) was confirmed at the 1% significance level. The influence of this tax category was the highest of all the tax variables, which indicates that its increase constrains economic growth within the OECD member countries. This conclusion conflicts with other empirical papers (e.g. Kotlán et al., 2011; Simionescu and Albu, 2016), which showed either an insignificant negative impact or a slightly positive impact on economic growth. Ebrill et al. (2001) state that value added tax creates economic distortions which are smaller compared to other taxes, as it affects productivity and savings less. To obtain optimal economic growth, tax systems should be correctly adjusted. Many empirical papers (e.g. Myles, 2009) advise shifting the tax burden from direct to indirect taxes, and VAT can represent one of the possible solutions, as it reduces only consumption and not production or investment. Our results suggest that this shift of the tax burden could be inappropriate and could have a negative impact on economic growth. It is appropriate to consider the characteristics of the tax quota; this conclusion may have its reasoning in the deficiencies of the tax quota itself (in detail, e.g., Baranová and Janíčková, 2012). Because of that, it is appropriate to also consider other approximations of the tax burden, mainly the WTI, which has a significantly higher explanatory value and is less sensitive to fluctuations of the economy.
The second model presents the impact of taxation represented by the effective average tax rate on economic growth. As the results show, EATR has a negative impact on economic growth at the 1% significance level with a 4-year lag. The control tax variables are represented by consumption taxes and were verified as negative in the case of TQ5110 at the 1% significance level and as positive in the case of TQ5120 at the 5% significance level. The results are in accordance with the results of model 1.
In the case of corporate taxation represented by the effective marginal tax rate (model 3), the negative impact on economic growth (at the 5% significance level) was again confirmed, although from the quantitative point of view this impact is not very strong. The highest negative impact on economic growth was verified for the control variable representing general consumption taxation, VAT (TQ5110), which was confirmed at the 5% significance level with a 3-year lag. On the other hand, a positive impact was estimated for the control variable TQ5120, but this impact is statistically insignificant; to preserve the completeness of the model, the variable was not removed.
From the results of models 2 and 3 it is obvious that the quantitative effect of corporate taxation on economic growth, when approximated by the effective average and marginal tax rates, is relatively weak compared to the other determinants of economic growth. The reason may lie in the aggregation of different data, which can have contradictory effects. Effective marginal and average rates were proved to be significant only in the case of investment activity, as Janíčková and Baranová (2013) describe.
Based on the results of model 4, corporate taxation represented by the sub-index CIT also has a negative impact on economic growth at the 1% significance level with a 2-year lag. Compared to the impact of personal taxation (PIT), this influence can be considered relatively low; personal income taxation shows the quantitatively highest negative impact (at the 1% significance level) on economic growth. This conclusion is similar to the results of model 1 using the tax quota. In the case of PRO, a negative impact is confirmed at the 5% significance level, which corresponds with the theoretical assumptions about the negative impact of property/direct taxes; this impact was not confirmed in the model using the tax quota.
From the quantitative point of view, a higher impact compared to corporate taxation is also confirmed. In the case of consumption taxes, a positive impact on economic growth was found both for OTC, at the 5% significance level, and for VAT, although statistically insignificant. These results conflict with the conclusions obtained using the tax quota, where the impact of VAT was negative and that of the other selective consumption taxes positive at the 1% significance level. From the above it can be concluded that the impact of indirect taxes is not as clear as in the case of income taxes. A similar conclusion is provided, e.g., by Xu and IMF (1994) or Mendoza et al. (1997), whose studies did not prove a correlation between consumption taxation and economic growth.
CONCLUSION
The main objective of the paper was to evaluate the relation between corporate taxation and economic growth on a sample of OECD member countries under the hypothesis of a negative impact of corporate taxation on economic growth. Corporate taxation is approximated by a variety of corporate taxation indicators with respect to the dynamic nature of the economy.
From the presented empirical evidence, the negative impact of corporate taxation on economic growth was proved, although the impact of labour taxation was determined to be quantitatively more significant. This result probably has the following explanation. It is necessary to consider the fact that the tax system is usually very complex and its individual taxes interact with each other. Mainly the existence of the substitution effect provides corporations with the possibility to spread their tax burden onto different subjects. In the case of personal income taxation, substitution is possible mainly between work and free time, and an employee does not have many possibilities to distribute his tax burden in the same way as corporations do. Fullerton et al. (1980) point out that corporations obviously shift their tax burden and that it is very difficult to evaluate the whole impact of this feature. Higher taxation of corporations therefore does not influence only corporations themselves; it can be concluded that the changes will also affect employees and the price policy of corporations. How much of the tax burden will be spread depends on many specific features, e.g. the size and nature of the market, the type of product or the openness of the economy. It is also important to consider the interconnection between corporate taxation and other income taxes. Each change of labour taxation (and also of social contributions) also has a transferred impact on the corporate sector, which creates labour demand. Realized changes of personal income taxation will influence the relevant marginal values and labour costs for nearly all labour market participants. From the above it is necessary to perceive both personal and corporate taxes as a complex acting in synergy within the given tax system; it can be assumed that this synergy is robust mainly within the mentioned taxes.
In the case of effective tax rates determined by the micro-forward-looking approach, it was not possible to include other direct taxes in the models (models 2 and 3), as these are already partially aggregated in the indicator; for those models only consumption taxes were added. The effective corporate tax rates are related mainly to investment decision making. A negative impact of those rates on economic growth was proved, although quantitatively it is not very strong. Janíčková and Baranová (2013) conclude that this type of tax rate directly influences mainly the size of investment.
For the other control variables it is necessary to mention a large ambiguity, mainly in the case of consumption and property taxes. For VAT approximated by the tax quota, a negative and quantitatively significant impact on economic growth was verified; when this variable was represented by the World Tax Index, its impact was found to be insignificant and positive. For the other selective consumption taxes a positive impact was determined. The same positive impact was also evaluated for implicit consumption tax rates, but only in a few cases as statistically significant. These findings are similar to papers such as Vráblíková (2016). On the other hand, Xu and IMF (1994) or Mendoza et al. (1997) did not prove any impact of consumption taxes on economic growth, so within individual empirical papers the results considering consumption taxation are ambiguous. An interesting point of view on the consumption tax is provided by Alm and El-Ganainy (2013), who state that indirect taxes have a mediated effect on economic growth via investment. They describe the fact that consumption influences the investment level, as there is a substitution effect between lower consumption and higher savings, which finally leads to higher economic growth (as opposed to income taxes).
Contradictory results are shown in the case of property taxes as well. The approximation by the tax quota points to a positive impact on economic growth, but the PRO sub-index provides the opposite result, supporting a strong negative impact on the same variable. One of the features of property taxes is their low dynamics, which can cause some problems when approximating them. Kotlán (2010) states that a higher tax quota does not necessarily imply a higher tax burden; it can reflect higher efficiency in the collection process. On the other hand, as the Laffer curve suggests, a lower tax burden can lead to a higher collection of taxes and an increase of the tax quota. Kotlán (2010) also adds that it is appropriate to extend the analysis with the effective tax indicator WTI, as this indicator is less sensitive to economic distortions. Different results of the individual models can therefore also be caused by the shortcomings of the indicators.
In the case of government expenditures and the supplementary variables, a positive impact was verified. It can be stated that, for the sample of OECD member countries, the positive effect of government expenditures prevails over the negative impact. It can also be assumed that government expenditures financed by non-distortionary taxes and aimed at the productive part of government expenditures have a pro-growth effect. On the other hand, non-productive government expenditures financed by distortionary taxes have an anti-growth tendency.
Considering the suitability of the used indicators, the most convenient appears to be the World Tax Index and its sub-index Corporate Income Tax, both from the economic and the econometric point of view. This multi-criteria indicator shows the most stable evolution in time and, so far, has not shown any predisposition to deflections of the economy compared to the other indicators.
From the above it is clear that mainly income taxes have a negative impact on economic growth. Therefore, it is suitable to shift the tax burden to consumption and property taxes if policy makers want to support economic growth. | 9,531.2 | 2017-12-31T00:00:00.000 | [
"Economics"
] |
Study on intelligent analysis algorithm for achieving standard of polymer flooding well group
: In order to master the development effect of a polymer flooding well group, it is necessary to accurately analyze the influence of different factors in the whole polymer flooding process on the development indices. Combining the principles of big data analysis with neighborhood rough set theory and the K-means clustering algorithm, an intelligent analysis algorithm is proposed to determine whether the development indices of a polymer flooding well group reach the standard. Firstly, the neighborhood rough set is used to reduce the attributes of the influencing factors of the wells that do and do not reach the standard. Secondly, the K-means algorithm is used to cluster the reduced influencing factors and to delete the data inconsistent with the actual compliance status. Finally, the clustering model is used to judge the compliance status of other well groups; the practical application effect is very good.
Introduction
The geological static factors, production dynamic factors and actual development factors of oilfield development blocks play an important role in the process of polymer flooding development [1,2]. In order to carry out long-term, reasonable exploitation of an oilfield, it is necessary to study and analyze the laws and development effects of the various influencing factors in the development process, and to keep abreast of whether the development effect of the polymer flooding well group reaches the standard. Therefore, methods for predicting polymer flooding development indices have received more and more attention from oilfield enterprises. Shi Chengfang et al. [3] established a prediction model for the fluid production of producing wells in polymer blocks and for the production dynamics in the initial stage and during polymer injection, by analyzing the relationship between the fluid production of producing wells and the water absorption index of the produced reservoirs. Zhao Guozhong et al. [4] conducted index prediction based on a three-layer CBP neural network model by studying the change of water cut and its influencing factors in the polymer flooding stage. Qiu Haiyan et al. [5] combined the advantages and disadvantages of the HCZ, Weibull and Weng forecasting models, optimized them, and proposed a weighted combination forecasting model to guide actual production. Hou Jian et al. [6][7] used numerical simulation technology to analyze the factors affecting the change of the oil-increase effect of polymer flooding, studied the relationship between characteristic parameters and influencing factors through a regression statistical model, and then obtained a prediction model of the characteristic parameters to predict the change trend of the oil increase due to polymer flooding. In this paper, according to the actual dynamic and static data of the polymer flooding development process in an oilfield, combined with the principles of big data analysis and based on neighborhood rough set theory and the K-means clustering algorithm, an intelligent analysis algorithm is proposed to determine whether the development indices of a polymer flooding well group reach the standard; it has achieved good practical results.
Basic rough set
Rough set theory [8][9][10], proposed by Professor Pawlak in 1982, can effectively analyze and process various kinds of incomplete data, such as data that are imprecise, inconsistent or incomplete. Through rough set theory, the knowledge hidden in data can be mined and the laws hidden inside the data can be revealed. The main principle of rough set theory is to use knowledge reduction to obtain the classification rules of the problem to be solved without changing the classification ability of the knowledge. The basic idea of rough set theory is to classify objects by an equivalence relation in order to represent knowledge.
Calculate the attribute dependence. That is, relative to the set of conditional attributes B, calculate the dependence degree of the decision attribute set D on it, in order to determine how important set D is to set B. The dependence degree is calculated according to Formula (1):

γ_B(D) = |POS_B(D)| / |U|, (1)

where POS_B(D) is the positive region of D with respect to B and U is the universe. As can be seen from Formula (1), the dependence degree of D on the subset B is actually the proportion of the positive region determined by the subset B in the universe U. Next, calculate the importance of each attribute: in an information decision system, attribute importance is defined as the degree of influence of a conditional attribute on the decision attribute. Finally, perform attribute reduction: take any subset B of the attributes of an information system S and examine the corresponding indiscernibility relation.
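To make Formula (1) concrete, the following minimal Python sketch (a toy decision table with made-up values, not the paper's well data) computes the positive region POS_B(D) and the dependence degree γ_B(D) for discrete attributes:

```python
from collections import defaultdict

# Toy decision table: each row is (conditional attribute values, decision value)
rows = [
    ({"a1": 1, "a2": 0}, "yes"),
    ({"a1": 1, "a2": 0}, "yes"),
    ({"a1": 0, "a2": 1}, "no"),
    ({"a1": 0, "a2": 0}, "yes"),
    ({"a1": 0, "a2": 0}, "no"),   # conflicts with the previous row on {a1, a2}
]

def dependence(rows, B):
    """gamma_B(D) = |POS_B(D)| / |U|: share of objects whose B-equivalence
    class is consistent (all members share one decision value)."""
    classes = defaultdict(list)
    for cond, dec in rows:
        key = tuple(cond[a] for a in B)     # equivalence class under B
        classes[key].append(dec)
    pos = sum(len(decs) for decs in classes.values() if len(set(decs)) == 1)
    return pos / len(rows)

print(dependence(rows, ["a1", "a2"]))  # 0.6: the two conflicting rows drop out
print(dependence(rows, ["a1"]))        # 0.4: only the class with a1 == 1 is consistent
```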
Neighborhood rough set
Basic rough set theory is aimed at processing discrete data; continuous data must therefore first be discretized, which introduces errors, changes the original data attributes, weakens the information expressed by the original attribute set, causes a loss of information in the information system and thus degrades its classification performance. Therefore, the neighborhood rough set model [11][12][13] is adopted in this paper to process continuous data directly and to avoid the information loss caused by data discretization.
The importance of a condition attribute with respect to the current attribute subset B and the decision attribute D is measured by the increase in the dependence degree obtained when that attribute is added to B. Attribute reduction then proceeds as follows: while the importance of the best remaining attribute is greater than the preset lower limit of importance, the attribute is added to the reduct; otherwise, the reduct set red, which holds the selected attributes, is output.
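A minimal Python sketch of this forward greedy reduction is given below; it uses synthetic continuous data, a fixed neighborhood radius and a stopping threshold efc, all of which are illustrative assumptions rather than the settings used in the paper.

```python
import numpy as np

def neighborhood_dependence(X, y, attrs, radius=0.15):
    """gamma_B(D) for a neighborhood rough set: an object belongs to the
    positive region if every sample within `radius` (Euclidean distance on
    the selected attributes) shares its decision label."""
    Xb = X[:, attrs]
    pos = 0
    for i in range(len(Xb)):
        dist = np.linalg.norm(Xb - Xb[i], axis=1)
        if np.all(y[dist <= radius] == y[i]):
            pos += 1
    return pos / len(Xb)

def forward_reduction(X, y, radius=0.15, efc=0.01):
    """Greedily add the attribute that raises the dependence degree the most,
    stopping when the gain (importance) drops to efc or below."""
    red, remaining, current = [], list(range(X.shape[1])), 0.0
    while remaining:
        gains = [(neighborhood_dependence(X, y, red + [a], radius) - current, a)
                 for a in remaining]
        best_gain, best_a = max(gains)
        if best_gain <= efc:
            break
        red.append(best_a)
        remaining.remove(best_a)
        current += best_gain
    return red

# Synthetic example: attribute 0 determines the label, attribute 1 is noise
rng = np.random.default_rng(0)
X = rng.random((60, 2))
y = (X[:, 0] > 0.5).astype(int)
print(forward_reduction(X, y))  # attribute 0 (the informative one) is selected first
```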
Attribute data screening based on K-means clustering
After the rough set algorithm performs the reduction, it is necessary to remove the unqualified data from the classified data. The basic idea is to cluster the data based on the reduced attributes and to delete the data inconsistent with the original compliance classification. The K-means clustering algorithm is an unsupervised clustering algorithm [14][15] which has the advantages of a simple principle, easy implementation and fast clustering, and it is widely used in various fields. However, the algorithm is sensitive to the initial cluster centroids and to noise and outliers. The clustering principle of the K-means algorithm is to iteratively divide the data set into different categories according to the centroids and to verify the clustering effect with an evaluation criterion function, so as to obtain clusters that are well separated between classes and compact within classes.
Algorithm principle
2.1.1 Select the similarity measure between samples. The K-means clustering algorithm does not handle discrete data easily, but it is very suitable for continuous data. Assume the data samples have d attributes, denoted A1, A2, …, Ad, all of which are continuous. Then samples i and j can be expressed as Xi = (Xi1, Xi2, …, Xid) and Xj = (Xj1, Xj2, …, Xjd), and d(Xi, Xj) is used to represent the similarity between samples Xi and Xj. The smaller the value of d(Xi, Xj), the smaller the distance between the samples and the more similar the two samples are; conversely, a larger sample distance indicates that the two samples are more dissimilar. For sample similarity, the Euclidean distance or the Manhattan distance can be selected according to the specific situation. The more commonly used measure is the Euclidean distance, calculated according to Formula (6):

d(Xi, Xj) = sqrt( Σ_{k=1..d} (Xik − Xjk)² ). (6)

2.1.2 Set the criterion function for evaluating the clustering effect. The classical K-means clustering algorithm uses the sum of squared errors as the criterion function. Suppose the data set X is partitioned into k subsets X1, X2, …, Xk, the numbers of samples in the cluster subsets are N1, N2, …, Nk, and the cluster centroids are m1, m2, …, mk. Then the sum-of-squared-error criterion function is:

E = Σ_{i=1..k} Σ_{x∈Xi} ||x − mi||².

2.1.3 Calculate the centroid of each cluster subset. 1) In the initial state, k centroids are randomly generated, and the sample data are assigned to k clusters according to Formula (6); 2) calculate the average value of the sample data in each cluster and replace the centroid of the cluster with this value; 3) redistribute the samples according to the distance between each sample and the centroid of each cluster; 4) judge whether the evaluation criterion is met, and stop clustering if it is; otherwise, go to 2) and recalculate the k cluster centroids.
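The sketch below (Python/numpy, random toy data) implements this principle directly: the Euclidean distance of Formula (6), assignment to the nearest centroid, centroid updates and the sum-of-squared-error criterion; the algorithm flow in the next subsection follows the same steps.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal K-means: assign samples to the nearest centroid (Euclidean
    distance, Formula (6)), update centroids as cluster means, and return the
    final sum-of-squared-error criterion."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):   # centroids stopped moving
            break
        centroids = new_centroids
    sse = sum(((X[labels == j] - centroids[j]) ** 2).sum() for j in range(k))
    return labels, centroids, sse

# Toy data: two well-separated groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(2, 0.2, (30, 2))])
labels, centroids, sse = kmeans(X, k=2)
print(labels[:5], labels[-5:], round(sse, 3))
```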
K-means clustering algorithm flow
Input: the number of clusters K and the data set. Output: K clustering results for the data set. Specific implementation process: 1) randomly generate K cluster centers; 2) according to the minimum-distance principle, assign each sample in the data set to the nearest cluster; 3) calculate the average value of the sample data in each cluster set and use it to replace the original cluster center as the cluster center of the next iteration; 4) repeat steps 2 and 3 until the stopping rule is met or the cluster centers no longer change, then return the K clusters of the data sample set.

Real Data Simulation
Attribute reduction of influencing factors based on neighborhood rough sets
First, a block of polymer flooding well groups was classified into compliant and non-compliant groups. Then the monthly statistics of each compliant and non-compliant well group were collected: production days, polymer concentration, oil production, liquid production, flowing pressure, effective thickness, water cut, daily oil production, daily liquid production, formation pressure, oil production intensity, liquid production intensity, oil change, water change, flowing pressure change, polymer concentration change, oil production index, liquid production index, geological reserves, pore volume, injection rate, polymer usage, cumulative injection-production ratio, and the compliance identifier (1 for compliance, 0 for non-compliance). The statistics of the first well group in January 2018 are shown in Table 1. The theory in Section 2 was used to perform rough set attribute reduction and delete redundant attributes, and the factors related to whether a polymer flooding well group reaches the standard were finally determined to be: water cut change, liquid production change, flowing pressure change, concentration change, polymer dosage change, injection-production ratio, injection rate, and liquid production index.

Actual data screening and well group standard determination based on K-means clustering

K-means clustering was performed on the 8 attributes remaining after rough set attribute reduction, following the theory in Section 3; some results are shown in Table 2. The clustering result of well group 4 is inconsistent with its standard identifier. According to the cluster analysis, the first class contains 142 records from compliant well groups, of which 15 have clustering results inconsistent with the standard identifier; there are 120 records from non-compliant well groups, and their clustering results are consistent with the standard identifier. Comparison of the data verifies the correctness and effectiveness of the algorithm. Based on the factor reduction in Section 3.1 and the data screening in Section 3.2, an intelligent analysis model was established on the clustering algorithm. Compliance data from the last three months for this type of well group in the block were selected to validate the intelligent analysis algorithm: of the 45 well groups of this type, 32 reached the standard and 13 did not. With the intelligent analysis method, the recognition rate for compliant well groups is 87.5% and the recognition rate for non-compliant well groups is 84.6%. A sketch of the screening step follows.
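A minimal R sketch of the screening step described above, on synthetic data with hypothetical column names (the actual well group data are not reproduced here): records whose cluster label disagrees with the compliance identifier are flagged and removed before modeling.

set.seed(2)
wells <- data.frame(matrix(rnorm(262 * 8), ncol = 8))   # 8 reduced attributes
wells$standard <- rbinom(262, 1, 0.5)                    # 1 = compliant, 0 = not

fit <- kmeans(scale(wells[, 1:8]), centers = 2)

# Map each cluster to the majority compliance label among its members
map <- tapply(wells$standard, fit$cluster, function(z) as.integer(mean(z) >= 0.5))
predicted <- map[as.character(fit$cluster)]

inconsistent <- predicted != wells$standard
sum(inconsistent)                # records to delete before building the model
clean <- wells[!inconsistent, ]  # screened data set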
Conclusion and Understanding
This paper proposes an intelligent analysis algorithm for judging whether polymer flooding well groups meet the development standard. The algorithm can be combined with oilfield data and is adaptive: as oilfield development data accumulate, the proposed intelligent analysis method yields a better compliance identification model with improved accuracy, so it has good value for wider application.

Neighborhood rough set algorithm flow: Step 1: input the decision system, set the neighborhood radius calculation parameter and the lower importance limit efc; Step 2: preprocess and normalize the original data and calculate the neighborhood radius; Step 3: initialize the reduction set; Step 5: calculate the dependency and importance of each attribute; Step 6: if the importance is greater than efc, return to Step 4; otherwise, output the reduction result red.

2.2 Neighborhood decision system
Table 1
Influencing factors of Class I well group
Table 2
Comparison of actual compliance and clustering results
Building an attack detection system based on machine learning

These days, the detection of security threats, generally referred to as intrusion detection, has become a significant and serious problem in network, information, and data security. An intrusion detection system (IDS) has therefore become an essential element of computer and network security. Preventing such intrusions depends entirely on the detection ability of the IDS, which performs a necessary job in network security by identifying different kinds of attacks in the network. Moreover, data mining plays an important role in many disciplines of science and technology; for computer security, data mining methods help an IDS detect intruders accurately. Classification is one of the vital data mining techniques, so we propose an intrusion detection system using a data mining approach: the Support Vector Machine (SVM), one of the best-known classification techniques in data mining. In the suggested system, classification is performed with an SVM, and the efficiency of the system is evaluated through a number of experiments on the KDD Cup'99 dataset. The experimental results show that the considerable time needed to build the SVM model can be reduced by suitable dataset pre-processing, the false positive rate (FPR) is decreased, and the attack detection rate of the SVM is increased; the applied classification algorithm gives the highest accuracy. Implementation environment: the intrusion detection system is implemented in MATLAB R2015a under the Windows 7 operating system, on a Core i7 CPU 2670 at 2.5 GHz with 8 GB of RAM.
Introduction
With the rise of networked computers for crucial systems and the common use of distributed and large computer networks, concern for the security of computer networks has grown, and network intrusions have become a dangerous risk in recent times. Intrusion detection systems (IDS) have been widely used as a second line of defense for networked computer systems, alongside other network security methods such as access control and firewalls. The main aim of an IDS is to detect illegal use, abuse, and misuse of computer systems by both system insiders and outside intruders. There are different methods to construct intrusion detection systems. IDSs can be classified into two categories depending on the approach used to detect intrusions: misuse detection and anomaly detection [1,2,3]. Anomaly detection methods build profiles of the usual behavior of users, system resources, operating systems, network services, and traffic, using the audit trails created by a network scanning program or a host operating system. This method detects intrusions by identifying significant deviations from the usual behavior patterns in these profiles. Anomaly detection does not require prior knowledge of the security holes of the target systems, so it is able to detect not only known intrusions but also unknown intrusions. Moreover, it can identify intrusions that are carried out through the misuse of legitimate accounts or through masquerading without violating the security policy [4,5,26]. The drawbacks of this method are a high false positive rate, difficulty in treating gradually changing misbehavior, and costly computation [4,6,7]. Misuse detection, on the other hand, identifies suspicious attack signatures based on known system weaknesses and a security policy. It checks whether signatures of known attacks are present in the audit trails, and any matching behavior is recognized as an attack. Misuse detection identifies only previously known intrusion signatures. The benefit of this method is that it seldom fails to identify previously known intrusions, i.e., it has a low false positive rate [5,8]. Its difficulty is that it cannot identify new intrusions that have never been observed before, i.e., it has a higher false negative rate; moreover, it has further disadvantages such as the hardness of maintaining misuse signature bases and of creating and updating intrusion signature rules [4,8,9]. There are two types of intrusion detection systems: Network Intrusion Detection Systems (NIDS) and Host-based Intrusion Detection Systems (HIDS) [9,10,27]. In our study we built an intrusion detection system (IDS) based on data mining.

The search for proof of attacks based on information collected from known attacks is referred to as detection by appearance or misuse detection. Anomaly detection searches for deviations from the pattern of usual behavior based on monitoring a system during its normal state, and is referred to as anomaly detection or detection by behavior.
Soni and Sharma in 2014 [13]

suggested two techniques, an artificial neural network (ANN) and C5.0, used together with feature selection. The feature selection method eliminates several irrelevant features, while C5.0 and the ANN act as classifiers to categorize the input data as either the normal category or one of the five attack types. The KDD99 dataset was used to train and test the system; the C5.0 system with a reduced number of features gave improved results with nearly 100% accuracy. They also used the ANN approach to categorize intrusion data based on partition size. A comparative evaluation demonstrates that C5.0 performs better than the ANN and yields the best results with 36 features.
Zargar and Baghaie in 2012 [14]

offered a category-based selection of effective parameters for intrusion detection using Principal Component Analysis (PCA). They employ 32 main features from the Transmission Control Protocol/Internet Protocol (TCP/IP) header, and 116 derived features from TCP dump are selected in a network traffic dataset. Attacks are classified into four sets: User to Root (U2R), Denial of Service (DoS), Probing, and Remote to Local (R2L) attacks. They used the TCP dump from the DARPA 1998 dataset as the selected dataset in their tests. The PCA approach is used to define an ideal feature set that makes the detection procedure faster. The experimental results show that feature reduction can improve the detection rate for the category-based detection method while keeping the detection accuracy within a suitable range. The KNN classification technique is used for attack classification. The experimental results illustrate that feature reduction significantly speeds up training and testing time for intrusion recognition.
Mukkamala and Sung in 2003 [15]
proposed feature selection for intrusion detection in which two classes of learning machines for IDS are studied: Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). They show that SVMs are superior to ANNs in three critical respects of intrusion detection: SVMs train and execute much faster, SVMs scale much better, and SVMs provide higher classification accuracy. They also address the issue of ranking the importance of input features, which is of major significance, since removing useless and/or unimportant inputs yields a simpler and possibly faster and more precise detector; feature selection is therefore quite important in intrusion detection. The experimental results show that an SVM-based IDS using a reduced number of features can deliver improved or comparable performance. In conclusion, they suggest an SVM-based IDS for detecting each specific category.
Zhu et al., 2005 [16]

proposed RICGA (ReliefF Immune Clonal Genetic Algorithm), a composite feature subset selection method based on the immune clonal selection algorithm, the ReliefF algorithm, and a GA, with a BP network employed as the classifier. In the RICGA method, ReliefF is first used to remove irrelevant features, and an improved genetic algorithm is then executed to obtain the final feature subset. They also analyze the Markov chain model of the RICGA algorithm and its convergence. Experimental results on the real KDD CUP'99 dataset show that the RICGA method is superior to ReliefF-GA and GA in classification precision and input feature subset size; RICGA achieves a higher classification accuracy (86.47%) for small feature subsets than ReliefF-GA. The selected features are not reported in the paper.
Ming-Yang Su (2011) [17]
offered a feature selection approach for identifying DoS/DDoS attacks, aimed at designing a real-time anomaly-based Network Intrusion Detection System (NIDS). A genetic algorithm (GA) combined with KNN (k-nearest neighbor) is used for feature weighting and selection. The outcome of the KNN classification is employed as the fitness function in the GA to improve the feature weight vectors. The first 35 features are weighted in the training stage; the top 19 features are taken into account for known attacks and the top 28 features for unknown attacks. The extracted features are not listed in the paper. An overall accuracy rate of 97.42% is obtained for known attacks and 78% for unknown attacks.
Data Mining and Intrusion Detection Systems
Intrusion detection systems (IDS) have traditionally relied on the signature of an attack and on tracking system activity to check whether it matches that signature. IDSs based on data mining are making their appearance more capable. Data mining approaches for intrusion detection applications are now commonly employed: the intrusion detection problem is reduced to a data mining task of classifying data. In short, given a set of data points belonging to various attack activities (classes), the goal is to separate them as accurately as possible by means of a model. Many different data mining approaches exist for intrusion detection classification. In this system, we employ a Support Vector Machine (SVM) as the classification algorithm for attack detection. We also use feature extraction and dimensionality reduction algorithms (PCA, LDA, and SVD) on the KDD'99 Cup dataset.
Design and Implementation of Proposed System
The proposed system for efficient intrusion detection and recognition is described as follows:

Figure 1 Proposed intrusion detection classification system approach

The aim of the analysis is to improve the performance of the intrusion detection system. The input to the proposed system is the KDD Cup 99 dataset, which requires pre-processing to convert all data into a uniform format. Feature reduction is then performed to extract and reduce the features. Finally, the intrusion classification stage is carried out with the Support Vector Machine (SVM) classification algorithm, according to the different kinds of system intrusions. Since the KDD Cup 99 dataset holds both symbolic and numeric attributes, two sorts of transformation techniques are used for these properties. The machine learning procedures are trained on both kinds of transformed dataset, and their outcomes are then compared with respect to intrusion detection accuracy. The suggested system consists essentially of two main tasks: feature reduction and attack detection.

The steps of our proposed intrusion detection system are shown in Figure (1) and include the main parts: the KDD'99 input dataset, dataset pre-processing, dimensionality reduction and feature selection, classification algorithms, and performance measurement.
KDD'99 Input Dataset

In the first phase, the suggested intrusion detection system takes the KDD Cup 99 dataset as input. In our proposed system we use the complete KDD Cup 99 dataset; each record has 42 features, and the records are labeled as either an attack or normal.

The KDD CUP 1999 standard datasets [18] are employed to evaluate the various feature selection techniques for the intrusion detection system. The dataset contains 4,940,000 connection records. Every connection is labeled as either normal or an attack, with each attack belonging to exactly one of the four attack types [19]: User to Root (U2R), Remote to Local (R2L), Denial of Service (DoS), and Probing attacks.
Denial of Service Attack (DOS):
Attacks of this category deprive the legitimate or host user of resources or services.

Probe Attack: These attacks automatically scan a computer network or a DNS server to obtain valid IP addresses.
Remote to Local (R2L) Attack:
In this attack category, an attacker who does not have an account on a victim machine gains local access to the machine and modifies the data.
KDD'99 Pre-processing
KDD'99 pre-processing is the second phase and one of the most significant phases of the system. This stage prepares the data so that it can be passed to the next phase for feature extraction and reduction. It consists of two steps (dataset labeling and normalization), which are described in the following subsections.
Dataset Labeling
Dataset labeling is the first step of the KDD'99 pre-processing phase, and its output is used as input to the next step in the pre-processing phase (normalization). The dataset labeling uses all features of the KDD 10% corrected dataset, as displayed in the screenshot located in the second cell of the entire dataset. Figure (2) shows the KDD 99 dataset screenshot taken from our MATLAB environment.
Figure 2
First row of the KDD Cup 10% corrected dataset (data sample). Each dataset record includes 42 features (e.g., service, protocol type, and flag) and is labeled either as an attack of a specific type or as normal, as presented in Figure (2). As an example, consider a sample first row of the KDD 99 dataset before normalization: Figure (2) makes clear that there are 42 features and that the record has a definite attack category. There is another issue in this step: the dataset contains many nominal values such as HTTP, SF, and ICMP, so all nominal values must first be transformed into numeric values. For instance, the protocol type "tcp" is mapped to 1, "udp" to 2, and "icmp" to 3; Table (3) shows all transformations of the nominal-valued features into numeric values, and Figure (3) shows what the original KDDCUP1999 dataset becomes after this transformation. A small sketch of this mapping appears at the end of this subsection.
Figure 3
Pre-processing Original KDDCUP1999 dataset before and after transformation
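A small R sketch of the nominal-to-numeric transformation described above. The example codes (tcp mapped to 1, udp to 2, icmp to 3) follow the text; the data frame and its column names are only illustrative and do not reproduce the actual KDD Cup 99 layout.

protocol_map <- c(tcp = 1, udp = 2, icmp = 3)

kdd <- data.frame(protocol = c("tcp", "udp", "icmp", "tcp"),
                  service  = c("http", "ftp", "smtp", "http"),
                  stringsAsFactors = FALSE)

# Explicit mapping for a known set of nominal values
kdd$protocol_num <- protocol_map[kdd$protocol]

# A generic alternative: derive integer codes from the observed factor levels
kdd$service_num <- as.integer(factor(kdd$service))
kdd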
Normalization
After labeling the whole dataset feature space, we normalize the dataset using the whole KDD 10% corrected dataset, as shown in the screenshot located in the second cell of the whole dataset.

The KDD'99 input dataset includes a number of features in different styles: some are numeric and others are symbolic. In this stage the mixed-style dataset is transformed into a single style before being passed to the next phase. Since some KDD CUP 99 features are continuous, a normalization process is applied to these features to make them more suitable for the data mining classification algorithms. Normalization is used to pre-process the data, rescaling the feature values to a small definite range such as 0.0 to 1.0 or -1.0 to 1.0. Normalizing the input values of every feature measured in the training patterns helps speed up the learning phase.
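A minimal R sketch of the min-max normalization on synthetic data; the [0, 1] target range is one of the two ranges mentioned above.

minmax <- function(v) {
  rng <- range(v)
  if (diff(rng) == 0) return(rep(0, length(v)))  # guard against constant columns
  (v - rng[1]) / diff(rng)                       # use 2 * (.) - 1 for [-1, 1]
}

X <- matrix(runif(20, min = 0, max = 1000), ncol = 4)
X_norm <- apply(X, 2, minmax)
summary(as.vector(X_norm))   # all values now lie in [0, 1]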
Feature Extraction and Dimensionality Reduction of the KDD99

Feature extraction and dimensionality reduction are performed by eliminating redundant and irrelevant features: irrelevant features have little connection with the class labels, whereas redundant features have strong relationships with already selected features. In this suggested system we employ three different algorithms, Singular Value Decomposition (SVD), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). We use these techniques to extract appropriate features from the dataset and to reduce the dimensionality of the KDD data, which is then given as input to the next step.
Principal Component Analysis (PCA)
PCA is a convenient statistical approach that has found applications in fields such as image compression and face recognition, and it is a popular method for finding patterns in high-dimensional data. Much of statistics is based on the idea that a large data set can be examined to describe the relations between the separate points in that set [20]. The objective of PCA is to limit the dimensionality of the data while preserving as much as possible of the variance present in the original dataset. It is a way of identifying patterns in data and expressing the data in a manner that highlights their similarities and differences [21].
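As a hedged illustration (synthetic data and base R prcomp(), not the MATLAB implementation used in the paper), the following sketch performs PCA and keeps the leading components that explain most of the variance.

set.seed(3)
X <- matrix(rnorm(500 * 10), ncol = 10)      # stand-in for the normalized features

pca <- prcomp(X, center = TRUE, scale. = TRUE)
var_explained <- cumsum(pca$sdev^2) / sum(pca$sdev^2)

k <- which(var_explained >= 0.90)[1]         # smallest number of components covering 90%
X_reduced <- pca$x[, 1:k]                    # projected, reduced feature space
dim(X_reduced)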
Singular-Value Decomposition (SVD)

Another approach used in this system is the Singular-Value Decomposition (SVD); Figure (4) illustrates its form. Let X be an m × n matrix and let the rank of X be r. The rank of a matrix is the largest number of rows (or, equivalently, columns) that can be chosen such that no linear combination of them with non-zero coefficients is the all-zero vector 0; such a set of rows or columns is called independent. Figure (4) also displays the matrices U, Σ, and V, which have the following properties:

Figure 4 The form of a singular-value decomposition

1. U is an m × r column-orthonormal matrix; that is, each of its columns is a unit vector and the dot product of any two distinct columns is 0.
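A small R sketch of an SVD-based reduction on synthetic data (base R svd(), not the paper's MATLAB code), including a check of the column-orthonormality property stated in point 1.

set.seed(4)
X <- matrix(rnorm(100 * 8), nrow = 100, ncol = 8)

s <- svd(X)                                  # X = U diag(d) V'
r <- 3                                       # retained rank
X_r    <- s$u[, 1:r] %*% diag(s$d[1:r]) %*% t(s$v[, 1:r])  # rank-r approximation
X_proj <- X %*% s$v[, 1:r]                   # reduced feature space

# Column-orthonormality of U: t(U) U is (numerically) the identity matrix
max(abs(crossprod(s$u) - diag(ncol(s$u))))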
Linear discriminant analysis
Linear discriminant analysis (LDA) is another technique employed for dimensionality reduction and feature extraction. LDA reduces the dimensionality while preserving as much of the class-discriminative information as possible.

Step 1: Compute the between-class scatter matrix using the complete feature samples.
Step 2: Compute the total-class scatter matrix.
Step 3: Compute the eigenvalues and eigenvectors from the eigen-equation for LDA:

S_T X = λ S_i X

Step 4: Compute the eigenvectors X1, X2, X3, ..., XN corresponding to the eigenvalues, where N is the dimensionality of the feature vector.
Step 5: Evaluate the contribution of each eigenvector.
Step 6: Sort the eigenvectors in descending order according to their impact or contribution.
Step 7: In the dimensionality reduction phase, the optimum subset of linear components is selected according to the largest eigenvalues.
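A rough R sketch of an LDA-style projection on synthetic two-class data. It uses the common within/between scatter formulation (eigenvectors of Sw^{-1} Sb), which may differ in detail from the scatter matrices used in the paper's steps.

set.seed(5)
n <- 100; p <- 6
X <- rbind(matrix(rnorm(n * p), ncol = p),
           matrix(rnorm(n * p, mean = 1), ncol = p))
y <- rep(c(0, 1), each = n)

mu <- colMeans(X)
Sb <- matrix(0, p, p); Sw <- matrix(0, p, p)
for (cl in unique(y)) {
  Xc  <- X[y == cl, , drop = FALSE]
  muc <- colMeans(Xc)
  Sb  <- Sb + nrow(Xc) * tcrossprod(muc - mu)   # between-class scatter
  Sw  <- Sw + crossprod(sweep(Xc, 2, muc))      # within-class scatter
}

eig <- eigen(solve(Sw) %*% Sb)                  # eigen-decomposition of Sw^{-1} Sb
ord <- order(Re(eig$values), decreasing = TRUE) # sort by contribution
W   <- Re(eig$vectors[, ord[1], drop = FALSE])  # leading discriminant direction
X_lda <- X %*% W                                # projected features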
Classification: Support Vector Machine Algorithm (SVM)

The Support Vector Machine (SVM) is a machine learning method used for regression and classification. SVM is based on the idea of decision planes that define decision boundaries: a decision plane is one that separates sets of objects having different class memberships. Our suggested intrusion detection system relies on dimensionality reduction with the PCA, SVD, and LDA algorithms, each combined with a single classification stage.
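A hedged R sketch of the classification stage on synthetic reduced features; it assumes the e1071 package is available and is not the MATLAB implementation used in the paper.

library(e1071)

set.seed(6)
n <- 400
X_reduced <- matrix(rnorm(n * 7), ncol = 7)    # e.g. 7 retained components
label <- factor(ifelse(rowSums(X_reduced[, 1:2]) > 0, "attack", "normal"))

idx   <- sample(n, 0.7 * n)                    # train/test split
model <- svm(X_reduced[idx, ], label[idx], kernel = "radial")
pred  <- predict(model, X_reduced[-idx, ])

mean(pred == label[-idx])                      # test accuracy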
Performance Evaluation
The efficiency of the intrusion detection system is evaluated by its capacity to make precise predictions. According to the real nature of a given event compared with the prediction from the IDS, four possible outcomes are presented in Table (4), known as the confusion matrix [4]. The Detection Rate (DR), True Negative Rate (TNR), True Positive Rate (TPR), False Positive Rate (FPR) or False Alarm Rate (FAR), and False Negative Rate (FNR) are measures that can be applied to quantify the performance of an IDS [4] based on the above confusion matrix. We obtain the accuracy, or recognition rate, as the ratio between the number of correct recognition decisions and the total number of decisions.
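A minimal R sketch of the confusion matrix and the derived rates mentioned above, using illustrative labels only.

rates <- function(actual, predicted, positive = "attack") {
  tp <- sum(predicted == positive & actual == positive)
  tn <- sum(predicted != positive & actual != positive)
  fp <- sum(predicted == positive & actual != positive)
  fn <- sum(predicted != positive & actual == positive)
  c(TPR = tp / (tp + fn),                 # detection rate
    FPR = fp / (fp + tn),                 # false alarm rate
    FNR = fn / (fn + tp),
    accuracy = (tp + tn) / (tp + tn + fp + fn))
}

actual    <- c("attack", "attack", "normal", "normal", "attack", "normal")
predicted <- c("attack", "normal", "normal", "attack", "attack", "normal")
table(actual, predicted)    # confusion matrix
rates(actual, predicted)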
Results and discussion
Tables (5) and (6) display the overall performance results of the Support Vector Machine (SVM) on the KDD Cup 99 dataset, for training and testing, using the three algorithms (PCA, LDA, and SVD) offered in our system.

Experimental Results Using the Whole Dataset Samples

Big data analysis is a major challenge these days in terms of dealing with a huge number of data samples (records). In our proposed system we designed an approach to handle the intrusion detection classification problem using a large number of data samples (494,201) with the entire set of 42 features. To execute and solve the 10% KDD classification problem, we propose a segmentation of the data into folds. In this part we present the experimental results obtained using the entire set of 494,201 data samples with the proposed intrusion detection classification system, based on the different feature reduction algorithms, on the KDD Cup 99 dataset.

Each dataset record is labeled with one of the 5 classes. Since 494,201 records constitute a big data analysis task, especially in our proposal, we employed three different algorithms for dimensionality reduction and feature selection, each used to reduce the 42 features of the KDD dataset, together with the classification algorithms to detect the four types of IDS attacks.

Our proposed methodology for dealing with this type of big data analysis is to divide the entire set of 494,201 samples into k folds, where each fold contains n samples from the dataset and each sample is assigned so that no fold contains the same data sample as another fold.

In the experimental evaluation of the proposed system, we divide the dataset into 25 folds: the first 24 folds each contain 20,000 data samples and the last fold contains the remaining 14,021 samples, as shown in Table (7) below. The experimental results were examined and discussed to illustrate the proposed IDS. We describe three major parts: the first part covers the essential features selected by the three algorithms from the entire feature space of 42 features; the second part explains the result of the dimensionality reduction and feature selection algorithms when the feature space is reduced to 7; and the last part compares the experimental results of the proposed IDS with previous work. A sketch of the fold assignment follows.
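A small R sketch of the fold assignment: sample indices are divided into 25 disjoint folds, with the fold sizes following from the stated total; the random assignment here is only illustrative.

n_total <- 494201
n_folds <- 25
fold_id <- rep(1:n_folds, each = 20000, length.out = n_total)  # last fold takes the remainder

set.seed(7)
idx   <- sample(n_total)        # random permutation of sample indices
folds <- split(idx, fold_id)    # list of 25 disjoint index sets

lengths(folds)[1:3]             # the first folds each contain 20,000 samples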
In this implementation, we rely on the eigenvalue score to reorder the features from the highest score (the most significant one) to the lowest.
Classification Experimental Results
To classify the kinds of attack in the 10% KDD Cup 99 dataset, we employed the Support Vector Machine (SVM) classification algorithm in this phase. The algorithm was applied to the features of the reduced dimension space.
Comparing our Classification Results
We compare the performance results of the SVM classifier, using the whole set of data samples, with each of the three dimensionality reduction methods we have suggested. Using the SVM classifier to categorize the entire set of data samples according to attack type, our method gives high accuracy in testing and training for PCA, LDA, and SVD.
Figure 7
Accuracy of Support Vector Machine (SVM) classification with the three dimensionality reduction algorithms for attack detection
Comparing our Classification Results with Other studies
A great number of studies have classified attack types using the 10% KDD Cup 99 dataset. In this part, we compare our results using the reduction algorithms (PCA, LDA, and SVD) with other studies performed on the same dataset. Table (9) gives a brief comparison between our proposed system's results and the other methods in terms of overall testing and training accuracy.
Conclusion
Today, a large number of threats attack network and information security. In this paper, we proposed an intrusion detection system that reduces the set of features and classifies attack types. Feature reduction is performed first, followed by classification; the proposed approach combines feature selection with the SVM classification algorithm, which reduces the feature set for the intrusion detection system, increases the attack detection rate, and gives the highest accuracy. Attacks in the KDD Cup 1999 selection are identified with a low error rate and high accuracy. Both the feature selection and the feature reduction affect the performance of the classification algorithm. In future work, a swarm optimization function will dynamically reduce the number of unused feature attributes of the traffic data.
"Computer Science"
] |
sparseHessianFD: An R Package for Estimating Sparse Hessian Matrices
Sparse Hessian matrices occur often in statistics, and their fast and accurate estimation can improve efficiency of numerical optimization and sampling algorithms. By exploiting the known sparsity pattern of a Hessian, methods in the sparseHessianFD package require many fewer function or gradient evaluations than would be required if the Hessian were treated as dense. The package implements established graph coloring and linear substitution algorithms that were previously unavailable to R users, and is most useful when other numerical, symbolic or algorithmic methods are impractical, inefficient or unavailable.
The Hessian matrix of a log likelihood function or log posterior density function plays an important role in statistics. From a frequentist point of view, the inverse of the negative Hessian is the asymptotic covariance of the sampling distribution of a maximum likelihood estimator. In Bayesian analysis, when evaluated at the posterior mode, it is the covariance of a Gaussian approximation to the posterior distribution. More broadly, many numerical optimization algorithms require repeated computation, estimation or approximation of the Hessian or its inverse; see Nocedal and Wright (2006).
The Hessian of an objective function with M variables has M^2 elements, of which M(M+1)/2 are unique. Thus, the storage requirements of the Hessian, and the computational cost of many linear algebra operations on it, grow quadratically with the number of decision variables. For applications with hundreds of thousands of variables, computing the Hessian even once might not be practical under time, storage or processor constraints. Hierarchical models, in which each additional heterogeneous unit is associated with its own subset of variables, are particularly vulnerable to this curse of dimensionality. However, for many problems, the Hessian is sparse, meaning that the proportion of "structural" zeros (matrix elements that are always zero, regardless of the value at which the function is evaluated) is high. Consider a log posterior density in a Bayesian hierarchical model. If the outcomes across units are conditionally independent, the cross-partial derivatives of heterogeneous variables across units are zero. As the number of units increases, the size of the Hessian still grows quadratically, but the number of non-zero elements grows only linearly; the Hessian becomes increasingly sparse. The row and column indices of the non-zero elements comprise the sparsity pattern of the Hessian, and are typically known in advance, before computing the values of those elements. R packages such as trustOptim (Braun 2014), sparseMVN (Braun 2015) and ipoptr (Wächter and Biegler 2006) have the capability to accept Hessians in a compressed sparse format.
The sparseHessianFD package is a tool for estimating sparse Hessians numerically, using either finite differences or complex perturbations of gradients. Section 1.1 will cover the specifics, but the basic idea is as follows. Consider a real-valued function f(x), its gradient ∇f(x), and its Hessian Hf(x), for x ∈ R^M. Define the derivative vector as the transpose of the gradient, a vector of partial derivatives, so Df(x) = ∇f(x)^⊤ = (D_1, ..., D_M). (Throughout the paper, we will try to reduce notational clutter by referring to the derivative and Hessian as D and H, respectively, without the f(x) symbol.) Let e_m be a vector of zeros, except with a 1 in the mth element, and let δ be a sufficiently small scalar constant. A "finite difference" linear approximation to the mth column of the Hessian is H_m ≈ (∇f(x + δe_m) − ∇f(x))/δ. Estimating a dense Hessian in this way involves at least M + 1 calculations of the gradient: one for the gradient at x, and one after perturbing each of the M elements of x, one at a time. Under certain conditions, a more accurate approximation is the "complex step" method: H_m ≈ Im(∇f(x + iδe_m))/δ, where i = √−1 and Im returns the imaginary part of a complex number (Squire and Trapp 1998). Regardless of the approximation method used, if the Hessian is sparse, most of the elements are constrained to zero. Depending on the sparsity pattern of the Hessian, those constraints may let us recover the Hessian with fewer gradient evaluations by perturbing multiple elements of x together. For some sparsity patterns, estimating a Hessian in this way can be profoundly efficient. In fact, for the hierarchical models that we consider in this paper, the number of gradient evaluations does not increase with the number of additional heterogeneous units.
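As a toy illustration of the column-wise finite-difference idea (not the package's internal implementation), this R sketch estimates the Hessian of a simple test function from its exact gradient.

f  <- function(x) sum(x^2) + x[1] * x[2]
gr <- function(x) { g <- 2 * x; g[1] <- g[1] + x[2]; g[2] <- g[2] + x[1]; g }

hessian_col <- function(gr, x, m, delta = sqrt(.Machine$double.eps)) {
  e_m <- replace(numeric(length(x)), m, delta)   # delta in the m-th coordinate
  (gr(x + e_m) - gr(x)) / delta
}

x <- c(1, 2, 3)
sapply(seq_along(x), function(m) hessian_col(gr, x, m))   # approximates the 3 x 3 Hessian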
The package defines the sparseHessianFD class, whose initializer requires the user to provide functions that compute an objective function, its gradient (as accurately as possible, to machine precision), and the sparsity pattern of its Hessian matrix. The sparsity pattern (e.g., the location of structural zeros) must be known in advance, and cannot vary across the domain of the objective function. The only functions and methods of the class that the end user should need to use are the initializer, methods that return the Hessian in a sparse compressed format, and perhaps some utility functions that simplify the construction of the sparsity pattern. The class also defines methods that partition the variables into groups that can be perturbed together in a finite differencing step, and recovers the elements of the Hessian via linear substitution. Those methods perform most of the work, but should be invisible to the user.

As with any computing method or algorithm, there are boundaries around the space of applications for which sparseHessianFD is the right tool for the job. In general, numerical approximations are not "first choice" methods because the result is not exact, so sparseHessianFD should not be used when the application cannot tolerate any error, no matter how small. Also, we admit that some users might balk at having to provide an exact gradient, even though the Hessian will be estimated numerically. 1 However, deriving a vector of first derivatives, and writing R functions to compute them, is a lot easier than doing the same for a matrix of second derivatives, and more accurate than computing second-order approximations from the objective function. Even when we have derived the Hessian symbolically, in practice it may still be faster to estimate the Hessian using sparseHessianFD than coding it directly. These are the situations in which sparseHessianFD adds the most value to the statistician's toolbox.

This article proceeds as follows. First, we present some background information about numerical differentiation, and sparse matrices in R, in Section 1. In Section 2, we explain how to use the package. Section 3 explains the underlying algorithms, and Section 4 demonstrates the scalability of those algorithms.
Background
Before describing how to use the package, we present two short background notes. The first note is an informal mathematical explanation of numerical estimation of the Hessian matrix, with an illustration of how the number of gradient estimates can be reduced by exploiting the sparsity pattern and symmetric structure. This note borrows heavily from, and uses the notation in, Magnus and Neudecker (2007, Chapter 6). The second note is a summary of some of the sparse matrix classes that are defined in the Matrix package (Bates and Maechler 2015), which are used extensively in sparseHessianFD.
Numerical differentiation of sparse Hessians
The partial derivative of a real scalar-valued function f(x) with respect to x_j (the jth component of x ∈ R^M) is defined as $D_j f(x) = \lim_{\delta \to 0} \left( f(x + \delta e_j) - f(x) \right)/\delta$. For a sufficiently small δ, this definition allows for a linear approximation to D_j f(x). The derivative of f(x) is the vector of all M partial derivatives.

We define the second-order partial derivative as $D^2_{ij} f(x) = D_i\left(D_j f(x)\right)$, and the Hessian as the M × M matrix $Hf(x)$ whose (i, j) element is $D^2_{ij} f(x)$. The Hessian is symmetric, so $D^2_{ij} = D^2_{ji}$.
Approximation using finite differences
To estimate the mth column of H using finite differences, we choose a sufficiently small δ and compute $H_m \approx (\nabla f(x + \delta e_m) - \nabla f(x))/\delta$. For M = 2, our estimate of a general $Hf(x)$ would be
$Hf(x) \approx \frac{1}{\delta}\begin{pmatrix} D_1 f(x_1+\delta,x_2)-D_1 f(x_1,x_2) & D_1 f(x_1,x_2+\delta)-D_1 f(x_1,x_2) \\ D_2 f(x_1+\delta,x_2)-D_2 f(x_1,x_2) & D_2 f(x_1,x_2+\delta)-D_2 f(x_1,x_2) \end{pmatrix}.$
This estimate requires three evaluations of the gradient to get $Df(x_1, x_2)$, $Df(x_1+\delta, x_2)$, and $Df(x_1, x_2+\delta)$. Now suppose that the Hessian is sparse, and that the off-diagonal elements are zero. That means that $D_1 f(x_1, x_2+\delta) = D_1 f(x_1, x_2)$ (7) and $D_2 f(x_1+\delta, x_2) = D_2 f(x_1, x_2)$ (8). If the identity in Equation 7 holds for $x_1$, it must also hold for $x_1 + \delta$, and if Equation 8 holds for $x_2$, it must also hold for $x_2 + \delta$. Therefore,
$Hf(x) \approx \frac{1}{\delta}\begin{pmatrix} D_1 f(x_1+\delta,x_2+\delta)-D_1 f(x_1,x_2) & 0 \\ 0 & D_2 f(x_1+\delta,x_2+\delta)-D_2 f(x_1,x_2) \end{pmatrix}. \quad (9)$
Only two gradients, $Df(x_1, x_2)$ and $Df(x_1 + \delta, x_2 + \delta)$, are needed. Being able to reduce the number of gradient evaluations from 3 to 2 depends on knowing that the cross-partial derivatives are zero.
Approximation using complex steps
If f(x) is defined over a complex domain and is holomorphic, then we can approximate Df(x) and Hf(x) at real values of x using the complex step method. This method comes from a Taylor series expansion of f(x) in the imaginary direction of the complex plane (Squire and Trapp 1998). After rearranging terms and taking the imaginary parts of both sides, $D_j f(x) \approx \operatorname{Im}\left( f(x + i\delta e_j) \right)/\delta$. Estimating a first derivative using the complex step method does not require a differencing operation, so there is no subtraction operation that might generate roundoff errors. Thus, the approximation can be made arbitrarily precise as δ → 0 (Lai and Crassidis 2008). This is not the case for second-order approximations of the Hessian (Abreu, Stich, and Morales 2013). However, when the gradient can be computed exactly, we can compute a first-order approximation to the Hessian by treating it as the Jacobian of a vector-valued function (Lai and Crassidis 2008).

If this matrix were dense, we would need two evaluations of Df(x) to estimate it. If the matrix were sparse, with the same sparsity pattern as the Hessian in Equation 9, and we assume that structural zeros remain zero for all complex x ∈ C^M, then we need only one evaluation. Suppose we were to subtract Im(Df(x_1, x_2)) from each column of Hf(x). When x is real, the imaginary part of the gradient is zero, so this operation has no effect on the value of the Hessian. But the sparsity constraints ensure that the following identities hold for all complex x.

As with the finite difference method, because Equation 13 holds for x_1, it must also hold for x_1 + iδ, and because Equation 14 holds for x_2, it must also hold for x_2 + iδ. Thus, for real x, only one evaluation of the gradient is required.
Perturbing groups of variables

Curtis, Powell, and Reid (1974) describe a method of estimating sparse Jacobian matrices by perturbing groups of variables together. Powell and Toint (1979) extend this idea to the general case of sparse Hessians. This method partitions the decision variables into C mutually exclusive groups so that the number of gradient evaluations is reduced. Define G ∈ R^{M×C} where G_mc = δ if variable m belongs to group c, and zero otherwise. Define G_c ∈ R^M as the cth column of G.

Next, define Y ∈ R^{M×C} such that each column is either a difference in gradients, or the imaginary part of a complex-valued gradient, depending on the chosen method.

If C = M, then G is a diagonal matrix with δ in each diagonal element. The matrix equation HG = Y represents the linear approximation H_im δ ≈ y_im, and we can solve for all elements of H just by computing Y. But if C < M, there must be at least one G_c with δ in at least two rows. The corresponding column Y_c is computed by perturbing multiple variables at once, so we cannot solve for any H_im without further constraints.
These constraints come from the sparsity pattern and symmetry of the Hessian. Consider an example with the following values and sparsity pattern.
Suppose C = 2, and define group membership of the five variables through the following G matrix.
Variables 1, 2 and 5 are in group 1, while variables 3 and 4 are in group 2.
Next, compute the columns of Y using Equation 16. We now have the following system of linear equations from HG = Y.
Note that this system is overdetermined. Both h_31 = y_12 and h_53 = y_52 can be determined directly, but h_31 + h_53 = y_31 may not necessarily hold, and h_42 could be either y_41 or y_22. Powell and Toint (1979) prove that it is sufficient to solve LG = Y instead via a substitution method, where L is the lower triangular part of H. This has the effect of removing the equations h_42 = y_22 and h_31 = y_12 from the system, but retaining h_53 = y_52. We can then solve for h_31 = y_31 − y_52. Thus, we have determined a 5 × 5 Hessian with only three gradient evaluations, in contrast with the six that would have been needed had H been treated as dense.
The sparseHessianFD algorithms assign variables to groups before computing the values of the Hessian. This is why the sparsity pattern needs to be provided in advance. If a non-zero element is omitted from the sparsity pattern, the resulting estimate of the Hessian will be incorrect. The only problems with erroneously including a zero element in the sparsity pattern are a possible lack of efficiency (e.g., an increase in the number of gradient evaluations), and that the estimated value might be close to, but not exactly, zero. The algorithms for assigning decision variables to groups, and for extracting nonzero Hessian elements via substitution, are described in Section 3.
Sparse matrices and the Matrix package
The sparseHessianFD package uses the sparse matrix classes that are defined in the Matrix package (Bates and Maechler 2015). All of these classes are subclasses of sparseMatrix. Only the row and column indices (or pointers to them), the non-zero values, and some metadata are stored; unreferenced elements are assumed to be zero. Class names, summarized in Table 1, depend on the data type, matrix structure, and storage format. Values in numeric and logical matrices correspond to the R data types of the same names. Pattern matrices contain row and column information for the non-zero elements, but no values. The storage format refers to the internal ordering of the indices and values, and the layout defines a matrix as symmetric (so duplicated values are stored only once), triangular, or general. The levels of these three factors determine the prefix of letters in each class name. For example, a triangular sparse matrix of numeric (double precision) data, stored in column-compressed format, has class dtCMatrix. Matrix also defines some other classes of sparse and dense matrices that we will not discuss here. The Matrix package uses the as function to convert sparse matrices from one format to another, and to convert a base R matrix to one of the Matrix classes.

The distinctions among sparse matrix classes are important because sparseHessianFD's hessian method returns a dgCMatrix, even though the Hessian is symmetric. Depending on how the Hessian is used, it might be useful to coerce the Hessian into a dsCMatrix object. Also, the utility functions in Table 2 expect or return certain classes of matrices, so some degree of coercion of input and output might be necessary. Another useful Matrix function is tril, which extracts the lower triangle of a general or symmetric matrix.
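A short R sketch of these class distinctions and coercions on a tiny symmetric matrix; the coercion targets are the virtual Matrix classes, and the exact class names printed may vary with the Matrix version.

library(Matrix)

m <- matrix(c(2, 1, 0,
              1, 3, 0,
              0, 0, 4), nrow = 3)

sp  <- Matrix(m, sparse = TRUE)     # symmetry is detected: a dsCMatrix
gen <- as(sp, "generalMatrix")      # general storage: a dgCMatrix
low <- tril(gen)                    # lower triangle: a dtCMatrix
class(sp); class(gen); class(low)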
Using the package
In this section, we demonstrate how to use the sparseHessianFD package, using a hierarchical binary choice model as an example. Then, we discuss the sparsity pattern of the Hessian, and estimate the Hessian values.
Example model: hierarchical binary choice
Suppose we have a dataset of N households, each with T opportunities to purchase a particular product. Let y i be the number of times household i purchases the product, out of the T purchase opportunities, and let p i be the probability of purchase. The heterogeneous parameter p i is the same for all T opportunities, so y i is a binomial random variable.
Let β_i ∈ R^k be a heterogeneous coefficient vector that is specific to household i, such that β_i = (β_i1, ..., β_ik). Similarly, z_i ∈ R^k is a vector of household-specific covariates. Define each p_i such that the log odds of p_i is a linear function of β_i and z_i, but does not depend directly on β_j and z_j for another household j ≠ i.
The coefficient vectors β i are distributed across the population of households following a multivariate normal distribution with mean µ ∈ R k and covariance Σ ∈ R k×k . Assume that we know Σ, but not µ, so we place a multivariate normal prior on µ, with mean 0 and covariance Ω ∈ R k×k . Thus, the parameter vector x ∈ R (N +1)k consists of the N k elements in the N β i vectors, and the k elements in µ.
The log posterior density, ignoring any normalization constants, is
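(The displayed equation is dropped in the extracted text; the following is a reconstruction from the model as stated above, up to additive constants, and is not the paper's original typesetting.)

$$\log \pi(\beta_1, \ldots, \beta_N, \mu \mid y) \propto \sum_{i=1}^{N} \Big[ y_i \log p_i + (T - y_i) \log(1 - p_i) \Big] - \frac{1}{2} \sum_{i=1}^{N} (\beta_i - \mu)^\top \Sigma^{-1} (\beta_i - \mu) - \frac{1}{2}\, \mu^\top \Omega^{-1} \mu, \qquad p_i = \frac{\exp(z_i^\top \beta_i)}{1 + \exp(z_i^\top \beta_i)}.$$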
Sparsity patterns
Let x_1 and x_2 be two subsets of elements of x. Define D^2_{x_1,x_2} as the product set of cross-partial derivatives between all elements in x_1 and all elements in x_2. From the log posterior density in Equation 21, we can see that D^2_{β_i,β_i} ≠ 0 (one element of β_i could be correlated with another element of β_i), and that, for all i, D^2_{β_i,µ} ≠ 0 (because µ is the prior mean of each β_i). However, since the β_i and β_j are independently distributed, and the y_i are conditionally independent, the cross-partial derivatives D^2_{β_i,β_j} = 0 for all i ≠ j. When N is much greater than k, there will be many more zero cross-partial derivatives than non-zero. Each D^2 is mapped to a submatrix of H, most of which will be zero. The resulting Hessian of the log posterior density will be sparse.

The sparsity pattern depends on the indexing function; that is, on how the variables are ordered in x. One such ordering is to group all of the coefficients in the β_i for each unit together: β_11, ..., β_1k, β_21, ..., β_2k, ..., β_N1, ..., β_Nk, µ_1, ..., µ_k (22). In this case, the Hessian has a "block-arrow" structure. For example, if N = 5 and k = 2, then there are 12 total variables, and the Hessian will have the pattern in Figure 1a.
In both cases, the number of non-zeros is the same. There are 144 elements in this symmetric matrix, but only 64 are non-zero, and only 38 values are unique. Although the reduction in RAM from using a sparse matrix structure for the Hessian may be modest, consider what would happen if N = 1, 000 instead. In that case, there are 2,002 variables in the problem, and more than 4 million elements in the Hessian. However, only 12, 004 of those elements are non-zero. If we work with only the lower triangle of the Hessian, then we need to work with only 7,003 values.
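A small R sketch that builds this block-arrow pattern for N = 5 and k = 2 and reproduces the counts quoted above; the helper function is ours, not part of the package.

library(Matrix)

block_arrow <- function(N, k) {
  blocks <- kronecker(diag(N), matrix(1, k, k))   # N diagonal k x k blocks
  arrow  <- matrix(1, N * k, k)                   # cross-partials with mu
  M <- rbind(cbind(blocks, arrow),
             cbind(t(arrow), matrix(1, k, k)))
  Matrix(M != 0, sparse = TRUE)                   # pattern as a sparse logical matrix
}

patt <- block_arrow(5, 2)
sum(patt)            # 64 non-zero elements in the 12 x 12 pattern
sum(tril(patt))      # 38 elements in the lower triangle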
The sparsity pattern required by sparseHessianFD consists of the row and column indices of the non-zero elements in the lower triangle of the Hessian, and it is the responsibility of the user to ensure that the pattern is correct. In practice, rather than trying to keep track of the row and column indices directly, it might be easier to construct a pattern matrix first, check visually that the matrix has the right pattern, and then extract the indices. The package provides utility functions (Table 2) to convert between sparse matrices and the vectors of row and column indices required by the sparseHessianFD initializer.

Figure 1: Two examples of sparsity patterns for a hierarchical model.

Table 2 lists the following utility functions:
- Matrix.to.Coord: returns a list of vectors containing row and column indices of the non-zero elements of a matrix.
- Matrix.to.Pointers: returns indices and pointers from a sparse matrix.
- Coord.to.Pointers: converts a list of row and column indices (triplet format) to a list of indices and pointers (compressed format).
The Matrix.to.Coord function extracts row and column indices from a sparse matrix. The following code constructs a logical block diagonal matrix, converts it to a sparse matrix, and prints the sparsity pattern of its lower triangle.

R> library("sparseHessianFD")
R> bd <- kronecker(diag(3), matrix(TRUE, 2, 2))
R> Mat <- as(bd, "nMatrix")
R> printSpMatrix(tril(Mat))

If there is uncertainty about whether an element is a structural zero or not, one should err on the side of it being non-zero, and include that element in the sparsity pattern. There might be a loss of efficiency if the element really is a structural zero, but the result will still be correct. All that would happen is that the numerical estimate for that element would be zero (within machine precision). On the other hand, excluding a non-zero element from the sparsity pattern will likely lead to an incorrect estimate of the Hessian.
x A numeric vector, with length M at which the object will be initialized and tested.
fn, gr R functions that return the value of the objective function and its gradient. The first argument is the numeric variable vector. Other named arguments can be passed to fn and gr as well (see the ... argument below).
rows, cols Sparsity pattern: integer vectors of the row and column indices of the nonzero elements in the lower triangle of the Hessian.
delta The perturbation amount for finite differencing of the gradient to compute the Hessian (the δ in Section 1.1). Defaults to sqrt(.Machine$double.eps).

index1 If TRUE (the default), rows and cols use one-based indexing. If FALSE, zero-based indexing is used.

complex If TRUE, the complex step method is used. If FALSE (the default), a simple finite differencing of gradients is used.
... Additional arguments to be passed to fn and gr.
The sparseHessianFD class
The function sparseHessianFD is an initializer that returns a reference to a sparseHessianFD object. The initializer determines an appropriate permutation and partitioning of the variables, and performs some additional validation tests. The arguments to the initializer are described in Table 3.
To create a sparseHessianFD object, just call sparseHessianFD. Applying the default values for the optional arguments, the usage syntax to create a sparseHessianFD object is

obj <- sparseHessianFD(x, fn, gr, rows, cols, ...)

where ... represents all other arguments that are passed to fn and gr.
The fn, gr and hessian methods respectively evaluate the function, gradient and Hessian at a variable vector x. The fngr method returns the function and gradient as a list. The fngrhs method includes the Hessian as well.
An example
Now we can estimate the Hessian for the log posterior density of the model from Section 2.1. For demonstration purposes, sparseHessianFD includes functions that compute the value (binary.f), the gradient (binary.grad) and the Hessian (binary.hess) of this model. We will treat the result from binary.hess as a "true" value against which we will compare the numerical estimates.

To start, we load the data, set some dimension parameters, set prior values for Σ^{-1} and Ω^{-1}, and simulate a vector of variables at which to evaluate the function. The binary.f and binary.grad functions take the data and priors as lists. The data(binary) call adds the appropriate data list to the environment, but we need to construct the prior list ourselves.
Finally, we create an instance of a sparseHessianFD object. Evaluations of the function and gradient using the fn and gr methods will always give the same results as the true values because they are computed using the same functions. The default choice of method is complex=FALSE, so the evaluation of the Hessian is a finite-differenced approximation, which is very close to, but not identical to, the true value in terms of mean relative difference.

R> hs <- obj$hessian(P)
R> mean(abs(hs - true.hess)) / mean(abs(hs))
[1] 2.33571e-09

If complex=TRUE in the initializer, the call to the hessian method will apply the complex step method. To use this method, the functions passed as fn and gr must both accept a complex argument and return a complex result, even though we are differentiating a real-valued function. Although base R supports complex arguments for most basic mathematical functions, many common functions (e.g., gamma, log1p, expm1, and the probability distribution functions) do not have complex implementations. Furthermore, the complex step method is valid only if the function is holomorphic. The methods in sparseHessianFD do not check that this is the case for the function at hand. We convey the following warning from the documentation of the numDeriv package (Gilbert and Varadhan 2012), which also implements the complex step method: "avoid this method if you do not know that your function is suitable. Your mistake may not be caught and the results will be spurious." Fortunately for demonstration purposes, the log posterior density in Equation 21 is holomorphic, so we can estimate its Hessian using the complex step method, and compute the mean relative difference from the true Hessian.
Algorithms
In this section, we explain how sparseHessianFD works. The algorithms are adapted from Coleman, Garbow, and Moré (1985b), who provided Fortran implementations as Coleman, Garbow, and Moré (1985a). Earlier versions of sparseHessianFD included licensed copies of the Coleman et al. (1985a) code, on which the current version no longer depends. Although newer partitioning algorithms have been proposed (e.g., Gebremedhin, Manne, and Pothen 2005;Gebremedhin, Tarafdar, Pothen, and Walther 2009), mainly in the context of automatic differentiation, we have chosen to implement established algorithms that are known to work well, and are likely optimal for the hierarchical models that many statisticians will encounter.
Partitioning the variables
Finding consistent, efficient partitions can be characterized as a vertex coloring problem from graph theory (Coleman and Moré 1984). In this sense, each variable is a vertex in an undirected graph, and an edge connects two vertices i and j if and only if H_ij f(x) ≠ 0. The sparsity pattern of the Hessian is the adjacency matrix of the graph. By "color," we mean nothing more than group assignment; if a variable is in a group, then its vertex has the color associated with that group. A "proper" coloring of a graph is one in which two vertices with a common edge do not have the same color. Coleman and Moré (1984) define a "triangular coloring" as a proper coloring with the additional condition that common neighbors of a vertex do not have the same color. A triangular coloring is a special case of a "cyclic coloring," in which any cycle in the graph uses at least three colors (Gebremedhin, Tarafdar, Manne, and Pothen 2007).

An "intersection set" contains characteristics that are common to two vertices, and an "intersection graph" connects vertices whose intersection set is not empty. In our context, the set in question is the row indices of the non-zero elements in each column of L. In the intersection graph, two vertices are connected if the corresponding columns in L have at least one non-zero element in a common row. Powell and Toint (1979) write that a partitioning is consistent with a substitution method if and only if no columns of the lower triangle of the Hessian that are in the same group have a non-zero element in the same row. An equivalent statement is that no two adjacent vertices in the intersection graph can have the same color. Thus, we can partition the variables by creating a proper coloring of the intersection graph of L. This intersection graph, and the number of colors needed to color it, are not invariant to permutation of the rows and columns of H. Let π represent such a permutation, and let L_π be the lower triangle of πHπ^⊤. Coleman and Moré (1984, Theorem 6.1) show that a coloring is triangular if and only if it is also a proper coloring of the intersection graph of L_π. Furthermore, Coleman and Cai (1986) prove that a partitioning is consistent with a substitution method if and only if it is an acyclic coloring of the graph of the sparsity pattern of the Hessian. Therefore, finding an optimal partitioning of the variables involves finding an optimal combination of a permutation π and a coloring algorithm for the intersection graph of L_π.
These ideas are illustrated in Figures 2 and 3. Figure 2a shows the sparsity pattern of the lower triangle of a Hessian as an adjacency matrix, and Figure 2b is the associated graph with a proper vertex coloring. Every column (and thus, every pair of columns) in Figure 2a has a non-zero element in row 7, so no intersection set across the columns is empty. All vertices are connected to each other in the intersection graph (Figure 2c), which requires seven colors for a proper coloring. Estimating a sparse Hessian with this partitioning scheme would be no more efficient than treating the Hessian as if it were dense. Now suppose we were to rearrange H so the last row and column were moved to the front.
In Figure 3a, all columns share at least one non-zero row with the column for variable 7, but variable groups {2, 4, 6} and {1, 3, 5} have empty intersection sets. The intersection graph in Figure 3c has fewer edges than Figure 2c, and can be colored with only three colors.
The practical implication of all of this is that by permuting the rows and columns of the Hessian, we may be able to reduce the number of colors needed for a cyclic coloring of the graph of the sparsity pattern. Fewer colors means fewer partitions of the variables, and that means fewer gradient evaluations to estimate the Hessian.
The sparseHessianFD class finds a permutation, and partitions the variables, when it is initialized. The problem of finding a cyclic coloring of the graph of the sparsity pattern is NP-complete (Coleman and Cai 1986), so the partitioning may not be truly optimal. Fortunately, we just need the partitioning to be reasonably good to make the effort worth our while. A plethora of vertex coloring heuristics have been proposed, and we make no claims that any of the algorithms in sparseHessianFD are even "best available" for all situations.
The first step is to permute the rows and columns of the Hessian. A reasonable choice is the "smallest-last" ordering that sorts the rows and columns in decreasing order of the number of elements (Coleman and Moré 1984, Theorem 6.2). To justify this permutation, suppose non-zeros within a row are randomly distributed across columns. If the row is near the top of the matrix, there is a higher probability that any non-zero element is in the upper triangle, not in the lower. By putting sparser rows near the bottom, we do not change the number of non-zeros in the lower triangle, but we should come close to minimizing the number of non-zeros in each row. Thus, we would expect the number of columns with non-zero elements in common rows to be smaller, and the intersection graph to be sparser (Gebremedhin et al. 2007).
The adjacency matrix of the intersection graph of the permuted matrix is the Boolean cross-product L_π⊤ L_π. Algorithm 1 is a "greedy" vertex coloring algorithm, in which vertices are colored sequentially. The result is a cyclic coloring on the sparsity graph, which in turn is a consistent partitioning of the variables.
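The package's coloring code lives in compiled code built on Eigen, but the greedy idea itself is compact. The following Python sketch is ours and only illustrative (the toy sparsity pattern, ordering rule, and names are hypothetical, not the package's implementation); it builds the intersection graph from a lower-triangular pattern and colors it greedily:

    import numpy as np

    def intersection_graph(L):
        # Columns i and j of the lower triangle L are adjacent in the intersection
        # graph if they have a non-zero in a common row (Boolean cross-product L^T L).
        B = (L != 0).astype(int)
        A = (B.T @ B) > 0
        np.fill_diagonal(A, False)
        return A

    def greedy_color(A, order):
        # Give each vertex, in the supplied order, the smallest color
        # not already used by one of its colored neighbors.
        color = np.full(A.shape[0], -1)
        for v in order:
            used = {color[u] for u in np.flatnonzero(A[v]) if color[u] >= 0}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color

    # Hypothetical 4-variable lower-triangular sparsity pattern
    L = np.array([[1, 0, 0, 0],
                  [1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [1, 0, 1, 1]])
    A = intersection_graph(L)
    # Color denser columns first, loosely in the spirit of the ordering discussed above
    order = np.argsort(-(L != 0).sum(axis=0))
    print(greedy_color(A, order))   # one color index per variable, e.g. [0 1 2 1]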
Computing the Hessian by substitution
The cyclic coloring of the sparsity graph defines the G matrix from Section 1.1. We then estimate Y using Equation 16. Let C_m be the color assigned to variable m. The substitution method is defined in Coleman and Moré (1984, Equation 6.1).
We implement the substitution method using Algorithm 2. This algorithm completes the bottom row of the lower triangle, copies values to the corresponding column in the upper triangle, and advances upwards.
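To make the payoff of the partitioning concrete, here is a minimal Python sketch of our own, with a hypothetical quadratic test function: one gradient call per color produces the compressed matrix Y ≈ HG referred to around Equation 16. The bottom-up substitution of Algorithm 2, which recovers H from Y, is not shown.

    import numpy as np

    def compressed_differences(grad, x, color, delta=1e-7):
        # Perturb every variable of a color group at once and difference the gradients.
        ncolor = color.max() + 1
        g0 = grad(x)
        Y = np.empty((len(x), ncolor))
        for c in range(ncolor):
            d = np.where(color == c, delta, 0.0)
            Y[:, c] = (grad(x + d) - g0) / delta
        return Y    # column c approximates H times the 0/1 indicator of color c

    # Hypothetical quadratic problem: f(x) = 0.5 x'Hx, so grad(x) = Hx
    H = np.array([[4., 1., 0.],
                  [1., 3., 1.],
                  [0., 1., 2.]])
    grad = lambda x: H @ x
    color = np.array([0, 1, 0])          # variables 1 and 3 share a color in this toy case
    print(np.round(compressed_differences(grad, np.zeros(3), color), 6))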
Software libraries
The coloring and substitution algorithms use the Eigen numerical library (Guennebaud, Jacob et al. 2010).

Table 4: Computation times (milliseconds) for computing Hessians using the numDeriv and sparseHessianFD packages, and the finite difference and complex step methods, across 500 replications. Rows are ordered by the number of variables.
Speed and scalability
As far as we know, numDeriv (Gilbert and Varadhan 2012) is the only other R package that computes numerical approximations to derivatives. Like sparseHessianFD, it includes functions to compute Hessians from user-supplied gradients (through the jacobian function), and it implements both the finite differencing and complex step methods. Its most important distinction from sparseHessianFD is that it treats all Hessians as dense. Thus, we will use numDeriv as the baseline against which we can compare the performance of sparseHessianFD.
To prepare Table 4, we estimated Hessians of the log posterior density in Equation 21 with different numbers of heterogeneous units (N) and within-unit parameters (k). The total number of variables is M = (N + 1)k. Table 4 shows the mean and standard deviations (across 500 replications) for the time (in milliseconds) to compute a Hessian using functions for both the finite difference and complex step methods from each package. Times were generated on a compute node running Scientific Linux 6 (64-bit) with an 8-core Intel Xeon X5560 processor (2.80 GHz) with 24 GB of RAM, and collected using the microbenchmark package (Mersmann 2014). Code to replicate Table 4 is available in the doc/ directory of the installed package, and the vignettes/ directory of the source package. In Table 4 we see that computation times using sparseHessianFD are considerably shorter than those using numDeriv.
To help us understand just how scalable sparseHessianFD is, we ran another set of simulations, for the same hierarchical model, for different values of N and k. We then recorded the run times for different steps in the sparse Hessian estimation, across 200 replications. The steps are summarized in Table 5. The times were generated on an Apple Mac Pro with a 12-core Intel Xeon E5-2697 processor (2.7 GHz) with 64 GB of RAM.
In the plots in Figure 4, the number of heterogeneous units (N) is on the x-axis, and median run time, in milliseconds, is on the y-axis. Each panel shows the relationship between N and run time for a different step in the algorithm, and each curve in a panel represents a different number of within-unit parameters (k).

Measure: Description
Function: estimating the objective function
Gradient: estimating the gradient
Hessian: computing the Hessian (not including initialization or partitioning time)
Partitioning: finding a consistent partitioning of the variables (the vertex coloring problem)
Initialization: total setup time (including the partitioning time)

Table 5: Summary of timing tests (see Figure 4).
Computation times for the function and gradient, as well as the setup and partitioning times for the sparseHessianFD object, grow linearly with the number of heterogeneous units. The time for the Hessian grows linearly as well, which might seem surprising at first. We saw in Section 3.1 that adding additional heterogeneous units in a hierarchical model does not increase the number of required gradient evaluations, so we might think that the time to compute a Hessian should not increase with N at all. The reason it does is that each gradient evaluation takes longer. Nevertheless, we can conclude that the sparseHessianFD algorithms are quite efficient and scalable for hierarchical models.
| 8,416.8 | 2017-12-04T00:00:00.000 | ["Mathematics", "Computer Science"] |
CTM and QFD analysis: Framework for fintech adoption priority in commercial banks
As financial technology (fintech) is developing rapidly, many commercial banks experience difficulty deciding what kind of fintech to primarily focus on when managing their business. Owing to limited resources and assets, there is a practical need for guidelines for banks’ investments in fintech. This study provides a systemic procedure to identify promising fintech groups and their investment priorities. We propose a QFD-based decision support framework for banks by considering both aspects of the emerging fintech push identified using patent topic modeling and the market pull of banking services obtained from a survey of the literature and experts. An empirical application of the proposed QFD framework to major South Korean banks shows that transaction support technology, secure transactions, and trading platforms are the three most important fintech categories. The QFD results are utilized to guide individual banks for further investment strategies such as mergers and acquisitions, strategic partnerships, and spin-off operations. The proposed framework can be generalized and applied to other financial service firms.
Introduction
As digital transformation technology evolves, the banking sector has been actively adopting advanced financial technology (fintech) to offer new banking products and services and to gain competitiveness for better market positions. The literature notes that technological innovation in the banking sector influences numerous services. From a banking system perspective, fintech has been undergoing intensive discussion following the rapid growth of patents related to machine learning, blockchain, and robot advisers [1,2]. Fintech-driven innovation has gained attention for its complementary effect on traditional commercial banks, which already have a large amount of data related to customer information, including customer transaction behavior [3]. Big data, which are owned by traditional commercial banks, are highly accurate, complete, and reliable, and provide a unique advantage because they can be utilized to reduce credit risk and predict borrowers' behavior [4]. Commercial banks, as a backbone of the banking system, are advised to integrate modern technologies to retain users and remain competitive [5,6]. Major global banks, such as Bank of America, JP Morgan, and Wells Fargo, hold many patented technologies. Nevertheless, these banks have not created innovative new business models using these technologies. This indicates that integrating new technologies and services is more complex than simply developing technology.
The challenges facing Korean banks are more threatening than those of global banks. Mobile applications created by fintech-based startups with large-scale investments have secured monthly active users that exceed those of commercial bank mobile apps in a short period of time, and the gap is gradually widening [7]. A greater threat is the emergence of Internet-based banks. In particular, the rapid asset growth of Kakao Bank, a subsidiary of Kakao, which controls more than 95% of the Korean mobile messenger market, could be a factor in the declining market share of the banking industry. Korean commercial banks that lack IT-based technology and manpower are rushing to imitate the rapid service launches, built on modern technologies, of fintech startups and internet-only banks [8]. Therefore, Korean commercial banks are aware that it is essential to secure fintech to become a leader, not a follower. However, as can be seen from the examples of major global banks that have sufficient technology, more complex considerations are needed because simply securing technology does not create or guarantee a new business model.
The complexity of integrating new technologies and services leads to potential ambiguity for many banking firms in setting up their plans to employ fintech to transform financial services into ones that are cost-effective, convenient, reliable, and able to meet customer needs [9]. Rooted in technology adoption theories, the adoption of fintech is yet to be fully realized owing to users' remaining concerns regarding fraud, data protection, and cyber-attacks [10]. In such situations, practical guidelines for configuring fintech priorities should be employed. Specifically, a decision-making support framework is necessary for banks that considers both the fintech push of related emerging technologies and the market pull of banking services. However, most fintech or digital technology-related studies for banks have concentrated on verifying the positive impact of digital finance in stimulating innovation [11], diffusing mobile-based branchless banking services [12], encouraging user participation in risky financial markets [13], identifying failure factors of banking information systems [14], and gaining users' trust and adoption intention of Internet banking [15].
Different technologies may be required to improve the services offered by banks. For example, to refine the loan interest rate, it is necessary to accurately estimate the default rate of borrowers. Deep learning and text-mining algorithms can be used for this purpose. Conversely, a specific technology can be applied to multiple services. For instance, deep learning can be used to estimate the corporate default rate, but it can also be used to recommend an appropriate portfolio to customers. While recent waves of digital innovation have led to a positive outlook for various fintech application projects, traditional and commercial bank managers and executives face challenging tasks in prioritizing investments and developments [16]. Therefore, it is necessary to understand the interrelationship between the various services that banks offer and the different types of fintech applications that are available.
Based on the aforementioned challenges, we provide a systemic procedure that can be used to identify promising fintech groups and their investment priorities. Our proposed quality function deployment (QFD)-based decision support framework considers both the fintech push and the market pull of banking services. To consider the market pull aspect, we classified the services provided by commercial banks through the literature and expert surveys. For the fintech push, contextualized topic modeling is applied to patents to identify emerging fintech groups. We then applied the proposed framework to patent data filed at the United States Patent and Trademark Office (USPTO) together with a survey of major South Korean commercial banks. Subsequently, fintech investment strategies are proposed based on the empirical results. This study uniquely contributes to banks' efficiency in fintech adoption by proposing an approach that fulfills both research and practical needs.
The remainder of this paper is organized as follows. Section 2 provides a literature review of traditional commercial banks' perspectives on fintech. Section 3 provides an overview of the research methodology of QFD along with contextualized topic modeling (CTM) analyses and the survey of major South Korean commercial banks. The analysis results, implications, and investment strategies for fintech adoption by banks are discussed in Section 4. Lastly, the conclusions are discussed in Section 5.
Literature review
This section reviews research on fintech adoption and the relationship between innovation and performance from the perspective of commercial banks.
Choices of fintech adoption
Fintech and its adoption in the banking industry have brought innovations in various financial services, such as credit services, deposit services, financial market trading and brokerage, financial product advisory and retail sales, and transfer and global remittance [17][18][19][20]. Based on the predicted positive influence of fintech adoption on the commercial banking business model, various suggestions have been made regarding the scope of choices. For example, Thakor [17] proposed the following four services for fintech innovation: (i) credit, deposit, and capital-raising services (i.e., crowdfunding, lending marketplaces, mobile banks, and credit scoring); (ii) payments, clearing, and settlement services (i.e., mobile wallets and digital exchange); (iii) investment management services (i.e., e-trading and robo-advice); and (iv) insurance services (i.e., data-driven risk pricing and contracts).
However, the advent of fintech can also pose an unexpected challenge to sustaining market demand for commercial banks with traditional business models focused on the traditional financial market. Grobys et al. [22] found that fintech-embedded lending services can improve financial intermediation in mortgage markets. While traditional banks generally provide lending services that charge minorities higher fees for purchase and refinance mortgages, recently proposed fintech-embedded services can significantly reduce potential discrimination by using algorithms instead of in-person services [23]. Baker and Wurgler [24] also noted that independent mobile payments can lower overall costs by utilizing cloud computing to store and manage user data efficiently and, ultimately, offer faster payment processes.
Despite breakthroughs in technology development, traditional commercial banks' conservative and less strategic approaches to technology adoption have caused them to lose new market opportunities to fintech startups and new market entrants. Bunnell et al. [25] noted that implementing fintech should lead to potential solutions to the challenges faced by traditional financial advisory services, thereby ensuring improved services from both the service provider and user perspectives.
Thus, commercial banks with traditional business models must prioritize fintech applications based on market demand, target services, and patent-based technology readiness.
Strategies for fintech adoption
The general adoption or investment choice regarding which financial services should be improved depends on technology readiness and service strategies. Therefore, commercial banks must consider appropriate investment strategies for a desirable outcome given the intended scope of the services. Recent trends in commercial banks' efforts to acquire and develop intellectual capital can be explained from two perspectives: internal efforts (i.e., hiring data scientists and operating internal projects for innovation) and external efforts (i.e., funding or participating in joint ventures or mergers and acquisitions of technology companies). Brandl and Hornuf [26] differentiated investments from three perspectives: full integration of another company, strategic partnership between firms, and spin-off operations led by banks with traditional business models. They identified that banks with traditional business models must consider the possibilities of technology-driven digitalized financial services and coordinate common technological standards and banking functions to realize appropriate performance.
Intellectual capital includes intangible elements, such as knowledge, skill, information, and organizational structure, and tangible elements, such as patents, licenses, trademarks, and trade secrets [27]. IT investments are often a unique investment in acquiring intellectual capital in the banking industry [28]. The banking industry heavily invests in new technologies to satisfy service users' expectations and improve their overall experience [29]. Wang et al. [28] noted that small banks' tendency to overinvest and large banks' tendency to underinvest in technological development negatively or insignificantly impacts intellectual capital. The degree of investment in digital transformation or IT infrastructure requires strategic planning to achieve desirable performance.
Investment planning and strategic approaches have been proposed in several studies. Daim et al. [9] adopted patent co-citation analysis to better understand the emerging Internet of Things (IoT), cybersecurity, and blockchain technologies. They noted that the strategy of patent layouts and the development speed of innovative technologies are considered critical elements in determining the overall performance of IoT, blockchain, and cybersecurity. Baumann et al. [30] utilized patent documents to determine which countries invest in specific technologies and to identify potential innovation trends for energy technologies. They suggest that firms use strategic patenting to demonstrate their technological strategy for marketing purposes. Furthermore, they noted that the suggested analytical approach could provide information on patenting strategies between national and international patenting activities. Lastly, Duho and Onumah [31] emphasized the critical role of decision support units in achieving investment efficiency, as they positively drive intellectual capital performance.
Relationship between technological innovation and performance
Theoretical foundations of technology adoption research share two viewpoints: adopters (user level) and service providers (firm level). The most prominent theories include the technology acceptance model (TAM), the unified theory of acceptance and use of technology (UTAUT), diffusion of innovation (DOI), and dynamic capability, to name a few [32][33][34][35]. Among these theories, the firm-level viewpoint often relies on DOI and dynamic capability [35,36]. Similarly, financial service firms (including both traditional and internet-only banks) anticipate the growth of capabilities leading to innovation outcomes and potential performance improvement [37]. The dynamic capability view elaborates how firms can utilize capabilities and external trends to gain competitive advantage in the market [38]. Based on this view, technology adoption in financial services can be regarded as a modification or a complete renewal of existing service capabilities to fulfill the market's needs.
Technology-driven financial service development approaches produce innovative outcomes for new business models, applications, and processes or products [39]. Consequently, commercial banks initiate and accelerate various types of research and development activities for patent acquisition. For example, Wang et al. [40] suggested that the levels of fintech innovation outcomes (reduction in bank operating costs, service efficiency improvement, strengthened risk control capabilities, and enhanced customer-oriented business models) depend on the bank's use of technological innovation. Wang et al. [28] empirically validated that a positive impact of IT investments on intellectual capital can lead to competitive advantage, contingent on firm type, size, positioning, and location. However, because patents take time to come into effect, commercial banks often need government support and selective investments for desirable outcomes. For example, investment-driven technological innovations require time to actualize and may not show immediate results at initial financing [41]. Investments in internal projects alone do not directly lead to investment performance, and government regulations must be considered for a fintech boom to become apparent [42]. Haddad and Hornuf [43] noted that the availability of Internet server security, mobile subscriptions, and labor force also affects the development of fintech-driven markets. Moreover, the diffusion of financial service platforms is often affected by the adoption intentions of user groups [44]. Based on the various efforts to understand the relationship between the development of intellectual capital (such as patents), banking competitiveness, and performance, both internal and external factors must be considered [28].
For commercial banks anticipating the digital transformation of traditional business models, a critical decision support question remains: to which financial services should technology be strategically applied, and in what order, to appropriately fulfill target customer needs?
Research methodology and data analysis
Based on a thorough literature survey, we propose a patent-based QFD framework by first identifying the areas of financial services with sublevels. Second, emerging technologies in the banking industry are classified into several areas by applying CTM to the abstracts of fintech-related patents. QFD is then applied to identify the priorities of emerging fintech areas with respect to the prioritized needs of the financial service categories. Empirical results were obtained by applying the proposed framework to patent data filed at the USPTO, along with a survey of major South Korean commercial banks. Fintech employment strategies are proposed based on this analysis.
QFD is a systematic framework originally developed for enhancing overall product and service design quality by setting design targets based on users' needs and requirements [45]. The QFD application has proven useful in engineering and management for resolving design improvement solutions from a "what" and "how" perspective [46]. Specifically, for a complex problem related to technology development trends and changing dynamics in service characteristics, QFD can simplify decision-making problems. QFD has been applied to identify emerging robot technologies [46], innovative services in the healthcare industry [45], and technology implementation orders [37], thereby proving its usefulness and applicability in forecasting technologies.
As case companies for commercial banks, four of the largest banks ranked by asset value in South Korea were utilized in this study. Namely, the Woori, Shinhan, KB, and Hana financial groups are the only nationwide companies headquartered in South Korea. There are 19 commercial banks in Korea. Among them, 5 are special-purpose banks owned by the government, 6 are local banks that can operate only in the provinces, 3 are Internet-based banks, and 2 are foreign banks headquartered overseas. The four banks selected for this study are privately owned, nationally operational banks. In other words, only the four banks selected in the study are commercial banks that operate freely without any particular purpose. These firms have over 305 billion USD (400 trillion KRW) in total assets and more than 25,000 employees. They represent traditional and commercial banks in South Korea given their historic establishment, going back to the early 1900s, with hundreds of branches in South Korea and global branches in other countries. These firms offer retail, corporate, and international banking; credit card operations; foreign exchange; and other services. Other commercial banks with different headquarters but available in South Korea were excluded from the study for the accuracy of the classification process by field practitioners (Section 3.1).
Fig 1 illustrates the derivation procedure of the fintech priorities for commercial banks using the QFD framework. First, we describe how we classify the financial services of commercial banks. Second, we explain how the banking industry's fintech technology can be divided using CTM. Third, we apply the QFD methodology to derive the "fintech priority" based on finance-related patent data and opinions collected from commercial bank practitioners.
Identification of financial services of commercial banks (WHAT)
As displayed in Table 1, five distinct types of financial services are categorized, followed by examples of the services provided, impacts of technology, and keywords that represent the financial services well. This information was obtained from the perspective of commercial banks.
Classification of fintech categories (HOW)
This study classifies fintech using the latest topic modeling technique, CTM analysis. CTM provides keywords in each classification cluster (or topic cluster), providing baseline data to understand technology characteristics, such as readiness for application in financial services. This study utilizes patent data to identify technical characteristics or classifications to create a relationship matrix. Utilizing US patent data and the collected abstracts, 12 topics were extracted by applying CTM. The details of each step are as follows.

Step 1) Collection of patents using International Patent Classification (IPC) codes

Concerning development and patent registrations, the USPTO has been gaining attention for its leadership in the global trend [48]. In a recent study by Liu and Qiao [49], the USPTO demonstrated a significant degree of leadership in patent subjects and the proportion of profitable patents. Therefore, we collected patents filed at the USPTO, an appropriate representation of the fintech development trend, with IPC codes G06Q 20 or G06Q 40 from 1972 to 2020, as displayed in Fig 2. We used G06Q 20 and G06Q 40 because they directly relate to the financial system. The former concerns "payment architectures, schemes or protocols," and the latter concerns "finance; insurance; tax strategies; processing of corporate or income taxes." We conducted the research on patents filed after 2011 because most of the patents have been applied for since then (the proportion of patents filed after 2011 is 90%); earlier patents were excluded as too outdated for this study of the latest fintech.

Step 2) Extraction of topics

CTM was performed on the patent abstracts using Python. Before applying topic modeling, punctuation and insignificant words were removed. Specifically, we removed "stopwords," which frequently appear in sentences but contribute little to semantic analysis. A graphical analysis method was used to obtain the appropriate number of topics. Using a dimension reduction method (PCA) and keyword extraction, we visually inspected the distance between topics and the number of overlapping topics. Using this approach, the number of topics with minimal overlap was selected while varying the number of topics from 10 to 20. Through this process, we set the number of topics to 12. From the CTM, we obtained relevant topic words for the 12 topics, along with the probability that each patent belongs to a specific topic. Table 2 lists the 12 topics and relevant topic words.
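To make the preprocessing-plus-extraction pattern concrete, the sketch below is only an illustration: the paper uses CTM, whereas scikit-learn's LDA is used here purely as a stand-in topic model, and the three "abstracts" are invented placeholders rather than USPTO data.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    abstracts = [
        "A method for secure payment authorization using tokenized credentials.",
        "A trading platform that matches orders and settles transactions in real time.",
        "An insurance claim processing system with automated risk scoring.",
    ]

    # Tokenize, drop punctuation and English stopwords, and build a document-term matrix
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(abstracts)

    # Fit a topic model (the paper settles on 12 topics after PCA-based inspection)
    lda = LatentDirichletAllocation(n_components=12, random_state=0).fit(X)

    doc_topic = lda.transform(X)        # per-patent topic probabilities, used in the QFD step
    terms = vec.get_feature_names_out()
    top_words = [terms[i] for i in lda.components_[0].argsort()[::-1][:10]]
    print(top_words)                    # ten most relevant words for topic 0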
Step 3) Overview of anticipated technology-driven outcomes

To classify each topic as a technology, the 10 patents with the highest probability of being included in each topic were selected. The abstracts of the corresponding patents were carefully read and analyzed. Subsequently, each topic was classified as a technology by comprehensively considering the relevant topic words and the technologies mentioned in the patent abstracts (see S1 Appendix; availability of the detailed datasets can be discussed upon request, subject to license status).
QFD analysis
QFD is a systematic framework that can be utilized to prioritize the order of investment in varying fintech. The framework consists of a matrix-like structure and provides decision-making support for identifying the relationship between customer requirements (i.e., financial services) and technical solutions (i.e., fintech topics). Using the interrelationship between the WHAT and HOW lists and the weights of the HOW and WHAT lists, QFD is applied to obtain the priority of the HOW list.

Step 1) Identification of interrelationships
The core services of commercial banks were divided into five categories, and the keywords that represent them were selected (see Section 3.1). If one or more keywords for a specific core service (WHAT list) existed in a patent abstract, that patent was considered a "core service"-related patent.
Utilizing the CTM results, we could also identify the probability that each patent refers to a certain technology (HOW list). For example, by matching the keywords of a specific core service (WHAT list) against the patents investigated in the study, we could identify the degree of coverage of that core service in a specific technology (HOW list).
In other words, patents were initially classified into core service-related patent groups, each corresponding to a core service. Subsequently, by evaluating the probability of which technology (HOW list) each patent belongs to, we could estimate the degree of interrelationship between "core service (i)" and "technology (j)" based on the probabilities of patents belonging to both groups (p^k_ij). Finally, we examined how each group is organized by technology by adding the probabilities of all the patents belonging to the group, and we refer to this sum as the interrelationship between i and j, IR_ij. Specifically, IR_ij was obtained by adding p^k_ij, the probability that patent k belongs to core service i and technology j, over the N patents used in this study, that is, IR_ij = Σ_{k=1..N} p^k_ij, where p^k_ij is 0 if patent k is related to neither service i nor technology j. The overall summary of the standardized interrelationships, ST(IR_ij), obtained by standardizing the IR_ij values, is displayed in Table 3.
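A minimal Python sketch of this step follows; the toy inputs are hypothetical, and the paper's exact standardization ST(.) is not reproduced here, so a simple sum-to-one normalization stands in for it.

    import numpy as np

    # doc_topic[k, j]: probability that patent k belongs to fintech topic j (from the topic model)
    doc_topic = np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.3, 0.3, 0.4],
                          [0.2, 0.1, 0.7]])

    # service_mask[i, k] = 1 if a keyword of core service i appears in patent k's abstract
    service_mask = np.array([[1, 0, 1, 0],
                             [0, 1, 1, 1]])

    # IR[i, j] = sum over patents k of p^k_ij: the topic-j probability when the patent
    # matches service i, and 0 otherwise
    IR = service_mask @ doc_topic
    ST = IR / IR.sum()          # stand-in standardization across all cells
    print(np.round(ST, 3))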
Step 2) Identification of weights for financial services (WHAT) and fintech topics (HOW)

The weight of subjectiveness (W^sub_i, the weight of WHAT) was determined by five practitioners and experts (three general managers, one IT manager, and one division leader) from the case companies, and the weights were normalized. Pang et al. [50] emphasized the integration
The weight of importance (W^imp_j, the weight of HOW) was computed using citation information, such as the average number of citations and the number of patents for each topic. Specifically, we identified the total number of patents for each fintech topic j extracted from the CTM and the number of citations for each of these patents. Subsequently, the average number of citations (C^avg_j) was obtained by dividing the total number of forward citations for the topic by the expected total number of patents with topic j. Forward citation information was obtained from the patent field until July 2021. These averages were then standardized across topics, and the overall weight of importance is displayed in Table 5. The weight of urgency (W^urg_j, the weight of HOW) was calculated using the number of patents related to each fintech topic (HOW list) from 2011 to 2020 (Table 6). The exponentially weighted moving average (EWMA) was used to assign more weight to recently applied patents. Table 6 also shows the EWMA (λ = 0.5) values by year and the weight of urgency based on the EWMA value for the data year 2020.
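The following Python sketch illustrates the urgency and importance weights; λ = 0.5 and the 2011-2020 window follow the text, while the yearly counts, topic labels, and citation numbers are invented for illustration.

    import numpy as np

    def ewma(counts, lam=0.5):
        # Exponentially weighted moving average over yearly patent counts,
        # so recent filings count more toward the urgency weight.
        value = float(counts[0])
        for c in counts[1:]:
            value = lam * c + (1.0 - lam) * value
        return value

    # Hypothetical yearly filing counts (2011-2020) for two fintech topics
    counts_by_topic = {
        "Topic 3 (transaction support)": [30, 35, 40, 52, 60, 75, 90, 110, 130, 150],
        "Topic 9 (insurance-related)":   [20, 18, 22, 19, 21, 20, 23, 22, 21, 20],
    }
    raw_urgency = {t: ewma(c) for t, c in counts_by_topic.items()}
    total = sum(raw_urgency.values())
    W_urg = {t: v / total for t, v in raw_urgency.items()}   # normalized urgency weights

    # Importance weight input: average forward citations per patent in a topic,
    # e.g. 480 citations spread over an expected 160 patents
    C_avg = 480.0 / 160.0
    print({t: round(w, 3) for t, w in W_urg.items()}, C_avg)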
Results of QFD analysis
We adopted the QFD with the interrelationships and the WHAT weights to obtain a more sophisticated "fintech priority." Table 7 shows the results of the QFD with the interrelationships and weights applied. Topics 3 (transaction support technology), 1 (secure transactions), and 4 (trading platforms) were evaluated as the top three priorities in the QFD analysis. By contrast, Topics 9 (insurance-related tech), 6 (financial product valuation and design), and 11 (hardware configuration) were assigned low priorities. Consequently, Topics 3, 1, and 4 were considered important technologies that need to be acquired in a timely manner. Notably, for some specific fintech topics, there is a noticeable difference in the priority order when only the subjective weight (W_sub) is applied compared with when both the subjective weight (W_sub) and the urgency weight (W_urg) are applied. For example, in the case of Topic 6 (financial product valuation and design), when only the subjective weight was applied, it was derived as the second-most important topic, but when both the subjective and urgency weights were applied, its priority was reclassified as the second-least important topic.
Conversely, in the case of Topics 1 (secure transactions) and 2 (mobile transactions), when only the subjective weight was applied, they were derived as being ninth and tenth in importance, respectively; however, when both subjective and urgency weights were considered, they were reclassified as the second- and fourth-most important topics. These results suggest that if technology priorities are derived by reflecting only the opinions of experts from commercial banks, the findings may not capture a holistic view of managerial and technical trends. The proposed methodology therefore includes both comprehensive inputs from practitioners and the development trends of fintech technologies.
Investment strategy for commercial banks
As recently noted by Brandl and Hornuf [26], traditional and commercial banks can utilize various investment strategies to achieve digitalized financial services, such as full integration of another company, strategic partnerships between firms, and spin-off operations initiated by banks. This study provides complementary decision support guidelines that practitioners can use for investment decisions and partnership approaches for specific technologies. Specifically, practitioners can utilize the weights assigned by experts and patent information to make integrative decisions regarding technology acquisition.
Fig 3 presents the potential classification and relevant strategic guidelines for each technology topic. For example, Group I may be considered for full integration or strategic partnerships among firms or competitors. If patent owners are emerging technology or start-up fintech companies seeking merger opportunities, the traditional bank may benefit from considering full acquisition. However, if patent owners are large national companies (e.g., Bank of America) or cannot be considered for acquisition for geopolitical reasons, then a strategic partnership may be the more intuitive choice.
For Group II technologies, integrative spin-off operations with internal and external development efforts are needed. Internal development offers various benefits in fostering dynamic capabilities related to digital servitization, but it also requires consideration of external factors, such as environmental contingencies [35]. Based on the extent of patents that exist for these technology topics, a joint collaboration between banks and fintech firms can create a fine-tuned service, thereby providing synergistic performance to satisfy market needs [51]. Finally, for Group III, it is necessary to adopt conservative approaches in both investment and development strategies. A strategic partnership is recommended for the technology topics under this group, which will require a high level of collaboration on internal and external technology development for a desirable investment outcome. The results of the grouping and its implications are summarized in Table 8. We further interpret the abovementioned findings in the context of commercial banks in South Korea.
Conclusion
This study primarily aimed to resolve the absence of a decision framework for investment in technology, specifically from the perspective of commercial banks. Rooted in technology adoption theory, customer-centric financial services were classified based on the integrative views of the literature and field practitioners. Then, the emerging topics of technological trends were identified using a patent database. CTM and QFD analyses were applied to extract technology investment priorities and recommendations for acquisition strategies for each fintech topic. This study has significant implications based on its findings. First, technology investment priorities must be determined based on the overall financial service strategy. R&D and technology investments are likely to lead to superior innovation capabilities, which can thereby enhance new core competencies based on the theory of dynamic capabilities [36,44]. However, recent efforts to prioritize technology investment have mainly investigated investment productivity using efficiency measurement approaches such as data envelopment analysis [52]. Our findings instead highlight the importance of identifying the service strategy prior to deciding the investment priorities, as the results differ based on the urgency of service development. Banks must decide on appropriate investment strategies and develop internal and external resources rather than depending on technology advancement alone for the investment decision.
Second, our findings reshape the technology acquisition strategies of commercial banks. The insights obtained from our study include the following: (1) transaction support technology, secure transactions, and trading platforms are commonly evaluated as the most critical technology topics; and (2) commercial banks are recommended to form investment strategies such as M&As, strategic partnerships, and spin-off operations considering the number of patents, the importance of the technology, and the size of patent-owning fintech firms. Although various studies emphasize the importance of technology-empowered personalized services [53], our study aligns well with Cao et al.'s [21] study in that transaction security is the utmost priority. This may be due to the high level of collaboration required for security technology. Thus, in contrast to the recent trend of emphasizing fintech adoption without considering the order of investment, the proposed approach demonstrates a practical way to reflect both.
Lastly, the alignment between financial service development and fintech trends can be improved through the decision support framework proposed in this study. Most importantly, this framework can be generalized and applied to other firms. The outcomes of this research approach can support and enable practitioners to make strategic decisions to enhance the productivity of fintech applications in meeting financial services. To the best of our knowledge, this is the first study to use a QFD structure to identify fintech investment orders using objective technological trends based on patent information and subjective financial service priorities.
It should be noted that without support from government regulations and policies, it is difficult for companies to achieve financial innovation or make sustainable investment decisions. When the Financial Services Commission in South Korea proposed supportive policies, such as the Special Act on Support for Financial Innovation in 2019 and the Electronic Financial Transaction Act in 2017, various companies, including big tech and e-commerce companies, began various investment activities with a positive outlook on entering the financial industry. This appears to be well aligned with the global trend of fostering an ecosystem of fintech startups and commercial banks in the financial sector [54].
However, a vague regulatory and policy stance remains regarding developing comprehensive and advanced payment and settlement services. For example, the regulatory system is currently ambiguous about whether fintech firms are allowed to acquire and handle personal data or whether this is restricted to traditional banks only. Furthermore, regarding virtual asset-related businesses, the current South Korean government maintains a conservative view and stands by a policy prohibiting financial services from engaging in transactions involving virtual assets. For example, the recently announced Act on Reporting and Using Specified Financial Transaction Information requires individuals or firms to report cryptocurrency-related transactions.
Based on the overall trend of fintech development and the involvement of various companies, regulators and policymakers need to actively consider how to support effective collaboration with appropriate incentive schemes. This can be further explained by observing the top 20 firms that own patents for Topics 3 and 4 (see S1 Appendix). Some patents related to Topic 3 are owned by global financial companies, whereas others are owned by individuals or small-sized companies. As most patents related to Topic 4 are owned by large exchanges and global financial firms, it is difficult to acquire the technology through M&A. Therefore, to introduce and develop Topic 4-related technologies, it is necessary to consider paying a fee and forming a technology alliance.
Globally, and particularly in South Korea, policymakers tend to separate the management of financial and nonfinancial corporations. Specifically, in the case of commercial banks, various direct or indirect regulations tend to hinder the rapid introduction of fintech by commercial banks. Most commercial banks worldwide do not speed up digital transformation because of regulations. For the long-term development of the financial system, it is necessary to quickly introduce developing fintech technology into traditional commercial banks, and active support from policymakers is required.
While this study provides a fintech acquisition strategy for commercial banks, it also intends to stimulate greater interest in understanding fintech applications from the perspective of traditional or commercial banks. To this end, this study proposes several research avenues to foster synergistic collaboration for greater performance in the financial industry. First, the classification of financial services and their subjective importance can benefit from inputs from other regions. While this study provides an integrative perspective of the literature and field practitioners from South Korea, it does not provide as comprehensive a perspective as the patent database. Second, the QFD framework with several subjective perspectives can be evaluated and verified in different contexts. For example, the framework's validity in resolving the potential gap between several subjective views can be further investigated from a decision-support system perspective, as other complementary approaches can further support practitioners in making better investment decisions. Third, the time it takes from patent application to commercialization should be considered when making strategic plans for the banks. For example, Broekel [55] and Daiha [56] noted a time lag for patent applications to come into effect. Finally, internal and external factors in fintech application-based financial services should be considered for long-term planning. For example, certain technologies may have higher volatility in terms of providing stable technologies and services to end customers. For sustainable investment planning and strategy, both market readiness and technological uncertainty should be incorporated in the decision-making process. These areas are left for further studies.
Fig 2. Distribution of the patents filed at USPTO with IPC codes G06Q 20 or G06Q 40.
https://doi.org/10.1371/journal.pone.0287826.g002
Total number of citations_j = Σ_k p^k_j × (number of forward citations of patent k), where p^k_j is the probability that patent k belongs to fintech topic j.
Table 1. Classification of financial services based on the literature. (Columns: Core financial services; Examples of services; Examples of technology disruption; References.)
https://doi.org/10.1371/journal.pone.0287826.t001
| 7,596.4 | 2023-11-01T00:00:00.000 | ["Business", "Computer Science"] |
A signature invariant geometric algebra framework for spacetime physics and its applications in relativistic dynamics of a massive particle and gyroscopic precession
A signature invariant geometric algebra framework for spacetime physics is formulated. By following the original idea of David Hestenes in the spacetime algebra of signature (+,−,−,−), the techniques related to relative vector and spacetime split are built up in the spacetime algebra of signature (−,+,+,+). The even subalgebras of the spacetime algebras of signatures (±,∓,∓,∓) share the same operation rules, so that they could be treated as one algebraic formalism, in which spacetime physics is described in a signature invariant form. Based on the two spacetime algebras and their "common" even subalgebra, rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime are constructed. A signature invariant treatment of the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane is presented. For a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity of the fiducial observer, at rest in the coordinate system of the spacetime metric, are given, where the proper time of the fiducial observer is identified, and the contribution of the bivector connection is considered, and with these results, a three-dimensional analogue of Newton's second law for this particle in curved spacetime is achieved. Finally, as a comprehensive application of the techniques constructed in this paper, a geometric algebra approach to gyroscopic precession is provided, where for a gyroscope moving in the Lense-Thirring spacetime, the precessional angular velocity of its spin is derived in a signature invariant manner.
• For two multivectors A and B in spacetime, their geometric product, inner product, outer product, and commutator product are represented by AB, A · B, A ∧ B, and A × B, respectively;
• For a multivector M in spacetime, M̃ and ⟨M⟩_p (p = 0, 1, 2, 3, 4) denote its reverse and p-vector part, respectively, where ⟨M⟩_0 is abbreviated as ⟨M⟩;
• The Greek letters, denoting spacetime indices, range from 0 to 3, whereas the Latin letters, denoting space indices, range from 1 to 3;
• The sum should be taken over repeated indices appearing within a term;
• The international system of units is used.
Since the even subalgebras of the two STAs share the same operation rules, we will no longer distinguish them strictly and treat them as one algebraic formalism hereafter. In Appendix B of this paper, a detailed presentation of this algebraic formalism is given. It will be shown that the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) actually provides a signature invariant GA framework for spacetime physics. In order to give an application paradigm of the two STAs and their "common" even subalgebra, we need to make use of them to study some specific problems in spacetime physics, and gyroscopic precession is such a typical topic.
According to the prediction of General Relativity, the spin of a gyroscope precesses relative to the asymptotic inertial frames as it moves around a rotating spherical source 22. The conventional method to describe gyroscopic precession under the weak-field and slow-motion (WFSM) approximation in tensor language is presented in Refs. 21,22. For a uniformly rotating spherical source, the external gravitational field is stationary, and only the leading pole moments need to be considered, so that the spacetime geometry is described by the Lense-Thirring metric 30. As a result, the corresponding spacetime is known as the Lense-Thirring spacetime. When a torque-free gyroscope is moving in this spacetime, there exist three types of precession for its spin, namely, the de Sitter precession, the Lense-Thirring precession, and the Thomas precession, which result, respectively, from gyroscopic motion through the spacetime curved by the mass of the source, rotation of the source, and gyroscopic non-geodesic motion 31.
In the traditional description of gyroscopic precession based on tensor language, one always needs to work with the components of some tensor in a chosen coordinate frame, which often leads to many equations with a low degree of clarity. The language of STA could provide a physically clear approach to this topic, since one deals only with geometric objects during the calculation 32. As a preliminary attempt, another purpose of the present paper is to handle gyroscopic precession by applying the STAs of signatures (±,∓,∓,∓) and their "common" even subalgebra, so that for a gyroscope moving in the Lense-Thirring spacetime, a signature invariant derivation of the precessional angular velocity of its spin could be achieved. For brevity, in later applications, the signs "±" associated with multivectors and operators will be suppressed, and for equalities like A = F(±B) and C = G(∓D), the signs "+" and "−" in the former equation correspond to the cases of the signatures (+,−,−,−) and (−,+,+,+), respectively, and the correspondence in the latter equation is reversed.
Before analyzing gyroscopic precession, rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime need to be addressed in the two STAs. Rotor techniques are available in the STA of signature (+,−,−,−) [17][18][19][20]; however, since the STA of signature (−,+,+,+) is rarely employed, these techniques have not been fully developed in that algebraic formalism, and in particular the expressions of the rotors inducing Lorentz boosts and spatial rotations should be clearly established. As the third purpose of this paper, by virtue of the rotors constructed in the "common" even subalgebra of the two STAs, the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane are handled in a signature invariant manner. How to study physics in curved spacetime based on STA is a fundamental problem. By following the GA techniques for General Relativity formulated in Ref. 33, the treatment of gyroscopic precession in this paper is able to be put on a solid theoretical footing. To generate the STAs of signatures (±,∓,∓,∓) in a curved spacetime, one just needs to define a local orthonormal tetrad {γ_α} by the orthonormalization of a coordinate frame (in either signature), and then, by applying these two STAs and their "common" even subalgebra, the relevant topics in spacetime physics can be dealt with.
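For readers less familiar with rotors, the pure boost in the long-established (+,−,−,−) conventions takes the textbook form below (quoted only for orientation; the signature invariant construction in this paper generalizes it, and sign conventions vary between sources):

\[
L = e^{\alpha\hat{\boldsymbol{\sigma}}/2} = \cosh\frac{\alpha}{2} + \hat{\boldsymbol{\sigma}}\,\sinh\frac{\alpha}{2},
\qquad \tanh\alpha = \frac{|\mathbf{v}|}{c},
\qquad \hat{\boldsymbol{\sigma}} = \frac{\mathbf{v}}{|\mathbf{v}|},
\]

so that a four-vector a transforms as a → L a L̃, while a spatial rotation by an angle θ in the plane of a unit spatial bivector B̂ is generated by the rotor R = e^{−B̂θ/2}.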
Relativistic dynamics of a massive particle in curved spacetime should be studied so as to describe the motion of a gyroscope moving around a gravitating source 34. We assume that a collection of fiducial observers is distributed over space, and each fiducial observer is at rest in the coordinate system of the spacetime metric. For a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity γ_0 of the fiducial observer need to be derived, which is easy when spacetime is flat. However, in curved spacetime, some subtleties appear and ought to be seriously analyzed. For instance, the proper time of fiducial observers should be identified, and the contribution of the bivector connection ω(u) associated with {γ_α} (cf. Ref. 33) should also be considered. In this paper, after overcoming these difficulties, the results are given, and with them, a three-dimensional analogue of Newton's second law for the particle in curved spacetime is achieved, which is the fourth purpose of the present paper. Besides, the Fermi-Walker derivatives presented in tensor language are recast in the STAs of signatures (±,∓,∓,∓) so that the motion of the spin of a gyroscope can be depicted in these two STAs 21.
With the aid of the GA techniques constructed before, an efficient treatment of gyroscopic precession could be provided in the two STAs. Considering a gyroscope moving in the Lense-Thirring spacetime, some significant results like the three-dimensional generalized equation of motion for the gyroscope are first given on the basis of relativistic dynamics of a massive particle. Then, the rotor techniques are employed to handle the spin of the gyroscope, and the direct result shows that a bivector field Ω(τ ) along its worldline completely determines the motion of its spin, where τ is the proper time. The bivector field Ω(τ ) is dependent on the rotor L generating the pure Lorentz boost from the gyroscope's four-velocity u to the fiducial observer's four-velocity cγ 0 and the bivector connection ω(u) associated with {γ α } , where c is the velocity of light in vacuum. Just like the Faraday bivector, namely the electromagnetic field strength, the bivector field Ω(τ ) can also be decomposed into the electric part Ω (E) (τ ) and the magnetic part Ω (B) (τ ) . Let {γ β } be the reciprocal tetrad of {γ α } , and technically, if the condition L aLγ 0 = cΩ (E) (τ ) is fulfilled, the spin of the gyroscope always precesses relative to its comoving frame, determined by the pure Lorentz boost generated by the rotor L , with Ω (B) (τ ) as the precessional angular velocity.
The key point is to write down signature invariant expression of the bivector field Ω(τ ) and the spacetime split of the gyroscope's four-acceleration a with the normalized four-velocity γ 0 of the fiducial observer based on the "common" even subalgebra of the two STAs. According to Refs. 33,35 , the bivector connection ω(u) associated with {γ α } can be directly derived, and then, by recasting it in terms of the relative vectors {σ k } , its signature invariant expression and those of its electric part ω (E) (u) and magnetic part ω (B) (u) are obtained. Moreover, by applying the rotor techniques, the pure Lorentz boost L from u to cγ 0 can also be derived. Thus, as noted before, the signature invariant expression of Ω(τ ) and those of Ω (E) (τ ) and Ω (B) (τ ) are completely determined. As to a, its spacetime split with γ 0 could be directly obtained from the relevant conclusion in relativistic dynamics of a massive particle. Thus, with a, L , and Ω (E) (τ ) , one is capable of verifying that the condition L aLγ 0 = cΩ (E) (τ ) holds by means of various operations in the "common" even subalgebra of the two STAs, and hence, the spin of the gyroscope indeed precesses in the comoving frame with Ω (B) (τ ) as the precessional angular velocity. After expanding Ω (B) (τ ) up to 1/c 3 order with 1/c as the WFSM parameter 36 , the gyroscope spin's angular velocities of the de Sitter precession, the Lense-Thirring precession, and the Thomas precession are able to be read out, and their expressions, in the form of geometric objects, are equivalent to their conventional ones in component form, respectively.
The whole derivation implies that the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) does provide a signature invariant GA framework for spacetime physics, and the rotors, presented in a signature invariant form, can be used to generate Lorentz transformations in these two STAs. The treatment of relativistic dynamics of a massive particle and gyroscopic precession intuitively displays the basic method of dealing with specific topics in curved spacetime within the signature invariant GA framework, which suggests that the GA techniques established in this paper are efficient and reliable. No doubt, if these techniques are directly applied to gyroscopic precession in alternate theories of gravity, such as f(R) gravity 30,37-39 , f (R, G) gravity 40,41 , and f(X, Y, Z) gravity 42 , they will definitely facilitate the relevant studies, where G is the Gauss-Bonnet invariant, X := R is the Ricci scalar, Y := R µν R µν is the quadratic contraction of two Ricci tensors, and Z := R µνσρ R µνσρ is the quadratic contraction of two Riemann tensors. Furthermore, by developing other types of techniques, the method in this paper could also be applied to more fields, and in fact, some topics in classical mechanics and electrodynamics have been described in such a manner. The applications of this method will be expected to be extended to a wider range in the future, so that the study of spacetime physics in the language of GA could be greatly promoted.
This paper is organized as follows. In "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", the STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra are formulated. In "Rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime", rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime are constructed. In "A GA approach to gyroscopic precession in the Lense-Thirring spacetime", a GA approach to gyroscopic precession in the Lense-Thirring spacetime is given. In "Summary and discussions", some concluding remarks will be made. In Appendix A, operation rules of blades in the STAs of signatures (±, ∓, ∓, ∓) are summarized. In Appendix B, the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) is introduced in detail. In Appendix C, a local orthonormal tetrad {γ α } and the bivector connection ω(u) associated with it in the Lense-Thirring spacetime are derived.
STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra
STA, introduced in the classical literature Space-Time Algebra by David Hestenes (1966), can provide a synthetic framework for relativistic physics 17 , so it has attracted widespread attention in the physical community. Since the establishment of STA, the signature (+, −, −, −) has been widely used; however, in relativistic physics, one of the main application fields of STA, the opposite signature (−, +, +, +) is often adopted 17,23 . Thus, when one intends to apply STA to relativistic physics, the change of signature from one to the other will cause inconvenience even though these two signatures differ only by a minus sign. In fact, the STA of signature (−, +, +, +) was also used [24][25][26][27][28][29] , but a lack of long-term attention to it means that the techniques related to relative vectors and spacetime splits have not been developed in this algebraic formalism, so its applications are quite limited. In this section, by following the original idea of David Hestenes, we will build up these techniques in the STA of signature (−, +, +, +) so that a more convenient approach to relativistic physics can be given in the language of GA. For ease of writing, we will directly formulate the STAs of signatures (±, ∓, ∓, ∓) and analyze the operation rules of multivectors. In spacetime, the STAs of signatures (±, ∓, ∓, ∓) can be generated by corresponding orthogonal vectors {γ ± α } satisfying Eq. (1), where η ± αβ are the Minkowski metrics in the two signatures. With these vector generators {γ ± α } , explicit bases (2) for both STAs are defined, in which, in either signature, one scalar, four vectors, six bivectors, four trivectors, and one pseudoscalar are contained. One can perform operations between any two multivectors in spacetime by expanding them in a basis, once operation rules of blades of different grades are given, where the term "blade" here denotes a multivector written as the outer product of a set of vectors (cf. Ref. 17 ). In Appendix A of this paper, a detailed list of operation rules of blades in the two STAs is presented, and based on these rules, the "common" even subalgebra of these two STAs will be constructed in the following.
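Since the displayed relations were lost in extraction, here is a hedged reconstruction of what Eq. (1) presumably states (our own rendering of the standard generator relations; the diagonal form of the metric reflects the assumption of orthonormal generators):
$$\gamma^{\pm}_{\alpha}\cdot\gamma^{\pm}_{\beta} = \eta^{\pm}_{\alpha\beta}, \qquad \eta^{\pm}_{\alpha\beta} = \mathrm{diag}(\pm 1, \mp 1, \mp 1, \mp 1),$$
so that, in either signature, a general multivector decomposes into one scalar, four vectors $\gamma^{\pm}_{\alpha}$, six bivectors $\gamma^{\pm}_{\alpha}\gamma^{\pm}_{\beta}$ ($\alpha < \beta$), four trivectors, and the pseudoscalar $I^{\pm} = \gamma^{\pm}_{0}\gamma^{\pm}_{1}\gamma^{\pm}_{2}\gamma^{\pm}_{3}$.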
According to Eqs. (A1) and (A7), the orthogonality between the vector generators {γ ± α } implies that the bases (2) can be rewritten in terms of geometric products, where the geometric products of {γ ± α } are obviously anticommutative. By making use of the anticommutation of {γ ± α } , the pseudoscalars I ± also have alternative expressions, with ε ijk as the three-dimensional Levi-Civita symbol. Among the basis blades, those of even grade form bases for the even subalgebras of the two STAs. Now, we will first discuss some properties of the bivectors {γ ± 0 γ ± k } . With Eqs. (1), (4), and (A14), one can directly derive the equalities (7) and (8), where δ ij is the Kronecker symbol, and in the second step of (8), Eqs. (5), (A5), and (A10) have been used. These equalities show that relative vectors, spanning the relative spaces orthogonal to the timelike vectors γ ± 0 , can be defined as {σ ± k = ∓γ ± 0 γ ± k = γ ± k γ 0 ± } with {γ α ± } as the reciprocal frames of {γ ± α } , so that they have algebraic properties similar to those of the Pauli matrices, and then, by inserting Eqs. (13) and (14) into Eqs. (7) and (8), respectively, we get relations which prove once again that the algebraic properties of {σ ± k } are similar to those of the Pauli matrices. In fact, as mentioned in Ref. 32 , {σ + k } or {σ − k } provide a representation-free version of the Pauli matrices. Equations (10) and (12) show that the relative spaces orthogonal to γ ± 0 are both Euclidean spaces of dimension 3, with {σ ± k } and I ± as orthonormal bases and pseudoscalars, respectively. In relative space, a relative vector, although being a bivector in STA, is actually treated as a multivector of grade 1, and thus, in this sense, the inner product and the cross product between two relative vectors can be defined. Let a ± = a ± i σ ± i and b ± = b ± j σ ± j be relative vectors; then, with the help of Eqs. (10) and (11), the inner products and the cross products between a ± and b ± are defined accordingly, where the commutator products between a ± and b ± have been used. Obviously, the above definitions of inner product and cross product are identical to their conventional ones, respectively. The cross products defined in Eqs. (19) determine the handedness of {σ ± k } , and by applying them, one easily obtains relations which clearly suggest that {σ ± k } are both right-handed bases. Next, we will employ relative vectors to reconstruct bases of the even subalgebras of the STAs of signatures (±, ∓, ∓, ∓) . The definitions of {σ ± k } provide Eq. (23), and then, by further using Eqs. (1) and (4), Eq. (24) follows. After inserting Eqs. (23), (24), and (12) into (6), we know that bases of the even subalgebras of the two STAs can be reconstructed as in Eq. (25), which indicates that {σ ± k } are actually the vector generators of the two subalgebras. Eqs. (11) and (17) imply that the corresponding equalities hold, and thus, the anticommutation of {σ ± k } is explicitly obtained. As a consequence, there exist three types of basic homogeneous multivectors (cf. Ref. 17 ) in the even subalgebras of the two STAs. In view of (12), the grade-4 elements a ± × b ± ∧ c ± in (30) are able to be written in the form of multiplications of the pseudoscalars I ± by real numbers, and in fact, from the bases (25), all multivectors of grade 4 could be expressed in such a form. Thus, Eq. (A5) states that the geometric product between any multivector and a pseudoscalar is equivalent to their inner product.
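Because the explicit displays are missing here, the following is a hedged sketch (our own, using the standard Pauli-like algebra that the text attributes to the relative vectors) of the relations presumably contained in Eqs. (10)-(12) and (19):
$$\sigma^{\pm}_{i}\cdot\sigma^{\pm}_{j} = \delta_{ij}, \qquad (\sigma^{\pm}_{k})^{2} = 1, \qquad \sigma^{\pm}_{i}\times\sigma^{\pm}_{j} = \epsilon_{ijk}\,\sigma^{\pm}_{k}, \qquad \sigma^{\pm}_{1}\sigma^{\pm}_{2}\sigma^{\pm}_{3} = I^{\pm},$$
which mirrors the familiar matrix identity $\hat{\sigma}_{i}\hat{\sigma}_{j} = \delta_{ij}\mathbb{1} + \mathrm{i}\,\epsilon_{ijk}\hat{\sigma}_{k}$ obeyed by the Pauli matrices and makes the right-handedness of $\{\sigma^{\pm}_{k}\}$ explicit.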
Keeping this conclusion in mind, with the help of the following formulas, one gets a convenient way to carry out operations involving multivectors of grade 4, where in the derivation of (31), Eqs. (20) and (21) have been used. Eqs. (23) and (8) show that both σ ± k and σ ± i × σ ± j (i ≠ j) are bivectors in the two STAs, where the former contain timelike components, whereas the latter do not. Their geometric products also need to be derived, where, according to Eq. (A2), we have Eqs. (33)-(35). By further using Eqs. (1), (4), (A8), and (A15), the terms on the right-hand sides of Eqs. (33), (34), and (35) are obtained. With the aid of the above operation rules of the basic homogeneous multivectors, namely Eqs. (31)-(39), one can carry out operations on any two multivectors in the even subalgebras of the STAs of signatures (±, ∓, ∓, ∓) . Evidently, as shown in these formulas, the two even subalgebras share the same operation rules, and thus, when dealing with specific problems, such as relativistic dynamics of a massive particle and gyroscopic precession in the next two sections, we will no longer distinguish them strictly and will treat them as one algebraic formalism. In Appendix B of the present paper, a detailed presentation of this "common" even subalgebra of the two STAs is given. It will be shown that this algebraic formalism provides a signature invariant GA framework for spacetime physics.
When STA is used to describe relativistic physics, the techniques on spacetime split are also of significance, where in the STA of signature (+, −, −, −) , these techniques provide an extremely efficient tool for comparing physical effects in different frames 2,17 . Of course, these techniques can also be constructed in the STA of signature (−, +, +, +) . Let b ± = b α ± γ ± α be vectors in spacetime; the spacetime splits of b ± with γ ± 0 are defined as in Eq. (40), where b ± = b i ± σ ± i are called the relative vectors of b ± . Besides, as for the operators ∂ ± := γ α ± ∂ α , their spacetime splits with γ ± 0 are given by Eq. (41), where ∂ µ := ∂/∂x µ and ∇ ± := σ k ± ∂ k , with x µ and {σ k ± := γ ± 0 γ k ± } as coordinates in spacetime and the reciprocal frames of {σ ± k } in the relative spaces, respectively. As clearly shown, the spacetime splits of b + and ∂ + are indeed the same as those introduced in the STA of signature (+, −, −, −) 2,17 , and the spacetime splits of b − and ∂ − are those defined in the STA of signature (−, +, +, +).
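As a concrete illustration of how such a split is used (this example is ours; the precise sign and index conventions are those fixed by Eqs. (40) and (41) and differ between the two signatures), for a vector $b = b^{\alpha}\gamma_{\alpha}$ the geometric product with the observer's normalized four-velocity packages a scalar and a relative vector into a single even-grade element,
$$b\,\gamma_{0} = b\cdot\gamma_{0} + b\wedge\gamma_{0} = b^{0} + \mathbf{b}, \qquad \mathbf{b} = b^{i}\sigma_{i}$$
(up to signature-dependent signs and index placement), so that two vectors can be compared in a given observer's frame entirely within the even subalgebra, which is exactly the simplification exploited in the rest of the paper.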
The timelike vectors cγ ± 0 could be recognized as the four-velocities of some observer, so the spacetime split introduced above is observer dependent, and consequently, one of the most powerful applications of the techniques on spacetime split is that they can greatly simplify the study of effects involving different observers 2,17 . Technically, spacetime split actually encodes the crucial geometric relationship between STA and its even subalgebra 2 , where with these techniques, many calculations between vectors in spacetime are able to be transformed into those in the even subalgebra of STA. As a result, based on various operations in this algebraic formalism, a large number of specific problems could be solved efficiently. Moreover, since the even subalgebras of the STAs of signatures (±, ∓, ∓, ∓) share the same operation rules, by resorting to the techniques on spacetime split, one is capable of managing to acquire a signature invariant approach to these problems. We will see that the above advantages of spacetime split play a key role in the following treatment of relevant topics.
Rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime
It is well known that one of the remarkable advantages of STA is that Lorentz boost and spatial rotation can be handled with rotor techniques in an elegant and highly condensed manner [17][18][19][20] . As shown in the classical literature 21,22 , a knowledge of Lorentz boost and spatial rotation is heavily involved in the description of gyroscopic precession, and hence, it could be expected that a more efficient approach to dealing with this topic will be found in the language of STA. Besides, in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", it is claimed that the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) provides a signature invariant GA framework for spacetime physics, and thus, when this framework is applied to gyroscopic precession, a signature invariant GA derivation of the precessional angular velocity of the gyroscope spin could be achieved. Therefore, as a preliminary attempt, making use of the two STAs and their "common" even subalgebra to study gyroscopic precession is one objective of the present paper, which, if successful, will definitely become an application paradigm of STA. Given that many relevant techniques need to be constructed in this section, the detailed treatment of gyroscopic precession will be left to the next section.
In the analysis of gyroscopic precession, rotor techniques on Lorentz boost and spatial rotation are widely used, and therefore, these techniques need to be specifically addressed in the two STAs. Rotor techniques are available in the STA of signature (+, −, −, −) [17][18][19][20] ; however, since the STA of signature (−, +, +, +) is rarely employed, these techniques have not been fully developed in this algebraic formalism, where in particular the expressions of the rotors inducing Lorentz boost and spatial rotation should be clearly established. In this section, by constructing the rotors on the basis of the exponential function defined on the "common" even subalgebra of the two STAs, the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane are handled in a signature invariant manner. In addition, relativistic dynamics of a massive particle in curved spacetime ought to be studied so as to describe the motion of a gyroscope moving around a gravitating source 34 . To this end, for a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity of the fiducial observer, at rest in the coordinate system of the spacetime metric, are first derived, and then, with these results, a three-dimensional analogue of Newton's second law for this particle in curved spacetime is achieved. Furthermore, in order to describe the motion of the spin of a gyroscope, the Fermi-Walker derivative in the STA of signature (−, +, +, +) is also constructed by following the way it is done in the (+, −, −, −) signature.
In Appendix B of this paper, the signs "±" associated with multivectors have been omitted in the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) , so that all the formulas in this algebraic formalism are presented in a neat form. Inspired by this, when formulas in the two STAs are involved hereafter, the following convention will be adopted for brevity: the signs "±" associated with multivectors and operators are suppressed, and for equalities like A = F(±B) and C = G(∓D) , the signs " + " and "−" in the former equation correspond to the cases in the signatures (+, −, −, −) and (−, +, +, +) , respectively, and the situation in the latter equation is reversed.
Rotor techniques on Lorentz boost and spatial rotation. In GA, a rotor R is defined as an even multivector satisfying RR̃ = 1 together with the property that the map defined by b → R̃bR transforms any vector into another vector 17 . Rotors encode an important type of geometric object and provide a more elegant scheme for performing orthogonal transformations in spaces of arbitrary signature, where, mathematically, the rotor group, formed by the set of rotors, provides a double-cover representation of the connected subgroup of the special orthogonal group. In the present paper, we are only interested in rotors in spacetime, and in such a case, the rotor group in spacetime is a representation of the group of proper orthochronous Lorentz transformations 17 .
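For definiteness, a minimal worked example of such a rotor (ours, not quoted from the paper, and stated with the usual GA conventions): if B is a unit 2-blade with $B^{2} = -1$, the exponential defined on the even subalgebra gives
$$R = e^{\varphi B/2} = \cos\frac{\varphi}{2} + B\,\sin\frac{\varphi}{2}, \qquad R\tilde{R} = 1,$$
and the double-sided map $b \mapsto \tilde{R}\,b\,R$ rotates any vector lying in the plane of B by the angle φ while leaving the orthogonal components untouched; for a unit 2-blade with $B^{2} = +1$ the trigonometric functions are replaced by hyperbolic ones and the same map implements a Lorentz boost. Which side carries the reverse $\tilde{R}$ depends on the active/passive convention adopted.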
In the STA of signature (+, −, −, −) , rotor techniques on Lorentz boost and spatial rotation have been established [17][18][19][20] , which greatly promotes the application of STA in spacetime physics. Of course, in order to complete the necessary discussion on gyroscopic precession in a signature invariant manner, these techniques also need to be explicitly constructed in the STA of signature (−, +, +, +) . To facilitate the writing, as in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", we will directly build up rotor techniques in the STAs of signatures (±, ∓, ∓, ∓).
In Appendix B of this paper, a simple method to construct rotors is presented, and it has been shown that for a real number α and a unit 2-blade B, e αB is a rotor. Here, we will make use of e αB to handle Lorentz boost and spatial rotation in the two STAs. From Eqs. (8) and (23), clearly, both σ k and σ i × σ j (i ≠ j) are unit 2-blades, and the signs of their squares are different, which suggests that there are two types of unit 2-blades in spacetime. It is based on the exponential functions of these two types of unit 2-blades that the rotors inducing Lorentz boost and spatial rotation can be constructed. Let v = v k σ k , m = m i σ i , and n = n j σ j be three arbitrary relative vectors. Consider the bivectors v and m × n ; the corresponding results can be easily given by means of Eqs. (42a)-(43b). The former two equations indicate that both v and m × n are 2-blades, and thus, with the latter two equations, two unit 2-blades e v and I 2 are derived, where a direct calculation verifies the required properties. According to Ref. 17 , a proper orthochronous Lorentz transformation can be generated by a rotor R in spacetime, and under this transformation, a general multivector M will be transformed double-sidedly as M → R −1 MR . Let θ and ϕ be two real numbers; the corresponding rotors associated with e v and I 2 are constructed as e^{θe v /2} and e^{ϕI 2 /2} , respectively. When they act on vectors x and y, two new vectors x ′ and y ′ are obtained in Eqs. (48a) and (48b). In order to analyze the generated Lorentz transformations in the "common" even subalgebra of the two STAs, the techniques on spacetime split need to be applied. From Eqs. (44a)-(46b) and (A1), the orthogonality and anticommutation of {γ α } imply the corresponding relations, and then, with the help of Eq. (B39), one gets the required results. The spacetime splits of x, y, x ′ , and y ′ with γ 0 are provided by applying Eq. (40). As stated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", the relative space is a Euclidean space of dimension 3, and a relative vector, although being a spacetime bivector, can be treated as a multivector of grade 1, which implies that, in terms of the three-dimensional geometric meaning, a relative vector is just a vector 17 . Similarly, the commutator product of two relative vectors, referred to as a relative bivector, also has a three-dimensional geometric meaning. After comparing Eq. (B8) with Eq. (A1), one is able to find that the commutator product of two relative vectors plays the role of the wedge product of two vectors in general finite-dimensional GA, and thus, in the three-dimensional relative space, it encodes an oriented plane 3,17 . In this sense, Eqs. (57a) and (57b) indicate that x ∥ (x ′ ∥ ) and x ⊥ (x ′ ⊥ ) are, respectively, the components of x (x ′ ) parallel and perpendicular to e v .
Of course, I 2 also defines an oriented plane in the relative space. Defining the analogous decompositions of y and y ′ , together with the equations above, one explicitly finds that y ∥ (y ′ ∥ ) and y ⊥ (y ′ ⊥ ) are, respectively, the components of y (y ′ ) parallel and perpendicular to the plane defined by I 2 .
When the relative vectors x, x ′ , y , and y ′ in Eqs. (52a) and (52b) are replaced by their decompositions, namely Eqs. (56a)-(56d), it will be seen that a clear physical explanation of the Lorentz transformations induced by e v and I 2 in Eqs. (48a) and (48b) is able to be achieved. To this end, the following properties of the components of x, x ′ , y , and y ′ need to be derived first by combining the equations above. In order to handle these two equations, e^{−θe v } and e^{−ϕI 2 } should be rewritten as (64a) and (64b), respectively, and then, by using the grade operator ⟨ · · · ⟩ and the orthogonal projection operator successively, we finally arrive at the transformation laws of the components, with β := tanh θ (Eq. (66a)). In the above derivation, Eqs. (57a) and (B8) have been employed, and besides, one also needs to note the relations implied by the preceding equations. Here, in order to reasonably interpret the relevant equations obtained in this subsection, the active view of Lorentz transformation needs to be adopted [5][6][7] . Moreover, it also needs to be stressed that, for the spatial rotation, Eq. (71b) shows that if ϕ > 0 , the relative bivector y ∥ × y ′ ∥ has the same orientation as I 2 in the three-dimensional geometry. Let us recall that the relative vectors v, m , and n were chosen arbitrarily in the beginning, and therefore, with the rotors e^{θe v /2} and e^{ϕI 2 /2} , the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane can be handled. Furthermore, considering that Eqs. (67a), (67b), (70), (71a), and (71b) are derived in the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) , all of these equations are presented in a signature invariant form.
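For orientation, a hedged restatement of what such a boost looks like in conventional notation (this summary is ours; it assumes the usual rapidity parametrization β = tanh θ, γ = cosh θ, and the active view mentioned above, so individual signs may differ from the paper's conventions): writing the split of x as $x\gamma_{0} = x^{0} + \mathbf{x}$ with $\mathbf{x} = \mathbf{x}_{\parallel} + \mathbf{x}_{\perp}$ relative to the unit relative vector $e_{v}$, the boost generated by $e^{\theta e_{v}/2}$ acts as
$$x'^{0} = \gamma\,(x^{0} + \beta\,x_{\parallel}), \qquad \mathbf{x}'_{\parallel} = \gamma\,(\mathbf{x}_{\parallel} + \beta\,x^{0}\,e_{v}), \qquad \mathbf{x}'_{\perp} = \mathbf{x}_{\perp},$$
i.e., the familiar special-relativistic mixing of the time component with the component along the boost direction, while the spatial rotation generated by $e^{\varphi I_{2}/2}$ leaves $y^{0}$ and $\mathbf{y}_{\perp}$ unchanged and rotates $\mathbf{y}_{\parallel}$ by the angle φ in the plane of $I_{2}$.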
According to the previous discussion, the Lorentz boost and the spatial rotation are first generated in Eqs. (48a) and (48b); however, it was only after these two equations were transformed into those in the "common" even subalgebra of the two STAs that their physical explanations were achieved in the three-dimensional geometry. In this process, the techniques on spacetime split have been employed, which implies that the intuitive pictures formed in the relative space are observer dependent. In addition, one may also have found that it is because the "common" even subalgebra of the two STAs is independent of the signatures that the original equation (48a) or (48b) has the same three-dimensional meaning in the two signatures, and thus, a signature invariant method for handling Lorentz boost and spatial rotation is gained. In fact, many topics in spacetime physics can be dealt with in such a manner, and inspired by this, we will apply this method to studying gyroscopic precession in the next section, so that a signature invariant GA derivation of the precessional angular velocity of the gyroscope spin can be found.
As the final task of this subsection, the pure Lorentz boost (cf. Ref. 17 ) between two vectors of the same magnitude will be discussed based on the previous results. Assuming that x ′ = cγ 0 , x is mapped to x ′ by the rotor L of Eqs. (75) and (76). According to Ref. 17 , the above L in the (+, −, −, −) signature is exactly the rotor that determines the pure Lorentz boost between x and x ′ , and motivated by this, we claim that the above L in the (−, +, +, +) signature also plays the same role. It should be noted that the validity of Eq. (76) is able to be directly verified only by Eqs. (73a) and (75), which does not depend on the selection of the frame {γ α } . In the treatment of gyroscopic precession in the next section, Eqs. (75) and (76) will be used to generate the pure Lorentz boost between a comoving orthonormal frame of the gyroscope and a local orthonormal tetrad at rest in the coordinate system of the spacetime metric, which greatly improves the computational efficiency.
Relativistic dynamics of a massive particle in curved spacetime. As mentioned previously, the description of the motion of a gyroscope requires that relativistic dynamics of a massive particle in curved spacetime should be studied 34 , and to this end, a brief introduction to the relevant GA techniques for General Relativity formulated in Ref. 33 needs to be given, so that the treatment of gyroscopic precession in the following can be put on a solid theoretical footing. In order to develop a GA description of curved spacetime, one should define a local orthonormal tetrad {γ α } by the orthonormalization of a coordinate frame and then generate the corresponding STA. Let x µ and {g µ } be local coordinates in a curved spacetime and the associated coordinate frame, respectively. Assume that a collection of fiducial observers is distributed over space, and each fiducial observer is at rest in the coordinate system. Then, the components of the metric with respect to the coordinate frame {g µ } , namely $g_{\mu\nu} := g_{\mu}\cdot g_{\nu}$ (Eq. (77)), satisfy the conditions 44
$$\pm g_{0}\cdot g_{0} = \pm g_{00} > 0, \tag{78a}$$
$$-(g_{1}\wedge g_{0})\cdot(g_{0}\wedge g_{1}) = -\det\begin{pmatrix} g_{00} & g_{01}\\ g_{10} & g_{11}\end{pmatrix} > 0, \tag{78b}$$
$$\pm(g_{2}\wedge g_{1}\wedge g_{0})\cdot(g_{0}\wedge g_{1}\wedge g_{2}) = \pm\det\begin{pmatrix} g_{00} & g_{01} & g_{02}\\ g_{10} & g_{11} & g_{12}\\ g_{20} & g_{21} & g_{22}\end{pmatrix} > 0. \tag{78c}$$
Suppose that ∇ is the unique torsion-free and metric-compatible derivative operator 45 ; one of its important properties is that it reduces to ∂ when acting on scalar functions. Then, according to Ref. 33 , the covariant derivative of a multivector A along a vector b is evaluated by the formula (82). Here, the operator b · ∂ satisfies the corresponding relation, with {γ β } and φ as the reciprocal tetrad of {γ α } and a scalar field in spacetime, respectively. ω(b) , being the bivector connection associated with {γ α } , is defined accordingly, and, if b = b µ g µ , the expression of ω(b) is given in Refs. 33,35 . With the aid of the corresponding GA technique 3 , {g ν } , as the reciprocal frame of {g µ } , is constructed, and from Eq. (79), the coordinate frame {g µ } can be expanded in the local orthonormal tetrad {γ α } ; the corresponding displays involve the full matrix of metric components
$$\begin{pmatrix} g_{00} & g_{01} & g_{02} & g_{03}\\ g_{10} & g_{11} & g_{12} & g_{13}\\ g_{20} & g_{21} & g_{22} & g_{23}\\ g_{30} & g_{31} & g_{32} & g_{33}\end{pmatrix}.$$
Because only the knowledge of covariant derivative and bivector connection will be involved in the discussion of gyroscopic precession, other GA techniques for General Relativity will not be covered here, and the reader wishing to go into more details may consult Ref. 33 . Next, for a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity of the fiducial observer will be discussed, so that relativistic dynamics of this particle in curved spacetime can be studied. Let us first identify the proper time of fiducial observers. As indicated earlier, fiducial observers are at rest in the coordinate system x µ , which means that their worldlines are the coordinate curves with x i = const. (i = 1, 2, 3) , namely, t := x 0 /c coordinate curves. As a consequence, if we let t 0 denote the proper time of each fiducial observer, ±c 2 (dt 0 ) 2 = g 00 c 2 (dt) 2 hold along his worldline, and then,
$$\frac{dt_{0}}{dt} = \sqrt{\pm g_{00}}. \tag{89}$$
Assuming that x µ (τ ) is the worldline of a massive particle with τ as the proper time, the four-velocity of the particle can be rewritten as in Refs. 22,30 . We will prove that Eq. (91) holds. Consider an event P on the particle's worldline. The t coordinate curve with x i = x i (P) (i = 1, 2, 3) passes through P and is the worldline of a fiducial observer.
Based on the orthonormal tetrad {γ µ | x i =x i (P) } carried by this fiducial observer, his proper reference frame can be defined, and thus, a local coordinate system y 0 =: ct 0 , y 1 , y 2 , y 3 covering a finite domain near his worldline can also be defined. In this coordinate system, if the worldline of the particle is y µ (τ ) , its four-velocity at the event P is Comparing Eq. (92) with Eq. (90), we get P is an arbitrary event on the particle's worldline, and due to Eq. (89), does not depend on the selection of the coordinate system y µ , so Eq. (91) holds. By applying Eq. (40), the spacetime split of the four-velocity of the particle with γ 0 yields where because of ( uγ 0 ) · (uγ 0 ) = ±u 2 = c 2 , one is able to achieve Since cγ 0 could be identified as the four-velocity of some fiducial observer, u is actually the relative velocity measured in his orthonormal tetrad, which is also able to be inferred from Eq. (92).
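As a hedged reconstruction of the expression referred to as Eq. (95) (ours; it is the standard special-relativistic form enforced by the normalization $u^{2} = \pm c^{2}$), the Lorentz factor of the particle relative to the local fiducial observer is
$$\gamma_{u} = \frac{1}{\sqrt{1 - |\mathbf{u}|^{2}/c^{2}}},$$
with $\mathbf{u}$ the relative velocity measured in that observer's orthonormal tetrad, so that the split of the four-velocity presumably reads $u\gamma_{0} = \gamma_{u}\,(c + \mathbf{u})$ up to signature-dependent signs.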
After clarifying these concepts, we are in a position to derive the spacetime split of the four-acceleration of the particle with γ 0 , which is an essential ingredient in the formalism of relativistic dynamics. The four-acceleration of the particle, a = Du/dτ = u · ∇u , is immediately gained from Eq. (82), and then, by employing Eq. (40), its spacetime split with γ 0 is provided; the first term is evaluated with the aid of the preceding equations. Let m be the rest mass of the particle. The spacetime splits of its four-momentum p = mu and of the four-force f = Dp/dτ = u · ∇p acting on it also need to be evaluated so that a three-dimensional analogue of Newton's second law in curved spacetime can be achieved. Starting from Eq. (94), the spacetime split of the particle's four-momentum p with γ 0 is obtained, where E and p are the energy and the relative momentum of the particle measured by the fiducial observer (cf. Ref. 17 ), respectively. The relationship between E and p can be directly obtained from ( pγ 0 ) · (pγ 0 ) = m 2 c 2 , which is exactly the same as that in Special Relativity. Assuming that the particle's rest mass remains unchanged as it moves, namely dm/dτ = 0 , the four-force f acting on it is able to be expressed in terms of the four-acceleration. When the spacetime is flat and x µ are coordinates in an inertial frame of reference with g µν = η µν , by definition, fiducial observers reduce to inertial observers. In such a case, Eq. (89) suggests that dt 0 = dt , and the relative force f = f i σ i acting on the particle should be given by f i = dp i /dt 46 . Thus, using σ i = γ i γ 0 , one is capable of recasting f accordingly. In curved spacetime, we claim that the corresponding relative force f measured by the fiducial observer is related to Dp/dt 0 in the same way; Eq. (108) is then a three-dimensional analogue of Newton's second law in curved spacetime, which constitutes the core content of relativistic dynamics of a massive particle. In the above discussion, the key point is that the relative velocity, relative acceleration, relative momentum, and relative force for the particle can be reasonably defined in the orthonormal tetrad carried by the fiducial observer. Evidently, in terms of the three-dimensional geometric meaning in the relative space, these relative vectors ought to be interpreted as their corresponding three-vectors in tensor language. When the spacetime is flat, the bivector connection ω(u) and its electric part ω (E) (u) and magnetic part ω (B) (u) vanish. In this case, by considering the components of these relative vectors in the rest frame of the fiducial observer, namely {σ k } , one is able to verify that all the above results reduce to those in Special Relativity. Therefore, the formalism of relativistic dynamics of a massive particle constructed in this subsection is an elegant generalization of the classical one in flat spacetime.
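To make the last statements explicit, here is a hedged sketch of the standard relations the text says are recovered (ours; the symbols follow the usual special-relativistic conventions, and Eq. (108) refers to the paper's numbering):
$$E = \gamma_{u}\,m c^{2}, \qquad \mathbf{p} = \gamma_{u}\,m\,\mathbf{u}, \qquad E^{2} = |\mathbf{p}|^{2}c^{2} + m^{2}c^{4},$$
and the three-dimensional law of Eq. (108) is schematically
$$\mathbf{f} = \frac{D\mathbf{p}}{dt_{0}},$$
i.e., the relative force measured by the fiducial observer equals the rate of change of the relative momentum with respect to that observer's proper time, with the curved-spacetime corrections entering through the bivector connection ω(u) hidden in the covariant derivative.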
In the tetrad formalism of General Relativity 47 , the covariant derivative of a vector b = b α γ α along the coordinate frame vector g µ is given by an expression involving the spin connection coefficients ω µαβ , and, due to the metric compatibility condition, they satisfy the usual antisymmetry in the last two indices 48 . Using Eqs. (84) and (A14), one obtains
$$\omega_{\mu\alpha\beta} = \big((g_{\mu}\cdot\nabla)\gamma_{\beta}\big)\cdot\gamma_{\alpha} = \big(\omega(g_{\mu})\cdot\gamma_{\beta}\big)\cdot\gamma_{\alpha} = \omega(g_{\mu})\cdot(\gamma_{\beta}\wedge\gamma_{\alpha}), \tag{114}$$
which means that the bivector connection ω(g µ ) can be expressed in terms of the spin connection coefficients. The above discussion suggests that it could be expected that, when the relative vectors in Eq. (108) are expanded in the frame {σ k } , the corresponding generalization of Newton's second law in the tetrad formalism will also be acquired. Compared with those results in the tetrad formalism, the results in this paper are presented in the form of geometric objects, so they are endowed with a higher degree of clarity. Besides, as highlighted before, since the operations in the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) are independent of the signatures, the relevant results like Eqs. (95), (106), (108), and (109) are able to be handled in a signature invariant manner. As a primary application of the signature invariant GA framework provided by the "common" even subalgebra of the two STAs, the treatment of relativistic dynamics of a massive particle in this subsection provides a paradigm on how to achieve a signature invariant approach to spacetime physics in curved spacetime.
In order to depict the motion of the spin of a gyroscope, the behaviors of vector fields along the worldline of the particle also need to be studied, and here, we only focus our attention on the Fermi-Walker derivatives in the (±, ∓, ∓, ∓) signatures. In fact, their classical forms written in tensor language have been available in Refs. 49,50 , and recasting them in the STAs of the two signatures is a straightforward task. Hence, the results are directly provided as follows: The Fermi-Walker derivatives of a vector field p(τ ) along the particle's worldline in the STAs of signatures (±, ∓, ∓, ∓) are where if D F p(τ )/dτ = 0 , the vector field p(τ ) is said to be Fermi-Walker transported along the particle's worldline. For a torque-free gyroscope moving in spacetime, any nongravitational forces acting on it are applied at its center of mass, and in this case, the spin of the gyroscope experiences the Fermi-Walker transport along its worldline 21 . In the next section, we will regard the transport equation satisfied by the gyroscope spin as the starting point for the discussion of gyroscopic precession. Interestingly, by means of the Leibniz rule and the formula 3 with B as a bivector in spacetime, the above forms of Fermi-Walker derivative can readily be extended to a multivector field A(τ ) along the worldline of the particle, namely, and readers who are interested in this conclusion could attempt to prove it.
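Because the displayed formulas did not survive extraction, the following is a hedged reconstruction of the Fermi-Walker derivative being referred to, translated from its familiar tensor form (the pairing of the overall sign with the two signatures is our assumption, chosen so that the special case quoted later for the spin, $u\cdot\nabla s = \mp(1/c^{2})(a\cdot s)\,u$, is recovered when $p\cdot u = 0$):
$$\frac{D_{F}\,p(\tau)}{d\tau} = \frac{D\,p(\tau)}{d\tau} \pm \frac{1}{c^{2}}\Big[(p\cdot a)\,u - (p\cdot u)\,a\Big],$$
so that $D_{F}p/d\tau = 0$ preserves $p\cdot u$ along the worldline and, for a vector orthogonal to u, describes transport without spatial rotation in the particle's instantaneous rest frame.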
A GA approach to gyroscopic precession in the Lense-Thirring spacetime
According to the prediction of General Relativity, the spin of a gyroscope precesses relative to the asymptotic inertial frames as it moves around a rotating spherical source 22 . Conventionally, by following the standard method in tensor language 21,22 , the precessional angular velocity of the gyroscope spin is able to be evaluated under the WFSM approximation. In General Relativity, the time-dependent metric, presented in the form of a multipole expansion, for the external gravitational field of a spatially compactly supported source is derived under the WFSM approximation in Ref. 30 . Since we are only interested in uniformly rotating spherical sources like the Earth in this paper, the spacetime is stationary, and only the leading pole moments of the source need to be considered. Consequently, in such a case, the metric reduces to the Lense-Thirring metric 30 , and the spacetime is accordingly known as the Lense-Thirring spacetime. When a torque-free gyroscope is moving in this spacetime, there exist three types of precession for its spin, namely, the de Sitter precession, the Lense-Thirring precession, and the Thomas precession, where these phenomena result, respectively, from gyroscopic motion through the spacetime curved by the mass of the source, from rotation of the source, and from gyroscopic non-geodesic motion 31 . Today, experiments designed around these gyroscopic precession effects have become an important method of testing gravitational theories.
In the traditional description of gyroscopic precession based on tensor language, since one always needs to work with the components of some tensor in a chosen coordinate frame, many equations have a low degree of clarity. In the language of STA, it could be expected that a physically clear approach to handling this topic will be found, since only geometric objects are involved during the calculation 32 . In this section, as a comprehensive application of the STAs of signatures (±, ∓, ∓, ∓) formulated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra" and the GA techniques constructed in "Rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime", a GA approach to gyroscopic precession will be provided, where, for a gyroscope moving in the Lense-Thirring spacetime, the precessional angular velocity of its spin will be derived in a signature invariant manner. The GA description of curved spacetime and the relevant GA techniques for General Relativity introduced at the beginning of "Relativistic dynamics of a massive particle in curved spacetime" will still be adopted, and here, we let x µ and {g µ } be local coordinates in the Lense-Thirring spacetime and the associated coordinate frame, respectively. In addition, it should be pointed out that some physical quantities in this section and Appendix C are presented in the form of the 1/c expansion, where 1/c is used as the WFSM parameter 36 . Since the Lense-Thirring metric is only expanded up to 1/c 3 order, the framework of linearized General Relativity is sufficient to analyze gyroscopic precession 30,[37][38][39] , and in such a case, the coordinates (x µ ) =: (ct, x i ) are treated as though they were Minkowski coordinates in flat space 51,52 .
Consider a torque-free gyroscope moving in the Lense-Thirring spacetime, and denote x µ (τ ) as its worldline with τ as the proper time. Assuming that the four-force acting on the gyroscope is f, from Eq. (107), its four-acceleration a is determined by a = f/m (Eq. (119)), with m as its rest mass. In fact, Eq. (119) should be derived from the Mathisson-Papapetrou-Tulczyjew-Dixon (MPTD) equations, where the term related to the curvature tensor has been omitted because the gyroscope scale is very much smaller than the characteristic dimensions of the gravitational field 34 . In accordance with Refs. 21,22 , the spin s of the gyroscope (i.e., its angular momentum vector) is always orthogonal to its four-velocity u and experiences Fermi-Walker transport along its worldline, as expressed by Eqs. (120) and (121). It will be seen that, starting from the above three equations, the precessional angular velocity of the gyroscope spin can be derived. Besides, gyroscopic precession can also be discussed on the basis of the MPTD equations, and interested readers may consult Refs. 53,54 . Since the four-velocity of the gyroscope satisfies
$$u^{2} = \pm c^{2} \;\Rightarrow\; u\cdot a = 0, \tag{122}$$
by use of Eqs. (A6) and (A7), Eq. (121) is equivalent to
$$u\cdot\nabla s = \mp\frac{1}{c^{2}}\,(a\cdot s)\,u. \tag{123}$$
Thus, Eqs. (120) and (123) directly result in
$$\frac{ds^{2}}{d\tau} = u\cdot\nabla s^{2} = 2\,s\cdot(u\cdot\nabla s) = 0, \tag{124}$$
which means that s 2 remains fixed along the worldline of the gyroscope.
As shown in Appendix C, the Lense-Thirring metric satisfies Eqs. (78a)-(78d), which implies that we are capable of assuming that there exists a collection of fiducial observers who are distributed over space and at rest in the coordinate system x µ ; as a consequence, a local orthonormal tetrad {γ α } in the Lense-Thirring spacetime can be directly defined by means of the corresponding formulas in "Relativistic dynamics of a massive particle in curved spacetime". Based on the detailed calculation in Appendix C, the tetrad {γ α } determined up to 1/c 3 order is given in terms of the potentials U and U i .
Here, G is the gravitational constant, M and J are the mass and the conserved angular momentum of the gravitating source, respectively, and r := √(x i x i ) . Before analyzing the motion of the spin s of the gyroscope, its relativistic dynamics needs to be discussed. Let t 0 be the proper time of the fiducial observer, which is related to the coordinate time t by Eq. (89); from Eq. (C1), the expression of dt 0 /dt up to 1/c 3 order follows. As in Eqs. (90) and (91), the four-velocity u of the gyroscope can be expanded in the tetrad {γ α } , and then, Eq. (94) indicates that its spacetime split with γ 0 yields the corresponding result, where u := u i σ i is the relative velocity measured in the orthonormal tetrad of the fiducial observer. Due to u 2 = ±c 2 , the Lorentz factor γ u has the expression (95), and thus, by expanding it up to 1/c 3 order, one gets its weak-field form. Furthermore, based on Eqs. (103) and (108)-(110), the spacetime splits of the four-acceleration of the gyroscope and of the four-force acting on it are able to be given, respectively; in view of Eq. (119), we only give the result for the four-force. In the Lense-Thirring spacetime, after inserting Eqs. (C14) and (C15) into Eqs. (108) and (109), the expressions of the relative force f exerted on the gyroscope and the corresponding power f · u delivered by it up to 1/c 3 order are derived, with ∇ := σ k ∂ k and V := V i σ i . It can be verified that these two equations are compatible. By plugging the potential U into Eq. (132), one will find that −m∇U is the Newtonian gravitational force acting on the gyroscope, and hence, at the leading order, Eqs. (132) and (133) reduce to the corresponding results in Newtonian gravity, which means that Eq. (132) is a three-dimensional analogue of Newton's second law for the gyroscope in the Lense-Thirring spacetime. Evidently, the terms at the next-to-leading order fall into three classes that depend on U, V , and f , respectively, and, as implied by Eq. (126), they result from gyroscopic motion through the spacetime curved by the mass of the source, from rotation of the source, and from gyroscopic non-geodesic motion. It will be seen that, for the same reasons, the spin of the gyroscope also experiences three types of precession. In Eqs. (132) and (133), the corrections to the results in Newtonian gravity are presented in a very elegant way, which intuitively displays the powerful potential of the signature invariant GA framework formulated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra" for application in spacetime physics.
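Because the explicit displays of U and U i were lost in extraction, we record here the standard Lense-Thirring potentials consistent with the surrounding description (this is our hedged reconstruction; the exact numerical factor in the vector potential, and any powers of c absorbed into its definition, are convention dependent and not fixed by the text):
$$U = \frac{GM}{r}, \qquad U_{i} \sim \frac{G\,(\mathbf{J}\times\mathbf{x})_{i}}{r^{3}}\quad\text{(up to a convention-dependent factor)},$$
with $r = \sqrt{x^{i}x^{i}}$; the scalar potential U produces the Newtonian force $-m\nabla U$ and the de Sitter-type corrections, while the vector potential (entering through V) encodes the frame-dragging effect of the rotating source.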
Next, we begin to review the basic process of evaluating the precessional angular velocity of the gyroscope spin in the language of STA. Let {γ (α) } be a local orthonormal frame comoving with the gyroscope; by definition, the timelike vector γ (0) is given by γ (0) = u/c . In order to determine the other three spacelike vectors γ (i) of {γ (α) } , the pure Lorentz boost between the gyroscope's four-velocity u and the fiducial observer's four-velocity cγ 0 needs to be presented. According to Eqs. (75) and (76), under the pure Lorentz boost generated by the rotor L, the vector u is mapped to cγ 0 , where L satisfies LL̃ = L̃L = 1 (Eq. (138)). This result indicates that the motion of the spin of the gyroscope relative to the comoving frame {γ (α) } is completely determined by the bivector field Ω(τ ) along its worldline, where Ω(τ ) depends on the rotor L generating the pure Lorentz boost from the gyroscope's four-velocity u to the fiducial observer's four-velocity cγ 0 and on the bivector connection ω(u) associated with the tetrad {γ α } . Like the bivector connection ω(u) in Eqs. (100a)-(101c), the bivector Ω(τ ) is also able to be decomposed into the electric part Ω (E) (τ ) and the magnetic part Ω (B) (τ ) , which clearly suggests that −Ω (B) (τ )I , as a relative vector, is the precessional angular velocity of s ′ in the conventional sense; because the cross product (denoted by × 3 ) is rarely employed in GA, the relative bivector Ω (B) (τ ) can itself be regarded as the precessional angular velocity of s ′ . That is to say, in the comoving frame {γ (α) } of the gyroscope, its spin always precesses with Ω (B) (τ ) as the precessional angular velocity. In addition, one should also note that, since Eq. (164) or (166) has been represented in the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) , a signature invariant GA derivation of the precessional angular velocity of the gyroscope spin can be found. In order to analyze Eq. (164) further, we need to derive the expressions of Ω (E) (τ ) , Ω (B) (τ ) , and a ′ . Let us first evaluate the corresponding results for Ω (E) (τ ) and Ω (B) (τ ) ; as shown in Eqs. (161a)-(161c), their expressions can be directly read out from that of the bivector Ω(τ ) . By plugging Eqs. (136) and (100c) into Eq. (160), the bivector Ω(τ ) is able to be expressed explicitly, and after the expansion up to 1/c 3 order (cf. Eq. (172a)), the de Sitter part of Ω (B) (τ ) and the Lense-Thirring part Ω (B) LT (τ ) result from gyroscopic motion through the spacetime curved by the mass of the source and from rotation of the source, respectively, and hence, they should be the de Sitter precession and the Lense-Thirring precession 22 . Besides, Ω (B) T (τ ) , associated with the relative force f acting on the gyroscope, explicitly represents the Thomas precession of its spin, which is caused by gyroscopic non-geodesic motion. In the fine structure of atomic spectra, Thomas precession plays a significant role 21 .
Recall that the three-dimensional operator ∇ appearing in Eq. (182) is defined by ∇ = σ k ∂ k (cf. (41)), and on the basis of it, the corresponding expression can be derived, where r := x i σ i is the relative position vector of the gyroscope, and, due to r = √(x i x i ) , there is r = √(r 2 ) . In order to deduce the expression of Ω (B) LT (τ ) , some tricks need to be applied. In the language of GA, the relative angular momentum bivector J is more convenient for describing the rotation of the source. Equations (126) and (C1) indicate that the source is rotating around the x 3 axis, so its relative angular momentum vector follows directly, and then, from Eqs. (B14), (B20), (B2), and (B6), its relative angular momentum bivector is obtained. Thus, via Eqs. (126) and (B15), one is able to express V in terms of J . Keeping in mind that the relative angular momentum bivector J of the source is conserved, the corresponding identity holds, and with it, the three parts of Ω (B) (τ ) are capable of being directly transformed into their corresponding expressions in the conventional sense by multiplying by −I . Although the expressions presented here seem to be identical to those in Refs. 30,55 , one still needs to note that, since the relative velocity u in these expressions is measured in the orthonormal tetrad {γ α } of the fiducial observer instead of in the coordinate frame {g µ } , they are slightly different from those obtained in tensor language. In spite of this, a straightforward calculation 22,30 shows that the difference between the gyroscope's velocities measured in {γ α } and in {g µ } is at least of 1/c 2 order, so the above expressions are essentially equivalent to their conventional ones. These computations in the final part of this section display in detail how to give a signature invariant GA derivation of the precessional angular velocity of the gyroscope spin within the framework provided by the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) , which could stand as a successful paradigm of the application of this framework in spacetime physics.
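For reference, a hedged restatement of the conventional component-form angular velocities that the text says these geometric results are equivalent to (these are the standard weak-field formulas, quoted here in ordinary three-vector notation; the symbols v, a, and r̂ for the gyroscope's velocity, non-gravitational acceleration, and radial unit vector are ours):
$$\boldsymbol{\Omega}_{\mathrm{dS}} = \frac{3GM}{2c^{2}r^{3}}\,\mathbf{r}\times\mathbf{v}, \qquad \boldsymbol{\Omega}_{\mathrm{LT}} = \frac{G}{c^{2}r^{3}}\Big[3(\mathbf{J}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{J}\Big], \qquad \boldsymbol{\Omega}_{\mathrm{T}} = -\frac{1}{2c^{2}}\,\mathbf{v}\times\mathbf{a},$$
which, in the paper's notation, correspond to the relative vectors obtained from the parts of Ω (B) (τ ) by multiplying by −I.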
In this section, based on the STAs of signatures (±, ∓, ∓, ∓) formulated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra" and the GA techniques constructed in "Rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime", an efficient treatment of gyroscopic precession is achieved. One significant advantage of the GA approach is that only geometric objects are involved during the calculation, and thus, many equations are given a degree of clarity which is lost in tensor language. A typical example is that the relationship between the gyroscope spin s and its components s (i) in the comoving frame {γ (α) } is clearly shown by the equation s (i) = s · γ (i) , which helps readers understand that, instead of s, it is the spin components s (1) , s (2) , s (3) in the frame {γ (α) } that experience a spatial rotation. However, in the classical derivation with tensors, since one always needs to work with the components of some tensor, the role of s is usually played by its components in the coordinate frame {g µ } , and thus, the above equation is replaced by the corresponding component equations 22,30 , from which the relationship between s and s (i) cannot be explicitly reflected.
It should be noted that the application of the rotor techniques is also very crucial in simplifying the derivation. In the beginning, Eqs. (142)-(144) imply that in order to obtain the precessional angular velocity of the gyroscope spin s (1) , s (2) , s (3) in the frame {γ (α) } , the expression of ds (i) /dτ needs to be given. Then as in Eq. (146), by employing the rotor techniques, the effect of the pure Lorentz boost generated by the rotor L is transformed from γ (i) to s ′ , and as a result, one can deal with the geometric object ds ′ /dτ = ds (i) /dτ γ i rather than ds (i) /dτ . Being a common trick in STA, such an approach is extremely useful for computations. The STAs of signatures (±, ∓, ∓, ∓) and the GA techniques for General Relativity formulated in Ref. 33 are organically integrated in "Relativistic dynamics of a massive particle in curved spacetime", so that physics in curved spacetime is able to be discussed within the signature invariant framework provided in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", which is perhaps the most easily overlooked contribution of the present paper. It is based on the results presented in "Relativistic dynamics of a massive particle in curved spacetime" that relativistic dynamics of the gyroscope and the precession of its spin can be studied in the two STAs. In particular, within the framework provided by the "common" even subalgebra of the two STAs, the three-dimensional generalized equation of motion for the gyroscope and the precessional angular velocity of its spin are able to be derived in a signature invariant manner. The treatment of gyroscopic precession in this section intuitively displays the basic method of dealing with specific problems in curved spacetime within the signature invariant framework. In the future, if the applications of this method could be extended to a wider range, the study of spacetime physics in the language of GA will be greatly promoted.
Summary and discussions
Since the establishment of STA by David Hestenes, the signature (+, −, −, −) has been widely used 2,17 , which may cause inconvenience in the application of STA to relativistic physics because much of the literature on relativity adopts the opposite signature (−, +, +, +) . Although the STA of signature (−, +, +, +) was also used 24-29 , a lack of long-term attention to it has left its applications quite limited. In this paper, by following the original idea of Hestenes, the techniques related to relative vectors and spacetime splits are built up in the STA of signature (−, +, +, +) , so that a more convenient approach to relativistic physics can be given in the language of GA. Further research suggests that the two even subalgebras of the STAs of signatures (±, ∓, ∓, ∓) share the same operation rules, so that they can be treated as one algebraic formalism. Consequently, many calculations between vectors involved in a large number of specific problems can be transformed into those in this "common" even subalgebra of the two STAs through the techniques on spacetime split, and then be solved efficiently in a signature invariant manner with the help of the various operations provided in Appendix B. Thus, the "common" even subalgebra of the two STAs provides a signature invariant GA framework for spacetime physics. When orthogonal transformations in spaces of arbitrary signature are performed, calculations with rotors are demonstrably more efficient than calculations with matrices, which is a remarkable advantage of GA. Therefore, the topic of rotor techniques on Lorentz transformation should be specifically addressed in the STAs of signatures (±, ∓, ∓, ∓) , and what needs to be pointed out is that, since rotor techniques have not been fully developed in the STA of signature (−, +, +, +) , it is significant to explicitly elaborate how to construct the rotors inducing Lorentz boost and spatial rotation in this algebraic formalism. In the present paper, by constructing the rotors on the basis of the exponential function defined on the "common" even subalgebra of the two STAs, the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane are handled in a signature invariant manner.
Relativistic dynamics of a massive particle in curved spacetime is also studied so as to describe the motion of a gyroscope moving around a gravitating source 34 . To this end, the two STAs and their "common" even subalgebra are first generated by a local orthonormal tetrad, and thus, the corresponding signature invariant GA framework can be set up. Then, after organically integrating the STAs of signatures (±, ∓, ∓, ∓) and the GA techniques for General Relativity formulated in Ref. 33 , physics in curved spacetime is able to be discussed within the signature invariant framework provided in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", which lays the foundation for dealing with gyroscope precession hereafter. With these preparations, for a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity of the fiducial observer are derived, and as a consequence, a three-dimensional analogue of Newton's second law for this particle in curved spacetime is achieved. Since the result is derived in a comoving orthonormal tetrad of the fiducial observer and is presented in the form of geometric objects, it is an elegant generalization of the classical one in flat spacetime.
As a comprehensive application of the GA techniques constructed before, the last task of this paper is to provide an efficient treatment of gyroscopic precession in the STAs of signatures (±, ∓, ∓, ∓) . For a gyroscope moving in the Lense-Thirring spacetime, its relativistic dynamics is first discussed, and some significant results like the three-dimensional generalized equation of motion for the gyroscope are given. Then, by applying the rotor techniques, the geometric object ds ′ /dτ = ds (i) /dτ γ i is able to be directly dealt with instead of ds (i) /dτ , which greatly simplifies the following derivation. The result suggests that if Eq. (165) holds, the spin of the gyroscope always precesses relative to its comoving frame {γ (α) } with Ω (B) (τ ) as the precessional angular velocity. Within the framework provided by the "common" even subalgebra of the two STAs, signature invariant expressions of the relevant physical quantities involved in Eq. (164) are deduced, which clearly indicates that Eq. (165) holds, and therefore, the gyroscope spin indeed precesses in the frame {γ (α) } . After expanding Ω (B) (τ ) up to 1/c 3 order, the gyroscope spin's angular velocities of the de Sitter precession, the Lense-Thirring precession, and the Thomas precession are all directly read out, and their expressions, in the form of geometric objects, are equivalent to their conventional ones in component form, respectively.
All physical laws should be independent of the choice of signature, which implies that many significant techniques constructed in the STA of signature (+, −, −, −) can also be introduced to the STA of signature (−, +, +, +) , and starting from this motivation, we find that the "common" even subalgebra of the two STAs provides a signature invariant GA framework for spacetime physics. In order to pave the way for the applications of these two STAs and their "common" even subalgebra, we elaborate in detail the rotor techniques on Lorentz transformation and the method of handling physics in curved spacetime within the signature invariant framework, and they are of theoretical significance and of practical worth. As two successful paradigms, the treatment of relativistic dynamics of a massive particle and gyroscopic precession clearly shows that the GA techniques constructed in this paper are efficient and reliable. Being straightforward generalizations, these techniques could also be applied to gyroscopic precession in alternative theories of gravity, such as f(R) gravity 30,37-39 , f (R, G) gravity 40,41 , and f(X, Y, Z) gravity 42 . However, since these topics are usually explored by making use of some complicated mathematical tools (e.g., the symmetric and trace-free formalism in terms of the irreducible Cartesian tensors 30 ), it is crucial to develop new techniques to apply these tools in STA. In fact, by generalizing various GA techniques in STA of signature (+, −, −, −) [2][3][4]17 , the approach in this paper could also be applied to other fields, and it has been verified that some topics in classical mechanics and electrodynamics can be described in such a manner. We expect that the applications of this approach will be extended to a wider range in the future, so that the study of spacetime physics in the language of GA could be greatly promoted. | 16,815.4 | 2021-11-14T00:00:00.000 | [
"Physics"
] |
A Study on Suitability of EAF Oxidizing Slag in Concrete: An Eco-Friendly and Sustainable Replacement for Natural Coarse Aggregate
Environmental and economic factors increasingly encourage higher utilization of industrial by-products. The basic objective of this study was to identify an alternative source of good-quality aggregates, which are depleting rapidly due to the fast pace of construction activity in India. EAF oxidizing slag, a by-product of the steel-making process, provides a great opportunity for use as an alternative to normally available coarse aggregates. The primary aim of this research was to evaluate the physical, mechanical, and durability properties of concrete made with EAF oxidizing slag in addition to the supplementary cementing material fly ash. This study presents the experimental investigations carried out on concrete grades of M20 and M30 with three mixes: (i) Mix A, a conventional concrete mix with no material substitution, (ii) Mix B, 30% replacement of cement with fly ash, and (iii) Mix C, 30% replacement of cement with fly ash and 50% replacement of coarse aggregate with EAF oxidizing slag. Tests were conducted to determine mechanical and durability properties up to the age of 90 days. The test results showed that concrete made with EAF oxidizing slag and fly ash (Mix C) had greater strength and durability characteristics than Mix A and Mix B. Based on the overall observations, EAF oxidizing slag and fly ash could be effectively utilized as coarse aggregate and cement replacements in all concrete applications.
Introduction
Concrete, the most widely used man-made material on earth, requires large volumes of good-quality aggregates. The availability of natural coarse aggregate is depleting day by day owing to the tremendous demand of the Indian infrastructure industry. Aggregates are the main ingredient of concrete, occupying 70-80% of its volume, and exert a significant influence on concrete properties. A need was therefore felt to identify a potential alternative source of coarse aggregate to fulfil the future growth aspirations of the Indian infrastructure industry [1].
Use of by-products such as slag, dust, or sludge from the metallurgical industries as filler materials in concrete helps to conserve natural resources as an economically positive option.
Slag, an industrial by-product of steel and iron smelting operations, must be recycled because it has increased proportionately with the development of the steel industry [2,3].
Big steel plants in India generate about 29 million tonnes of waste material annually. Slag reduces the porosity and permeability of soil, thereby aggravating waterlogging problems. Since large quantities of these wastes are generated daily, they are considered problematic and hazardous for both the factories and the environment. The serious problem of disposing of this slag can be reduced by utilizing steel slag for concrete production [4].
Steel making slag specifically generated from EAFs, BOFs, and BFs during the iron and steel making process has many important and environmental uses. In many applications, due to its unique physical structure, slag outperforms the natural aggregate for which it is used as a replacement [5]. Several studies proved that the use of steel slag in concrete as aggregate improves the mechanical and durability properties .
Very little research has been performed regarding the utilization of EAF oxidizing slag in concrete. EAF oxidizing slag is an industrial by-product obtained from the steel manufacturing industry. It is produced in large quantities during steel-making operations that use the Electric Arc Furnace oxidation process. Because EAF oxidizing slag has a high specific gravity, it can produce heavy-weight concrete if used as an aggregate for structural concrete. Since most heavy-weight aggregates are obtained through quarrying, substitute aggregates must be developed for environmental preservation and protection. From this point of view, the use of EAF oxidizing slag as aggregate holds great significance [30,31].
According to recent studies, an increase in the compressive strength of concrete was reported if EAF oxidizing slag aggregates are used for structural concrete. The use of EAF oxidizing slag as aggregates for structural concrete not only protects the environment but also reduces costs [32][33][34][35]. Kim et al. carried out flexural test on simply supported RC beams to estimate the flexural behaviour of RC beams with EAF oxidizing slag aggregate, and the experimental results were compared with the flexural performance of RC beam with natural aggregates [36].
Kim et al. conducted research on the characteristics of concrete with EAF oxidizing slag as an aggregate and evaluated the applicability of the slag for reinforced concrete (RC) members. The study evaluated the bond performance between the steel bar and the concrete with EAF oxidizing slag aggregates in order to use this new material in RC members [37].
Kim et al. reported results on the characteristics of EAF oxidizing slag as an aggregate for structural concrete. The experimental results showed that applying EAF oxidizing slag aggregates to PHC piles enhances the compressive strength, saves energy, lowers carbon dioxide emissions, reduces the amount of cement used, and helps to cut costs [38].
The present paper examines EAF oxidizing slag as a coarse aggregate that benefits the environment and provides greater strength.
Nowadays, most concrete mixtures contain the supplementary cementing material fly ash to replace a certain amount of cement, thus reducing the cost of using Portland cement. Fly ash is the most common supplementary cementing material used in concrete. It has been used successfully to replace Portland cement up to 30% by mass without adversely affecting the strength and durability of concrete [39].
Several laboratory and field investigations have reported that concrete containing fly ash exhibits excellent mechanical and durability properties. Because the pozzolanic reaction of fly ash is a slow process, its contribution towards strength development occurs only at later ages [39].
The study was conducted to define and analyse the physical, mechanical, and durability properties of eco-friendly concrete made with 50% EAF oxidizing slag aggregate in addition to 30% fly ash (Mix C) compared with conventional concrete mix (Mix A) and concrete with 30% fly ash (Mix B) of two grades M20 and M30.
EAFOS Aggregate.
The slag used in the present investigation was collected from Salem Steel Plant (SSP), Tamil Nadu. The slag had a greyish-black colour, stone-like appearance, cubical shape, and rough surface texture, and was more durable than natural aggregate. The roughness and hardness of the slag make it reliable as a coarse aggregate. In this study, slag passing through the 20 mm IS sieve was used. EAF oxidizing slag offers high applicability as an aggregate for concrete due to its CaO and SiO2 content. The slag is mainly composed of oxides similar to those in natural rocks and has alkaline properties similar to cement products [40]. The physical properties of EAF oxidizing slag are superior to those of natural coarse aggregate, as shown in Table 1. The slag had high density, high alkalinity, higher abrasion resistance, higher crushing strength, and low water absorption. These characteristics give EAF oxidizing slag great potential as an alternative coarse aggregate. The EAF oxidizing slag is shown in Figures 1 and 2.
Fly Ash.
In this investigation low calcium fly ash obtained from Mettur Thermal Power Plant was used.
Mix Proportioning
The M20 and M30 grade concretes were designed as per IS 10262:2009. The mix proportioning of the M20 and M30 grades is shown in Table 2.
Coefficient of Water Absorption.
The coefficient of water absorption is considered a measure of the permeability of water [43]. It was measured by determining the rate of water uptake by dry concrete over a period of 1 h [44]. The concrete samples were dried at 110 °C in an oven for one week until they reached constant weight and were then cooled in a sealed container for one day. The sides of the samples were covered with epoxy resin, and the samples were partially immersed in water to a depth of 5 mm at one end while the rest of each specimen was kept exposed to the laboratory air. The amount of water absorbed during the first 60 min was calculated for Mixes A, B, and C at 28 and 90 days [45] as Ka = (Q/A)²/t, where Ka is the coefficient of water absorption (m²/s), Q is the quantity of water (m³) absorbed by the oven-dried specimen in time t, t is 3600 s, and A is the surface area (m²) of the concrete specimen through which water penetrates.
Sorptivity.
Sorptivity is a measure of the capillary forces exerted by the pore structure, causing fluids to be drawn into the body of the material [46]. In this experiment, the speed of water absorption by concrete cubes was assessed by measuring the increase in the mass of samples due to water absorption at certain times when only one surface of the specimen was exposed to water. Concrete samples were dried in an oven at 50 °C for 3 days and then cooled in a sealed container at 23 °C for 15 days, as per ASTM C1585, after 28 and 90 days of moist curing [46]. The sides of the concrete samples were covered with epoxy resin in order to allow the flow of water in one direction only. The initial mass of each sample was recorded, after which the samples were kept partially immersed to a depth of 5 mm in water. At selected times after first contact with water (typically 1, 5, 10, 20, 30, 60, 110, and 120 min) [45], the samples were removed, excess water was blotted off using a paper towel, and the samples were weighed; they were then replaced in water for the next time interval. The gain in mass per unit area divided by the density of water was plotted against the square root of the elapsed time, and the slope of the line of best fit through these points was taken as the sorptivity value as per the following equation [47]: i = S·t^(1/2), where i is the cumulative water absorption per unit area of the inflow surface (m³/m²), S is the sorptivity (m/s^(1/2)), and t is the time elapsed (s). The sorptivity test is shown in Figure 3.
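As a small illustration of how the sorptivity value is extracted from such measurements, the sketch below fits the slope of cumulative absorption per unit area against the square root of time; the specimen dimensions and mass readings are made-up placeholder values, not data from this study.

```python
import numpy as np

# Hypothetical readings: elapsed times (min) and cumulative mass gain (kg)
# of a 150 mm cube with one face exposed to water (placeholder values).
times_min = np.array([1, 5, 10, 20, 30, 60, 110, 120])
mass_gain = np.array([0.004, 0.009, 0.013, 0.018, 0.022, 0.031, 0.041, 0.043])

area = 0.15 * 0.15        # exposed face area, m^2
rho_water = 1000.0        # density of water, kg/m^3

t = times_min * 60.0                          # elapsed time, s
i = mass_gain / (area * rho_water)            # cumulative absorption per unit area, m
S, _ = np.polyfit(np.sqrt(t), i, 1)           # sorptivity = slope of i versus sqrt(t)
print(f"sorptivity S = {S:.2e} m/s^0.5")
```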
Water Penetration.
The water penetration test, which is most commonly used to evaluate the permeability of concrete, is the one specified by BS EN 12390-8:2000 [48]. In this test, water was applied to one face of the 150 mm concrete cube specimens under a pressure of 0.5 MPa. This pressure was maintained constant for a period of 72 hours. After completion of the test, the specimens were taken out and split open into two halves. The water penetration profile on the concrete surface was then marked, and the maximum depth of water penetration in the specimens was recorded and taken as an indicator of the water penetration.
Rapid Chloride Permeability Test (RCPT).
The resistance of concrete to salt attack was assessed by the Rapid Chloride Permeability Test (RCPT) at 28 and 90 days of water curing in conformity with ASTM C-1202 [49]. Three specimens of 100 mm in diameter and 50 mm in thickness, which had been conditioned according to the standard, were subjected to a 60 V potential for 6 h. The charge passed through the concrete specimens was determined and used to evaluate the chloride permeability of each concrete mixture. The RCPT test is shown in Figure 4.
Alkalinity Test and Resistance to Sulphate Attack.
For the alkalinity test, the concrete cubes, after curing, were dried in an oven for 24 h at 105 °C. After cooling to room temperature, the specimens were broken to separate the mortar from the concrete. The mortar was powdered and sieved through a 150 µm sieve. A 10 g sample was taken and diluted in distilled water by stirring, and the pH value of the solution was noted with a pH meter. The alkalinity test is shown in Figure 5. The resistance to sulphate attack was studied by immersing the 28-day cured standard cube specimens (150 × 150 × 150 mm) in a solution containing 7.5% magnesium sulphate for 28, 60, and 90 days. The concentration of the solution was maintained throughout the period by changing the solution periodically. The change in weight over the periods of 28, 60, and 90 days was determined [50].
Result and Discussion
The compressive test is the most important test used to assure engineering quality in the application of building materials. In the M20 grade, at the age of 7 days the compressive strength of Mix C increased by 8.69% and 47% when compared to Mix A and Mix B, respectively, while the strength of Mix A was 35.29% higher than that of Mix B at this early age. At the ages of 28 and 90 days, there was a continuous improvement in the strength performance of all the concrete mixtures: Mix C increased by 11.07% and 15.2% at 28 days and by 11.11% and 12.04% at 90 days (Figure 6) when compared to Mix A and Mix B, while Mix B reached a strength similar to that of Mix A, its strength having increased by 52.7% at the later age. Similarly, for the M30 grade, the strength of Mix C increased progressively at all ages of loading, by 6.50% and 7.92% at 28 days and by 6.9% and 8.5% at 90 days when compared to Mix A and Mix B. From the results for the M20 and M30 grades, it was observed that the highest compressive strength at all ages was measured in Mix C (concrete made with EAFOS aggregate and fly ash) when compared to Mix A and Mix B.
The splitting test is a well-known indirect test for determining the tensile strength of concrete. In the M20 grade, at the age of 7 days the split tensile strength of Mix C increased by 9.75% and 50% when compared to Mix A and Mix B, respectively, while the strength of Mix A was 36.66% higher than that of Mix B at this early age. At the ages of 28 and 90 days (Figure 7), the split tensile strength of Mix C continued to improve, and the strength increased by 60.32% and 66.54% at the later ages. For the M20 and M30 grades, the higher split tensile strength was measured in Mix C at the earlier age, with similar strengths measured at the later age. The flexural test is intended to give the flexural strength of concrete in tension. In the M20 grade, at the age of 7 days the flexural strength of Mix C increased by 6.25% and 21.42% when compared to Mix A and Mix B, respectively. At the ages of 28 and 90 days, the flexural strength increased in all the concrete mixtures; Mix C increased by 6.89% and 8.77% at 28 days and by 7.8% and 13.1% at 90 days (Figure 8), and the higher flexural strength was measured in Mix C (concrete made with EAFOS aggregate and fly ash) at all ages when compared to Mix A and Mix B. The coefficient of water absorption is suggested as a measure of the permeability of water. The results indicated a significant reduction in the coefficient of water absorption for Mix C (concrete made with EAF oxidizing slag and fly ash) when compared to Mix A and Mix B for both the M20 and M30 grades, as shown in Figure 9.
The sorptivity test measures the capillary suction of concrete when it comes into contact with water. From the results for the M20 and M30 grades at the age of 28 days, Mix C (concrete made with EAFOS aggregate and fly ash) showed lower sorptivity coefficients of 5.65 × 10^−6 m/s^0.5 and 4.63 × 10^−6 m/s^0.5 when compared to Mix A (conventional concrete) and Mix B (concrete with fly ash), whereas Mix B showed the highest sorption, with maximum sorptivity coefficients of 7.02 × 10^−6 m/s^0.5 and 6.67 × 10^−6 m/s^0.5, compared to Mix A and Mix C. The results at the age of 90 days are shown in Figure 10. The water penetration test was used to evaluate the permeability of concrete. From the results at the age of 28 days, Mix C gave lower water penetration depths of 9.5 mm and 12.9 mm compared to Mix A (11 mm and 13.5 mm) and Mix B (13.5 mm and 15.7 mm) for the M20 and M30 grade concretes, respectively. After 90 days of curing (Figure 11), lower penetration depths were obtained for all concrete mixes, and Mix C provided the lowest water penetration depths of 4.4 mm and 7.2 mm compared to Mix A and Mix B for the M20 and M30 grade concretes, respectively. From the results, it was observed that Mix C, the concrete made with EAFOS aggregate and fly ash, would be safe against water permeability.
RCPT provides an electrical indication of the ability of concrete to resist the penetration of chloride ions. The results for the M20 and M30 grades indicated that all the values obtained for Mix A, Mix B, and Mix C were between 100 and 1000 coulombs, and hence the chloride ion permeability is "very low" as per the code. The important observation is that concrete made with EAF oxidizing slag and fly ash (Mix C) is less permeable to chloride ions than Mix A and Mix B (Figure 12). Alkalinity test results indicated that the pH value of concrete made with EAF oxidizing slag and fly ash (Mix C) is within the limits of about 12 to 13, and hence the potential for corrosion is low, similar to that of Mix A and Mix B (Figure 13). Regarding sulphate attack, it was observed that Mix C had good resistance against sulphate attack when compared to Mix A and Mix B (Figure 14).
Conclusion
Based on the overall results and observations, concrete made with EAF oxidizing slag and fly ash exhibited greater strength and durability characteristics than the conventional concrete mix considered. EAF oxidizing slag is thus a logical choice for sustaining the environment, reducing the quarrying of natural aggregates and avoiding the landfilling of slag. It could be recommended for all construction activities in India.
"Materials Science"
] |
A Multi-Sensor Fusion Framework Based on Coupled Residual Convolutional Neural Networks
: Multi-sensor remote sensing image classification has been considerably improved by deep learning feature extraction and classification networks. In this paper, we propose a novel multi-sensor fusion framework for the fusion of diverse remote sensing data sources. The novelty of this paper is grounded in three important design innovations: 1- a unique adaptation of the coupled residual networks to address multi-sensor data classification; 2- a smart auxiliary training via adjusting the loss function to address classifications with limited samples; and 3- a unique design of the residual blocks to reduce the computational complexity while preserving the discriminative characteristics of multi-sensor features. The proposed classification framework is evaluated using three different remote sensing datasets: the urban Houston university datasets (including Houston 2013 and the training portion of Houston 2018) and the rural Trento dataset. The proposed framework achieves high overall accuracies of 93.57%, 81.20%, and 98.81% on Houston 2013, the training portion of Houston 2018, and Trento datasets, respectively. Additionally, the experimental results demonstrate considerable improvements in classification accuracies compared with the existing state-of-the-art methods.
Introduction
Multi-sensor image analysis of remotely sensed data has become a growing area of research in recent years. Space and airborne remote sensing data streams are providing increasingly abundant data suited for earth observation and environmental monitoring [1]. The spatial, temporal and spectral capabilities of optical remote sensing systems are also increasing over time. Besides the evolution of multispectral imaging (MSI), hyperspectral imaging (HSI) [2][3][4] and light detection and ranging (LiDAR) observation platforms have also gained relevance [5][6][7]. An increasing diversity of platforms of HSI and LiDAR acquisition systems are available for terrestrial, space and airborne-based data collection. While MSI and HSI rely on solar radiance as a passive illumination source, LiDAR devices emit their own source of active radiance for measurement. MSI and HSI systems produce pixels representing two-dimensional bands of their respective wavelengths while LiDAR systems measure structure via point clouds organized in a three-dimensional sphere for their respective wavelengths. Combining such data at image or feature level yields both opportunities and challenges. For instance, fusion of HSI and LiDAR data of the same event in space offers a rich feature space allowing distinct separation of observed objects based on their spectral signature and elevation characteristics [8,9]. Meanwhile, multi-sensor datasets can contain sophisticated heterogeneous data structures and different data formats or characteristics (e.g., asymptotic properties, spatial and spectral resolutions etc.). Given the increasing availability and complexity of multi-sensor data, fusion techniques are evolving to address meaningful data exploitation to cope with multi-source inputs. This paper is addressing the large potential volume of existing combined multi-sensor data on classification algorithms. Depending on the study site and classification scheme, multi-sensor feature spaces can possess unique hybrid properties introducing new challenges for the production and deployment of appropriate training data. Sources of accurate training data are often scarce, and the production is expensive, particularly for novel hybrid multi-sensor feature spaces. Therefore, conventional classification systems and networks often become less efficient for such diverse and complicated datasets. Hence, the effective fusion of heterogeneous multi-sensor data for classification applications is essential to our remote sensing research.
A wide variety of multi-sensor data fusion methods have been developed to leverage the use of heterogeneous data sources, most prominently for HSI and LiDAR data fusion [10][11][12][13][14][15][16][17]. In [10], morphological-level features, specifically attribute profiles (APs), were embedded with a subspace multinominal logistic regression model for the fusion of HSI and LiDAR data. The capability of APs in extracting discriminating spatial features was again confirmed in [11], where extended attribute profiles (EAPs) were used to extract features from HSI and LiDAR data, respectively. Moreover, morphological extinction profiles (EPs) have been proposed to overcome the threshold determination difficulties of APs and further boost the performance of feature extraction [12]. EPs have been successfully applied to fuse HSI and LiDAR data with a total variation subspace model in [13]. Regarding various supervised fusion algorithms, a high number of research works have been dedicated towards the development of more robust models, for instance, a generalized graph-based fusion model in [14]; a spare and low-rank component model in [15]; a multi-sensor composite kernel model in [16]; a decision-level fusion model based on a differential evolution method in [17]; semi-supervised graph-based fusion in [18]; and discriminant correlation analysis in [19]. One mutual objective of these fusion algorithms is to simultaneously determine the optimized classification decision boundary by considering heterogeneous feature spaces. Nevertheless, their success often requires a comprehensive understanding of sensor systems and individual domain expertise, and hand-crafted morphological features are naturally redundant and may still suffer from problems such as the curse of dimensionality, which is also termed as Hughes phenomenon [20].
More recently, the rapid development of deep learning techniques has led to an explosive growth in the field of remote sensing image processing, especially the classification of HSI [21]. Deep learning models, especially convolutional neural networks (CNNs), open up a new possibility for invariant feature learning of HSI data, from hand-crafted to end-to-end, from manual configurations to fully automatic, from shallow to deep [22].
At the same time, there are various research efforts developing novel multi-sensor fusion approaches based on deep learning [23][24][25][26][27][28]. Among the first studies, in [23], a deep fusion model was designed for the fusion of HSI and LiDAR data, where CNNs performed as both feature extractor and classifier. In [24], the joint use of HSI and LiDAR data was further explored by combining morphological EPs and high-level deep features via a composite kernel (CK) technique. In [25], a dual-branch CNN was proposed to learn spectral-spatial and elevation features from HSI and LiDAR, respectively, then all features were fused via a cascaded network. Besides the fusion of HSI and LiDAR data, the similar superior performance of deep learning models was also confirmed in [26], where Landsat-8 and Sentinel-2 satellite images were fed into a two branched residual convolutional neural networks (ResNet) for local climate zone classification. However, the training of such deep learning fusion models might be challenging, with problems arising from the fact that deep fusion models mostly require sophisticated network designs with more parameters to simultaneously handle multi-sensor inputs, while the network training will become more difficult when the network becomes deeper [29].
Fortunately, these issues can be mitigated using the residual learning technique, where low-level features are successively passed to deeper layers via identity mapping [30]. Based on this approach, we propose a novel multi-sensor fusion framework via designing multi-branched coupled residual convolutional neural networks, namely CResNet. Moreover, the proposed framework is designed to be a generalized deep fusion framework, where the inputs are not limited to specific sensor systems. To this end, the proposed framework is designed to automatically fuse different types of multi-sensor datasets.
The proposed CResNet mainly consists of three individual ResNet branches along with coupled fully connected layers for data fusion. Different from [24], which requires a separate training step for the CK classifiers, the proposed CResNet is trained in an end-to-end manner, which lowers the computational complexity during data fusion. To highlight the generalized fusion capability of CResNet, we test the proposed framework on three distinct multi-sensor datasets with inputs ranging from HSI and RGB to LiDAR feature spaces and covering various land cover classes. The major contributions of this paper are summarized as threefold:
1. The proposed CResNet adopts novel residual blocks (RBs) with identity mapping to address the gradient vanishing phenomenon and promotes discriminant feature learning from multi-sensor datasets.
2. The design of coupling individual ResNets with auxiliary losses enables the CResNet to simultaneously learn representative features from each dataset by considering an adjusted loss function, and to fuse them in a fully automatic, end-to-end manner.
3. Since CResNet is highly modularized and flexible, the proposed framework leads to competitive data fusion performance on three commonly used multi-sensor datasets, where state-of-the-art classification accuracies are achieved using limited training samples.
Section 2 describes the concept of residual feature learning and introduces the detailed architecture of the CResNet. The data descriptions and experimental setups are reported in Section 3. Then, Section 4 is devoted to the discussion of experiment results on three multi-sensor datasets. The main conclusions are summarized in Section 5.
Methodology
We present the structure of the proposed CResNet as shown in Figure 1. The fusion framework can be divided into three main components: feature learning via residual blocks, multi-sensor data fusion via coupled ResNet, and auxiliary training via an adjusted loss function. Although there is no limit in the number of datasets being fused using the proposed method, we evaluate the framework by applying it on three co-registered datasets for multi-sensor data fusion and classification.
Feature Learning via Residual Blocks
Recently, ResNet has become a popular deep learning technique [29], and has achieved significant classification performance on heterogeneous remote sensing datasets [31,32], where multi-sensor data sources (e.g., HSI, MSI, LiDAR) have been intensively investigated. Residual blocks (RBs), as the characterized architecture of ResNet, are proposed to alleviate the gradient vanishing and explosion issues of CNNs during training [29]. By solving the optimization degradation issue, such blocks are found to be helpful in terms of training accuracy, which is a prerequisite for testing and validation accuracies. In this paper, ResNet with multiple RBs are selected as the base feature learning networks, which are lately aggregated together as a generalized multi-branched data fusion network. As shown in Figure 2, a residual block can be considered to be an extension of several convolutional layers, where gradients in the deeper layers could be intuitively propagated back to the lower layers via identity mapping. To be noticed, identity mapping was proposed in [30] to further improve the training and regularization of origin design of ResNet in [29].
Within each RB, we follow the design in [30] and use three successive convolutional layers with kernel sizes of 1 × 1 × m, 5 × 5 × m, and 1 × 1 × m, respectively, where m refers to the number of feature maps. Such successive layers are also known as a bottleneck design, consisting of a 1 × 1 × m layer for dimension reduction, a 5 × 5 × m convolution layer, and a 1 × 1 × m layer for restoring the dimension, with which we can optimize the model complexity and thus obtain a more efficient model from a computational point of view [29]. X_k and X_{k+1} refer to the input and output feature spaces of an RB, respectively, and their feature sizes are kept unchanged via a valid padding strategy. More importantly, by applying identity mapping with full pre-activation feature spaces to deeper layers [30], the functionality of an RB can be formulated as
X_{k+1} = X_k + F(X_k, W_k), (1)
where X_k refers to the feature maps of the k-th layer, and W_k denotes the weights and biases of the k-th layer in the RB. The function F is the pre-activation function, which combines the batch normalization function (BN) [33] and the nonlinear activation function (ReLU) [34] in order to improve the speed and stability of the proposed CResNet. Figure 2 shows how the full pre-activation shortcut connection is a direct channel for the gradient to propagate in both directions, forward and backward. Hence, the training process of such RBs is simplified and leads to improved generalization capabilities. A key characteristic of the full pre-activation shortcut becomes more obvious when multiple RBs are trained successively; the feature spaces can then be formulated recursively as
X_{k+2} = X_{k+1} + F(X_{k+1}, W_{k+1}) = X_k + F(X_k, W_k) + F(X_{k+1}, W_{k+1}), (2)
where W_k are the weights and biases of the k-th layer in the RBs. Based on these recursive feature spaces, Equation (1) evolves into
X_L = X_k + ∑_{l=k}^{L−1} F(X_l, W_l). (3)
Hence, the feature space of any deeper layer (L) can be formulated as the feature space of any lower layer (k) plus a collection of convolutional functions ∑_{l=k}^{L−1} F. Moreover, this characteristic ensures the backward propagation of model gradients to the lower layers as well, benefitting the overall feature learning with heterogeneous remote sensing datasets. For a more detailed description of full pre-activation identity mapping, please refer to [30].
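To make the block structure above concrete, the following is a minimal Keras sketch of a full pre-activation residual block, assuming 'same' padding so that feature sizes stay unchanged and a 1 × 1 projection of the shortcut when channel counts differ; these details and the function names are our reading of the description rather than the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, m):
    """Full pre-activation residual block: three convolutions (1x1, 5x5, 1x1),
    each with m feature maps and preceded by BN + ReLU, plus an identity shortcut."""
    shortcut = x
    if x.shape[-1] != m:
        # Project the shortcut so the element-wise addition is valid (an assumed detail).
        shortcut = layers.Conv2D(m, 1, padding="same")(x)
    y = x
    for k in (1, 5, 1):
        y = layers.BatchNormalization()(y)   # pre-activation: BN before the convolution
        y = layers.ReLU()(y)
        y = layers.Conv2D(m, k, padding="same")(y)
    return layers.Add()([shortcut, y])       # identity mapping, X_{k+1} = X_k + F(X_k, W_k)
```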
Here, the ResNet consisting of RBs with identity mapping is able to learn discriminative multi-sensor features from heterogeneous data sets due to their simplified training process, which further leads to better generalization capabilities. In this work, heterogeneous deep features are then fused with a coupled fully connected layer and a SoftMax layer (shown in Figure 3) for classification purpose. Regarding comprehensive investigations of deep learning feature extraction technique (i.e., HSI), one can further refer to [22,35].
Multi-Sensor Data Fusion via Coupled ResNets
In this paper, multi-sensor datasets are fused via coupled three-branched ResNets, as shown in Figure 3. We consider a set of heterogeneous input datasets Y_a ∈ R^{n×m×a}, Y_b ∈ R^{n×m×b}, and Y_c ∈ R^{n×m×c}, for which various combinations of HSI, RGB, (multispectral) LiDAR, and features generated by morphological methods (e.g., extinction profiles [12,36]) are considered in this paper in order to validate the performance of the proposed framework. More specifically, n and m refer to the spatial dimensions of image height and width, and a to c are the numbers of spectral bands of the input datasets. As illustrated in Figure 1, for each pixel of the inputs, a set of image patches y_a ∈ R^{s×s×a}, y_b ∈ R^{s×s×b}, and y_c ∈ R^{s×s×c} centered at the chosen pixel is extracted from Y_a, Y_b, and Y_c, individually. Here, s refers to the neighboring window size, for which we empirically selected 24 according to [24,35]. The three multi-sensor image patches are then fed into separate ResNets for residual feature learning, where each ResNet consists of three RBs. Regarding the classification of HSI, two major challenges are identified when applying supervised deep learning classification methods: the high heterogeneity and nonlinearity of spectral signatures, and the few training samples relative to the high dimensionality of HSI [21]. In this context, the nonlinear spectral signatures of the corresponding ground surfaces can be better captured by coupling networks with multi-sensor inputs (e.g., LiDAR, HSI, and RGB) [1]. By connecting the lower features through the networks to the deeper layers, the design of such RBs provides an efficient way to train deep learning classification networks even with limited training samples.
Between each of the RBs of ResNet, a 2D max-pooling layer is attached with a kernel size and a stride of 2 in order to reduce the feature variance as well as the computational complexity, with which the spatial dimension of deep feature from the previous layer is halved. In addition, since we empirically selected 24 as the neighboring window size, each individual ResNet consists of three RBs. With such a design, three RBs are trained successively to learn discriminative multi-sensor features. In addition, we increased the number of feature maps towards deeper blocks, which is doubled after each block. Here, the number of feature maps for all three RBs ranges from {32, 64, 128}. Next, a coupled fully connected layer with the SoftMax function is adopted to fuse the learned feature according to the total amount of classification categories. We use the element-wise maximization to keep the feature number unchanged even after data fusion.
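Under the same assumptions, the three-branch coupling described above can be sketched as follows; the element-wise maximization fusion and the 32/64/128 feature-map progression follow the text, while the function names and the flattening step before the coupled fully connected layer are our own choices, and `residual_block` and `layers` are reused from the previous sketch.

```python
from tensorflow.keras import Input, Model

def resnet_branch(input_shape):
    """One ResNet branch: three residual blocks (32, 64, 128 feature maps),
    each followed by 2x2 max-pooling, applied to a 24x24 input patch."""
    inp = Input(shape=input_shape)
    y = inp
    for m in (32, 64, 128):
        y = residual_block(y, m)
        y = layers.MaxPooling2D(pool_size=2, strides=2)(y)
    return inp, layers.Flatten()(y)

def coupled_resnet(branch_shapes, n_classes):
    """Couple the branches by element-wise maximization of their flattened
    features, then classify with a shared fully connected softmax layer."""
    inputs, feats = zip(*(resnet_branch(s) for s in branch_shapes))
    fused = layers.Maximum()(list(feats))
    out = layers.Dense(n_classes, activation="softmax", name="fusion")(fused)
    return Model(list(inputs), out)
```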
Auxiliary Training via Adjusted Loss Function
Besides the coupled ResNets, an auxiliary training strategy is proposed to compensate the main loss function according to the training progress of each branch during the framework training stage. The auxiliary loss is a common technique used in other deep learning architectures (e.g., the Inception network [37]). In our case, given a set of training samples {y_a^i, y_b^i, y_c^i} together with ground-truth labels t_i and predicted labels t̂_i, where i = 1, 2, . . ., N and N is the number of training samples, the main model loss is computed with the categorical cross-entropy loss function.
Besides the main categorical cross-entropy loss, individual auxiliary loss functions for the different input branches {y_a^i, y_b^i, y_c^i} are computed in a similar manner, where L_a, L_b, and L_c are designed to guide the training process of each input dataset, respectively. Our auxiliary training strategy then adjusts the main loss L_main using these auxiliary losses as L = L_main + L_AUX, with L_AUX = ε_a L_a + ε_b L_b + ε_c L_c, where {ε_a, ε_b, ε_c} are the weights of the auxiliary losses in the overall loss function. Two main considerations guide the choice of these weights: first, the auxiliary losses should help pass information through the different branches without disturbing the overall training process; second, the main loss should remain dominant, so the weights of the auxiliary losses should be smaller than that of the main loss.
The auxiliary loss function L AUX could be considered to be an intelligent regularization that helps to make features from individual branches more accurate. More importantly, L AUX only provides complementary information during the training phase of our framework, not affecting the testing phase.
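A sketch of how such auxiliary heads could be attached in Keras is given below: each branch's flattened features get their own softmax head whose cross-entropy loss is later weighted by ε, while only the fused head is used at test time. The head names and the reuse of `resnet_branch` from the earlier sketch are our assumptions, not the authors' code.

```python
def coupled_resnet_aux(branch_shapes, n_classes):
    """CResNet with auxiliary outputs: one fused softmax head (main loss) and
    one auxiliary softmax head per branch (auxiliary losses L_a, L_b, L_c)."""
    inputs, feats = zip(*(resnet_branch(s) for s in branch_shapes))
    fused = layers.Maximum()(list(feats))
    main_out = layers.Dense(n_classes, activation="softmax", name="main")(fused)
    aux_outs = [layers.Dense(n_classes, activation="softmax", name=f"aux_{i}")(f)
                for i, f in enumerate(feats)]
    # The auxiliary heads only shape the training loss; at test time only "main" is used.
    return Model(list(inputs), [main_out] + aux_outs)
```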
Houston 2013
The Houston 2013 dataset is from an urban area of Houston, USA, which was originally distributed for the 2013 GRSS Data Fusion Contest [38]. The image size of the HSI and LiDAR-derived data are 349 × 1905 with a spatial resolution of 2.5 m. The HSI data includes 144 spectral bands, which range from 0.38 to 1.05 µm. Here, the HSI data are cloud-shadow removed. The Houston 2013 dataset has in total 15 classes in the scheme, which range from different vegetation types to highway features. Figure 4 shows the false color HSI, the LiDAR-derived DSM together with the corresponding training and testing samples. The detailed number of training and test samples are listed in Table 1.
Houston 2018
The Houston 2018 dataset (identified as GRSS_DFC_2018 dataset) captured over the area of the University of Houston, contains HSI, multispectral LiDAR, and very high resolution (VHR) RGB images. This dataset was originally distributed for the 2018 GRSS Data Fusion Contest [39]. In this paper, we used the training portion of the dataset. The HSI dataset was captured using an ITRES CASI 1500 in 48 bands with spectral range 380-1050 nm at a 1 m ground sampling distance (GSD). The multispectral LiDAR data were acquired using an Optech Titam MW (14SEN/CON340), which include point cloud data at 1550, 1064, and 532 nm, intensity raster, and DSMs at a 50 cm GSD. The RGB was acquired with a VHR RGB imager (DiMAC ULTRALIGHT) with a 70 mm focal length. The VHR color image includes Red, Green, and Blue bands at a 5 cm GSD. This co-registered dataset contains 601 × 2384 pixels. Twenty classes of interest were extracted for Houston data and corresponding training and test samples are given in Figure 5. Figure 5 also depicts the LiDAR-derived DSM and the VHR RGB image (downsampled). The number of training and testing samples used in this study are given in Table 2.
Trento
The Trento dataset was captured over a rural area in the south of the city of Trento, Italy. LiDAR and HSI data were acquired by the Optech ALTM 3100EA and the AISA Eagle sensor, respectively. This data has a spatial resolution of 1 m. The size of data is of 600 × 166 pixels in 63 bands ranging from 402.89 to 989.09 nm with the spectral resolution of 9.2 nm. Six classes of interest were extracted for this dataset, including Buildings, Wood, Apple trees, Roads, Vineyard, and Ground. A false color composite of the HSI data and the corresponding training and testing samples are shown in Figure 6. The number of training and testing samples for different classes of interest are given in Table 3.
Experimental Setup
To evaluate generalized performance of the proposed data fusion framework, the aforementioned three datasets, consisting of two or three co-registered multi-sensor inputs are explored in different ways. In detail, as for the Houston 2013 and Trento datasets, the morphological EPs features of HSI and LiDAR are generated to extract the corresponding spatial and elevation information [12], then a single branch ResNet is used to classify HSI, LiDAR, EPs-HSI, and EPs-LiDAR, respectively. As for the Houston 2018 dataset, instead of using morphological features, HSI, LiDAR, and RGB are directly classified with a single branch ResNet, respectively. Next, the combinations of EPs features and HSI are fused with the proposed CResNet for the Houston 2013 and Trento datasets, while a distinct combination of RGB, LiDAR, and HSI are considered with the Houston 2018 dataset in order to validate the proposed framework's generalized capability in handling highly heterogeneous input datasets.
The implementation of CResNet is based on the TensorFlow framework together with the Keras functional API. The Nesterov Adam (Nadam) optimizer is selected as the optimization algorithm for our ResNet due to its faster convergence compared with the standard stochastic gradient descent algorithm [26], where the default parameters β_1 = 0.9 and β_2 = 0.999 are used. The learning rate, number of training epochs, and batch size are set to 0.001, 200, and 64, respectively.
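Putting the pieces together, the training configuration just described might look roughly as follows in Keras; the input patch shapes (a 48-band HSI patch, a single-band LiDAR-derived raster, and an RGB patch, loosely matching Houston 2018), the auxiliary weight of 10⁻⁴ (the value the sensitivity analysis later settles on), and the placeholder arrays `x_hsi`, `x_lidar`, `x_rgb`, `y` are illustrative assumptions.

```python
model = coupled_resnet_aux([(24, 24, 48), (24, 24, 1), (24, 24, 3)], n_classes=20)
model.compile(
    optimizer=tf.keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
    loss="categorical_crossentropy",                 # main and auxiliary losses
    loss_weights=[1.0, 1e-4, 1e-4, 1e-4],            # L = L_main + eps*(L_a + L_b + L_c)
    metrics={"main": "accuracy"},
)
model.fit([x_hsi, x_lidar, x_rgb], [y, y, y, y], epochs=200, batch_size=64)
```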
We evaluated the classification accuracy of our proposed framework with respect to the overall accuracy (OA), the average accuracy (AA), the Kappa coefficient, and the individual class accuracies. Since the Houston 2013 dataset is intensively used in state-of-the-art data fusion research, we compared the performance of our proposed framework with previous analyses on this dataset. Tables 4 and 5 give the results of the fusion of morphological EPs and HSI using CResNet for the Houston 2013 and Trento datasets, respectively. CResNet-AUX denotes CResNet trained with the adjusted auxiliary loss function. The results are compared with those obtained from EPs-LiDAR-ResNet, EPs-HSI-ResNet, LiDAR-ResNet, and HSI-ResNet.
•
First, it is observed that HSI-ResNet considerably outperforms LiDAR-ResNet for both datasets, which supports the view that the redundant spectral-spatial information of HSI has higher discriminative capability than the elevation information of LiDAR data. However, we notice that the discriminative capability of the morphological features (EPs-HSI and EPs-LiDAR) is relatively uniform: EPs-HSI outperforms by 1.24% on the Houston 2013 dataset, and EPs-LiDAR outperforms by 2.88% on the Trento dataset. The reason could be that morphological features consist of low-level features based on hand-crafted feature engineering, which not only extracts informative features but also brings high redundancy into the feature space; thus the integration of low-level hand-crafted features and high-level deep features can further boost the classification performance [24]. It is suggested that deep learning methods need to go deeper in order to learn discriminative features [21], while the training of such methods can become even more challenging, especially with limited training samples. In this paper, we tackle this problem by constructing a novel arrangement of RBs with identity mapping that successively passes the low-level features through the entire network.
Fusion Performance of RGB, MS LiDAR, and HSI
In this scenario, we do not use EPs; instead, we rely on the developed deep network to extract the spatial, spectral, and elevation features from RGB, HSI, and multispectral LiDAR. Table 6 demonstrates the performance of CResNet for the fusion of HSI, multispectral LiDAR, and RGB. The proposed CResNet fusion framework leads to substantial improvements with respect to HSI (OA: 12.79%), LiDAR (OA: 10.36%), and RGB (OA: 11.09%). Additionally, the results show that the auxiliary training further improves the OA by 0.58%. Note that the degradation of the individual accuracy of the Water class can potentially be attributed to the high imbalance of training sample numbers, as listed in Table 2. To summarize, based on the results obtained on the Houston 2018 dataset, we can validate the generalized capability of the proposed multi-sensor fusion framework. Although we use a uniform network architecture, CResNet-AUX can automatically extract informative features via RBs and simultaneously regularize the data fusion via the auxiliary loss function. One reason could be that our CResNet actually consists of much deeper CNN layers, as shown in Figure 3, which can be fitted to different datasets and trained through residual learning. In this context, we believe that the proposed CResNet presents a new possibility for developing flexible end-to-end fusion methods, even with multiple inputs from different sensor systems.
Comparison to State-of-the-Art
The Houston 2013 dataset is one of the most widely used datasets, comprising a challenging mixture of urban structures. In this context, we compare the classification performance of our proposed framework with the following state-of-the-art methods listed in Table 7: The multiple subspace feature learning method (MLRsub) in [10], the total variation component-based method (OTVCA) in [13], the sparse and low-rank component-based method (SLRCA) in [15], the deep fusion method (DeepFusion) in [23], the extinction profiles fusion via CNNs and graph-based feature fusion method (EPs-CNN) in [8], and the composite kernel-based three-stream CNNs method (CK-CNN) in [24]. All these methods including the proposed method in this paper use the benchmark sets of training and testing samples published with the dataset for the classification purpose and therefore, the classification results are fully comparable. In general, these methods can be classified into two main categories: conventional shallow methods and deep learning-based methods. The highest OA, AA, and Kappa for each of those categories are 92.45%, 92.68%, and 0.9181 obtained by OTVCA and 92.57%, 92.48%, and 0.9193 obtained by CK-CNN, for which the CResNet-AUX improves both methods by around 1% in terms of OA. This performance improvement over the state-of-the-art methods further validates the effectiveness of the proposed multi-sensor framework. In addition, the superior performance compared to existing deep learning-based methods confirmed the effectiveness of the proposed CResNet in mitigating the gradient vanishing phenomenon and discriminant feature learning from heterogeneous datasets. More importantly, with the proposed multi-sensor fusion framework, the data fusion results can be achieved automatically in an end-to-end manner.
The Performance with Respect to the Number of Training Samples
To evaluate the performance of the proposed framework with respect to the number of training samples, we randomly selected 10, 25, 50, or 100 training samples per class and repeat the experiment 10 times on the Houston 2018 dataset. In Figure 10, the means and standard deviations of OA are depicted with respect to different numbers of training samples using CResNet and CResNet+AUX, respectively. In the case of 10 samples, the OAs are less than 50%, which reveals the dependency of the deep learning techniques to the adequate amount of training samples. However, the high achievements of almost 20% in terms of OA for both techniques in the case of 25 samples per class demonstrates the efficacy of the proposed deep learning-based fusion framework in the case of a limited number of samples. Additionally, the steady increase in the slope of the CResNet+AUX's graph compared with the CResNet's graph confirm that the auxiliary training loss function provides robustness in the performance of the CResNet with respect to the number of samples. Moreover, CResNet+AUX outperforms CResNet for all four cases, which supports the advantage of the CResNet+AUX.
Sensitivity Analysis of OA with Respect to the Weights of Auxiliary Losses
As mentioned in Section 2.3, the general network training can benefit from considering auxiliary losses from individual branches. Here, we analyzed the sensitivity of CResNet-AUX with respect to ε i in terms of OA. To test the effect of different {ε i | i = a, b, c}, we compared the classification OA for the Houston 2018 dataset by selecting ε i in the range of {10 −1 , 10 −2 , 10 −3 , 10 −4 , 10 −5 }. In addition, the weights of individual branches are set to be identical, since we assume that no prior knowledge of multi-sensor inputs is available. Figure 11 shows that ε i ≥ 10 −4 is a confident region for the selection of ε i . To this end, we empirically used 10 −4 in this paper.
Computational Cost
In addition to the classification accuracy, Table 8 reports the computational cost of the proposed framework, where training and testing times are given in minutes and seconds, respectively. All experiments were implemented on a workstation with two GeForce RTX 2080Ti graphics processing units (GPUs), each with 12 GB of memory. As shown in Table 8, CResNet consumes up to three times more processing time than the individual branches, since the networks are simultaneously learning from multiple inputs. However, compared with the sum of the individual branches, the training of CResNet is more efficient and faster, saving up to 35% of the training time. This computational efficiency may slightly decrease when the auxiliary training strategy is applied, because the adjusted loss function leads to additional computational cost. As shown in Figures 10 and 12, by compromising the training time to some extent, the adjusted auxiliary loss function leads to further accuracy improvements for all three datasets; therefore, the additional computational cost is justified for our proposed framework. More importantly, although the training time may take up to several hours, the feeding forward of testing samples is measured in seconds, so the additional cost at inference is negligible. To summarize, the auxiliary training design can improve the general multi-sensor fusion accuracy while keeping the additional training time within affordable ranges.
Conclusions
In this paper, we presented the development of a novel multi-sensor data fusion framework, which is capable of fusing heterogeneous data types either captured by different sensor systems (e.g., HSI, LiDAR, RGB) or generated by feature extraction algorithms (e.g., extinction profiles). The designed coupled residual neural networks with auxiliary training (i.e., CResNet-AUX) consists of highly modularized residual blocks with identity mapping and an intelligent regularization strategy with adjusted auxiliary loss functions. Extensive experiments were applied on three multi-sensor datasets (i.e., Houston 2013, Trento, and Houston 2018) and based on classification accuracies the following outcomes have been achieved: • The proposed CResNet fusion framework outperforms all the single sensor-based scenarios in the experiments for all three datasets. • Both CResNet and CResNet-AUX outperform the state-of-the-art methods for the Houston 2013 dataset.
•
The auxiliary training function boosts the performance of CResNet for all the datasets even for the case of limited training samples.
•
The proposed CResNet fusion framework shows effective performance when the number of training samples is limited, which is of great importance in the case of applying deep learning techniques for remote sensing datasets.
•
The experiments regarding the computational cost justifies the efficiency of the proposed algorithm considering the achievements in the classification accuracies.
More importantly, the proposed CResNet-AUX is designed to be a fully automatic generalized multi-sensor fusion framework, where the network architecture is largely independent from the input data types and not limited to specific sensor systems. Our framework is applicable to a wide range of multi-sensor datasets in an end-to-end, wall-to-wall manner.
Future works in developing intelligent and robust multi-sensor fusion methods may benefit from the insights we have produced in this paper. In further research we propose to test the performance of our framework on a large-scale application (continental and/or planetary) and include additional types of remote sensing data. | 7,198.8 | 2020-06-26T00:00:00.000 | [
"Computer Science",
"Environmental Science",
"Engineering"
] |
Satellite Interference Source Direction of Arrival (DOA) Estimation Based on Frequency Domain Covariance Matrix Reconstruction
Direction of arrival (DOA) estimation is an effective method for detecting various active interference signals during the satellite navigation process. It can be utilized for both interference detection and anti-interference applications. This paper proposes a DOA estimation algorithm for satellite interference sources based on frequency domain covariance matrix reconstruction (FDCMR) to address various types of active interference that may occur in the satellite navigation positioning process. This algorithm can estimate the DOA of coherent signals from multiple frequency points under low signal-to-noise ratio (SNR) conditions. The signals received from the array are transformed from the time domain to the frequency domain using a fast Fourier transform (FFT). The data corresponding to the frequency point of the target signal is extracted from the signal in the frequency domain. The frequency domain covariance matrix of the received array signals is reconstructed by utilizing its covariance matrix property. The spatial spectrum search method is used for the final DOA estimation. Simulation experiments have shown that the proposed algorithm performs well in the DOA estimation under low SNR conditions and also resolves coherency. Moreover, the algorithm’s effectiveness is verified through comparison with three other algorithms. Finally, the algorithm’s applicability is validated through simulations of various interference scenarios.
Introduction
Satellite navigation technology plays an essential role in various industries, as the world has entered a highly informationized era. Satellite navigation signals are affected by the ionosphere, atmospheric turbulence, and other factors during propagation, which can lead to severe attenuation [1][2][3][4], with minimum power levels as low as −160 dBW. Various types of interference [5] can affect these weak signals. Intentional interference signals received by the Global Navigation Satellite System (GNSS) can be classified into two types: jamming interference and spoofing interference [6,7]. Accurately locating the interference source and eliminating its impact is crucial in the field of satellite navigation and communication. Estimating the direction of satellite interference sources can help in identifying multipath interference signals and in utilizing beamforming technology to eliminate interference [8][9][10], thus improving the signal's robustness.
In recent years, research on interference monitoring systems in the field of satellite navigation has made some progress based on radio interference monitoring. The mainstream interference monitoring systems can be classified into airborne and ground monitoring platforms [11], with the aim of identifying the source of the interference [12]. Some researchers have designed ground interference monitoring platforms consisting of ground monitoring stations and monitoring vehicles, which can effectively measure the direction of interference signals and locate them [13]. For airborne interference monitoring platforms, unmanned aerial vehicles are used to locate the interference source, which can eliminate the influence of terrain and effectively approach the interference source to a certain extent [14][15][16]. In the field of satellite navigation, research on interference source localization algorithms includes the use of adaptive filtering recursive least squares combined with generalized cross-correlation methods for delay estimation, as well as the use of TDOA algorithms for range localization of interference sources [17]. In range localization methods, the two-step localization approach is mostly used, and its localization accuracy is affected by parameter estimation. In addition, there is a direct location method (Despreading Direct Position Determination, DS-DPD [18]), which uses navigation signal characteristics to fuse TOA, Doppler frequency shift, and spreading sequences, aiming at scenarios with multiple spoofing interference sources. Although this method has high localization accuracy, it has significant computational complexity.
With the development of array antenna technology [19], research on the direction finding and localization of satellite navigation interference sources has become a popular field.
Currently, space filtering [20][21][22][23], adaptive beamforming [24][25][26], and DOA estimation technology [27][28][29] based on array antenna technology have begun to be applied in the field of satellite navigation interference monitoring and anti-interference. Satellite navigation signals are affected by jamming interference signals, which disrupt the receiver's normal functioning by transmitting high-power noise signals with a certain bandwidth. Additionally, these interference signals are easily detectable [30]. Spoofing interference signals simulate actual satellite signals and generate similar signals to mislead the receiver into tracking the fake signal. In addition, spoofing equipment can also forward actual signals, increase signal delay, and achieve the intended deception. Ultimately, these spoofing interference signals can cause errors in the receiver's positioning results [31]. Regarding the characteristics of DOA estimation for satellite navigation interference sources, firstly, due to the similarity between spoofing signals and actual signals and the low SNR, DOA estimation algorithms need to work properly under low SNR conditions to measure the direction of the spoofing interference signal. Secondly, satellite navigation signals are multi-frequency signals, which can lead to multiple interference sources being distributed across multiple frequency bands. Furthermore, there may be multiple interference sources, including multipath interference, within a single frequency band. This requires DOA estimation algorithms to have the ability to work with multiple signals and frequencies as well as coherent signals. Considering the above-mentioned characteristics of DOA estimation for satellite navigation interference sources, existing classical DOA estimation algorithms such as the multiple signal classification (MUSIC) algorithm [32] and the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm [33] can only achieve high-precision direction finding under conditions of high SNR and multiple snapshots. The traditional MUSIC algorithm processes the array signal in the time domain and eigendecomposes the signal covariance matrix to obtain the signal subspace corresponding to the signal component and the noise subspace orthogonal to the signal component. Utilizing the orthogonality of the two subspaces, it estimates the direction of incidence of the signal. However, the algorithm is seriously affected by noise, resulting in poor direction-finding accuracy under low SNR conditions. In addition, since interference sources with multiple frequencies can affect the selection of inter-element spacing and thus the accuracy of DOA estimation, traditional algorithms rarely explore interference sources with multiple sources and frequencies. In recent years, some researchers have adopted sparse reconstruction methods based on compressive sensing theory for DOA estimation signal models with spatial sparsity characteristics under low SNR conditions [34]. This method has been applied to DOA estimation of interference sources affecting satellites and does not require prior knowledge of the number of interference sources. It achieves DOA estimation with fewer snapshots and improves estimation accuracy under low SNR conditions. In addition, this method can achieve high-accuracy and high-resolution DOA estimation for multiple interference sources. However, this method does not consider the case where multiple interference sources are located at different frequency points.
multi-frequency signal DOA estimation, a sparse Bayesian learning-based multi-frequency DOA estimation algorithm has been proposed for passive radar [35], and a multi-frequency sparse reconstruction DOA algorithm based on mutual correlation array structure has also been proposed [36].The above algorithms are based on optimization theory and changes in array structure to achieve multi-frequency signal DOA estimation.Regarding the DOA estimation of coherent signals, the spatial smoothing algorithm was initially proposed for uniform linear arrays [37] to achieve coherent signal separation by reducing the degrees of freedom of the array.In addition, a special array antenna model was used to reconstruct the Toeplitz matrix and achieve coherent signal separation [38].Previous methods were unable to simultaneously perform DOA estimation on multi-frequency signals.They performed poorly in low SNR conditions and in estimating the DOA of coherent signals.They were unable to meet the requirements under all three aforementioned conditions simultaneously.The main contributions of this study are as follows: An algorithm for DOA estimation for satellite navigation interference sources under multi-frequency points and low signal-tonoise ratio conditions is proposed.The innovative time-frequency conversion processing of the array-received signal reduces the influence of noise and effectively improves the DOA estimation accuracy of the algorithm under the condition of low SNR, in addition to the excellent DOA estimation results for the same-frequency coherent signal.The feasibility of the algorithm is verified by simulation experiments.
The following sections discuss our work in detail. Section 2 introduces the basic mathematical model of DOA estimation algorithms and the basic steps of our proposed FDCMR method. In Section 3, the feasibility of our FDCMR method is validated through simulation experiments, and a comparative analysis is conducted with three other algorithms to further demonstrate the superiority of our algorithm. Additionally, different interference scenarios are simulated for algorithm testing, such as scenarios with multiple coherent sources at the same frequency point, scenarios with multiple coherent sources at multiple frequency points, and scenarios with both coherent and incoherent sources at multiple frequency points. The final section provides a discussion and conclusion.
Introduction to the Proposed Algorithm
The paper presents an algorithm for DOA estimation based on frequency domain covariance matrix reconstruction (FDCMR). This approach is designed to address the detection of interfering sources' DOA in satellite navigation scenarios, as shown in Figure 1. First, the array's received signals undergo processing, and the signal is transformed from the time domain to the frequency domain with the fast Fourier transform (FFT). The frequency spectrum identifies the peak frequency of each target signal, allowing for the extraction of the target signal peak data. This extraction step efficiently removes much of the noise influencing the data. Next, the extracted target signal data are processed to create the frequency domain covariance matrix of the signals received by the array. Finally, a spatial spectrum search is used to obtain the DOA estimates.
Signal Model
Assuming the receiving antenna array consists of N linearly arranged antennas, forming a uniform linear array with inter-antenna spacing of d = λ/2, as shown in Figure 2. When Q signals s_q(t) (with the wavelength corresponding to the maximum frequency signal being λ) impinge on the uniform linear array at angles θ_q, q = 1, ..., Q with respect to the normal direction of the array, the signal received by the i-th antenna in the array can be represented as

x_i(t) = Σ_q s_q(t) e^(j2π(i−1)d sin θ_q / λ) + n_i(t), i = 1, ..., N,

where n_i(t) is the Gaussian white noise signal. The received signals of the array antennas can be represented in matrix form as

x(t) = A s(t) + n(t),

where x(t) = [x_1(t), ..., x_N(t)]^T, s(t) = [s_1(t), ..., s_Q(t)]^T, and n(t) = [n_1(t), ..., n_N(t)]^T. A = [a(θ_1), ..., a(θ_Q)] is the array antenna steering matrix, whose columns are the steering vectors a(θ_q) = [1, e^(j2πd sin θ_q / λ), ..., e^(j2π(N−1)d sin θ_q / λ)]^T. M is the number of signal samples, also known as snapshots, and the received signal matrix for the array is an N × M dimensional matrix. The covariance matrix of the received signal in the array is represented as

R = E[x(t) x^H(t)] = A R_s A^H + R_n,

where R_s and R_n are the signal covariance matrix and noise covariance matrix, respectively. In the actual data-processing process, the covariance matrix is obtained through maximum likelihood estimation and is represented as

R̂ = (1/M) Σ_m x(m) x^H(m), m = 1, ..., M.
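To make the model concrete, the following Python sketch (NumPy) simulates the N × M received-signal matrix x(t) = A s(t) + n(t) for a uniform linear array, using the steering-vector definition above and the parameter values quoted later in the experiments (N = 8, M = 256, GPS L1 and BDS B3 frequencies at 20° and 50°). It is an illustration rather than the authors' code: the sampling rate, the complex-tone signal model, and the per-source noise scaling are assumptions introduced here.

import numpy as np

rng = np.random.default_rng(0)

# Array and experiment parameters (N, M, frequencies and angles follow the paper's setup)
N, M = 8, 256                                 # antennas, snapshots
f = np.array([1575.42e6, 1268.52e6])          # GPS L1 and BDS B3 carrier frequencies (Hz)
theta = np.deg2rad([20.0, 50.0])              # incident angles of the two signals
c = 3.0e8
lam = c / f.max()                             # wavelength of the maximum-frequency signal
d = lam / 2.0                                 # inter-antenna spacing d = lambda/2
fs = 4.0e9                                    # sampling rate: an assumption for this sketch

# Steering matrix A (N x Q) built from a(theta_q) as defined above
i_idx = np.arange(N)[:, None]
A = np.exp(1j * 2.0 * np.pi / lam * i_idx * d * np.sin(theta)[None, :])

# Incident signals modelled as unit-amplitude complex tones at their carrier frequencies
t = np.arange(M) / fs
S = np.exp(1j * 2.0 * np.pi * f[:, None] * t[None, :])      # Q x M

snr_db = -10.0                                # per-source SNR (assumption)
noise_power = 10.0 ** (-snr_db / 10.0)
noise = np.sqrt(noise_power / 2.0) * (rng.standard_normal((N, M))
                                      + 1j * rng.standard_normal((N, M)))

X = A @ S + noise                             # N x M received-signal matrix x(t)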
Method in This Paper (FDCMR)
To achieve DOA estimation of interfering signals under multi-frequency and low SNR conditions, this paper introduces a method that applies an FFT to the received signals, reconstructs the covariance matrix of the received signals, and performs a spatial spectrum search to achieve the final DOA estimation.
For the N × M-dimensional array-received signal x(t) in Equation (2), where each row corresponds to the M snapshots received by one antenna, each row is subjected to an FFT, resulting in the frequency domain signal

X_i(k) = Σ_m x_i(m) e^(−j2π(m−1)(k−1)/M), k = 1, 2, ..., M, i = 1, 2, ..., N,

where M is the number of samples per signal snapshot and N represents the number of antennas, which is the number of rows in the received signal matrix x(t). Collecting the N row spectra gives the frequency domain array signal reception matrix X_FFT, which represents the discrete spectral signal and is composed of the noise spectrum and the signal spectrum. When there are Q incident signals at different frequencies, we can find Q peaks and the corresponding useful signal spectra in the frequency domain received signals after the FFT at each antenna; the other points are useless noise spectra.
In combination with Equation (2), since the steering vectors are independent of time, Equation (16) can be written in terms of the steering vectors and the spectra of the incident signals, where q = 1, ..., Q and n = 1, ..., N, a_n is the n-th row of the array antenna steering matrix A, and s_i(m) is the i-th column of the target incident signal matrix s(t). Consequently, the positions of the spectral peaks in the discrete spectral signals are identical across the N antennas; for each incident signal, the corresponding peak position in the frequency domain is the same at every antenna. By selecting the Q columns of the discrete spectral signals corresponding to the Q spectral peaks from X_FFT in Equation (10), we can form the frequency domain signal matrix

X_F = [X_FFT(:, f_1), ..., X_FFT(:, f_Q)],

where f_q ∈ {1, 2, ..., M}, q = 1, ..., Q, so the formed X_F is an N × Q-dimensional matrix. Next, we need to reconstruct R, the covariance matrix of the array-received signals; R is a Hermitian matrix with R = R^H. The covariance matrix R_F is reconstructed from the previously extracted frequency domain signal matrix X_F as

R_F = (1/N) X_F X_F^H,

where N represents the number of array antennas. After reconstructing the covariance matrix R_F, the target signal and noise remain mutually independent in the frequency domain, so the frequency domain covariance matrix can be expressed as the sum of the autocorrelation of the frequency domain target signal and that of the frequency domain noise signal:

R_F = A R_s(f) A^H + σ²_f I.

The signal s(t) is transformed into s(f) through the FFT, σ²_f represents the power component of the noise signal at the frequency points of the target signal in the frequency domain, and R_s(f) = s(f) s^H(f) is a diagonal matrix whose diagonal elements represent the power components of the signal at the frequency points in the frequency domain.
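Continuing the sketch started after the signal model, the following lines carry out this step on the simulated matrix X: each antenna's snapshots are transformed by an FFT, the Q strongest shared spectral peaks are located, the corresponding columns form X_F, and R_F is reconstructed from it. The 1/N scaling echoes the role of N mentioned above, but the exact normalization is an assumption on my part; it does not change the eigenvector subspaces.

Q = 2                                          # number of target frequency points
X_fft = np.fft.fft(X, axis=1)                  # row-wise FFT: N x M frequency-domain matrix

# Peak positions are shared across antennas, so average the magnitude spectra
# and take the Q strongest bins (a real implementation would use proper peak detection).
spectrum = np.abs(X_fft).mean(axis=0)
peak_bins = np.argsort(spectrum)[-Q:]

X_F = X_fft[:, peak_bins]                      # N x Q frequency-domain signal matrix
R_F = (X_F @ X_F.conj().T) / N                 # reconstructed covariance matrix (Hermitian, N x N)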
Next, the eigenvalue decomposition can be performed on R_F:

R_F = U_F Σ_F U_F^H,

where U_F = [e_1, e_2, ..., e_N] is the matrix of eigenvectors and the diagonal matrix Σ_F is composed of the corresponding eigenvalues. Arranging the eigenvalues in descending order, the Q largest eigenvalues correspond to the signal components and the remaining N − Q eigenvalues to the noise. Define two diagonal matrices accordingly: Σ_FS is a diagonal matrix composed of the larger eigenvalues, and the corresponding eigenvectors U_FS = [e_1, e_2, ..., e_Q] form the frequency domain signal subspace; Σ_FN is a diagonal matrix composed of the smaller eigenvalues, and the corresponding eigenvectors U_FN = [e_{Q+1}, e_{Q+2}, ..., e_N] form the frequency domain noise subspace. Therefore, the frequency domain covariance matrix R_F can be decomposed as

R_F = U_FS Σ_FS U_FS^H + U_FN Σ_FN U_FN^H.

In the ideal situation, the frequency domain signal subspace and the frequency domain noise subspace are mutually orthogonal, meaning that the steering vectors are also orthogonal to the frequency domain noise subspace.
The presence of noise causes them to be not completely orthogonal; therefore, the implementation is achieved through a minimization search,

θ̂ = arg min_θ a^H(θ) U_FN U_FN^H a(θ),

and the spectral estimation formula is therefore given by

P(θ) = 1 / (a^H(θ) U_FN U_FN^H a(θ)).

The above content provides a detailed introduction to the DOA estimation algorithm based on the reconstruction of the frequency domain covariance matrix proposed in this paper. Table 1 summarizes the main steps of the FDCMR.

Table 1. Main steps of the FDCMR.
Step 1. Perform an FFT on each row of the received signal matrix x(t) to obtain the frequency domain signals X_FFT ∈ C^(N×M); each row of x(t) corresponds to the received signal data from one antenna (Equation (9)).
Step 2. Extract the Q spectral peaks from each row of the frequency spectrum of X_FFT to form the frequency domain signal matrix X_F.
Step 3. Reconstruct the covariance matrix R_F of the received signals using Equation (16).
Step 4. Perform the eigenvalue decomposition of R_F and obtain the DOA estimates from the spatial spectrum search.
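A sketch of Step 4, continuing from the R_F computed above (and reusing lam, d, i_idx, N, Q from the earlier sketch): the eigenvectors of the N − Q smallest eigenvalues of R_F form the frequency-domain noise subspace, and the DOA estimates are taken from the local maxima of the spatial spectrum P(θ) = 1 / (a^H(θ) U_FN U_FN^H a(θ)). The 0.1° grid step is an arbitrary choice, not a value from the paper.

# Eigendecomposition of R_F; eigh returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(R_F)
U_FN = eigvecs[:, : N - Q]                     # noise subspace: eigenvectors of the N-Q smallest eigenvalues

# Spatial spectrum search over a grid of candidate angles
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.1))
steer = np.exp(1j * 2.0 * np.pi / lam * i_idx * d * np.sin(grid)[None, :])   # N x G steering vectors
P = 1.0 / np.sum(np.abs(U_FN.conj().T @ steer) ** 2, axis=0)

# Take the Q largest local maxima of P as the DOA estimates
peaks = [g for g in range(1, len(P) - 1) if P[g - 1] < P[g] > P[g + 1]]
peaks = sorted(peaks, key=lambda g: P[g], reverse=True)[:Q]
print("estimated DOAs (deg):", sorted(np.rad2deg(grid[g]) for g in peaks))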
Results
To validate the performance of the DOA estimation algorithm, the root mean square error (RMSE) is used to evaluate the algorithm performance:

RMSE = √[ (1/(DQ)) Σ_q Σ_i ( θ̂_{q,i} − θ_q )² ],

where θ̂_{q,i} represents the estimated angle of the q-th incident signal in the i-th Monte Carlo experiment, θ_q is the true angle, and a total of D = 300 Monte Carlo experiments are conducted. Q represents the number of incident signal sources. We assume that there are Q = 2 incident signals, with signal frequencies of f_1 = 1575.42 MHz for the GPS navigation satellite L1 frequency and f_2 = 1268.52 MHz for the BeiDou navigation satellite B3 frequency. The incident signal angles are θ_1 = 20° and θ_2 = 50°. The number of array antennas is N = 8, and the number of snapshots is M = 256.
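A minimal sketch of this evaluation, assuming two hypothetical helpers that are not defined in the paper: simulate(), which regenerates a noisy received-signal matrix at the configured SNR, and estimate_doa(), which wraps the peak-search step above and returns the Q estimated angles in degrees.

import numpy as np

def rmse(true_deg, d_trials, simulate, estimate_doa):
    # RMSE over d_trials Monte Carlo runs: square root of the squared angular error
    # averaged over the D trials and the Q sources (cf. the definition above).
    true_deg = np.sort(np.asarray(true_deg, dtype=float))
    sq_err = []
    for _ in range(d_trials):
        est = np.sort(np.asarray(estimate_doa(simulate())))   # sort so estimates pair with true angles
        sq_err.append((est - true_deg) ** 2)
    return float(np.sqrt(np.mean(sq_err)))

# Example call (hypothetical helpers): rmse([20.0, 50.0], 300, simulate, estimate_doa)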
Experiment 1
A comparison is presented between the spatial spectra of the algorithm proposed in this paper and those of the traditional MUSIC algorithm under low SNR conditions. The conditions of SNR at −15 dB, −5 dB, 0 dB, and 10 dB are compared individually, as demonstrated by Figure 3.
From Figure 3a, it can be observed that at an SNR of −15 dB, the traditional MUSIC algorithm fails to resolve the two incident signals correctly, and the DOA estimation results are erroneous. However, the algorithm proposed in this paper can clearly distinguish between the two incident signals. Both algorithms presented in Figure 3b–d successfully resolve the two signals. However, the traditional MUSIC algorithm exhibits wider spatial spectrum peaks, which indicates lower signal resolution compared to the method proposed in this paper.
Experiment 2
To validate the success probability of the algorithm for DOA estimation under various SNR conditions, a series of Monte Carlo experiments is conducted. The SNR is increased gradually from −15 dB to 10 dB with an interval of 2 dB, and each experiment is repeated 300 times. For reference, the success probabilities of three other methods, namely the conventional MUSIC algorithm, the ESPRIT algorithm, and a compressed sensing-based orthogonal matching pursuit algorithm [39], are also calculated under different SNR conditions. The experimental results are shown in Figure 4.
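A sketch of the Monte Carlo sweep, reusing variants of the hypothetical simulate() and estimate_doa() helpers from above (with simulate(snr_db) now taking the SNR as an argument); the success criterion (every DOA error below 2°) is an assumption, since the paper does not state the threshold it uses.

import numpy as np

def success_probability(true_deg, snr_grid_db, trials, simulate, estimate_doa, tol_deg=2.0):
    # Fraction of trials, per SNR value, in which every DOA error is below tol_deg.
    true_deg = np.sort(np.asarray(true_deg, dtype=float))
    rates = []
    for snr_db in snr_grid_db:
        hits = 0
        for _ in range(trials):
            est = np.sort(np.asarray(estimate_doa(simulate(snr_db))))
            if np.all(np.abs(est - true_deg) <= tol_deg):
                hits += 1
        rates.append(hits / trials)
    return rates

# Example call: success_probability([20.0, 50.0], range(-15, 11, 2), 300, simulate, estimate_doa)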
From Figure 4, it can be observed that the proposed method achieves higher accuracy in DOA estimation under low SNR conditions. In contrast, the traditional MUSIC and ESPRIT algorithms experience a rapid decline in DOA estimation accuracy when the SNR falls below 0 dB. When the SNR drops below −10 dB, these two algorithms become practically ineffective. On the other hand, the compressed sensing-based DOA estimation algorithm offers the advantage of reduced computational time since it only selects a single snapshot signal for DOA estimation. However, its accuracy is compromised under low SNR conditions, resulting in a significant decrease in success probability compared to the other three estimation algorithms.
Experiment 3
In addition to the success rate of DOA estimation, the accuracy of the algorithm's DOA estimation is also a crucial factor. The RMSE mentioned in Equation (27) is used as the performance evaluation criterion for the algorithm. The experimental conditions are the same as those used to validate the success probability of DOA estimation in the previous analysis. Since the algorithms in the control group exhibit lower accuracy in DOA estimation under low SNR conditions, the RMSE is calculated for SNR values ranging from 0 dB to 10 dB. Experimental results are presented in Figure 5.
From Figure 5, it is evident that the proposed method exhibits a smaller RMSE compared to the traditional MUSIC algorithm, ESPRIT algorithm, and compressed sensing-based DOA estimation algorithm. This significant improvement in DOA estimation performance can meet the requirements of practical applications.
To verify the higher accuracy of the proposed algorithm in DOA estimation under low SNR conditions, the SNR is set in the range of −15 dB to 10 dB. In each simulation experiment, the Monte Carlo experiment is repeated 300 times, and a box plot is generated to visualize the statistical distribution of the DOA estimation results. From Figure 6, it can be observed that at an SNR of −15 dB, the DOA estimation results are within the theoretical range of ±3°, with the majority concentrated within an error range of ±1°. As the SNR increases, the error range gradually narrows.
Experiment 4
To further validate the robustness of the algorithm in DOA estimation for different angles, the experiment analyzes the change in the incident direction of two signals with different frequencies, ranging from −60° to 60° with an interval of 2°. The SNR is set to −10 dB. The DOA estimation results are shown in Figure 7.
From Figure 7, it can be observed that the proposed algorithm exhibits strong robustness. Within the range of [−60°, 60°], even under low SNR conditions, the error remains within a range of ±2°, with a few points reaching ±3°. In this experiment, a relatively low SNR is set, which introduces significant noise. If the interference from noise is further reduced, the error in the experimental results may decrease even further.
To validate that the algorithm is suitable for DOA estimation applications in satellite interference scenarios, considering the characteristics of low SNR, multiple frequency points, and phase coherence, simulations are conducted under the following application scenarios.
Scenario 1: Multiple Coherent Sources at the Same Frequency
The scenario assumes there are Q = 2 interference signals, both of which are coherent signals with a frequency of f_1 = 1268.52 MHz, corresponding to the BeiDou Navigation Satellite System (BDS) signal at the B3 frequency point. The incident angles of the two signals are θ_1 = 20° and θ_2 = 50°, respectively. The array consists of N = 8 antenna elements, and the number of snapshots is M = 256. The SNR is −10 dB.
From Figure 8, it can be observed that, because the interference sources are coherent signals, the frequency domain covariance matrix of the received signal is no longer full rank. In this case, rank{R} = Q − 1, indicating a rank-deficient matrix. When performing the eigenvalue decomposition of the covariance matrix, only Q − 1 signal eigenvectors can be obtained. In this experiment, Q = 2 is set, so only one eigenvalue of the covariance matrix is non-zero. Consequently, when constructing the signal subspace from the eigenvectors, the complete signal subspace cannot be obtained since its dimension is smaller than the number of sources. However, by performing time-frequency transformations on the received signals from the array, the data at the target frequency point already contain information about the two coherent sources with the same frequency but different incident angles. Therefore, even if the dimension of the signal subspace is smaller than the number of sources, the proposed algorithm can still distinguish between the two incident directions, as shown in the spatial spectrum of Figure 9.
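The rank argument can be checked numerically. The following self-contained sketch (an illustration, not the authors' code) compares the rank of the noise-free covariance for two independent sources with that for two fully coherent sources, where coherence is modeled by making the second waveform a scaled copy of the first.

import numpy as np

rng = np.random.default_rng(1)
N, M, d_over_lam = 8, 256, 0.5
idx = np.arange(N)[:, None]

def steer(theta_deg):
    # Steering vector of an N-element half-wavelength-spaced uniform linear array
    return np.exp(1j * 2.0 * np.pi * d_over_lam * idx * np.sin(np.deg2rad(theta_deg)))

a1, a2 = steer(20.0), steer(50.0)
s1 = rng.standard_normal(M) + 1j * rng.standard_normal(M)
s2 = rng.standard_normal(M) + 1j * rng.standard_normal(M)

incoherent = a1 @ s1[None, :] + a2 @ s2[None, :]          # two independent waveforms
coherent = a1 @ s1[None, :] + a2 @ (0.8 * s1)[None, :]    # second source is a scaled copy of the first

for name, Xs in [("incoherent", incoherent), ("coherent", coherent)]:
    R = Xs @ Xs.conj().T / M
    print(name, "rank of noise-free covariance:", np.linalg.matrix_rank(R))
# Two fully coherent sources give rank 1 (Q - 1); independent sources give rank 2.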
Scenario 2: Multiple Coherent Sources at Multiple Frequency Points

Assuming there are Q = 4 interference signals, the first two interference signals are coherent signals with a frequency of f_1 = 1575.42 MHz, corresponding to the GPS navigation satellite signal at the L1 frequency point. The incident angles of these two signals are θ_1 = 20° and θ_2 = 50°, respectively. The other two interference signals are also coherent signals with a frequency of f_2 = 1268.52 MHz, corresponding to the BDS signal at the B3 frequency point. The incident angles of these two signals are θ_3 = −20° and θ_4 = −50°, respectively. The array consists of N = 8 antenna elements, and the number of snapshots is M = 256. The SNR is −10 dB.

When there are interference signals at two frequency points, each with two coherent signals, the rank of the received signal covariance matrix can be determined as 2, according to the formula. Therefore, in Figure 10, the eigenvalue matrix of the frequency domain covariance matrix shows two eigenvalues, corresponding to the number of frequency points. Figure 11 shows the spatial spectrum of the DOA estimation using the proposed algorithm. It clearly distinguishes the two coherent signals at each of the two frequency points and achieves accurate DOA estimation with minimal error.

Scenario 3: Coexistence of Incoherent and Coherent Sources at Multiple Frequency Points

Assuming there are Q = 3 interference signals, the first two interference signals are coherent signals with a frequency of f_1 = 1575.42 MHz, corresponding to the GPS navigation satellite signal at the L1 frequency point. The incident angles of these two signals are θ_1 = 20° and θ_2 = 50°, respectively. The last interference signal is an incoherent signal with a frequency of f_2 = 1268.52 MHz, corresponding to the BDS signal at the B3 frequency point. The incident angle of this signal is θ_3 = −20°. The array consists of N = 8 antenna elements, and the number of snapshots is M = 256. The SNR is −10 dB.

In the case where there are interference signals at two frequency points, with one frequency point containing two coherent signals, the rank of the frequency domain covariance matrix of the received signal is rank{R} = Q − 1 = 2. Figure 12 shows two peaks, which further confirms that the eigenvalues of the frequency domain covariance matrix of the array-received signals are related to the number of target frequency points. Figure 13 represents the spatial spectrum of the DOA estimation using the proposed algorithm. Despite the interference signals at 20° and 50° being two coherent signals at the same frequency point, their amplitudes in the spatial spectrum are relatively smaller compared to the signals at the other frequency point. However, this does not affect the accuracy of the DOA estimation results.
Discussion
In this paper, the FDCMR method is proposed to address the following challenges in the field of satellite interference source DOA estimation. Spoofing interference signals are similar to the original navigation satellite signals and exhibit high levels of camouflage; thus, the DOA estimation algorithm must be able to perform robustly even in low SNR conditions in order to effectively handle such interference signals. In addition, multiple frequency point interference signals and coherent signals exist in satellite navigation systems, so the DOA estimation algorithm should be capable of accurately estimating the DOA of multiple coherent signals at various frequency points.

This paper presents an algorithm for fast and accurate estimation of DOA for multi-frequency coherent signals under low SNR conditions. The algorithm applies an FFT to the array-received signals, reconstructs the frequency domain covariance matrix, and performs a spatial spectrum search to obtain the DOA estimates. The feasibility of this method in the estimation of DOA for multi-frequency coherent signals has been verified through simulations involving scenarios with multiple coherent sources at the same frequency, multiple coherent sources at multiple frequencies, and the coexistence of multiple coherent sources and incoherent sources at multiple frequencies. By using the same frequencies as the satellite navigation signals, the method's feasibility in the estimation of DOA for multi-frequency coherent signals has been confirmed. Moreover, in the scenario of DOA estimation for satellite navigation interference sources, the presence of multi-frequency coherent signals, such as multipath interference signals, suppression interference signals at different frequencies, and deception interference signals, should also be considered. Therefore, this method is promising for the estimation of DOA in satellite navigation interference source scenarios.
Figure Captions

Figure 1. The block diagram of the FDCMR algorithm.
Figure 2. Schematic diagram of the array structure.
Figure 4. Comparison of DOA estimation accuracy of different methods under different SNR conditions.
Figure 5. Comparison of RMSE for DOA estimation of different algorithms under different SNR conditions.
Figure 6. Statistical chart of DOA estimation results using our algorithm under different SNR conditions. Boxes represent the 25th and 75th percentiles, and the central line represents the median. (a) Incident angle θ_1 = 20°, (b) incident angle θ_2 = 50°.
Figure 7. DOA estimation results and errors for different incident signal angles. (a) DOA estimation results and errors for incident signal 1. (b) DOA estimation results and errors for incident signal 2.
Figure 9. The spatial spectrum of interference scenario 1.
Figure 13. The spatial spectrum of interference scenario 3.
"Engineering",
"Physics"
] |
Fighting the good cause: meaning, purpose, difference, and choice
Concepts of cause, choice, and information are closely related. A cause is a choice that can be held responsible. It is a difference that makes a difference. Information about past causes and their effects is a valuable commodity because it can be used to guide future choices. Information about criteria of choice is generated by choosing a subset from an ensemble for ‘reasons’ and has meaning for an interpreter when it is used to achieve an end. Natural selection evolves interpreters with ends. Surviving genes embody a textual record of past choices that had favorable outcomes. Consultation of these archives guides current choices. Purposive choice is well-informed difference making.
event occurs at the moment of Tristram's conception when his mother asks his father ''Pray, my Dear, have you not forgot to wind up the clock?'' This question ''scattered and dispersed the animal spirits, whose business it was to have escorted and gone hand in hand with the HOMUNCULUS, and conducted him safe to the place destined for his reception.'' From this minor, but far from inconsequential, perturbation followed many oddities of Tristram's character.
It is not implausible, indeed it is probable, that whatever my father and mother were thinking during the consummatory act before my conception had an influence on their posture and on which of the myriad sperm in my father's ejaculate won the race to the ovum in my mother's oviduct. 'Replaying the tape of life' retells a story in every detail because the sequence of causes remains unchanged. But the first time the tape was played, there was no way of knowing, until it happened, which of my father's sperm would fertilize my mother's egg. There is a single causal narrative out of the past but a beyond astronomical proliferation of possibilities into the future. One can explain with much greater confidence than one can predict.
Taking a step further back from my conception, a complex convergence of molecular events determined the location of chiasmata in the spermatocyte that gave rise to my haploid paternal progenitor. If any one of thirty-odd chiasmata had occurred a mere megabase to either side, then the child conceived would not have inherited my particular set of genes, and the same will have been true of the conception of every one of my ancestors. But a molecular explanation of the location of untold chiasmata would comprise only an infinitesimal part of what a complete causal account of my ancestry would entail. My father's father was an ambulance driver at the second battle of Villers-Bretonneux. So, an account of his survival, where so many others died, would need to explain the trajectories of innumerable projectiles and their fragments, and so on to the endlessly disputed causes of the First World War.
The point of this reductio ad absurdum is that, while all evolutionary processes are, in principle, reducible to physical causes, no feasible account can be causally complete. Every story needs a place to begin which leaves many things unsaid. So too, all scientific explanations include items that, for present purposes, are accepted without explanation.
Aristotle redux
It were infinite for the law to judge the cause of causes, and their impulsions one of another; therefore it contenteth itself with the immediate cause, and judgeth of the acts by that, without looking to any further degree. (Bacon 1596)

In pre-classical Greek, aition and aitia had connotations of responsibility, guilt, blame, and accusation (Pearson 1952; Frede 1980). Aristotle's aitia was translated into classical Latin causa, a word that could refer to a lawsuit as in nemo iudex in causa sua. English cause was adopted from medieval Latin around AD 1300 and retains legal uses as in show cause. A similar association of cause and culpability occurs in Germanic languages. German Ursache (cause) is related to Anglo-Saxon sake as in for the sake of. Sake could refer to a lawsuit, complaint, accusation, or guilt. Thus, concepts of cause appear to have evolved from proto-legal notions of blameworthiness. A cause was something that could be held responsible.
Aristotle recognized four kinds of aitia, traditionally translated as material, efficient, formal, and final causes. Bacon (1605/1885) embraced material and efficient causes as the proper domain of physics but banished formal and final causes to the realm of metaphysics (p. 114). Aristotelian pluralism was supplanted by a monistic concept of causation of which efficient cause was the dynamical aspect and material cause the physical substrate. In the new mechanical philosophy, form lacked independent potency but was ''confined and determined by matter'' (p. 115). Final causes were disparaged as an encumbrance to the advancement of learning, as ''remoraes and hindrances to stay and slug the ship from further sailing'' (p. 119).
The fundamental incompleteness of all causal stories has coexisted with faith in explanatory reduction because of scientists' confidence that a physical explanation could, in principle, be given of the things that are left unexplained in each particular causal account. For logical consistency, it should be scientifically and philosophically legitimate to invoke things that look like formal or final causes if these could, in principle, be explained by physical and material causes.
The intent of this paper was to defend the use of formal causes (information) and final causes (functions) in evolutionary explanation, but the paper has evolved to address broader questions. Formal causes will be seen as abstractions of material causes and final causes as an efficient way of talking about efficient causes. Form can be grounded in material cause because the matter of evolved beings possesses intricate fine structure that embodies experience of what has worked in the past. Purpose can be grounded in efficient cause because current means are explained by past ends via the recursive physical process we call natural selection. At times during the presentation of formal causes I will mention ends and purposes without yet having justified that language. I ask that these questions be held in abeyance until my discussion of final causes.
Eggs and chickens
Let's think of eggs.
They have no legs.
Chickens come from eggs
But they have legs.
The plot thickens:
Eggs come from chickens,
But have no legs under 'em.
What a conundrum! (Nash 1936)

Consider a causal chain: A causes B causes C causes D causes E. Prior things cause posterior things. C is an effect of A and B but a cause of D and E. So much is simple. But what happens when things recur? … A i-1 causes B i-1 causes C i-1 causes D i-1 causes E i-1 causes A i causes B i causes C i causes D i causes E i causes A i+1 causes B i+1 causes C i+1 causes D i+1 causes E i+1 … where the recursion continues into the indefinite past and indefinite future. Each type occurs both before and after each other type. A token is either cause or effect of another token (it cannot be both), but cause and effect are inextricably entangled once one attempts to generalize and describe lawful relations among types. Types are both causes and effects of each other (and of themselves). A linear chain was chosen here for simplicity of exposition, but similar arguments could be developed for multidimensional webs. When an amplifier feeds back, what sound is input and what is output? 'Self-evident' distinctions between cause and effect are far from obvious in recursive processes. As one moves back along a chain of physical causation, one encounters things that resemble things to be explained. Eggs produce chickens and chickens produce eggs. Genes are causes of phenotypes and phenotypes causes of which genes replicate (Haig 1992).
A phenotypic effect (P) may be viewed as both a cause and consequence of a genotypic difference (G) when both are considered as types. A complete causal account of P i (subscripts indicate tokens) would include many prior occurrences of P plus many prior occurrences of G and would resemble a complete causal account of G i . If P i-1 causes G i causes P i causes G i+1 , then it is a matter of preference whether P is considered the cause and G the effect or the other way round. A molecular biologist argues from G to P when explaining how gene expression determines phenotype whereas an evolutionary biologist argues from P to G when explaining why a gene has its particular effects. The former mode of explanation is commonly accepted as unproblematic whereas the latter is rejected as teleological and unscientific. But this is no more than a convention of scientific story-telling. Phenotypes are among the efficient causes of genotypes (the central dogma of molecular biology notwithstanding).
Two other points are worth making briefly. First, a recursive non-equilibrium system must be thermodynamically open because a closed system cannot return to an earlier state. Second, evolution requires imperfections in recursion or nothing can change.
Retrorecursion
Information can exist only as a material pattern, but the same information can be recorded by a variety of patterns in many different kinds of material. A message is always coded in some medium, but the medium is really not the message. (Williams 1992, p. 10)

Most eukaryotic genomes harbor retroelements that replicate DNA via RNA intermediates or, what amounts to the same thing, replicate RNA via DNA intermediates. Nothing structural persists in this process. DNA is 'copied' into RNA and then RNA is 'copied' into DNA at a new location in the genome (Finnegan 2012).
An LTR retrotransposon can serve as a paradigm. In its guise as double-stranded genomic DNA, the retrotransposon is transcribed by host-encoded RNA polymerase from an antisense-strand of DNA into a sense-strand of RNA. The resulting RNA can have two functional fates: it can be processed into messenger RNA (mRNA) that is translated by ribosomes into gag and pol proteins; or it can be used as genomic RNA that is packaged with pol and gag proteins as an infective particle. Pol is a remarkable gadget: acting as a reverse transcriptase, pol synthesizes an antisense-strand of DNA complementary to the genomic RNA; acting as an RNAse, pol degrades the RNA template; acting as a DNA polymerase, pol synthesizes a sense-strand of DNA from the antisense-strand; and acting as an integrase, pol inserts the double-stranded DNA into a new site in 'host' DNA (Finnegan 2012). A sense-strand of RNA can be used as a template to make proteins (translation) or antisense DNA (transmission) but the same copy cannot perform both functions.
Retrotransposons trace their origins back before the beginning of cellular life but an active retrotransposon cannot reside long at any one place in the genome. At each location where its DNA is inserted, natural selection favors mutations that inactivate and degrade retroelement functions because retrotransposition is costly to organismal fitness. Nevertheless retrotransposition persists because reverse-transcribed DNA inserts at new sites faster than mutations degrade source DNA. Mutations that enhance transposition disperse to new sites while mutations that reduce transposition accumulate at old sites. An active element must stay one jump ahead of inactivating mutations. It is a restless wanderer, leaving crumbling genomic footprints at each step along the way (Haig 2012b, 2013a).

Retrotransposition involves changes in substance and material form. Consider a nine-nucleotide segment of gag. 5′-CGCACCCAT-3′ (antisense DNA) is transcribed into 5′-AUGGGUGCG-3′ (RNA) which can be translated as methionine-glycine-alanine (peptide) or reverse transcribed as 5′-CGCACCCAT-3′ (antisense DNA). The latter is then used to synthesize 5′-ATGGGTGCG-3′ (sense DNA). Sense and antisense DNA differ, not only in the use of complementary bases, but also because complementary bases occur in reverse order relative to the sugar-phosphate backbone because of antiparallel pairing. Sense DNA and RNA differ in the substitution of thymine (T) for uracil (U) and in the use of deoxyribose rather than ribose in the backbone. RNA and peptide are chemically chalk and cheese.
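The chain of molecular representations in this example can be traced with a few lines of code. The Python sketch below simply applies the standard base-pairing and codon rules to the nine-nucleotide segment quoted above; it is an illustration added here, not part of the original text.

# Trace the nine-nucleotide gag segment through its molecular avatars.
ANTISENSE_DNA = "CGCACCCAT"                               # 5'-CGCACCCAT-3'

RNA_FROM_DNA = {"A": "U", "T": "A", "G": "C", "C": "G"}   # template DNA base -> paired RNA base
DNA_FROM_RNA = {"A": "T", "U": "A", "G": "C", "C": "G"}   # RNA base -> paired DNA base
DNA_FROM_DNA = {"A": "T", "T": "A", "G": "C", "C": "G"}   # base pairing within DNA
CODONS = {"AUG": "Met", "GGU": "Gly", "GCG": "Ala"}       # only the codons needed here

def antiparallel(seq, pairing):
    """Complementary strand written 5'->3': reverse the sequence and pair each base."""
    return "".join(pairing[b] for b in reversed(seq))

rna = antiparallel(ANTISENSE_DNA, RNA_FROM_DNA)           # transcription
peptide = "-".join(CODONS[rna[i:i + 3]] for i in range(0, len(rna), 3))   # translation
new_antisense = antiparallel(rna, DNA_FROM_RNA)           # reverse transcription
sense_dna = antiparallel(new_antisense, DNA_FROM_DNA)     # second-strand synthesis

print(rna)            # AUGGGUGCG
print(peptide)        # Met-Gly-Ala
print(new_antisense)  # CGCACCCAT  (recurrence without continuity of any material thing)
print(sense_dna)      # ATGGGTGCG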
Many things within cells are made of DNA, RNA, or protein. Many RNAs are transcribed and many proteins translated. What allows us to pick out a retrotransposon as a nameable entity from these other components and activities? What thing can be held responsible? The retrotransposon is distinguished from other cellular components because it possesses distinct criteria for evolutionary success. Sense DNA, antisense DNA, sense RNA, and peptide are linked by complex causal dependence but are structurally unrelated. Each can be considered to represent the others as material avatars of an immaterial gene. The 'information' that is the retrotransposon must repeatedly change substance and location to persist in an unbroken chain of recursive representation. There is, in principle, a complete causal account that invokes nothing but efficient and material causes, and in which there is recurrence without continuity of any material thing, but one cannot give a meaningful account of a retrotransposon without reference to its telos and eidos. The forms are shadows of shadows.
Formal causes and information
If it be true that the essence of life is the accumulation of experience through the generations, then one may perhaps suspect that the key problem of biology, from the physicist's point of view, is how living matter manages to record and perpetuate its experiences. (Delbrück 1949)

Medieval Latin informatio referred to molding or giving form to matter (Capurro and Hjørland 2003) but Anglo-Norman informacione (13th century) was a criminal investigation by legal officers. Metaphors of information abound in modern biology. Not everyone who uses them is a fool. There must be meaning behind the metaphors but precisely what has been difficult to pin down. Delbrück (1971) wrote that '''unmoved mover' perfectly describes DNA; it acts, creates form and development, and is not changed in the process.'' Biological information, whatever that may be, performs an explanatory role similar to Aristotle's eidos (Grene 1972).
An evolutionary distinction between information and the objects in which information resides has often been made. It appears in contrasts between replicators and vehicles (Dawkins 1976), information and its avatars (Gliddon and Gouyon 1989), codical and material domains (Williams 1992), and informational and material genes (Haig 1997, 2012a). In the latter formulation, material genes were physical objects but informational genes were the abstract sequences of which material genes were temporary vehicles. Material genes were identified with gene tokens and informational genes with gene types, but this is not quite right if 'type' is interpreted as a material kind. Sense DNA, antisense DNA, RNA and protein all represent an informational gene but are not molecules of one kind. Continuity resides in the recursive representation of immortal pattern by ephemeral avatars.
Shannon information quantifies the reduction of uncertainty for a receiver observing a message relative to other messages it could have been. The larger the set of possible messages the greater the reduction in uncertainty. Perhaps a better formulation would be to say that information measures the reduction of uncertainty of an interpreter observing one thing rather than other things it could have been. The interpreter uses the observation to select an interpretation from a set that matches possible interpretations to possible observations. In this formulation, the interpreter can observe the environment, or things intended to be hidden, but a message corresponds to the special case of information sent with intent.
A human genome contains 3.2 gigabases (Gb) with up to two bits of information per base (a choice from four alternatives). Therefore, a human genome contains 6.4 gigabits of information relative to the set of all possible 3.2 Gb strings. This is the reduction in uncertainty provided by a particular sequence for an interpreter who had no prior knowledge other than the length of the sequence. Every 3.2 Gb string contains the same information but most strings are meaningless (Winnie 2000; Moffatt 2011). Only an infinitesimal subset of the Vast library of 3.2 Gb sequences contains genomes that have ever existed (Dennett 1995). Other measures of Shannon information might compare the sequence to the set of all extant human genomes or to the set of all past genomes. The amount of Shannon information depends on the background knowledge of the receiver.
Information and meaning are distinct. A DNA sequence contains information that acquires meaning when the sequence is interrogated for answers to particular questions. One might use it to determine the amino acid sequence of an otherwise unknown protein or to search for the cause of genetic disease in a patient. Genomes contain clues about evolutionary history if we can only read the hints. If an individual carries the Benin sickle-cell S haplotype, then we can infer that he or she had recent ancestors who lived in West Africa and survived malaria. Other inferences can be made by comparing sequences. We compare DNA documents to reconstruct phylogenetic trees, to date times of divergence, to infer ancestral population size, or to locate regions of positive selection.
Information has meaning for an interpreter when it is used to achieve an end. The proximate end of the interpretative process is an interpretation of the information. Interpretation of one thing as another differs from simple change of one thing into another because of its end. An interpretation is intended for use but an uninterpreted change simply occurs. 4 Meaning is a property of the interpretation not of the information because the same information can mean different things to different interpreters. A sender may intend a particular interpretation, and have constructed a message accordingly, but how the message is interpreted is determined by the interpreter. An interpreter may observe more, or less, than was intended by the sender.
Meaning is extracted from a DNA sequence, represented in the output of an automatic sequencer, when a technician reads T rather than A and infers that a fetus will express hemoglobin S. The technician's end is clinical diagnosis. Meaning is extracted from the same DNA sequence, represented as an RNA message, when a ribosome incorporates valine rather than glutamate into a β-globin chain. The ribosome's end is protein synthesis. Selectively-neutral single-nucleotide polymorphisms have meaning for a geneticist who uses them to isolate a disease-causing gene but no meaning for the organisms from which they come. No meaning is extracted when DNA is eaten by a bacterium. The use of something as an object (throwing a stone), rather than as a representation (reading a stone tablet), does not count as use of information.
A pause is in order. A thing contains information when it differs from something else it could have been. Two things contain mutual information if an observer can learn about one by observing the other. This is a symmetric relation. An effect represents its cause when observation of the effect allows inference about the cause. This is an asymmetric relation: X_i represents Y_i to the extent that Y_i is causally responsible for their mutual information. A thing has meaning for an interpreter when its 'difference from something else' is used by the interpreter to achieve an end. An interpretation is a representation of the information used by the interpreter. An interpretation can be the text interpreted by another interpreter. Interpretation is recursive when interpretations return to prior forms. X and Y, considered as types, reciprocally represent each other if the token X_i represents Y_i represents X_{i-1} represents Y_{i-1}. Replication is reliable, high-fidelity, recursion of interpretation. (The game of 'Chinese whispers' shows what happens when representation is unreliable.) The text of a replicator is an interpretation of itself.

4 My account of meaning can be viewed as parallel to Peirce's (1877) account of belief. His trinity of belief, desire, and action-''our beliefs guide our desires and shape our actions''-can be loosely translated as my triad of meaning, end, and interpretation. For Peirce, beliefs were habits of mind that guided action: ''Belief does not make us act at once, but puts us into such a condition that we shall behave in some certain way, when the occasion arises.'' Represented in other words, beliefs were latent information whose meaning was expressed in conditional action to achieve a motivated end.
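The 'mutual information' invoked in the paragraph above has a standard formal counterpart; stated here as textbook background rather than as part of the essay, for discrete variables it is

\[
I(X;Y) \;=\; \sum_{x,\,y} p(x,y)\,\log_2\frac{p(x,y)}{p(x)\,p(y)} \;=\; H(X) - H(X \mid Y),
\]

and the identity I(X;Y) = I(Y;X) is the symmetry the paragraph refers to; representation adds the asymmetric requirement that one variable be causally responsible for the shared information.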
Living things are replete with reliable reciprocal representation. Each strand of the double helix represents the other. An mRNA represents the DNA from which it is transcribed and the DNA represents the mRNA. A protein represents the mRNA from which it is translated and the mRNA represents the protein. DNA represents protein and protein represents DNA. Extended phenotypes represent genotypes and genotypes represent extended phenotypes (Dawkins 1982;Laland et al. 2013a). All represent what has worked in past environments. Natural selection creates complex causal dependence between past environments and patterns and processes within cells.
Life is made meaningful by a multitude of mindless interpreters reinterpreting the molecular metaphors of other mindless interpreters. RNA polymerases transcribe DNA as RNA. tRNAs interpret codons as places to deposit amino acids. Ribosomes translate RNA sentences into protein poetry. Higher-level interpreters depend on the activity of myriads of lower-level interpreters. Islet cells integrate blood glucose and other inputs to regulate insulin. Fat cells, muscle cells, and liver cells interpret insulin for diverse ends. Neurons respond to signals from muscles and muscles to signals from neurons. Brains comprehend social relations. You read this sentence. Organisms are self-constructed interpreters of genetic texts in environmental context.
The environment chooses phenotypes and thereby chooses genes that represent its choices and embody information about the environment's criteria of choice. Observation of these choices would reduce the uncertainty of an omniscient observer about which genes will be transmitted to future generations. The choices of the environment are unintended but actions that are repeated because of their effects are thereby intended. The choices of the environment are not themselves messages, but genes that represent these choices are copied and passed as messages from one generation to the next (Bergstrom and Rosvall 2011). Organisms and their lower-level parts are senders and interpreters of these texts.
Difference demystified
A difference is a very peculiar and obscure concept. It is certainly not a thing or an event. (Bateson 1972, pp. 451-452)

A soldier fires at Marius but Éponine blocks the shot with her body saving Marius' life. The difference between the soldier firing or not firing makes no difference as to whether Marius survives but makes a difference as to whether Éponine survives. Éponine's choice, the difference between lunging forward or holding back, makes the difference between Marius' death or survival. The soldier's shot is responsible for Éponine's death, and Éponine's sacrifice is responsible for Marius' survival, but the soldier's shot is not responsible for Marius' survival. Responsibility is not transitive.
Things or events do not make a difference. Differences between things or events make a difference. One cannot decide whether something is responsible for an outcome without answering the question, compared to what? A choice is an act that could have been otherwise and may make a difference.
A physician gives morphine to a patient dying of cancer. The difference between a fatal and non-fatal dose does not make a difference between the patient dying or not dying, but does make a difference between the patient dying a painful or nonpainful death. If I tell you the dose of morphine I do not provide any information about whether the patient lives or dies but provide information about the nature of the death. 5 Bateson (1972) defined the unit of information as a ''difference which makes a difference'' whereas many philosophers define causes to be makers of difference (Lewis 2000;Sartorio 2005). There is indeed a close connection between concepts of cause and information: a cause can be considered a difference that makes (or explains) a difference. The former difference is the cause and the latter its effect. 6 Observation of either difference contains information about the other. This information is potentially about the relation between cause and effect but use of the information requires an interpreter that has either been designed or evolved for that end.
Consider again the nine-nucleotide segment of gag antisense DNA embedded within a much longer sequence. When this sequence is interpreted by an RNA polymerase every DNA base makes a difference in the resulting RNA sequence: 5'-CGCACCCAT-3' is transcribed as 5'-AUGGGUGCG-3'. 7 The RNA polymerase receives its instructions from a DNA sequence in which every base conveys actionable information: A means 'choose U', C means 'choose G', G means 'choose C', and T means 'choose A.' Once transcription is initiated, and until it terminates, RNA polymerase always interprets A, C, G, or T as U, G, C, or A regardless of the context of surrounding bases. Every change in the DNA base sequence would cause a change in the RNA sequence (given a well-functioning RNA polymerase).
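A minimal sketch, in Python, of the pairing rule just described (the helper name and the dictionary are illustrative choices, not anything specified in the essay); the reversal encodes the antiparallel synthesis noted in footnote 7:

```python
# Base-pairing rule used by RNA polymerase, as described above:
# A -> U, C -> G, G -> C, T -> A, reading the template 3'->5'.
PAIRING = {"A": "U", "C": "G", "G": "C", "T": "A"}

def transcribe(template_5_to_3: str) -> str:
    """Return the RNA (written 5'->3') transcribed from a DNA template given 5'->3'."""
    # Reversing the template amounts to reading it 3'->5' (antiparallel synthesis).
    return "".join(PAIRING[base] for base in reversed(template_5_to_3))

assert transcribe("CGCACCCAT") == "AUGGGUGCG"  # the nine-nucleotide example in the text
```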
Ribosomes translate 5'-AUGGGUGCG-3' as methionine-glycine-alanine. They are more sophisticated interpreters than RNA polymerases because the meaning of bases for ribosomes is determined by context. The AUG triplet communicates crucial information. It is the symbol 'start here with methionine' that initiates most polypeptides and sets the reading frame for translation of the rest of the message in triplets. AUG in the body of an mRNA (when in the correct reading frame) simply means 'choose methionine'. The two meanings are distinguished by context. G appears five times in the nine RNA bases. The G in AUG is essential for the meaning 'choose methionine' because any other base in that position would result in a different amino acid added to the polypeptide. The two Gs in GGU taken together mean 'choose glycine' (the ribosome also interprets GGC, GGA and GGG as 'choose glycine'). The first G in GCG means 'choose alanine' in the context of C in the second position, any other base in the first position would be interpreted as a different amino acid, but the G in the third position of GCG does not make a difference and could be replaced by any other base without a change from alanine. However, a deletion of the third base (a difference between no base and some base) would cause a frameshift and a change in the interpretation of the rest of the message.

5 The physician takes the role of the first assassin and cancer the role of the backup assassin in scenarios of causal preemption (e.g., Hitchcock 2007). If the patient does not die from an overdose, then the patient dies from cancer.

6 ''To the common sense of mankind it is the property of a cause, qua cause, that it might have been different and have had different effects'' (Fisher 1934, p. 106).

7 I remind readers that T at the 3' end of the DNA segment corresponds to A at the 5' end of the RNA segment because of antiparallel synthesis.
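The reading described above can also be sketched as a toy codon lookup (only the codons mentioned in the passage are included; the table and function names are illustrative, not a complete genetic code):

```python
# A toy fragment of the genetic code, restricted to the codons discussed above.
CODON_TABLE = {
    "AUG": "Met",                                             # also the 'start here' symbol
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",   # synonymous third positions
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",   # third base makes no difference
}

def translate(mrna: str) -> list[str]:
    """Translate an mRNA (5'->3') in the reading frame set by its first AUG."""
    start = mrna.find("AUG")                                  # the start codon fixes the frame
    codons = [mrna[i:i + 3] for i in range(start, len(mrna) - 2, 3)]
    return [CODON_TABLE[c] for c in codons]

assert translate("AUGGGUGCG") == ["Met", "Gly", "Ala"]
```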
RNA polymerases and ribosomes choose from ensembles. When an RNA polymerase transcribes G, it picks out a C from a cytoplasmic mixture of U, C, A, and G. Similarly, when a ribosome translates AUG, it selects a tRNA charged with methionine from a mixture of tRNAs charged with all twenty amino acids. Methionine is the bon mot the ribosome seeks to capture the meaning of AUG. AUG is present in this position in the RNA message because it has competed, and will compete, with alternatives such as ACG or UUG that have different denotations for the ribosome and connotations for the organism. Natural selection among variant texts chooses those that are useful and discards the rest. Thereby the macrolevel of ecology and social interactions informs the microlevel of molecules.
Some changes to an RNA message change the amino acid added to the growing polypeptide (these are differences that make a difference in the translated protein), whereas other changes are synonymous and make no difference in translation. The choice of a particular amino acid at a particular location in a protein may have no effect on protein function, in which case different codons are meaningful for the ribosome but meaningless for the organism. The difference in the mRNA (and the DNA from which the message was transcribed) causes a difference in the protein but does not cause a difference in fitness. The choice of amino acid by the ribosome is purposive but the choice of nature is random.
A choice is a difference that makes a difference. It is a branch point at which a traveller could have gone by different paths but, once one path is chosen, the path taken informs an observer of the traveller's choice. Information about what befalls on a path would be useful in making a choice if the traveller ever came that way again. If travellers copy their choices for later reference, and death awaits on one path but safety on another, then the choices that take the wrong path never return to the fork in the road but the choices that take the right path return to make the same 'wise' choice again. In a perilous maze, the records of surviving travellers provide a safe guide for finding a way.
Choices are degrees of freedom and the meanings of information are the choices it guides. Information is useful if, and only if, it helps to change the future for the better. By tortuous paths, we have come to view choice as synonymous with cause and information as a potential guide to choice. Given a textual record of recurring choices, Darwin's demon (Pittendrigh 1961) culls the bad choices and retains the good. Well-informed choice is purposive difference making.
Final causes and functions
It follows that there are several causes of the same thing … And things can be causes of one another, e.g. exercise of good condition, and the latter of exercise; not, however, in the same way, but the one as end and the other as source of movement. (Aristotle 1984; Metaphysics, p. 1600)

Teleological language in biology appears in a heterogeneous class of explanations united by the loose property that a thing's existence is explained by an end (telos) that the thing makes possible. A beaver grows sharp incisors to cut down trees to build a lodge to provide shelter from the storm. Dental development has the goal of sharp incisors with the function of cutting down trees for the sake of building a lodge for the purpose of shelter, all for the good of a beaver. ''In order to gain access to buried stretches of DNA inside nucleosomes, a chromatin remodeling ATPase is required to unwrap the nucleosomal DNA'' (Mellor 2005) is no less teleological than ''the hairs about the eye-lids are for the safeguard of the sight'' (Bacon 1605/1885).
A final cause explains something by its effects. The thing exists for the sake of an end. In the absence of conscious intent, such explanations have been rejected because explanandum precedes explanans. However, this argument loses force for products of natural selection because ends_i can be causes of means_{i+1} without backward causation. A thing exists today because similar things in the past had effects that enhanced survival and reproduction. The thing expresses similar effects in the present because its effects are heritable. Therefore the thing considered as a type exists because of its effects.
Ends can be means to other ends. Ayala (1970) distinguished proximate ends, the functions or end-states a feature serves, from the ultimate goal of reproductive success. Most biological research addresses the end-directedness of adaptations to achieve proximate ends without explicit reference to ultimate goals. The proximate ends of the mindless interpreters described in previous sections are interpretations of information from the environment or sent as genetic texts. The purposeful behavior of these interpreters can be explained as the outcome of selective processes that incorporated information about what worked in past environments into the fine structure of information-carrying molecules.
Selection means choosing from a set of alternatives. If there is no alternative, there can be no choice. In Darwin's metaphor of natural selection, the environment 'chooses' via differential survival and reproduction. In Haig's (2012a) formalism of this process for genetic replicators, the environment chooses among effects of genes and thereby chooses among genes. An effect is a difference a gene makes relative to some alternative. It is not a property of an individual gene but rather a relation between alternatives. The selected gene is a difference that made a difference. In this formalism, phenotype (synonymous with a gene's effects) is defined as all things that differ between the alternatives, whereas environment is defined as all things shared by the alternatives. By these definitions, what is a phenotype in one comparison may be environment in a different comparison. Natural selection will tend to convert phenotype into environment because environment is that for which there is no reasonable alternative. 8 Choices of the environment reduce uncertainty about which genes will leave descendants and the selected genes thereby convey information about these choices to ribosomes and other mindless interpreters in subsequent generations. If the choices of the environment are non-random, then the genes embody usable information about the environment's criteria of choice and guide effective choices of organisms.
A gene is 'responsible' for its effects. Changes of allele frequency extract average additive effects on fitness from a matrix of non-additive interactions (Fisher 1941). Whatever effects of an allele contribute to a positive average effect on fitness can be considered final causes of the allele's persistence. A gene's function can be defined as those of its effects that have contributed positively to its spread and present frequency. All other effects, negative or neutral, are side-effects without function. If an effect contributes to a gene's success-by any route, no matter how devious-then the gene exists for the sake of that end and the end exists for the good of the gene (Haig and Trivers 1995;Haig 2012a).
In the struggle for existence in a world of finite resources, one variant's success comes at the expense of alternatives. The causes of death of individuals without an allele contribute to an allele's success, just as much as the causes of survival of individuals with the allele. 9 An allele must make a difference in many lives if it is to spread by natural selection, from a single copy arising by mutation in a germ cell to fixation in a population of many individuals. No one event can be singled out as the cause of adaptation but many similar events, distributed through space and time, result in adaptive change. Natural selection is not an efficient cause but a statistical summary of many efficient causes.
One must consider not only allelic substitutions but also failures of substitution. All adaptations will degrade over time unless mutations that impair the evolved function are weeded out. Each new mutation creates an allelic difference that is subject to selection on the basis of its average effect on fitness. If the mutation is eliminated by a choice of nature, then the difference of phenotypic effect exists for the good of the allele chosen. Many phenotypically interchangeable, but genetically distinct, loss-of-function mutations can be grouped together into a single allelic difference. In this way, a genetic function, determined by interactions between multiple sites within a coding sequence, can be considered for the good of the evolutionary gene.
Consider the substitution of thymine for adenine in the middle base of the sixth codon of the human β-globin gene. This difference causes a replacement of glutamate by valine at the sixth amino acid position of the β-globin polypeptide. The resulting protein, hemoglobin S, is responsible for sickle-cell disease when homozygous and resistance to malaria when heterozygous. The alternative allele with glutamate at position 6 is known as hemoglobin A. 10 With respect to the allelic difference between A and S, the function of S is containment of malarial infection in a genotypic environment that includes an A allele. A deleterious side-effect of S is life-threatening anemia in a genotypic environment that includes another S allele (see discussion in Haig 2012a).
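For concreteness, the codon assignments behind this example (standard genetic-code facts supplied as background, not quoted from the essay) can be written as a two-line lookup: the adenine-to-thymine change in the DNA corresponds to an A-to-U change in the mRNA codon.

```python
# Sixth codon of the beta-globin mRNA in the two alleles discussed above.
CODON_6 = {"GAG": "Glu (hemoglobin A)", "GUG": "Val (hemoglobin S)"}

for codon, amino_acid in CODON_6.items():
    print(f"codon 6 = {codon} -> {amino_acid}")
```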
The sickle-cell mutation has been presented as an exemplar of a 'selfish nucleotide' and used to dispute the identification of 'evolutionary genes' with DNA (Griffiths and Neumann-Held 1999). The reductio ad absurdum fails because evolutionary genes have been defined as stretches of DNA rarely disrupted by recombination (Williams 1966;Dawkins 1976) and sufficiently short to maintain linkage disequilibrium (Haig 2012a). Non-random associations of variable nucleotides, some of which may be functional, extend for hundreds of kilobases to either side of the 'selfish thymine' (Hanchard et al. 2007). As recombination between sites lessens, and as the strength of epistatic selection increases, a point is reached at which different sites can no longer be considered as belonging to different evolutionary genes (cf. Neher et al. 2013). For sites sufficiently close together, nonadditive interactions on the axis of expression contribute to an additive effect on the axis of transmission (Neher and Shraiman 2009;Haig 2011b).
Any complex organismal adaptation will involve many allelic substitutions at multiple loci. For ancient adaptations, most substitutions will have occurred in the deep past, in organisms and environments very different from those of the present. In the process, some genes may have been transformed beyond recognition. While each substitution could be considered for the good of that gene at that time, the adaptation serves proximate ends today. For what entity are these ends a good? A standard answer is that complex adaptations are for the good of the organism. A gene-selectionist could counter that a complex adaptation is for the good of each and every gene whose loss of function by mutation results in loss of the adaptation (Haig 2012a).
Darwin's demon
The literature written by [Darwin's] Demon is no more deducible from a complete command of the nucleotide language, let alone physical law, than the works of Shakespeare or Alfred North Whitehead are deducible from a complete command of the English language. (Pittendrigh 1993)

10 This sentence deliberately confuses gene and protein. Proteins and genes often share the same name (metonymy). Sometimes a gene is named for its protein and sometimes a protein for its gene. By convention, the gene and its mRNA are italicized but not its protein. In speech, the denotation of a name often encompasses gene, mRNA, and protein.
Maxwell imagined a demon that performed work by choosing which molecules to allow through a partition, thereby selecting ordered subsets from a disordered ensemble. Chickens can unscramble eggs by eating them (Gregory 1981, p. 137).
A rocket is a rigid tube, open at one end, that converts the disordered molecular motion of combustion into coherent motion of the tube. Roughly speaking, the closed end of the tube selects molecular momentum orthogonal to its surface and imparts that momentum to the rocket while the open end discards momentum in the opposite direction. The rocket engine is the selective environment that chooses an ordered subset of moving particles from a disordered set as the entropy of the working material increases. A piston selects molecular momentum orthogonal to the one moveable wall of a cylinder and thereby does work while discarding unworkable energy into a heat sink (Atkins 1994, p. 83). Organisms are elaborate self-assembling engines that acquire or synthesize their own fuel and dump entropic excrement. They are the selective environment by which food is converted to work.
Subset selection is a semantic engine. Consider a set subject to a procedure by which some are 'chosen' and others 'rejected'. Choice is random if membership of the selected subset is determined by criteria independent of intrinsic properties of things chosen (for example, if no attribute has a periodicity of five but every fifth entity is selected). The disjunction of selected and discarded subsets contains no information about the criteria of choice when choice is random. However, the disjunction contains information about the criteria of choice when choice discriminates among members of a set on the basis of one or more of their intrinsic properties (a reasoned choice). The selected and discarded subsets are biased samples of the whole. One might say that one is adapted, and the other maladapted, to the selective environment.
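A toy simulation (all names and numbers invented for illustration) makes the point concrete: a reasoned choice leaves a statistical trace of its criterion in the selected and discarded subsets, whereas a random choice leaves none.

```python
import random

random.seed(1)
population = [random.gauss(50, 10) for _ in range(10_000)]  # some intrinsic property of each item

# Reasoned choice: keep items on the basis of the property itself.
chosen_by_property = [x for x in population if x > 60]
# Random choice: keep every fifth item, regardless of its properties.
chosen_at_random = population[::5]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(population), 1))          # ~50: the whole ensemble
print(round(mean(chosen_by_property), 1))  # well above 50: the criterion shows in the sample
print(round(mean(chosen_at_random), 1))    # ~50: no information about any criterion of choice
```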
Wind winnows wheat from chaff by the criterion of weight to cross-sectional area. A bird picks berries from a bush on the basis of palatability and the bird's criteria of choice are reflected in differences between eaten and uneaten berries. A man chooses a wife and we can infer something about his preferences by comparing his spouse to others who were available but passed over. His choice is restricted to members of a comparison set constrained by the comparison sets and preferences of potential partners. You can't always get what you want.
Natural selection, it has been said, differs from subset selection because ''offspring are not subsets of parents but new entities'' (Price 1995). But the genes of the next generation are a subset of the genes of the last. Therefore, natural selection can also be inscribed under the rubric of subset selection if focus shifts from vehicles to replicators, from interpretations to texts. Natural subset selection is indirect. The environment selects a subset of phenotypes to be parents and thereby selects a subset of genes to be transmitted.
Selection from a selected subset retains information from past choices, imperfectly. Retention is imperfect because information is dissipated by random culling, by random mutation of past reasoned choices, and by changes in criteria of choice. In the absence of replication, recursive selection reduces the size of the comparison set at each round of choice. Replication creates redundancy and thus increases the probability that information from past choices will be retained despite dissipative forces.
Mutations are random guesses in the neighborhood of previous choices. Mutation degrades semantic information about past choices but adds entropy for future reasoned choice. For the right balance of mutation and selection, recursive selection of mutable replicators results in accretion of semantic information and refinement of fit to criteria of choice.
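The claim can be illustrated with a toy selection-mutation loop (the target string, population size and rates are arbitrary choices for the sketch, not anything drawn from the essay): over repeated rounds the surviving replicators come to match the environment's criterion of choice, while too much mutation would dissipate what has been learned.

```python
import random

random.seed(0)
TARGET = "AUGGGUGCG"          # the 'criterion of choice' of this toy environment
ALPHABET = "ACGU"

def fitness(seq: str) -> int:
    # number of positions that match the environment's criterion
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str, rate: float = 0.05) -> str:
    # random guesses in the neighborhood of a previous choice
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in seq)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for _ in range(60):
    survivors = sorted(population, key=fitness, reverse=True)[:100]      # subset selection
    population = [mutate(random.choice(survivors)) for _ in range(200)]  # replication with error

best = max(population, key=fitness)
print(best, fitness(best))    # fitness climbs toward 9 as information accumulates
```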
Mendel's demon
Why all this silly rigmarole of sex? Why this gavotte of chromosomes? Why all these useless males, this striving and wasteful bloodshed? (Hamilton 1975)

Clonal reproduction replicates entire genotypes that are judged repeatedly in the court of environmental opinion. Each asexual genotype is a single 'evolutionary gene' responsible for its own average effects after repeated retesting. The difference between genotypes that differ at a single site can be attributed to that site but responsibility cannot be attributed to individual sites when genotypes differ at multiple sites. Segments of particular value must share credit with segments that do not pull their weight and are hidden from blame. All must share in communal praise and collective guilt.
Sexual genotypes, by contrast, are ephemeral. Judgment of each individual genotype is unique and unrepeated but smaller segments are tested repeatedly against different backgrounds and can be held responsible for their average effects. Sexual genotypes are pastiche, cobbled together from parts of two parental genomes, four grandparental genomes, eight great-grandparental genomes (you get the idea), in a process that mindlessly breaks up effective combinations for the chance of something better. Every one of these genomes has been tested by the environment and passed. The sexual disassembly and reassembly of genotypes allows attribution of responsibility to parts.
Mendel's demon (Ridley 2000) is a randomizing agent that shuffles the genetic deck and deals out fresh hands in each round. It can be a mischievous imp that impedes the work of Darwin's demon by breaking up favorable combinations or a helpful sprite that rescues parts of promise from bad company. As the genome is diced into smaller pieces, the range of effects for which each non-recombining segment can be held responsible diminishes (Godfrey-Smith 2009, p. 145; Okasha 2012) but each segment is more readily held responsible for its causal effects. Darwin's and Mendel's demons, working together, create teams of champions rather than champion teams (Haig 1997).
Peirce's demon
Experiment … is an uncommunicative informant. It never expatiates: it only answers 'yes' or 'no' … It is the student of natural history to whom nature opens the treasury of her confidence, while she treats the cross-examining experimentalist with the reserve he merits. (Peirce 1905)

Peirce (1905) compared an experimental scientist with men whose education had largely been learned from books: ''he and they are as oil and water, and though they be shaken up together, it is remarkable how quickly they will go their several mental ways, without having gained more than a faint flavor from the association.'' His vivid use of metaphor belied his admonition ''that no study can become scientific … until it provides itself with a suitable technical nomenclature, whose every term has a single definite meaning universally accepted among students of the subject, and whose vocables have no such sweetness or charms as might tempt loose writers to abuse them.'' He contrasted the poverty of the experimentalist's ''meagre jews-harp of experiment'' to the richness of the naturalist's ''glorious organ of observation.'' Despite such a seemingly invidious comparison, the rational purport of belief was to be found solely in answers to repeated experiments and their consequences for future conduct: ''if one can define accurately all the conceivable experimental phenomena which the affirmation or denial of a concept could imply, one will have therein a complete definition of the concept, and there is absolutely nothing more in it.'' Right conduct is choice guided by experience.
Experiments are choices offered to nature for the resolution of doubt. They provide terse inarticulate answers to narrowly defined questions. These answers are informative when they reduce the experimentalist's uncertainty about the state of the world. The belief they engender has meaning when used to guide conduct. By this means, ''thought, controlled by a rational experimental logic, tends to the fixation of certain opinions'' that are not arbitrary but predetermined by nature (Peirce 1905).
The experimental method (Peirce's demon) and natural selection (Darwin's demon) are resolvers of difference in which choices of nature inform adaptive behavior via the accumulation of useful information. Practice perfects performance by trial and choice. A controlled experiment varies one thing while holding other things constant (ceteris paribus) to determine the differences for which that thing can be held responsible. But experiments must be replicated to average out residual, uncontrolled, variation. Sexual recombination achieves a similar statistical control by repeated retesting of allelic differences on different genetic backgrounds. The average effects of allelic differences reduce the complexity of biological interactions to simple binary choices. The success of the experimental method and of sexual organisms suggests that short-sighted choice among recombinable units often outperforms reasoned judgment of integrated wholes.
The histories of causal and legal concepts are closely intertwined. The function of a trial is to determine whether a defendant is responsible for a crime. Many circumstances and opinions are weighed in the balance but the judgment is binary, guilty or not guilty. The earliest known meanings of try are to sift or pick out, to separate one thing from another, especially the good from the bad, and to choose or select. A trial was the determination of a difference, between guilt or innocence, by tribunal, battle, or ordeal. Natural selection is a recursive process of trial and judgment by which good causes are rewarded and relative truths learnt.
Gene-selectionism and developmental systems theory

… a thing exists as a natural end if it is cause and effect of itself. (Kant 2000, p. 243)

Phenotype interprets genotype in environmental context. Why should genes be singled out as possessors of purposes and as self-interested beneficiaries of adaptation? Genes belong among the material causes of development, and gene expression among its efficient causes, but ontogeny proceeds via complex interactions between genes and environment. From the perspective of developmental systems theory, the causal matrix recreates itself, recursively, without a privileged role for genes (Oyama 2000).
Genes interact with each other and the environment to create phenotypes that causally influence which individuals leave descendants. But, when the environment chooses which allele increases in frequency, the choice is based on the average effect of a difference (Fisher 1941). In Lewontin's (2000) terminology, the allelic effects are causes of difference but the interactions are causes of state. 11 The prosaic selection of differences creates poetic changes of state (Haig 2012a).
Gene selectionism is concerned with how information gets into the genome via natural selection and what can be held responsible for the appearance of purpose in nature. By contrast, developmental systems theory is concerned with understanding ontogenetic mechanisms. One might say that gene selectionism addresses the writing, and developmental systems theory the reading, of a text. From this perspective, the frameworks are complementary rather than in conflict. Any text of lasting value is read, and judged, repeatedly as it is revised.
Two domains of explanation are in play that have been characterized as a vertical axis of transmission and a horizontal axis of development (Bergstrom and Rosvall 2011). One concerns the inheritance of genetic information between generations and the other the expression of genetic material within generations. Teleological concepts appear in both domains. On the axis of transmission, final causes appear as adaptations that serve the ultimate end of fitness. On the axis of expression, final causes appear as end-states of developmental processes and as the proximate ends of goal-directed behaviors. Explanations in the two domains have different flavors because mapping from gene to gene copy in the course of transmission is straightforward but from genotype to phenotype in the course of development is devilishly difficult.
The conceptual separation of axes of transmission and development is related to Shea's (2007) separation of phylogenetic and ontogenetic explanations; to Ayala's (1970) distinction between ultimate goals and proximate ends; to Weismann's (1890) separation of germ plasm and cytoplasm; to the difference between DNA replication and RNA transcription; to the divide between text and interpretation and the contrast between mention and use of a lexical item. 12

11 Lewontin (2000) reprises his earlier distinction between the analysis of variance and analysis of causes (Lewontin 1974).

12 Kant (2000, p. 243) can be interpreted as making a related distinction when he describes the twofold sense in which a tree is both cause and effect of itself. A tree generates itself both as a species/genus (transmission) and as an individual (development).
Whether conceptual separation of developmental from evolutionary questions is productive or counter-productive is a subject of present polemics. Some maintain the distinction is indispensable (Griffiths 2013) whereas others see it as an impediment to understanding (Laland et al. 2013a). Most of those who support the distinction are comfortable with invoking functions as causes (Haig 2013b), whereas many of those who want to do away with it are explicit that ''functions are not causes … the outcome of a behavior cannot determine its occurrence'' (Laland et al. 2013b).
Our penchant for dichotomies, distinctions, and oppositions reflects the power of reducing complex questions to binary choices. Many arguments within the philosophy of biology, and between the sciences and humanities, reflect a tension between the reductive simplicity of average effects and the richness of interaction; between the 'meagre trump' of attributing credit to parts and the 'glorious Wurlitzer' of integration of wholes. But, we have more than two options. One can play a duet.
Genomes as texts
Are God and Nature then at strife, That Nature lends such evil dreams? So careful of the type she seems, so careless of the single life. (Tennyson 1849)

Genomes resemble historical documents (Williams 1992, p. 6; Pittendrigh 1993). Thymine rather than adenine, or valine rather than glutamate, has no meaning out of context but a nucleotide sequence of β-globin, with thymine at position 17, or an amino acid sequence of β-globin, with valine at position 6, both have meaning in context, although neither says anything explicit about malaria. Genomes are allusive archives of choice, with unstated meanings without explicit expression or discrete location. They are palimpsests on which new text is written over partially erased older text (Haig and Henikoff 2004). Not all of the text is readable. It contains gobbledegook and epigenetic annotations on what should not be read. Genomic censors strive to shut down the clandestine presses of retrotransposons.
Where does meaning reside in a text? My essay evolved via incremental rewording and extensive rewriting. There was a struggle for existence among ideas for space on the page. There is a lot more I could have said. My meaning resides in the difference between what is said and unsaid. Often a change in one part necessitated changes in other parts to maintain consistency. The essay self-consciously reflects back upon itself with repetition, recurrence, reciprocal reference, and allusive alliteration. Part of its meta-meaning is that many meanings are distributed throughout the text, never fully explicit, to reflect and suggest the organization of meanings within the genome. There is no meaning in a letter, a little in a word, a bit more in a sentence, but much of the intended meaning is implicit, to be understood from the synergistic whole rather than the additive parts. And yet, the text was written letter by letter and word by word by additive increments. On the axis of reading, new meanings can be found, but on the axis of transmission it is only that which is written that counts.
Meaning resides in the interpretation. There are meanings I intend you to find and meanings you find. I wrote to persuade. But you may use my prose to persuade others that I am mistaken. You interpret my essay as you will. Imprecision of language allows charity of interpretation and slaying of straw men. Falsehood can arise from misinformation by an author or misinterpretation by a reader.
The question what genes mean, if what genes do depends on interactions with other genes in environmental context, resembles the question what words mean when all definitions are expressed in other words in semantic context. Modern philosophers confront the 'indeterminacy of translation' when attempting to understand what aition meant to Aristotle and 'indeterminacy of interpretation' when attempting to understand, or deliberately misunderstand, each other's arguments. Modern biologists confront similar indeterminacy in the semantic content of genetic material. Critics of 'information talk' in biology often demand a more rigorous justification of meaning in DNA than they could provide for meaning in language.
An idea is the semantic equivalent of a non-recombining segment of DNA. It is a chunk of meaningful stuff that is transmitted as a parcel. It is a semantic difference that makes a difference. Ideas and ''pithy quotations'' are readily reusable because they are meaningful when taken out of context. Science proceeds via recombination of ideas whereas great works of literature are clonally replicated and interpreted as wholes. In the scientific literature, least publishable units have replaced magisterial tomes in part because shorter texts are more likely to be used and cited. Working biologists mostly read the Origin of Species for virtue or pleasure, because the good bits have been reused again and again, in new associations, in a sesquicentury of scientific endeavor.
There are parallels between the ascription of effects to genes and the assignment of credit to authors. Scientists cite each other more than philosophers, novelists hardly at all. Citations not only provide pointers to additional information but also ascribe credit. All new insights originate in the context of many acknowledged and unacknowledged precursors, but credit is easier to attribute, or harder to deny, for portable ideas than for rearrangements in the tangled web of meanings. Tristram Shandy contains philosophical insight but is rarely cited because discrete ideas are difficult to disentangle from its interwoven fabric.
Scientists care about citation because they want their name to hitchhike with 'their' ideas to feedback for their good. But to be worthy of credit one must be unambiguous. Otherwise one could claim credit for interpretations that prove prescient but shift blame for interpretations that fail. A scientist is expected to commit to one interpretation but a novelist often leaves a choice for the reader. Indeterminacy of interpretation is a designed feature of novels but a flaw in experimental notebooks and scientific papers.
Teleodynamics
In an indeterministic world natural causation has a creative element, and science is interested in locating the original causes of effects of special interest, and not merely in pushing a chain of causation backwards ad infinitum. (Fisher 1934)

Consider the fates of zygotes, scions of countless spermatic races to ova. Their lives unfold via interactions among genes, and between genes and environment. Many fall by the wayside, by chance or necessity, and those that reach maturity produce progeny, some a hundredfold, some 30-fold, some 60-fold. Sometimes an allelic difference causes one to leave more issue than another. And, lo and behold, the genes of the progeny, and of the progeny's progeny, even unto the third and fourth generation, are a biased sample of the genes of their progenitors. The tale is repeated, with minor variations and mutations, time without end, and verily there is something new under the sun.
This evolutionary parable could be elaborated endlessly with causal explanations of ever finer detail and ever deeper regression into the past. There is a causal story behind each and every mutation, each and every chiasma, each and every choice of a mating partner, each and every union of gametes, each and every catastrophe that did not happen. But this story is untellable because of incomplete information, chaotic dynamics, and computational complexity. And if it could be told, the story would be incomprehensible. One must simplify to tell a tale, giving greater salience to some items and leaving loose ends.
A pedant could argue that pressure is not an efficient cause and should be expunged from physical explanations-only individual molecular impacts are truly causal-but his argument would be dismissed as obfuscation. For questions at the appropriate scale, pressure provides a perfectly adequate explanation, indeed one that is superior to the unattainable account that describes each and every molecular collision. Darwinian final causes are similarly grounded in efficient causes and are perfectly adequate, indeed indispensable, for certain kinds of biological explanation. A 'selection pressure' summarizes many reproductive outcomes just as the pressure of a gas summarizes many molecular motions. Darwinism, like thermodynamics, is a statistical theory that does not keep track of every detail (Peirce 1877;Fisher 1934).
Much recent semantic work has been done on concepts of Darwinian information (Adami 2002; Adami et al. 2000; Colgate and Ziock 2011; Frank 2009, 2012). The various expositions exhibit phenotypic resemblance, both from shared ancestry and convergence in a common selective environment, although conceptual differences remain. Rather than choose among the differences, because the available space could not do them justice, I will synthesize a subset of select conclusions. Semantic information comes from the environment via subset selection and refers to that environment. It is functional, looking backward to what has worked in the past and forward as a prediction of what will work in the future. Replication is essential for the indefinite persistence of information in the face of dissipative entropic forces.
Back to the future
The word 'cause' is so inextricably bound up with misleading associations as to make its complete extrusion from the philosophical vocabulary desirable. (Russell 1913)

My intent in partial rehabilitation of formal and final causes is not to argue that the four causes provide the best causal taxonomy for current ends, but to recognize that Aristotle's classification was found useful for more than a millennium and must surely have approximated significant categories of understanding. Moreover, if formal and final causes do not exist in their 'bad' metaphysical senses, then the terms and the concepts are available for use in their 'good' post-Darwinian senses of information and adaptive function.
My essay concerns the seduction of narrative, the magic of metaphor, and the rhythm of recursion (Hofstadter 1979;Haig 2011a). Meaning is expressed through metaphor by representing one thing by another. Recursive representation allows eidos and telos to be grounded in hyle and kinesis. Choice captures information. The environment, personified as natural selection, chooses ends and thereby chooses means with meanings, because the ends of the past are the means of the present. Meaning requires an interpreter and an end. Darwin's demon supplies both. My text returns repeatedly to etymologies and histories of ideas because logos and eidos evolve by paths parallel to genes, providing fruitful metaphors and philosophical perspective.
Natural selection is both a metaphor and a metaphorical process of recursive representation. It is a meaningless, purposeless, physical algorithm that produces things for which meaning and purpose are useful explanatory concepts (Dennett 1995). Among the products of natural selection are rational agents, with beliefs and desires, pursuing conscious goals, exchanging truthful and deceptive information, who can delight in a meaningful life.
L-d! said my mother, what is all this story about?-A COCK and a BULL said Yorick-And one of the best of its kind I ever heard (Sterne 1767, finis).

| 14,169.4 | 2014-02-18T00:00:00.000 | [ "Philosophy" ] |
BIOMECHANICAL AND KINETOTHERAPEUTICAL ASPECTS OF THE SCAPULO-HUMERAL PERIARTHRITIS SYNDROME
The scapulo-humeral periarthritis is the clinical syndrome characterized by pain, joint stiffness and functional impotence, determined by pathological processes located at the shoulder level and affecting the periarticular structures: ligaments, joint capsule, tendons, and muscles. This form of pathology successfully benefits from stabilization treatment and from biomechanical and neuromuscular balancing, applied through kinetotherapy. In conclusion, in the recovery from scapulo-humeral periarthritis, kinetotherapeutic treatment intervenes to prevent loss of mobility and fibrosis, restoring muscle strength, stability and controlled movement of the shoulder.
Introduction
The scapulo-humeral periarthritis is a syndrome of particular importance due to its high incidence in medical practice, its limiting effect on the capacity to work and its progression in time to a capsulitis that leads to severe disability before resolution occurs. A unique feature of this syndrome is that it does not occur at any joint other than the shoulder, and even a "frozen" shoulder with complete ankylosis can "thaw" spontaneously, leaving behind a relatively normal joint [24].
Due to the complexity and particularities of its biomechanics, the shoulder is one of the joints most prone to developing pathology. The scapulo-humeral periarthritis is a clinical syndrome with symptoms such as pain, stiffness and functional impotence of the shoulder, in various degrees of movement, due to pathological processes that affect the periarticular tissues and sometimes the joint capsule. Shoulder impingement syndrome is a common cause of the painful shoulder and produces significant disability [17].
The scapulo-humeral periarthritis evolves in three stages. It is classified into primary (idiopathic) and secondary cases; the etiology of the primary syndrome remains unknown.
It is commonly associated with other systemic conditions, most often diabetes, or appears after periods of immobilization, such as after a stroke [12].
Most cases of scapulo-humeral periarthritis can be managed in the onset phase. Physiotherapists are encouraged to begin treatment with patient education: explaining the natural history of the condition often reduces frustration, improves understanding of the phenomenon and alleviates the patient's fears.
It is estimated that the incidence of this syndrome is 2 to 5% in the general population and more than 20% among those with diabetes. The highest frequency occurs in the fifth and sixth decades of life, with a peak in the mid-50s. Onset before the age of 40 is less frequent, and the non-dominant shoulder is slightly more likely to be affected than the dominant one. In 6% to 17% of patients, the other shoulder will be affected within the next five years [25].
In 2015, Hollmann et al. finally performed a simple and relevant scientific experiment on this topic. Five patients diagnosed with frozen shoulder and scheduled for capsular release surgery were assessed before and after the induction of general anesthesia. All five showed a greater amplitude of passive abduction under anesthesia, which would be impossible if the articular capsule had been stiffened, cemented or under any other purely mechanical limitation. The improvement across the axes of movement ranged from a minimum of 44° to an increase of 110° (to normal) [8].
The researchers reasonably concluded that the loss of passive amplitude of shoulder movements cannot be explained solely by capsular retraction and thickening.
The passive shoulder abduction evaluated in the five patients before anesthesia therefore did not accurately reflect the true amplitude of movement available in the affected shoulder. It seems that active stiffness, or muscle guarding, is a major contributing factor to the decrease in movement in a patient with a frozen shoulder [27].
The Biomechanics of the Arms
2.1. The biomechanics of the scapular belt
At the level of the scapular belt, we distinguish the joints of the clavicle with the sternum and with the acromion of the scapula, as well as the scapulothoracic joint at the level of the shoulder blade. The two bones of the belt move together in performing the lifting and lowering movements and the forward and backward projection of the shoulder. The circumduction of the shoulder is achieved by combining these movements. This solidarity is ensured by the connection at the level of the acromioclavicular joint as well as by the coracoclavicular ligaments [16].
The movements at the level of the scapular belt are shoulder movements and are most often associated with the movements performed by the limb itself at the level of the scapulo-humeral joint [16].
a) Lifting and lowering movements
In the shoulder lifting movement, the clavicle moves upwards, forming an angle of 30-40° with the horizontal; its distal extremity rises approximately 8-10 cm and, through the acromioclavicular joint, carries the scapula with it, which performs an upward translational movement, the shoulder blade sliding on the muscular planes. These movements are limited by the superior sternoclavicular ligament [9].
The elevator muscles are, for the clavicle, the trapezius and the sternocleidomastoid and, for the scapula, the entire trapezius and the levator scapulae (elevator of the shoulder blade); the depressors are, for the clavicle, the subclavius, the great pectoral and the deltoid and, for the shoulder blade, the small pectoral, the lower fascicles of the trapezius, the great dorsal, etc. [14].
b) Movement of forward projection
It occurs in the acts of pushing forward, forcing and striking, usually accompanied by some degree of forward rotation. The lateral end of the clavicle moves anteriorly and, with it, the shoulder blade is projected forward. The movement is limited by the anterior sternoclavicular ligament. The forward movement of the clavicle is produced by the great pectoral and the deltoid, and that of the shoulder blade by the serratus anterior and the small pectoral [19].
c) Backward projection movement
It is the reverse movement, when the shoulders are pulled back: the clavicle is pulled back by the trapezius and the sternocleidomastoid muscle, and the shoulder blade by the trapezius and the rhomboids. The amplitude of the movement is limited by the coraco- and glenohumeral ligaments, so it measures 50-60° [14].
d) The movements of tilting the scapula
These are rotational movements that occur around an anteroposterior axis passing through the acromioclavicular joint; they actually take place at the level of the scapulo-thoracic joint. In the forward and upward rotation of the shoulder blade (lateral tilt), the superoexternal angle, together with the glenoid cavity, is directed upward, the superointernal angle downward and the inferior angle forward, describing an arc of about 45° [9].
The Biomechanics of the Scapulohumeral Joint
The shoulder joint is the most mobile of all the body's joints. The arm executes a series of wide movements (Fig. 1), to which are added the possibilities of mobilization of the shoulder blade; thus, at the level of the three joints of the scapular belt, the whole limb gains mobility in the three planes [16].
a) Abduction and adduction
In the abduction movement, the first 90 degrees are due to the involvement of the gleno-humeral joint; beyond 90 degrees, the movement of the scapula intervenes. The last 30 degrees of abduction are due to the involvement of the thoracic spine, by lateral flexion, and of the cervical spine, by a slight lateral flexion towards the side opposite to the flexion of the thoracic spine (the flexion of the cervical spine serves to maintain the physiological posture of the head). When both arms are abducted, the spinal column remains motionless [1].
Adduction is the reverse movement, the return of the arm; in the strictly frontal plane it is limited by the trunk, so the arm must pass into an oblique plane in order to continue in front of the chest.
The abductor muscles are represented by the deltoid with all its bundles and, in the initial phase, by the supraspinatus, which is at the same time a tensor of the articular capsule [16].
Adductor muscles: the most important are the great pectoral, the great dorsal, the large and the small round, the subscapularis, the infraspinatus, the coracobrachialis and the biceps brachii [16].
b) Flexion and extension
Flexion reaches an amplitude of approximately 180°, of which up to 90° takes place in the glenohumeral joint. Above this value, the movement is checked by the coraco- and glenohumeral ligaments, and the following degrees are provided by rotations in the acromioclavicular and sternoclavicular joints and by the antepulsion of the scapular belt. The last 30° of the movement are achieved through lumbar hyperlordosis [20].
The anteduction (flexor) muscles are the deltoid, through its clavicular and acromial bundles, the great pectoral through its clavicular fascicle, the coracobrachialis and the biceps brachii (short head). The retroduction (extensor) muscles are the deltoid, through its spinal bundle, the great dorsal, the large round and the triceps (long head) [16].
Extension reaches an amplitude of 50-60°, taking place around a transverse axis passing through the greater tubercle and, in the sagittal plane, through the glenoid cavity [20].
c) Internal and external rotation
It takes place in the transverse plane, around a vertical axis. The amplitude is 85° for external (lateral) rotation, the limitation being given by the tension of the anterior portion of the capsule, the glenohumeral ligaments and the muscles. The amplitude of internal rotation is 90°, its limitation being due to the tension of the posterior part of the capsule, the glenohumeral ligaments and the muscles [21].
The external rotators are the supraspinatus and the teres minor, and the internal rotators are the deltoid, the pectoralis major, the teres major and the latissimus dorsi. Circumduction combines the previous movements: in motion, the humerus describes a cone, with the humeral head (the tip of the cone) rotating in the glenoid cavity, while the lower extremity draws an oval representing the base of the cone [15].
The average normal amplitudes of movement specific to the shoulder complex are presented in the following table, according to three authors: Clement Baciu, Charles Rocher (cited by Tudor Sbenghe) and David Magee [5].
The Scapulo-humeral Periarthritis
Scapulo-humeral periarthritis is a clinical syndrome in which pain, redness and functional impairment of the shoulder appear, associated with a varying degree of limitation of the amplitude of movement, caused by pathological processes that especially affect the periarticular tissues (tendons, bursae) and, in some cases, the joint capsule [22].
Scapulo-humeral periarthritis is thus a painful clinical syndrome, accompanied by redness and limitation of movement, due to damage to the periarticular structures by degenerative and/or inflammatory lesions [2].
Scapulo-humeral periarthritis is one of the most common disorders for which patients are referred to a specialist; it is found in both sexes, in patients of working age, and especially affects people over 40 years old [3].
Symptoms
In its first phase, scapulo-humeral periarthritis has as its pathological substrate degenerative lesions of the tendons, especially of the biceps, characterized by necrosis that can lead to partial ruptures and by calcification. These wear processes are common in subjects over 40-50 years old and remain asymptomatic for a long time [4].
In some cases, the migration of the calcific material and its penetration inside a bursa (the subacromiodeltoid bursa) can cause a very strong inflammatory process, which may explain the predominantly pressing pain [13].
Symptoms may be mild or more severe: shoulder pain without limitation of movement, caused by tendinitis of the supraspinatus muscle or of the long head of the biceps; acute shoulder pain with total limitation of movement, due to inflammation of the serous bursa (bursitis); blocked shoulder caused by retractile capsulitis (retraction and thickening of the joint capsule of the shoulder), with severe pain and a stiff shoulder; pseudoparalytic shoulder (frequent in athletes) due to a tendon rupture, in which the pain is mild but the person cannot move the shoulder (28).
Diagnosis
On inspection of the shoulder, the objective examination may reveal a swollen and red shoulder, amyotrophy (especially of the deltoid and trapezius), drooping of the shoulder, and changes of the hand (in the algodystrophic shoulder-hand syndrome). In addition, palpation reveals painful points on the greater tuberosity (where the supraspinatus is inserted), in the bicipital groove, and on the coracoid process [6].
Rotation can also be assessed by bringing the hand to the back of the neck (external rotation) or behind the back to the dorsolumbar spine (internal rotation), with the forearm flexed on the arm at a right angle. The examination of active mobility is then performed against resistance applied by the examiner [13].
Pain that appears only on abduction suggests tendinitis of the supraspinatus; pain on internal rotation points to the tendon of the subscapularis; pain on external rotation is related to tendinitis of the infraspinatus; and pain that appears only on extension suggests bicipital tendinitis.
The physical examination should also include the analysis of the acromioclavicular joint, the margins of the glenohumeral joint, the rotator cuff, the subacromial bursa and the bicipital groove. Any axillary adenopathy should also be sought, and an examination of the cervical spine and of the respective upper limb performed (reflexes, sensitivity) [6].
Evolution and Prognosis
The evolution can take quite a long time, even a few months, but the prognosis is generally favorable; a good outcome can be obtained with early, well-designed treatment, gradually recovering even the more complex and difficult movements [10].
At other times, spontaneous evolution toward healing requires 1-2 years, as in the case of the blocked shoulder. If a well-defined program is not established, and in the absence of proper treatment, the shoulder blockage may persist for several months; however, with the passage of time, sometimes after about 6 months, the shoulder may begin to release gradually and most patients will regain full mobility [18].
The evolution of scapulo-humeral periarthritis usually ends within a few weeks, after which the patient can resume his activities, although a feeling of discomfort may persist, brought on by fatigue or by cold and dampness [10].
Modern Therapies used in the Treatment of the Scapulo-Humeral Periarthritis
Shockwave therapy is a modern technology that uses shock waves to treat chronic pain in the musculoskeletal system. It is based on the generation of very intense energy in a very short period of time (10 milliseconds), the shock wave crossing the tissues at a speed greater than the speed of sound. In shoulder disorders, shockwave therapy has proven effective: in 81% of cases it reduced pain and increased shoulder mobility [11]. High-power laser therapy is based on the principles of the low-power laser, but with a maximum power 50 times higher. The wave stimulates free nerve endings and immediately relieves pain (biostimulation and analgesia program). If used immediately after shockwave applications, the pain is greatly alleviated (29).
The Kinetic Therapy in Scapulohumeral Periarthritis
The kinetic treatment of shoulder recovery in rheumatic disorders has the following objectives: pain control, achieved mainly through positioning and postures in the functional position; restoring/maintaining mobility, through passive, active and resisted exercises; restoring the strength and stability of the shoulder, through resistance exercises; restoring the controlled movement of the shoulder; and maintaining or improving the ability to perform daily gestures, by training the arm [22]. During the recovery sessions, the kinetotherapeutic program contains exercises progressing from simple to complex, with breaks of 30 seconds to one minute between series. Thus, in the first session, passive movements, pendulum exercises, antigravity exercises and cane-assisted exercises can be performed, after which new exercises are added in the following sessions: loading with one's own body weight, progressive resistance, and the use of the objects in the recovery room
(sandbags, mattresses, shoulder wheel, medicine ball), within the functional limits of the patients. On the return to the initial position, the exercise is performed slowly and in a controlled manner, taking care at the same time to maintain a proper inspiration-expiration breathing pattern.
Discussion
It is vitally important that specialized staff participate in the recovery process, namely a trained kinetotherapist who shapes both the kinetic program and the way it is carried out, as well as the patient's state of mind. Improving functional rehabilitation techniques through kinetic programs, together with good patient-kinetotherapist-doctor cooperation, leads to better recovery within the limits of each patient's remaining functional capacity. | 3,704.4 | 2019-12-12T00:00:00.000 | ["Medicine", "Biology"] |
Probing top quark FCNC couplings in the triple-top signal at the high energy LHC and future circular collider
Our main aim in this paper is to present detailed studies probing the top quark flavor changing neutral current (FCNC) interactions at the $tqg$, $tq\gamma$, $tqH$ and $tqZ (\sigma^{\mu \nu}, \gamma_{\mu})$ vertices in the triple-top signal $p p \to t t\bar t \, (\bar t t \bar t)$ at the high-energy proposal of the Large Hadron Collider (HE-LHC) and the future circular hadron-hadron collider (FCC-hh). To this end, we investigate the production of three top quarks arising from the FCNC couplings, taking into account fast detector simulation at $\sqrt{s} = 27$ TeV for the HE-LHC and 100 TeV for the FCC-hh, considering integrated luminosities of 10, 15 and 20 ab$^{-1}$. All the relevant backgrounds are considered in a cut-based analysis to obtain the limits on the anomalous couplings and the corresponding branching ratios. The obtained exclusion limits on the coupling strengths and the branching ratios are summarized and compared in detail with the results in the literature, namely the most recent direct LHC experimental limits as well as the HL-LHC projections. We show that, for the higher-energy phase of the LHC, a dedicated search for the top quark FCNC couplings can achieve much better sensitivities with the triple-top signal than with other top quark production scenarios. We find that the limits on the branching ratios of the $tqg$ and $tqH$ transitions could reach an impressive sensitivity, and the obtained 95\% CL limits are at least three orders of magnitude better than the current LHC experimental results as well as the existing HL-LHC projections.
The top quark, with a mass of m_top = 173.0 ± 0.4 GeV [1], close to the electroweak symmetry breaking scale, is the most sensitive probe in the search for evidence of new physics beyond the Standard Model (SM) at hadron and lepton colliders [2,3]. Due to the large mass of the top quark, its production processes and the related theoretical and experimental studies are golden channels in which to look for possible signatures of new physics at TeV scales. Whilst improving the precision of SM predictions is highly important in its own right, studies of the top quark that probe signatures of new physics are also most welcome. In this respect, over the past few years, several dedicated studies have shown that the non-SM couplings of the top quark should be one of the key analysis programs pursued at the Large Hadron Collider (LHC) [2,[4][5][6][7]. These dedicated studies have been carried out in top-quark-related processes, most notably in single top quark production [8] or top quark pair production [9,10] scenarios. Among them, the flavor changing neutral current (FCNC) interactions involving a top quark, other quark flavors and a neutral gauge boson are of particular interest.
The FCNC interactions of the top quark are forbidden at tree level and, due to the Glashow-Iliopoulos-Maiani (GIM) mechanism [11], are highly suppressed at loop level. The SM predictions for the top quark FCNC decays to a gluon, photon, Z or Higgs boson and an up or charm quark are expected to be of the order of O(10^-12 - 10^-17), which is currently out of reach of present and even future experimental sensitivity [12]. However, in beyond-SM scenarios such suppression can be relaxed, yielding couplings orders of magnitude larger than those of the SM [13][14][15][16][17][18][19][20]. Hence, a possible deviation from the SM predictions for the FCNC couplings would imply the existence of new physics beyond the SM [21]. In recent years, there has been a growing number of analyses focusing on this topic, and to date there are many phenomenological studies in the literature that have extensively investigated the associated production of a top quark with a gluon, photon, Higgs or Z boson, mainly through single or double top production at hadron and lepton colliders; see e.g. Refs. [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35] for the most recent reviews.
At the LHC, top pair production pp → tt and single top quark production are the dominant processes, owing to the strong coupling of the gg → tt subprocess [8,36]. The production of an odd number of top quarks, i.e. triple-top quarks pp → ttt (ttt), requires a tbW vertex in every diagram. Since it also often involves a b-quark in the initial state of the hard process, it is significantly suppressed in comparison with the strong processes. At the 14 TeV energy of the LHC, the triple-top quark production cross section, σ ≈ 1.9 fb, is almost five orders of magnitude smaller than that of top pair production, which is the dominant top production mechanism at the LHC. This relatively small SM production cross section of three top quarks makes it an interesting channel for investigating any signal of new physics.
The LHC at CERN and its luminosity upgrade (HL-LHC) [37][38][39][40][41][42] will continue the search for any signal of new physics over the next two decades. In addition to the HL-LHC, there are other proposals for future higher-energy hadron colliders to perform direct searches at the energy frontier. These include an energy upgrade of the LHC to a 27 TeV center-of-mass energy (HE-LHC) [37][38][39] and a future circular collider with about 100 TeV center-of-mass energy, the FCC-hh [43]. They will collect datasets corresponding to integrated luminosities of 10-20 ab−1 and 10 ab−1, respectively. The high-energy and high-luminosity reach of these colliders strongly motivates searches for the FCNC couplings of the top quark. Considering the physics needs of these proposed colliders and our discussion of three-top production, one can conclude that the triple-top signal at the HE-LHC or FCC-hh may potentially provide clear evidence for top quark FCNC couplings. In this paper, we set out an initial study of the triple-top signal and present a detailed study probing the top quark FCNC interactions at the tqg, tqγ, tqH and tqZ(σµν, γµ) transitions at the HE-LHC and FCC-hh. It should be mentioned here that the triple-top quark signal we investigate makes it possible to study all the top quark FCNC interactions tqX.
On the experimental side, efforts performed earlier at the Tevatron at Fermilab and now at the 13 TeV LHC have not revealed any observation of FCNC transitions. However, the bounds on such couplings obtained from these experiments are very strong. Most recently, with the 13 TeV data from CMS and ATLAS, the exclusion limits on the top quark FCNC transitions have significantly improved. The CMS and ATLAS Collaborations at CERN have reported the most stringent constraints through direct measurements [44][45][46][47][48][49][50][51][52][53][54][55][56][57].
These collaborations have set upper limits on the tqH FCNC couplings in the top sector at √s = 13 TeV, considering integrated luminosities of 36.1 fb−1 (ATLAS) and 35.9 fb−1 (CMS). Considering the analyses of the different top FCNC decay channels, the 95% confidence level (CL) upper limits have been found to be Br(t → uH) < 0.19% and Br(t → cH) < 0.16% from the ATLAS [45], and Br(t → uH) < 0.34% and Br(t → cH) < 0.44% from the CMS [56] Collaborations. In addition to these direct collider measurements of the tqH couplings, single top quark production in the t channel has been used to set limits on the top quark FCNC interaction with a gluon, tqg, using the data taken with the CMS detector at 7 and 8 TeV, corresponding to integrated luminosities of 5.0 and 19.7 fb−1. Upper limits on the branching fractions of Br(t → ug) < 0.002% and Br(t → cg) < 0.041% have been measured [50]. A search for FCNC through single top quark production in association with a photon has also been performed by CMS at √s = 8 TeV, corresponding to an integrated luminosity of 19.8 fb−1. Upper limits at the 95% CL on the tqγ anomalous couplings are measured to be Br(t → uγ) < 0.013% and Br(t → cγ) < 0.17% [47]. Finally, searches for the FCNC top quark decays t → qZ in proton-proton collisions at √s = 13 TeV have been carried out by both the CMS and ATLAS Collaborations through different channels. Upper limits at the 95% CL on the branching fractions of the top quark decays are found to be Br(t → uZ) < 0.015% and Br(t → cZ) < 0.037% from the CMS [46], and Br(t → uZ) < 0.024% and Br(t → cZ) < 0.032% from the ATLAS [52] Collaborations, for integrated luminosities of 35.9 and 36.1 fb−1, respectively.
In this paper, we investigate in detail the projected sensitivity and discovery prospects of the HE-LHC and FCC-hh for the top quark FCNC transitions in a model-independent way, using an effective Lagrangian framework. To this end, we follow the strategy presented in [12,58] and quantify the expected sensitivity of the HE-LHC and FCC-hh to the top quark FCNC couplings tqX. Realistic detector effects are included in the production of the signal and background processes following the most up-to-date experimental studies, carefully considering the upgraded CMS detector performance [59] for the HE-LHC and the FCC-hh baseline detector configuration embedded into Delphes. As we will demonstrate, the expected constraints on tqg and tqH from the HE-LHC and FCC-hh are significant and fully complementary to those from the LHC and HL-LHC.
This article is arranged as follows. In Sec. II, we present the theoretical framework and the effective Lagrangian approach for the top quark FCNC couplings. The details of the analysis strategy applied in this investigation are discussed and presented in Sec. III. This section also includes the signal and background estimations, the simulations and the detector effects for the HE-LHC and FCC-hh. We detail in Sec. IV the statistical method we assume, together with the numerical calculations and distributions for the HE-LHC and FCC-hh. Sec. V presents the numerical results and findings in detail. The 95% confidence level (CL) limits of the HE-LHC and FCC-hh are compared with the measured LHC limits and with other studies in the literature. Finally, in Sec. VI, we conclude and summarize our main results and findings.
II. THEORETICAL FRAMEWORK AND ASSUMPTIONS
This section presents the theoretical framework and assumptions applied in this analysis to study the top quark FCNC transitions at the HE-LHC and FCC-hh. The possibility of anomalous top quark FCNC couplings with light quarks (q = u, c) and gauge bosons (g, H, Z, γ) is explored in a model-independent way considering the most general effective Lagrangian approach [58,60]. In the search for anomalous FCNC interactions at high energy colliders, this approach has been extensively studied in the literature for lepton and hadron colliders [21][22][23][24][25][26][27][28][29][30][31][32][61][62][63][64][65][66][67][68][69][70][71]. In this framework, the FCNC vertices are described by higher-dimensional effective operators L_FCNC^{tqX}, independently of the underlying theory. Up to dimension-six operators, the FCNC Lagrangian of the tqg, tqH, tqZ(σµν), tqZ(γµ) and tqγ interactions can be written as in Eq. (1) of Refs. [12,58,60]. In Eq. (1), the real parameters ζqt, ηqt, κqt, Xqt and λqt represent the strengths of the FCNC interactions of a top quark with the gluon, the Higgs boson, the Z boson (tensor and vector couplings) and the photon, respectively, and q denotes an up or charm quark. At tree level in the SM, all the above coefficients are zero, and in the presence of the anomalous FCNC vertices the straightforward way to proceed is to set limits on these coupling strengths and the corresponding branching fractions. In the above equation, gs is the strong coupling constant and P_L(R) denotes the left- (right-)handed projection operator. In this study, we assume no specific chirality for the anomalous FCNC interactions, and hence we set ζLqt = ζRqt = ζqt, ηLqt = ηRqt = ηqt, κLqt = κRqt = κqt, XLqt = XRqt = Xqt and λLqt = λRqt = λqt. As mentioned in the introduction, the triple-top quark signal includes all the top quark FCNC interactions and hence makes it possible to study all these coefficients in this signal topology.
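The Lagrangian referred to as Eq. (1) is not reproduced above. A standard dimension-six parameterization consistent with the couplings listed in the text — one commonly used in the FCNC literature; the exact normalization and sign conventions of Refs. [12,58,60] may differ — reads:

\begin{equation}
\begin{split}
-\mathcal{L}^{tqX}_{\mathrm{FCNC}} = \sum_{q=u,c} \Big[\,
  & g_s\, \bar{q}\,\frac{\lambda^a}{2}\,\frac{i\sigma^{\mu\nu} q_\nu}{m_t}
    \big(\zeta^{L}_{qt} P_L + \zeta^{R}_{qt} P_R\big)\, t\, G^a_\mu
  + \frac{1}{\sqrt{2}}\,\bar{q}\,\big(\eta^{L}_{qt} P_L + \eta^{R}_{qt} P_R\big)\, t\, H \\
  &+ \frac{g}{2 c_W}\,\bar{q}\,\frac{i\sigma^{\mu\nu} q_\nu}{m_Z}
    \big(\kappa^{L}_{qt} P_L + \kappa^{R}_{qt} P_R\big)\, t\, Z_\mu
  + \frac{g}{2 c_W}\,\bar{q}\,\gamma^{\mu}
    \big(X^{L}_{qt} P_L + X^{R}_{qt} P_R\big)\, t\, Z_\mu \\
  &+ e\,\bar{q}\,\frac{i\sigma^{\mu\nu} q_\nu}{m_t}
    \big(\lambda^{L}_{qt} P_L + \lambda^{R}_{qt} P_R\big)\, t\, A_\mu
  \Big] + \mathrm{h.c.}
\end{split}
\end{equation}

Here σ^{µν}q_ν involves the momentum of the emitted boson, λ^a are the Gell-Mann matrices, c_W is the cosine of the weak mixing angle, and P_{L,R} are the chirality projectors; with the no-specific-chirality assumption stated above, the L and R superscripts are dropped.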
In this study, we consider the pp → ttt (ttt) signal process to search for anomalous FCNC tqX (X = g, H, Z, γ) interactions in the presence of the effective Lagrangian of Eq. (1). In order to provide more detail on the FCNC vertices, we present in Fig. 1 a representative Feynman diagram contributing to this signal process at tree level. As one can see from Fig. 1, this Feynman diagram contains tqg vertices (red circle), which makes it possible to study the top quark FCNC coupling in this signal process. We consider the leptonic decays of the W bosons originating from the same-sign top quarks, which lead to same-sign dilepton final states, while the third top quark decays hadronically. In order to provide more insight into the available FCNC transitions in triple-top production, we present in Fig. 2 a set of Feynman diagrams contributing to the FCNC vertices q → tH, q → tγ and q → tZ. The FCNC vertices are shown as red circles.
In Fig. 3, we show the total signal cross sections, in units of fb, in the presence of anomalous tqX couplings versus the top quark FCNC branching ratios Br(t → qX) for five different signal scenarios.
The following conclusions can be drawn from the results presented in Fig. 3. As can be seen, in terms of the individual tqX couplings, the largest contributions to the triple-top signal come mainly from the tqg coupling, followed by tqH. This reflects the large parton distribution functions (PDFs) of the u-quark and the gluon in the calculation of the cross section at high center-of-mass energy. In our study, we show that the sensitivity to the branching ratios of the tqg and tqH channels is much better than the current LHC experimental limits, and even better than the projected limits on top FCNC couplings at the HL-LHC with an integrated luminosity of L_int = 3000 fb−1. These findings suggest that the measurement of the tqg and tqH FCNC couplings through triple-top production at a future high energy collider would carry a significant amount of information and is therefore among the most sensitive probes in the search for new physics beyond the SM.
III. ANALYSIS STRATEGY AND NUMERICAL CALCULATIONS
As we mentioned, in this study we investigate the discovery potential of the future HE-LHC with 27 TeV center-of-mass energy and the FCC-hh with 100 TeV center-of-mass energy for the top quark FCNC transitions. To this end, we follow a strategy based on an effective Lagrangian approach to describe the top quark FCNC in a model-independent way. Many studies of new physics searches have been performed in the literature to enrich the physics motivations of such proposed colliders [37][38][39]59]. One of our main goals in this paper is to study the impact of the HE-LHC and FCC-hh on the determination of the top quark FCNC couplings. After introducing our theoretical framework and assumptions in the previous section, we now present the analysis strategy and numerical calculations related to our study. We first discuss the tqX signal and background analysis. Then, we present the simulation and realistic detector effects for both the HE-LHC and FCC-hh.
Figure 1: The Feynman diagram for triple-top quark production containing the tqg anomalous FCNC vertex. As described in the text, we consider the leptonic decays of the W bosons originating from the same-sign top quarks, which lead to same-sign dilepton final states.
Figure 2: The Feynman diagrams for triple-top quark production in the presence of the FCNC q → tH, q → tγ and q → tZ vertices.
A. The tqX signal and SM backgrounds
In this section, our study of the pp → ttt (ttt) signal process including the FCNC tqX (X = g, H, Z, γ) couplings, as well as the relevant SM backgrounds at the HE-LHC and FCC-hh, is given. This signal allows searching for all the FCNC couplings tqg, tqH, tqZ and tqγ independently. We also perform this study separately for q = u and q = c. For the triple-top quarks, we consider both hadronic (jj) and leptonic (ℓν) decays of the W bosons, analyzing a very clean signature with two same-sign leptons (ℓ±ℓ±), where the lepton can be an electron or a muon. The signal analysis is thus performed with a final state of two same-sign leptons, missing transverse energy and jets. These unique signal events are characterized by the presence of exactly two isolated same-sign charged leptons (2ℓ+ or 2ℓ−). In addition, there should be large missing transverse energy (MET) from the undetected neutrinos. The signal is also characterized by several jets, three of which should come from b-quarks. As one can see from the Feynman diagrams (Figs. 1 and 2), the top quark FCNC couplings can be probed by considering the appearance of subprocess diagrams like tqX → ttt (ttt) with q = u, c and X = g, H, Z, γ.
Considering these signal scenarios, the following relevant background processes, which have a similar final state topology, need to be taken into account: ttZ, in which the Z decays to a pair of opposite-sign isolated leptons (Z → ℓ+ℓ−) with semi-leptonic decay of one top quark and fully hadronic decay of the other; ttW, with leptonic decay of the W boson (W → ℓν), semi-leptonic decay of one top quark and fully hadronic decay of the other; and WWZ, in which the Z decays to a pair of leptons (Z → ℓ+ℓ−) and each W decays either to a quark-antiquark pair (hadronically) or to a charged lepton (ℓ±) and a neutrino (leptonically). These are the most important sources of background analyzed in this study. In addition, we also consider other sources such as SM four-top production, ttH and ttWW in our study of the MET + jets + leptons final state. In order to minimize the contributions of these backgrounds, different selection cuts are applied, which will be discussed in detail in Sec. IV. We show that by applying a same-sign isolated dilepton and 3 b-jet selection, some of these SM backgrounds can be strongly reduced and safely ignored, and hence they are not considered further. Some selected examples of partonic Feynman diagrams for the ttW, ttZ and WWZ backgrounds analyzed in this study are shown in Fig. 4.
In the next section, we present the simulation of the signal and backgrounds, and the realistic detector effects for the HE-LHC and FCC-hh.
B. The simulation and detector effects
In this section, we present the analysis of the pp → ttt (ttt) signal process including the FCNC tqg, tqγ, tqH and tqZ(σµν, γµ) vertices, as well as all the relevant SM backgrounds, under the experimental conditions of the HE-LHC and FCC-hh. For the simulations of the HE-LHC and FCC-hh collider phenomenology, we use FeynRules [72] to extract the Feynman rules from the effective Lagrangian of Eq. (1). The Universal FeynRules Output (UFO) files have been generated [73] and then fed to the Monte Carlo event generator MadGraph5_aMC@NLO [74,75] to produce the event samples for the signal processes. MadGraph5_aMC@NLO [74,75] is also used to generate the background processes. The samples are generated using the leading order (LO) NNPDF23LO1 parton distribution functions (PDFs) [76][77][78], with the renormalization and factorization scales set to the top quark mass, µ = µF = µR = m_top. For the parton showering, fragmentation and hadronization of the generated signal and background events, we use Pythia 8.20 [79]. During the production of the signal and background samples, all jets inside the events are clustered using FastJet 3.2 [80] with the anti-kt jet clustering algorithm and a cone radius of R = 0.4 [81]. Finally, we pass all generated events through Delphes 3.4.2 [82], which handles the detector effects.
We should emphasize here that, for the FCC-hh analysis, we use the default FCC-hh detector card implemented in Delphes 3.4.2 in order to account for the realistic detector effects of the FCC-hh baseline detector. In this configuration, the b-tagging efficiency ε_b(p_T), the c-jet tagging efficiency ε_c(p_T) and the misidentification rates for light jets are assumed to depend on the jet transverse momentum; they are given by the parameterizations of Refs. [21,31]. For the HE-LHC projections, in order to produce the Monte Carlo events, we also employ the Delphes framework to perform a comprehensive high-luminosity (HL) CMS detector response simulation. To this end, we use the HL-LHC detector card implemented in Delphes 3.4.2, which includes the upgraded configuration of the CMS detector [36,39,42,59]. The b-tagging efficiency ε_b(p_T) and the misidentification rates for light-flavor quarks are likewise assumed to be p_T dependent (Eq. (3)).
IV. STATISTICAL METHOD FOR THE tqX FCNC ANALYSIS
We detail below the statistical method we assume, together with the numerical calculations and distributions for the HE-LHC and FCC-hh. More details are provided in the next section. As discussed in Sec. III A, the studied topology gives rise to the MET + jets + leptons signature, characterized by five or more jets, missing transverse momentum from the undetected neutrinos, and exactly two same-sign isolated charged leptons. Among these jets, three should be tagged as b-jets. Based on this signal topology, we follow a standard methodology to distinguish the signal from the corresponding SM backgrounds and consider the preselection cuts described here.
Since we use the leptonic channels of the W bosons from the same-sign top (antitop) quark pair in the signal process, exactly two same-sign isolated charged leptons (electrons or muons) are required, nℓ = 2 (ℓ±ℓ±), with |ηℓ| < 2.5 and pT,ℓ > 10 GeV. As highlighted before, one of the key ingredients of the strategy pursued in the present study is the triple-top signal with the topology of two same-sign isolated charged leptons. We will discuss in the next section that an additional cut on the same-sign dilepton invariant mass distribution (Mℓ±ℓ± > 10 GeV) needs to be applied to suppress events with pairs of same-sign energetic leptons from heavy-hadron decays in the backgrounds. Since we consider the doubly leptonic decay of the W bosons in the final state, triple-top signals include a substantial amount of missing transverse energy; we apply E_T^miss > 30 GeV. Our signal scenario also requires at least five jets, n_j ≥ 5, with |η_jets| < 2.5 and p_T^jets > 20 GeV. We also require a minimum distance between the leading leptons and the jets, ∆R(ℓ, j_i) = sqrt((∆φ_{ℓ,j_i})^2 + (∆η_{ℓ,j_i})^2) > 0.4, where ∆φ and ∆η are the azimuthal-angle and pseudorapidity differences between the two objects. The same requirement is applied between any two jets, ∆R(j_i, j_j) > 0.4. Among the selected jets, at least three need to be tagged as b-jets, i.e. n_b−jets ≥ 3.
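As an illustration of how these preselection requirements combine, the following minimal Python sketch applies them to a flat event record; the event format (dictionaries with pt/eta/phi/charge/btag fields) is an assumption for illustration, not the actual analysis code, and only the cuts quoted above are implemented.

import math

def delta_phi(phi1, phi2):
    """Azimuthal-angle difference wrapped into [-pi, pi)."""
    return (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi

def delta_r(a, b):
    """Angular separation Delta R between two objects with 'eta' and 'phi' keys."""
    return math.hypot(a["eta"] - b["eta"], delta_phi(a["phi"], b["phi"]))

def passes_preselection(event):
    """Return True if the event passes the cut-based preselection described in the text."""
    # Exactly two isolated same-sign charged leptons with pT > 10 GeV and |eta| < 2.5
    leptons = [l for l in event["leptons"] if l["pt"] > 10.0 and abs(l["eta"]) < 2.5]
    if len(leptons) != 2 or leptons[0]["charge"] != leptons[1]["charge"]:
        return False
    # Missing transverse energy above 30 GeV
    if event["met"] <= 30.0:
        return False
    # Jets with pT > 20 GeV and |eta| < 2.5, separated from the leptons by Delta R > 0.4
    jets = [j for j in event["jets"] if j["pt"] > 20.0 and abs(j["eta"]) < 2.5]
    jets = [j for j in jets if all(delta_r(l, j) > 0.4 for l in leptons)]
    # Jet-jet separation Delta R > 0.4 (largely automatic for anti-kt jets with R = 0.4)
    for i in range(len(jets)):
        for k in range(i + 1, len(jets)):
            if delta_r(jets[i], jets[k]) <= 0.4:
                return False
    # At least five jets, at least three of them b-tagged
    if len(jets) < 5 or sum(1 for j in jets if j["btag"]) < 3:
        return False
    return True

The additional Mℓ±ℓ± > 10 GeV requirement mentioned above would be applied on top of this, but it needs the lepton four-momenta and is therefore not shown in the sketch.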
After adopting our basic cuts and the selection of signal and background events, in the next section we study the signal and the backgrounds at the level of distributions and numerical calculations for the HE-LHC and FCC-hh separately. We note that, in the study discussed in the next section, we concentrate only on the t → qg and t → qH modes as a reference throughout this work when presenting selected distributions. We also choose those distributions that show a good potential to separate the signal from the SM backgrounds.
A. The signal and background analysis and distributions at HE-LHC
After introducing the simulation, the detector effects and the event selection in the previous sections, in this section we present the numerical calculations and distributions for the HE-LHC scenario.
Let us now present and discuss the cross sections of the triple-top signal and of all SM backgrounds in order to give a basic idea of their production rates. As discussed before, the ttZ, ttW and WWZ SM backgrounds are the main backgrounds considered in this study. We also include other sources of SM backgrounds such as SM four-top production, ttWW and ttH; however, we find that these have small contributions to the total background composition. The cross sections, in units of fb, for the ttZ, ttW and WWZ SM backgrounds passing the sequential selection cuts are presented in Table I for the HE-LHC at √s = 27 TeV. As one can see from Table I, these selection criteria significantly suppress the large contributions of background events originating from ttZ and ttW, and especially from WWZ. Within the selection strategy, the requirement of two same-sign isolated leptons (nℓ = 2, ℓ±ℓ±) reduces these backgrounds, leading to selection efficiencies of 12%, 37% and 12% for the ttZ, ttW and WWZ backgrounds, respectively. Selecting jets and b-jets considerably affects all backgrounds as well. For example, the efficiency of the b-jet selection is about 0.3% for ttZ, 0.6% for ttW and 0.001% for WWZ, all of which have the same final state as the signal. These small efficiencies indicate that requiring three tagged b-jets can strongly reduce the SM backgrounds. The sum of the cross sections for all SM backgrounds after all cuts is found to be 0.249 fb.
After this discussion of the numerical calculations for the backgrounds, let us now present our signal calculations. We note that the fixed values ζqt = 0.1, ηqt = 0.1, κqt = 0.1, Xqt = 0.1 and λqt = 0.1, with q = u, c, are chosen as the benchmark point for all coupling strengths unless stated otherwise. Taking these typical benchmark inputs for the triple-top signal, the expected cross sections before and after the selection cuts are presented in Table II.
The following conclusions can be drawn from the cut-flow table presented in Table II. For all triple-top signal topologies, an efficiency of around 35% is obtained with the same-sign dilepton selection, and 12-15% efficiency is achieved after the jet selection. As one can see, after the full sample selection, around 6-7% of the signal events pass the selection criteria. Now let us discuss the triple-top signal and the SM backgrounds at the distribution level. The characteristic signature of the triple-top signal process analyzed in this study suggests working with events having at least two isolated same-sign leptons, which can be electrons or muons, large missing transverse energy (MET), and at least 5 jets, three of which are required to be identified as jets originating from b-quarks. Considering these signal scenarios and all relevant SM backgrounds, we show in Fig. 5 the jet and b-jet multiplicities for the triple-top signal and the main SM background events before applying the jet and b-jet selection. All the plots are unit normalized. As mentioned before, we concentrate only on the t → qg and t → qH modes as a reference when presenting selected distributions. Hence, for the signal in these figures, only one coupling (ζtq or ηtq with q = u, c) at a time is varied from its SM value. It can be concluded from these plots that the requirement of at least five jets (n_jets ≥ 5) is useful for reducing the contributions of the SM backgrounds. In addition, selecting at least three b-tagged jets, n_b−jets ≥ 3, among those jets is also useful for suppressing the SM backgrounds.
B. The signal and background analysis and distributions at FCC-hh
In this section, we focus on the numerical calculations for the FCC-hh collider for the triple-top signal analyzed in this study. As mentioned before, in addition to the HE-LHC we also investigate the potential of the future FCC-hh collider for the top quark FCNC couplings at a center-of-mass energy of 100 TeV, and present our study setting upper limits on the anomalous top FCNC tqX (X = g, H, Z, γ) vertices including realistic detector effects. One can expect the higher center-of-mass energy of the FCC-hh collider to lead to an improvement of these limits. For the FCC-hh analysis, we follow the same strategy applied in the HE-LHC case, namely the semileptonic final state with the two same-sign W bosons decaying to either an electron or a muon and the other W decaying hadronically. We also use the same selection cuts for the signal and backgrounds. We expect, and do find, a similar peak in the jet and b-jet multiplicity distributions for the signal and for the backgrounds as in the HE-LHC case presented in Fig. 5. The difference between the HE-LHC and FCC-hh mainly comes from the different detector configurations. As discussed in Section III B, for the FCC-hh analysis we use the default FCC-hh detector card implemented in Delphes 3.4.2 in order to account for the realistic detector effects of the FCC-hh baseline detector. All the SM backgrounds listed above for the HE-LHC also apply to the FCC-hh.
Notice that, although the colliding energy of the FCC-hh is different, we use the same selection cuts, which were optimized for the HE-LHC; we find that these selections are also reasonable for the FCC-hh. With this in mind, we present in Table III the cross sections of the SM backgrounds at the FCC-hh before and after applying the selection cuts. As one can see, the selection of two same-sign isolated charged leptons significantly affects the ttZ and WWZ backgrounds, with an efficiency of 11%; for the ttW background this selection has an efficiency of 43%. This indicates that, among all the backgrounds, ttW is indeed the hardest to suppress. One can also see that the jet selection affects the mentioned backgrounds considerably. Finally, our selection strategy leads to overall efficiencies of 0.8% for ttZ, 2% for ttW and 0.004% for WWZ.
Our numerical calculations of the triple-top signal cross sections at leading order in the presence of the top quark FCNC vertices are presented in detail in Table IV. This table shows the cut-flow dependence of the various signal scenarios studied here. One clearly sees that we achieve a selection efficiency of 6% for tug, 10% for tcg and around 15% for all other signal scenarios.
Table II: Cross sections in units of fb for triple-top production at the HE-LHC, pp → ttt (ttt) with ℓ = e, µ, for the five signal topologies tqg, tqH, tqZ(σµν), tqZ(γµ) and tqγ, before and after passing the sequential selection cuts.
For completeness, we also depict in Fig. 6 some selected distributions, including the cosine of the angle between the two same-sign leptons, cos θ(ℓ±, ℓ±) (left), and the dilepton invariant mass distribution (right), for the tqg and tqH signal scenarios and the corresponding ttZ and ttW SM backgrounds at the FCC-hh collider. As explained before, an additional cut on the same-sign dilepton invariant mass distribution (Mℓ±ℓ± > 10 GeV) has been applied to suppress events with pairs of same-sign energetic leptons from heavy-hadron decays in the backgrounds.
V. ANALYSIS RESULTS AND 95% CONFIDENCE LEVEL LIMITS AT HE-LHC AND FCC-HH COLLIDERS
In this section, we present the main results of this study, namely the upper limits on the coupling strengths obtained from the fast simulation of the triple-top signal at the HE-LHC and FCC-hh. First, we discuss the 95% CL limits of the HE-LHC and FCC-hh; then we compare the results with other studies in the literature. Finally, we outline a number of updates and improvements foreseen for future work.
Considering the optimized selections of signal and backgrounds discussed in Sec. IV, and having at hand the signal efficiencies and the number of background events, we set 95% CL upper limits on the anomalous FCNC couplings and determine the expected limits on the FCNC branching fractions Br(t → qX) using a Bayesian approach [83]. The 95% CL constraints on the various FCNC branching fractions, including those for the gluon, Higgs boson, Z boson and photon, are detailed and summarized in Table V for the HE-LHC operating at three integrated-luminosity scenarios of L_int = 10, 15 and 20 ab−1. The most recent experimental constraints on the corresponding branching ratios of the top quark FCNC transitions, obtained by ATLAS and CMS at 95% CL, are also presented [45][46][47]50].
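To make the limit-setting logic concrete, the Python sketch below implements a single-bin Bayesian counting experiment with a flat prior on the signal yield. It is only a schematic illustration of this kind of approach, not the analysis code of Ref. [83]: the background cross section after cuts is the value quoted in the text, while the signal efficiency is an assumed placeholder, and systematic uncertainties and channel combinations are ignored.

import math

def log_poisson(n_obs, mean):
    """Log of the Poisson probability of n_obs events for a given mean."""
    return -mean + n_obs * math.log(mean) - math.lgamma(n_obs + 1)

def bayesian_upper_limit(n_obs, b, cl=0.95, s_max=None, steps=20000):
    """Bayesian upper limit on the signal yield s (flat prior) for a
    single-bin counting experiment with known background b."""
    if s_max is None:
        s_max = 10.0 * math.sqrt(b + 1.0) + 20.0   # generous scan range
    ds = s_max / steps
    grid = [i * ds for i in range(steps + 1)]
    log_post = [log_poisson(n_obs, s + b) for s in grid]
    ref = max(log_post)                            # avoid numerical underflow
    post = [math.exp(lp - ref) for lp in log_post]
    total = sum(post) * ds
    running = 0.0
    for s, p in zip(grid, post):
        running += p * ds
        if running >= cl * total:
            return s
    return s_max

# Illustrative numbers (the efficiency is an assumption, not a result of the paper):
lumi_fb = 15_000.0           # 15 ab^-1 expressed in fb^-1
eff_sig = 0.06               # assumed triple-top selection efficiency
bkg_xsec_fb = 0.249          # total background cross section after cuts quoted in the text
b = bkg_xsec_fb * lumi_fb    # expected background yield
n_obs = round(b)             # background-only pseudo-observation for an expected limit

s_up = bayesian_upper_limit(n_obs, b)
sigma_up_fb = s_up / (eff_sig * lumi_fb)
print(f"expected 95% CL upper limit: {s_up:.1f} signal events -> {sigma_up_fb:.3e} fb")

The cross-section limit obtained this way is then translated into limits on the coupling strengths and branching fractions through the dependence of the signal cross section on the couplings shown in Fig. 3.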
A few remarks concerning the results presented in this table are in order. As one can see, with an integrated luminosity of 15 ab−1, the sensitivity to the branching ratios of the tug and tcg channels is three orders of magnitude better than the available experimental limits from the CMS Collaboration [50]. For tuH and tcH, the limits obtained in this study are two and one orders of magnitude better, respectively, than the most recent direct limits on the corresponding branching ratios reported by the ATLAS Collaboration at CERN with an integrated luminosity of 36.1 fb−1 at √s = 13 TeV. As a short summary, based on the HE-LHC projections one can conclude that the most stringent constraints are obtained for the tqg and tqH FCNC transitions, and hence the limits from the triple-top signal can be much better than those from the other channels studied in the literature. For the other FCNC transitions, namely tqγ and tqZ, we determined branching ratios comparable to the recent experimental limits [46,47]. This finding indicates that the triple-top signal analyzed in this study is not very sensitive to these particular FCNC vertices. At this stage, we turn to the upper limits on the signal rates at 95% CL for the FCC-hh scenario. We follow exactly the statistical method used for the HE-LHC study to set upper limits on the coupling strengths and the resulting branching fractions. The branching fractions Br(t → qX) for the FCC-hh at a luminosity of 10 ab−1 are detailed in Table VI. Similar conclusions to those for the HE-LHC can be drawn for the FCC-hh; however, one can expect further improvements with a higher center-of-mass energy collider. Our study suggests that improvements on the upper limits of all analyzed FCNC couplings can be obtained at the FCC-hh collider, found to be around 80% for the tug coupling and one order of magnitude for the tcg coupling with respect to the HE-LHC. The improvements on these bounds are likely due to the higher center-of-mass energy of the FCC-hh. In light of the results on the limits on the FCNC coupling strengths in the triple-top quark signal presented in Tables V and VI, we confirm the remarkable sensitivity that the HE-LHC and FCC-hh would have to these couplings, especially those of tqg and tqH, which are comparatively much more constrained than other results in the literature.
As a final point, the present research explores, for the first time, the sensitivity of the triple-top signal at the 27 TeV HE-LHC and the 100 TeV FCC-hh to probe the top quark FCNC couplings with a gluon, photon, Higgs boson and Z boson. We have presented and discussed the sensitivity of such colliders and shown that the triple-top signature improves the current LHC limits for most of the top quark FCNC interactions, especially for the tqg and tqH FCNC couplings. We have then examined and highlighted limits for the HE-LHC working at different integrated-luminosity scenarios. In this study, we have introduced some methodological improvements aimed at improving the current limits on the top quark FCNC branching fractions. Regarding the results presented here, a number of important updates and improvements are foreseen. The main scope of the present paper is the beyond-SM study of triple-top quark production arising from anomalous tqX vertices. Along with the phenomenological method presented here, additional observables could be used to suppress the background contributions and enhance the signal significance when extracting the reach on the couplings. A further improvement for future investigation is the use of multivariate techniques [84]; such techniques are expected to improve the limits on the coupling strengths. Finally, one could also consider other channels of top quark production to study top quark FCNC transitions at the HE-LHC and FCC-hh.
Table IV: Cross sections in units of fb for triple-top production at the FCC-hh, pp → ttt (ttt) with ℓ = e, µ, for the five signal topologies tqg, tqH, tqZ(σµν), tqZ(γµ) and tqγ, before and after passing the sequential preselection cuts.
Figure 6: The cosine of the angle between the two same-sign leptons, cos θ(ℓ±, ℓ±) (left), and the dilepton invariant mass distribution (right) for the tqg and tqH signal scenarios, obtained from MadGraph5_aMC@NLO [74,75] at leading order for the FCC-hh at 100 TeV. The main SM backgrounds ttZ and ttW are also presented.
VI. SUMMARY AND CONCLUSIONS
We now present our summary and conclusions. We first compare our limits with the projections of the HL-LHC. Then, we conclude this section with a plot showing a summary of the branching-fraction limits for top quark FCNC interactions in comparison to the SM predictions, as well as to various beyond-SM scenarios.
All the proposed future lepton [85] and hadron [43] colliders, including the high energy large hadron collider (HE-LHC) [86], will likely face critical decision making in the coming years [87]. On the theory side, investigations of the top quark FCNC couplings to the photon, gluon, Z and Higgs boson offer one of the important avenues to explore new physics beyond the SM. Determining the new physics potential of such proposed colliders at the energy frontier, in particular their reach on the top quark FCNC couplings, is an important topic for the high energy physics community. Continued efforts are needed to investigate the new physics accessible at these proposed colliders [88]. In recent years, a considerable amount of literature has been published highlighting the need for future high energy colliders.
Currently, the most stringent limits on the top quark FCNC branching fractions Br(t → qX) have been measured by the ATLAS and CMS Collaborations at the LHC. The results obtained at √s = 13 TeV significantly improve the upper limits set with the 7 and 8 TeV data. It is worth mentioning that the availability of a large number of experimental results on the top quark FCNC transitions from the LHC offers good prospects for pushing the top quark FCNC boundaries to even tighter constraints using future colliders. This paper focused on presenting phenomenological investigations of the sensitivity of the triple-top quark signal at the HE-LHC and FCC-hh to the top quark FCNC couplings tqX with X = g, H, Z, γ. To this end, we have studied triple-top production pp → ttt (ttt) at 27 TeV for the HE-LHC and 100 TeV for the FCC-hh, taking into account the unique signal signature of two same-sign isolated charged leptons.
Numerical values are provided in Tables V and VI. Considering the 27 TeV HE-LHC projections, our results clearly show that, with an integrated luminosity of 15 ab−1, while the LHC obtains limits on the top quark FCNC tqg and tqH channels with a sensitivity of the order of 10−5 and 10−3, the limits on these couplings can reach sensitivities of the order of 10−8 and 10−5, respectively. Consistent with the literature, this study confirms that the most stringent limits are obtained for the case of tqg.
The reach we obtain on the upper limits of the top quark FCNC branching ratios Br(t → qg) and Br(t → qH) is highlighted in Fig. 7 for the two future hadron-hadron colliders considered in this analysis, the HE-LHC and FCC-hh.
As one can see from Fig. 7, our limits on the tqg and tqH branching ratios are much better than the projected limits on top FCNC couplings at the HL-LHC [40,41]. While some of the other calculated limits are approximately similar to those projected for the HL-LHC, note that our analysis considers the triple-top signal, which lets us examine all the FCNC couplings. We believe that our study and the results presented here clearly bring out the advantages of the HE-LHC and FCC-hh colliders in probing the top quark FCNC couplings with the gluon, photon, Z and Higgs boson, which could complement the information that can be extracted from the LHC and HL-LHC.
In order to better place the extracted limits in context, let us conclude by comparing the branching fractions obtained in this study with the theoretical predictions of the SM and of other new physics models beyond the SM. In Fig. 8, our results are compared with the theoretical predictions of the SM as well as with various new physics models. In comparison with those of new physics models, our findings for the future projections of the HE-LHC and FCC-hh represent good prospects for pushing the top FCNC boundaries to even tighter constraints.
ACKNOWLEDGMENTS
We acknowledge fruitful discussions with Daniel Schulte, Mogens Dam, Marco Zaro, Giovanni Zevi and Frederic Deliot in customizing the detector card used in this analysis. The author is thankful to Mojtaba Mohammadi for reading this manuscript and for many helpful discussions and comments. The author thanks the School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM), and the University of Science and Technology of Mazandaran for financial support of this research. The author is also thankful to the CERN theory department for its hospitality and support during the preparation of this paper. | 9,836.4 | 2019-09-09T00:00:00.000 | ["Physics"] |
Current Trends in Metal–Organic and Covalent Organic Framework Membrane Materials
Abstract Metal–organic frameworks (MOFs) and covalent organic frameworks (COFs) have been thoroughly investigated with regards to applications in gas separation membranes in the past years. More recently, new preparation methods for MOFs and COFs as particles and thin‐film membranes, as well as for mixed‐matrix membranes (MMMs) have been developed. We will highlight novel processes and highly functional materials: Zeolitic imidazolate frameworks (ZIFs) can be transformed into glasses and we will give an insight into their use for membranes. In addition, liquids with permanent porosity offer solution processability for the manufacture of extremely potent MMMs. Also, MOF materials influenced by external stimuli give new directions for the enhancement of performance by in situ techniques. Presently, COFs with their large pores are useful in quantum sieving applications, and by exploiting the stacking behavior also molecular sieving COF membranes are possible. Similarly, porous polymers can be constructed using MOF templates, which then find use in gas separation membranes.
Introduction
Membranes as a disruptive technology are able to reduce the global energy consumption in the chemical separation of raw materials, as well as actively reduce greenhouse gases, and thus form the basis for a sustainable future. [1,2] Membrane technology in the petrochemical sector alone could replace distillation processes and save up to 80% of the energy used in separation processes, which could lead to 8% savings in the global energy consumption. More than half of these separations relate to gas separations (Figure 1). [1][2][3] Porous membranes have come a long way from the first description of metal-organic framework (MOF) mixed-matrix membranes (MMMs) using MOF-5, [4] the first neat MOF membranes starting with Mn(HCO2)2 in 2007, [5] and the development of the first ZIF-8 membranes in 2009, [6] to today's state-of-the-art membranes. Covalent organic frameworks (COFs) were used much later for gas separation membranes, since water stability was one of their early issues. [7,8] Nevertheless, the first neat and 3D COF membranes comprising COF-320 date back to 2015, [9] whereas the first experimental CO2-separating MMMs using exfoliated NUS-2 and NUS-3 sheets were reported in 2016. [10] When MOF membranes were first developed, the aim was to make these membranes as thick as possible, and membranes of 20-300 µm thickness were synthesized. This originated more or less from experience with zeolite membranes, and it changed drastically with thin films. [6,11,12] Especially for gas separation and purification, MOFs and COFs show great potential that needs to be unlocked. Today we know that thinner layers are better due to two main factors: higher flux and better selectivity. However, defects are still an issue with thinner films, and single crystals might be considered for recording permeation data. Nevertheless, the goal is downsizing, with the preparation of thin films on the nanometer scale and the use of nanoparticles with the best possible polymer-filler interactions in MMMs. Additionally, the development of methods for high reproducibility and control over processes is needed. The aim of this Minireview is to give an overview and highlight trends for the next steps in MOF and COF membrane research, paying particular attention to novel and very hot topics.
Figure 1. A simplified depiction of the world energy consumption and the amount used only for separation tasks in the production of primary chemicals. [1][2][3]
The processability of MOF and COF materials is of increasing importance and has caught the attention of the scientific community, leading to many derivative materials with extreme potential, such as porous liquids, [13][14][15] amorphous, porous MOF-based glasses, [16][17][18] and porous organic polymers, [19][20][21] bringing MOFs/COFs up to the next level. Also, from a more fundamental point of view, it is important to step away from the random testing of materials. We will show published data that leads to a deeper understanding of the materials properties, from experiment and theory. [22,23] To actually find experimental model systems for membrane separation, single-crystal permeation testing is necessary. [24,25] We will also address stimuli-responsive MOF materials, where gas transport has been followed in situ and framework effects such as gate-opening, vibrational modes, [26] and electrostatic interactions between guests, linkers, and metal centers could be investigated. [27] For real-life applications, good processability and performance of MOF and COF membranes are crucial, which is more a matter of post-processing than of the original material. Park et al. recently published a paper in which they show that material development is the most important step towards well-performing membranes. [28] Making new materials out of existing ones by novel processing methods leads to advanced materials. [29] Advanced separation techniques and devices will be highlighted here as well, such as quantum sieving with MOF and COF membranes for isotope separation. [30,31] Separation processes are among the greatest challenges worldwide, and using membranes could help save the planet [2] by reducing greenhouse gas emissions, either actively in CO2 separations, or passively by saving energy. Making polymer-filler MMMs from MOF particles is in general simple and cheap. [32] Since most COFs grow to form sheet-like structures anyway, we will dive deeper into MOFs here, where obtaining sheet-like particles is not so trivial. Owing to the high structural variety that is offered by the chemistry of MOF materials, special techniques are needed to prepare sheet-like particles.
Sheet-Like MOF Particles
MOF nanosheets used in the production of ultrathin MMMs generally lead to high performance in separation applications. Alignment in thin polymer composite films is guaranteed by the shearing forces of the casting approach, making sheet-like particles extremely interesting for polymer composite films.
A very interesting example of the preparation of sheets was published by Peng et al. in 2014, [33] where the lamellar structure of Zn2(bIm)4 allows the soft physical exfoliation of sheets by wet ball-milling and mild chemical delamination (Figure 2A-C). In addition to the use of the particles for MMMs, they used a filtration technique to deposit the MOF particles as an ultrathin film on a very rough, porous Al2O3 support (Figure 2C). Achieving a layer of 5 nm thickness on a support with that amount of roughness is not possible by solvothermal growth methods or layer-by-layer deposition; a solvothermal growth technique will always result in a greater thickness in order to form a dense and gas-separating layer. [33]
Many lamellar growing crystals can be exfoliated chemically, which was also shown by Pustovarenko et al. in 2018. [34] They used a surfactant-assisted approach for the synthesis of nanosheets. The first solution contains Al(NO3)3·9H2O and the surfactant hexadecyltrimethylammonium bromide (CTAB); the other solution contains the deprotonated 1,4-benzenedicarboxylic acid (BDC) linker together with 2-aminoterephthalic acid (2-ATA) as a promoter. After both solutions are heated, nucleation is induced by blending them. The CTAB forms a lamellar phase and the MOF grows as approximately 100 nm × 100 nm sheets between the surfactant lamellae. [34] Another approach is diffusion-mediated synthesis at a two-phase interface, as reported by Rodenas et al. [35] (Figure 2D-F). MOF-2(Cu), which already grows as a lamellar MOF, is synthesized at a two-liquid interface; by diffusion control, the sheets grow along the polar/nonpolar interface. AFM (atomic force microscopy) analysis proves to be a lot more accurate than SEM (scanning electron microscopy) imaging for determining sheet thickness. [35]
Figure 2. A) SEM image and B) crystal structure of Zn2(bIm)4; C) SEM image of Zn2(bIm)4 sheets deposited by filtration as a thin membrane film. From Y. Peng et al., [33] reprinted with permission from AAAS 2014. D) SEM image showing the layered morphology of CuBDC. E, F) AFM analysis of delaminated CuBDC sheets. Reprinted from T. Rodenas et al. [35] with permission from Springer Nature 2014.
We highlight the production of MOF sheets here, since the morphology has a strong impact on the gas-separation performance of the membrane. Kang et al. [36] report big differences for [Cu2(ndc)2(dabco)]n in bulk, cubic, and sheet-like morphologies. They evaluated the different morphologies by the MMM performance in precombustion hydrogen separation and find that 1) downsizing to nanocrystals increases the performance drastically, whereas 2) the use of nanosheets increases the performance further and leads to benchmark performances. [37,36] Also the Tsapatsis group reported a strong increase in the selectivity and permeability of almost 70% when sheet-like particles are used. Their approach is to directly synthesize Cu(BDC) nanosheets of 2.5 µm in length and/or width and only 25 nm thickness. For CO2/CH4 separation, they find higher values with nanosheets than with spherical particles; they also predicted the performances theoretically and came to the same conclusion. [38] A rather important aspect of the incorporation of MOFs into MMMs is the compatibility of filler and polymer. [38,39] The implementation of simulations could be a good hint towards the defect density in MOF membranes, [38,40] as it is already widely used to predict the best polymer-filler pairs. [41]
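As a concrete example of the kind of quick estimate that often precedes such simulations, the Maxwell effective-medium model is the simplest way to predict how a given polymer-filler pair might perform in an MMM at low filler loading. The following Python sketch is a generic illustration (it is not the simulation approach of Refs. [38,40,41]), and the permeability values are placeholders.

def maxwell_mmm_permeability(p_polymer, p_filler, phi_filler):
    """Effective permeability of a mixed-matrix membrane from the Maxwell model.
    p_polymer: permeability of the continuous polymer phase (e.g. in Barrer)
    p_filler:  permeability of the dispersed MOF filler
    phi_filler: filler volume fraction (model is reliable roughly for phi < 0.3)"""
    pc, pd, phi = p_polymer, p_filler, phi_filler
    return pc * (pd + 2 * pc - 2 * phi * (pc - pd)) / (pd + 2 * pc + phi * (pc - pd))

# Placeholder numbers: a CO2-selective polymer loaded with a more permeable MOF filler.
p_co2 = maxwell_mmm_permeability(p_polymer=10.0, p_filler=1000.0, phi_filler=0.2)
p_ch4 = maxwell_mmm_permeability(p_polymer=0.4,  p_filler=25.0,   phi_filler=0.2)
print(f"CO2 permeability: {p_co2:.1f} Barrer, CO2/CH4 selectivity: {p_co2 / p_ch4:.1f}")

Such an idealized estimate corresponds to the defect-free "Case 0" embedding discussed in the next section; the non-ideal interface effects described there are precisely what makes real MMMs deviate from it.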
Preparation of Mixed-Matrix Membranes
In addition to material choice and particle preparation, a good procedure for MMM preparation is necessary to achieve the best possible performance. However, gaining the optimal interaction between inorganic fillers and polymers is challenging.
The adhesion between polymer and filler materials is strongly dependent on the ratio of inorganic to organic components in the MOF material. For instance, the MIL-96 material with a very high amount of inorganic Al-µ3-oxo-centered trinuclear clusters shows a very poor polymer-filler interaction; it forms agglomerates and even shows crystal ripening in operando, leading to void formation. [39] In their paper on zeolite 4A, Moore and Koros [42] reported several cases of membrane defects that can occur as a result of nonideal effects (Figure 3). [42] The distribution of the MOF filler also plays a critical role, because the formation of percolation defects can occur, as shown by Castro-Muñoz et al. [43] Small defects can have a huge impact, and MMM procedures should aim for the perfect embedding of fillers (Case 0) by improving the polymer-filler interaction. Even tiny problems in the compatibility of the components can lead to cracks and defects in the resulting composite membranes, especially when a high content of filler is used. [44] We think that several factors play a role when a poorly performing MMM results: 1) The solvent used for the polymer does not give a stable MOF dispersion. This leads to a low MOF loading capacity and bad performance due to agglomeration. [39] 2) The proportion of inorganic building units and organic linkers is suboptimal. [42] 3) A procedure for good polymer compatibility is not followed. Some recipes consist of complicated mixing procedures, such as the stepwise addition of specific small amounts of the polymer to the colloidal solution to form a stabilizing polymer shell surrounding the nanoparticles. [45]
Porous Liquids for Liquid Processing of MMMs
Porous liquids (PLs) are a novel class of porous materials that have been known for only a few years. First proposed by the James group in 2007, [46] they reported the experimental breakthrough in 2015. [15] PLs are materials with a special feature: porous cage structures with a maximum pore diameter smaller than that of the solvent molecules surrounding them. Thus, they remain empty and accessible for gases while in the liquid state. [47] PLs can be categorized in three different types [46,48] (Figure 4): Type 1 PLs are cage materials that are liquid by themselves. The only example known to us thus far are polyether-functionalized coordination cages that act as ionic liquids. [49] Carefully said, MOF-based melts, for example, made of ZIF-62, might also be regarded as Type 1 porous liquids (see below). [50][51][52] Type 2 porous liquids are organic cages that can be dissolved in a sterically demanding solvent. For instance, organic cages could be obtained by cycloimination of (15S,16S)-1,4,7,10,13-pentaoxacycloheptadecane-15,16-diamine with the cross-linker benzene-1,3,5-trialdehyde. Here, 15-crown-5 serves as the solvent for the cages. [15] Organic cages were recently used for propylene/propane separation with great results. [53] In contrast, a Type 3 PL is a colloidal solution of solid framework particles. There are reports of ZIF-8 and zeolite ZSM-5 dispersed in ionic liquids that are Type 3 PLs. [13,54] A new finding also suggests MOFs and zeolites dispersed in long-chain organic oils and silicone oils yield Type 3 PLs. [55] MOF-based PLs are able to load MMMs with a high wt. % due to high colloidal stability. Recently, ZIF-67 and ZIF-8 nanoparticles could be functionalized on their outer surface by N-heterocyclic carbenes (NHCs) to make them solution processable. The NHC-functionalized ZIFs were able to form monodisperse, highly stable colloidal solutions in nonpolar solvents, in which many polymers are prepared. Using ZIF-67 and ZIF-8 with NHC functionalization also leads to a very good interaction with the polymer matrix (6FDA-DHTM-Durene and 6FDA-DAM), enabling very high MOF loadings of up to 47.5 wt. % inside the polymer, while also being able to separate gases in the liquid state. [56]
Figure 3. Non-ideal effects in an MMM lead to a drastic change in performance. Case 0: the ideal case, selectivity and permeability increase. Case I: A rigidified polymer layer around the filler. Case II: Voids form around the filler, gas breaks through. Case III: "Halo" defects accelerate transport through the polymer without transport through the filler. Case IV: Clogged pores exclude transport through the sieve. Case V: Formation of a region with reduced permeability. Reprinted from Moore and Koros [42] with permission from Elsevier 2012.
MOF Glasses for Membranes
As already mentioned, some MOFs in the ZIF family melt and form stable liquids when heated under inert atmosphere (typically Ar or N2). [17,18] The inert atmosphere is crucial in order to prevent thermal oxidation and decomposition of the ZIF melt. A prototypical example is ZIF-4, which melts at ≈590 °C before thermal decomposition at ≈600 °C (Figure 5A). [17] Molecular dynamics simulations on the thermal behavior of ZIF-4 yielded further insights into the process of MOF melting. The Zn-N bonds dissociate on the ps timescale, generating undercoordinated Zn2+ cations. [52] Subsequently, new Zn-N bonds are formed by association of other imidazolate linkers. These simulations suggest that the liquid ZIF-4 still possesses microporosity similar to the crystalline phase, but there is currently no experimental proof that the pores in liquid ZIF-4 are accessible. Nevertheless, the liquid ZIF-4 can be regarded as a variation of a Type 1 PL. Quenching the liquid ZIF-4 to room temperature generates a glass denoted a g ZIF-4 (a g = amorphous glass). [17,18] The glass features a frozen atomic configuration of the supercooled liquid state. X-ray total scattering experiments show that the glass is amorphous (i.e. does not possess long-range order), but it possesses a local structure that is identical to that of the crystalline ZIF. [17,18,57] In the past few years, a number of other (mixed-linker) ZIFs have also been shown to melt and form glasses. [18,50,51,58] Prominent examples include ZIF-62 and TIF-4, which are structurally closely related to the prototypical ZIF-4 and feature the same cag network topology, but a secondary imidazolate linker. Importantly, these mixed-linker ZIFs generally feature a much lower melting point than conventional ZIF-4. As demonstrated for ZIF-62, the melting point of the crystals, as well as the glass transition temperature of the corresponding glasses, can be adjusted precisely by the amount of secondary linker, resulting in a melting point of only ≈372 °C, more than 200 °C lower than that of ZIF-4. [59] Most importantly, mixed-linker ZIF glasses feature permanent porosity for a variety of gases, such as CO2, H2, and several hydrocarbons (Figure 5B). [50,51,59,60] Even though the sorption capacity of the ZIF glasses is typically approximately 50 % lower than the capacity of their crystalline parent frameworks, this finding sets the stage for the application of glassy ZIFs in gas separation. Kinetic sorption measurements of propane and propylene in a g ZIF-62 materials showed that propylene is adsorbed much faster than propane, demonstrating the potential of ZIF glasses for gas separation applications (Figure 5C). [59] As an important consequence of their liquid-state processability, [61] ZIF glasses can easily form composites with other materials. Bennett and co-workers prepared MOF crystal-glass composites of crystalline MIL-53 in a matrix of a g ZIF-62. [62] When it comes to membrane applications, MOF glasses have two conceptual advantages: 1) the glasses can be easily processed and deposited in their liquid state, and 2) there are no grains or grain boundaries in the isotropic glass. Grain boundaries are unavoidable in polycrystalline MOF membranes and represent a fundamental problem, since these boundaries represent defects or microscopic cracks that can significantly compromise the selectivity of the membrane. Wang and Jin et al. reported the first ZIF glass membrane made of an a g ZIF-62 film on a porous α-Al2O3 support (Figure 6). [63] In analogy to the ZIF bulk glasses, the ZIF glass membrane was prepared by melt-quenching a solvothermally synthesized polycrystalline ZIF-62 film (thickness ≈70 µm) on an α-Al2O3 support under inert atmosphere. The original ZIF-62 film featured intergrown microcrystals associated with gaps, pinholes, and grain boundaries. The melt-quenched a g ZIF-62 membrane is smooth and defect-free without any grain structure (Figure 6). The ZIF-62 glass membrane showed enhanced gas separation properties, with separation factors of 50.7 (H2/CH4), 34.5 (CO2/N2), and 36.6 (CO2/CH4). [63] Another recent proof-of-concept study reported an MMM consisting of a g ZIF-62 embedded in a polyimide matrix. [64] A ZIF-62/polyimide MMM containing 20 wt. % ZIF-62 showed an improvement of its CO2/N2 selectivity by ≈27 % upon thermal transformation of the crystalline ZIF-62 to a g ZIF-62 at 440 °C.
Figure 4. A) PL Type 1: reprinted from ref. [49] with permission from Springer Nature 2020. B) PL Type 2: Organic cages with permanent porosity in the crown ether. Reprinted from Giri et al. [15] with permission from Springer Nature 2015. C) PL Type 3: Colloidally disperse, NHC-functionalized ZIF-67 in the non-penetrating solvent mesitylene. Reprinted from Knebel et al. [55] with permission from Springer Nature 2020.
Neat MOF Membranes
Neat MOF membranes are usually grown on ceramic supports by solvothermal methods. Since ceramic membranes are 1000 times more expensive (per m2) than polymeric films, neat MOF membranes are hard to apply in industrial settings. [11] MOF membranes on ceramic supports cannot be used for antifouling treatment such as decoking, since the MOFs would burn as well, a huge disadvantage. The one-time use of these membranes would be a huge cost factor. Nevertheless, MOF membranes have long been synthesized on ceramic supports, and we do not want to exclude potential applications. From a fundamental perspective, especially for understanding the transport properties of the MOF itself, it is of great importance to produce and measure crystalline intergrown layers. However, of particular interest is the gathering of "true permeation data" from single-crystalline membranes (Figure 7). [25,65] The "true" separation properties can be measured using single crystals, and diffusion constants and real permeation data can be obtained. [66] Although single-crystalline membranes would be the ultimate goal of membrane science, they cannot be obtained on a large scale. Therefore, the layer-by-layer growth of surface-anchored metal-organic frameworks (SURMOFs) could be a key approach, because it produces an almost perfect layer. This technique offers large-scale processability of neat MOF layers with highly defined thickness. [67] The crystallinity can be so high that HKUST-1 films become transparent, since the characteristic blue color centers are missing. [68] It has been demonstrated that this technique also offers applications for neat MOF membranes, with the first example being ZIF-8. [69] The defined heteroepitaxial growth of ZIF-67 and ZIF-8 with exactly the same layer height has been shown. [70] The technique's only current drawback is the limited number of accessible MOF structures, due to the solvent and temperature limitations of the method. Recently, UiO-66-NH2 has been made available by liquid-phase epitaxy [71] and many more complicated frameworks will be available as SURMOFs in the near future. The SURMOF technology will set standards as a tool since it is possible to follow the exact growth of neat MOFs step by step. [72]
Figure 5. A) Reprinted from ref. [52] with permission from Springer Nature 2017. B) C3H6 and C3H8 sorption isotherms of a g ZIF-62 glasses containing various amounts (x) of the bim- linker. C) Kinetic sorption profiles for C3H6 and C3H8. Reprinted from Frentzel-Beyme et al. [59] with permission from the American Chemical Society 2019.
Figure 6. Energy-dispersive X-ray mapping shows the Zn distribution (green). Reprinted from Wang et al. [63] with permission from Wiley-VCH 2020.
Stimuli-Responsive Neat MOF Membranes
Whereas the response of MOFs to applied pressure and temperature is a well-known concept, [73] MOFs can also respond to other types of external stimuli, such as light and electric fields. [74] The first conceptional proof of electric-field-stimulated MOFs was shown theoretically by the group of Maurin et al. for the breathing behavior of MIL-53. [75] In general, electric fields should be able to align dipolar moments inside the MOF structures (e.g. the linker molecules). [76] There are theoretical concepts that use strongly dipolar linkers to align dipolar moments in a high-voltage electric field. [77] Nevertheless, for MOFs that are feasible for membranes or adsorptive separation techniques (ZIF-8, MIL-53, etc.), theory and practical work conclude that electric fields must exceed the breakthrough voltage before linker orientation occurs. [75,76,78] Nevertheless, some MOFs are known for ferroelectric effects, even when it is nearly impossible to measure the hysteresis at accessible temperatures. [79] This is the case for ZIF-8 with the space group I-43m, which is able to display a structural transformation inside an electric field (500 V mm-1). The symmetry is reduced to the monoclinic space group Cm and the symmetry switches further to R3m at higher fields. As a consequence of the E-field-driven transformation, a change in rotational energy barriers and a higher framework stiffness arises, which increases molecular sieving. [78] Zhou et al. demonstrated the direct synthesis of a ZIF-8 membrane in an electric field. There, ZIF-8 crystallizes directly in the Cm space group during the growth process. This leads to a very good molecular-sieving ZIF-8(Cm) membrane that outperforms the usual ZIF-8(I-43m) membrane. [80] Utilizing light-responsive molecules inside the pores of MOFs, either in the backbone or as a guest molecule, leads to controllable gas transport and adsorption. [27] When, for example, azobenzene (AZB) is introduced in the pores as a guest molecule, an old concept already used in zeolites, [81] molecular transport through the pores can be influenced by gating effects, as concluded from in situ gas permeation results. [82] Another case shows AZB molecules as side chains on the backbone of MOFs, leading to adsorptive separation differences due to cis-trans isomerization, which affects the adsorption of CO2 by hindering its diffusion via interaction with its quadrupolar moment. [27,83] Light response has also been shown for MMMs, where JUC-6 and PCN-250 were used inside a Matrimid 5218 membrane. Nevertheless, thermal effects in the MMM could also have an effect on the gas separation. [84] The field of stimuli-responsive membranes and switches [85] is strongly growing, and the visionary aim towards a universal membrane system for all different kinds of separations looks achievable.
Figure 7. A) Reprinted from ref. [25] with permission from Elsevier 2019. B) Measuring anisotropic gas permeation through [Cu2(bza)4(pyz)]n with 1D channels. The channels in the (100) direction are free for gas transport; no gas permeation occurs in the (001) direction. Reprinted from Takamizawa et al. [65] with permission from the American Chemical Society 2010.
Membranes Based on Covalent Organic Frameworks
The first COFs reported by Yaghi et al. in 2005 [8] were constructed utilizing the reversible reaction of boronic acid trimerization to form boroxine COFs (e.g. COF-1), or their condensation with catechols to form boronate ester COFs (e.g. COF-5). Both reactions proceed with the evolution of H2O; therefore, the equilibrium in these reversible reactions is dependent on the water content, and humidity stability is limited. Since then, many different reversible reactions have been used for the formation of COFs, also providing high chemical, thermal, and mechanical stabilities. The very first example of a COF studied in terms of gas separation was reported by Zhu et al. in 2013. [86] The microporous boronate ester 3D COF (MCOF-1) derived from tetra(4-dihydroxyborylphenyl)methane and 1,2,4,5-tetrahydroxybenzene was synthesized and investigated as an adsorbent for various gases, such as methane, ethylene, ethane, and propane. Although no experimental data were provided in this study, it clearly demonstrated the high potential of COFs for gas separation, and since then research on COF membranes for gas separation has increased drastically. [86]
Neat COF Membranes and the Bilayer Approach
Due to their large pore sizes, COFs have found use in gas separation membranes as components of stacked bilayers on porous supports (e.g. porous α-Al2O3, cellulose acetate, Nylon). The bilayer approach uses two different materials stacked upon each other. This was realized in both bottom-up and top-down ways. The first examples relied on α-Al2O3 supports for the synthesis of a µm-thick film of an azine-linked COF (ACOF-1) via the condensation of 1,3,5-triformylbenzene and hydrazine hydrate under solvothermal conditions. [87] The performance of the resulting membranes was measured for CO2/CH4 mixed-gas separation using a Wicke-Kallenbach permeation apparatus and reached α(CO2/CH4) = 97.1 under optimized conditions. In ACOF-1, CO2 strongly adsorbs to the polar framework and the permeance of CH4 is significantly lowered in the mixture compared to the single-gas permeance, which was explained by competitive adsorption mechanisms. This is a fine demonstration of the critical need for mixed-gas permeances as a trustworthy measure of the material performance. In a double-layer system, using an imine-linked COF (LZU1) on top of ACOF-1, the performance could be increased. [88] Due to the large pores of COF materials, molecular sieving can be achieved using an interlaced layer ("gate-closing" approach) between two COFs; outstanding H2/CO2, H2/N2, and H2/CH4 selectivities were achieved for the LZU1/ACOF-1 bilayer membrane. An elegant way to realize a similar approach was recently reported by the Zhao group for two different COFs: an anionic imine-based COF, containing sulfonate groups, and a cationic imine-based COF, containing N-alkylated phenanthridine bromide (Figure 8). Both were deposited by the Langmuir-Schaefer method as thin films on a porous α-Al2O3 support. [89] Strong electrostatic interactions led to the formation of a compact staggered stacked film with narrow pores which could achieve very high H2/CO2 selectivity. [89] One unique application for COF membranes aims towards the quantum sieving effect for the separation of hydrogen isotopes at cryogenic temperatures. [30] COF-1, prepared at room temperature in the presence of pyridine, contains one pyridine molecule per boroxine ring, limiting the pore size and providing kinetic hindrance at the aperture at cryogenic temperatures. Separation of an H2/D2 isotope mixture could be achieved at 26 mbar loading pressure for the temperature range between 20 and 100 K with a selectivity α(D2/H2) of 9.7 ± 0.9 at T exp < 30 K and 3.1 ± 0.5 at 70 K, exceeding the selectivity of commercial cryogenic distillation processes.
COF-Based MMMs
In 2016 Zhao [10] and Gascon [90] showed the first COF-based MMMs in mixed-gas separation. Exploring different 2D COF materials, the Zhao group used NUS-2 and NUS-3, which have the ultimate advantage of high stability against water. [10] These COFs are derived from the condensation of triformylphloroglucinol with hydrazine hydrate (NUS-2) or 2,5-diethoxyterephthalohydrazide (NUS-3). The presence of -OH groups allows keto-enol tautomerization to act as a locking mechanism for the labile imine bonds by transferring them into nondynamic and chemically inert β-ketoenamine bonds. This stable 2D COF could be exfoliated to COF nanosheets and incorporated at up to 30 wt. % into polyetherimide (Ultem) and polybenzimidazole (PBI). The mixed-gas separation performance of these MMMs using an equimolar H2/CO2 gas mixture was α = 5.80 for NUS-2@Ultem and α = 31.40 for NUS-2@PBI. The MMM prepared using the 2D imine ACOF-1 in Matrimid displays a high selectivity for CO2/CH4 gas mixtures and a twofold increased CO2 permeability. [90] Significantly increased CO2 permeability owing to electrostatic interactions in COFs seems to be common, since other imine-based COFs (e.g. those formed by the condensation of melamine and terephthalaldehyde) in PIM-1 [91] also showed this effect.
Figure 8. The COF bilayer approach using cationic and anionic COF sheets to prepare staggered bilayer membranes for H2/CO2 separation. Reprinted from Ying et al. [89] with permission from the American Chemical Society 2020.
With regard to the polymer-filler interaction described in Section 2.2, purely organic, covalently bonded COFs typically perform very well in MMMs, in contrast to MOFs or zeolites. [92] Nevertheless, the compatibility of COFs and polymers can be further improved. For example, a matrix able to make van der Waals interactions (e.g. hydrogen bonds between COF and polymer chains) allows for better component mixing. [93][94][95] When an NH-rich imine COF was combined with an NH-rich PBI matrix, a COF loading of 50 wt. % was possible, [93] whereas OH-rich COF-5 showed good compatibility with PEG-containing polyether block amide (PEBAX or VESTAMID E). [95] The Wang group [96] premodified the surface of 2D imine COF-LZU1 particles with polyvinylamine chains, which resulted in the good compatibility of the modified COF with the polyvinylamine matrix.
Advanced MOF/Polymer and COF/Polymer Hybrids
MOFs, COFs, and classical polymers feature different and often complementary properties in terms of their stability, surface area, and regular structure, as well as their processability. [97] We want to provide a brief summary of recent approaches going one step further in the combination of MOFs/COFs and polymers, in which polymer species are inserted inside the MOF or COF pores, serve as precursors for MOF or COF growth, or in which MOFs/COFs template the synthesis of porous polymer networks. [98] These new approaches can lead to improved performance and stability in a variety of membrane or separation applications, including water treatment and gas separation, for example, for CO2 sequestering. [21,99] The formation of advanced MOF-polymer hybrid devices and membranes can be divided into three main approaches categorized as: a) polymer synthesis within the MOF pores, b) PolyMOFs, and c) crosslinked MOFs (Figure 9). [19] The described approaches intend to either enhance the properties of the MOF or COF membranes by the combination with polymers with enhanced processability and stability, or enhance the performance of polymeric materials by the advantages of MOFs such as their high degree of order across multiple length scales, making it possible to implement high-throughput computational screening approaches. [23,100] We envision that these new concepts in the tight integration of MOF and COF materials on the one hand and polymer materials on the other hand will be further exploited in the future to tackle real-world separation challenges.
Perspectives
There is a critical need for disruptive technologies such as membranes to lower the energy use of the chemical industry and reduce greenhouse gas emissions worldwide. MOFs and COFs are materials with extraordinary properties to help separations in the petrochemical sector, such as propylene/propane, as well as in direct CO2 capture and the sustainable production of CH4. To make use of the potentially best materials for these processes, a targeted material development, rather than the synthesis of more and more novel materials, is crucial. We have described the non-ideal polymer-filler effects in MMMs, which are already known but too often neglected. [42] We have recounted many pioneering studies of material development that are highly suitable for membrane science and we encourage people to work on these: porous liquids and the liquid processability of MOF and COF particles in the production of polymer composite membranes; the formation of glasses composed of molecular-sieving ZIFs, opening up totally new perspectives, such as grain-boundary-free films and the production of hollow fiber membranes made of neat MOF glass. On the other hand, we think it is a crucial step and the main task of science to do fundamental research, determine material parameters (e.g. using single crystals for diffusion studies) and go to next-level separations such as quantum sieving. Even though, for some of these processes, finding an application is rather challenging, there is a lot to learn fundamentally: as an example, stimuli-responsive materials taught us a lot about MOFs and gas transport stimulation, whereas applications as "universal" membranes switching to the desired application are yet futuristic. The prerequisite here is that the phenomena be fully understood and that process integration is available for the spectrum of MOF, COF, and polymer materials. A combined theoretical and experimental approach is necessary to develop these materials towards a key technology and transfer them to industry.
Figure 9. Strategies for the tight integration of MOFs and COFs with polymer materials by a) integrating the polymer chain inside the pores, b) using polymeric linkers as precursors, or c) crosslinking the linker molecules post-synthetically. Reprinted from Begum et al. [98] with permission from the American Chemical Society 2020.
"Materials Science"
] |
[Amino(iminio)methyl]phosphonate
The title compound, CH5N2O3P, exists as a zwitterion. The N atom of the imino group is protonated and the phosphonic acid group is deprotonated. The molecular geometry about the central C atom of this zwitterionic species was found to be strictly planar with the sum of the three angles about C being precisely 360°. In the crystal, the molecules are interlinked by O—H⋯O and N—H⋯O hydrogen-bonding interactions, forming a three-dimensional supramolecular network structure.
Comment
In the last decade considerable attention has been devoted to the synthesis of metal phosphonates due to their potential applications in ion exchange and sorption, catalysis, magnetism and sensors (Ayyappan et al., 2001; Clearfield, 1998; Haga et al., 2007; Vivani et al., 2008; Bao et al., 2007; Cave et al., 2006; Cao et al., 1992; Ma et al., 2006, 2008). In order to synthesize metal phosphonates with novel structures and properties, many kinds of phosphonic acid ligands have been used. In order to study the crystal structure of the phosphonic acid, we synthesized and determined the structure of the title compound (Fig. 1). As shown in Scheme 1, the molecule exists as a zwitterion, the imino group being protonated and the phosphonic acid group being deprotonated. The molecular geometry about the central C atom is strictly planar, with the sum of the three angles about C being precisely 360°. The three bonds about the central carbon atom consist of two nearly equivalent C-N1 and C-N2 distances of 1.299 (5) Å and 1.314 (5) Å, respectively, and a C-P bond distance of 1.845 (3) Å. These two C-N bonds are considerably shorter than a typical C-N single bond distance of 1.47 Å. Similar zwitterions have been formed by other aminoiminomethanesulfonic acids (Makarov et al., 1999). The P-O distances in these compounds range from 1.4872 (2) Å to 1.5872 (2) Å. The molecules are interlinked by intermolecular O—H⋯O and N—H⋯O hydrogen bonds (Table 1), forming a three-dimensional supramolecular network structure (Fig. 2).
Experimental
All solvents and chemicals were of analytical grade and were used without further purification. The title compound was prepared by the following reaction: a sample of 2,4,6-tri(phosphonate ethyl)-1,3,5-triazine (9.8 g, 20 mmol) was dissolved in 6 mol L-1 HCl (20 ml). The mixture was heated (100 °C, 10 h) and then evaporated to dryness, leaving a white solid.
Crystallization was carried out by dissolving 0.62 g of the title compound (about 0.5 mmol) in 10 ml of water, followed by evaporation at room temperature. After two weeks, colorless block crystals were obtained.
Refinement
All non-hydrogen atoms were refined anisotropically, whereas the positions of all H atoms bonded to nitrogen were fixed geometrically (N—H = 0.86 Å) and included in the refinement in the riding mode, with Uiso(H) = 1.2Ueq(N).
"Chemistry"
] |
A methodology for multilayer networks analysis in the context of open and private data: biological application
Recently, an increasing body of work investigates networks with multiple types of links. Variants of such systems have been examined decades ago in disciplines such as sociology and engineering, but only recently have they been unified within the framework of multilayer networks. In parallel, many aspects of real systems are increasingly and routinely sensed, measured and described, resulting in many private, but also open data sets. In many domains publicly available repositories of open data sets constitute a great opportunity for domain experts to contextualise their privately generated data compared to publicly available data in their domain. We propose in this paper a methodology for multilayer network analysis in order to provide domain experts with measures and methods to understand, evaluate and complete their private data by comparing and/or combining them with open data when both are modelled as multilayer networks. We illustrate our methodology through a biological application where interactions between molecules are extracted from open databases and modelled by a multilayer network and where private data are collected experimentally. This methodology helps biologists to compare their private networks with the open data, to assess the connectivity between the molecules across layers and to compute the distribution of the identified molecules in the open network. In addition, the shortest paths which are biologically meaningful are also analysed and classified.
Introduction
Network theory is an important tool for describing and analysing complex systems which are represented as mathematical graphs. It has many applications in social, biological, physical, information and engineering sciences (Fortunato 2010; Newman 2003; Gosak et al. 2017; Seminar 2019; Pavlopoulos et al. 2011; Djemili et al. 2017). For example, it has been used to capture interesting properties of many real networks, e.g. having a heavy-tailed degree distribution, having the small-world property, the existence of nodes playing central roles and/or the existence of modular structures (Newman 2003).
Recently, an increasing body of work investigates networks with multiple types of links, as well as the so-called "networks of networks". Variants of such systems have been examined decades ago in disciplines such as sociology and engineering, but only recently have they been unified, along with other nomenclature, within the framework of multilayer networks defined by Kivelä et al. (2014).
In parallel many aspects of real systems are increasingly and routinely sensed, measured and described, resulting in many private, but also open data sets. By private data we mean data collected internally in a company or institution. Open data refers to the idea that some data should be freely available to everyone to use and republish at will, without restrictions from copyright, patents or other mechanisms of control.
In many domains publicly available repositories of open data sets constitute a great opportunity for domain experts to contextualise their privately generated data compared to publicly available data in their domain.
In this paper we propose a methodology for multilayer network analysis in order to provide domain experts with measures and methods to understand, evaluate and complete their private data by comparing and/or combining them with open data when both are modelled as multilayer networks.
The main contributions of this paper are: 1. We propose a new formalism for multilayer networks that allows fine-grained analysis to be carried out by considering two levels: the intra-layer level and the inter-layer one.
We show examples of how we can extend the definition of global and local measures, such as density and centralities, to the inter-layer level and the whole network. 2. We define the private multilayer network: the subgraph induced by the private data is extracted in order to be analysed and compared to the whole network. 3. We define the private egocentric network: the notion of an egocentric network, which is defined around a given ego node (Marsden 2002; Djemili et al. 2017), is extended to an egocentric network around the private multilayer network. The private egocentric network can be used to evaluate the connectivity strength between the different layers of private data in comparison to the whole network. It can also help to focus the study of the private network on the space of its neighbours across the layers, especially in the context of very large-scale open networks. 4. We define layer and inter-layer reachability metrics of a given sub-network: these metrics are based on the private egocentric network and help to assess the connectivity strength of private data across layers.
We illustrate our methodology through a biological application. The open multilayer network is constructed from open databases where weighted interactions between proteins-proteins, metabolites-metabolites and proteins-metabolites are given. The private data is a set of proteins and metabolites collected experimentally and present a set of nodes in the open multilayer network. We show how the private network is constructed, analysed and compared to the whole (open) network. The private egocentric network is analysed and the layers reachability metrics are computed and discussed. Pathways between pairs of private proteins are then analysed and classified according to their location in the open network (private, egocentric or extra-egocentric). The KEGG (Kyoto Encyclopedia of Genes and Genomes) open data set (Kanehisa and Goto 2000) is also used to describe pathways.
By applying this methodology to the biological data, we show how it can help biologists to complete, assess and interpret their private data by using the open network: weighted interactions between privately collected molecules are added by using the open network. The connectivity between the molecules within and across layers is computed, and the distribution of the identified molecules in the open network is observed and interpreted. Reachabilities across layers are computed; in addition, shortest paths, which are biologically meaningful, are analysed and classified.
The rest of this paper is organised as follows: we present in the "Multilayer network analysis elements" section the elements and notions we use for multilayer network analysis. Related work is presented in the "Related work" section. We present in the "Biological application" section the biological application. We finally present conclusions and perspectives in the "Conclusion and perspectives" section.
Multilayer network analysis elements
We first present a new formalism for multilayer networks as well as examples showing how we adapt global and local measures to the context of multilayer networks. We then give formal definitions of the multilayer egocentric network, of the private multilayer network and of the private egocentric one. We then show how we can use these notions to define the layer and inter-layer reachabilities of a given sub-network.
Notations, properties and metrics
We represent a multilayer network by a tuple that contains a set of vertices, a set of intra-layer edges and a set of inter-layer edges.
Let N = (V, E, C) be a graph containing l layers (see Fig. 1), where V = {V 1 , ..., V l } is the set of vertices contained in the layers, l > 1 is the number of layers and V i is the set of vertices in layer i; E = {E 1 , ..., E l } contains the intra-layer edge sets, E i being the edges of layer i; and C = {C ij } contains the inter-layer (bipartite) edge sets connecting layers i and j. This representation allows us to propose an adaptation of global and local metrics taking into account the intra-layer and the inter-layer links. We can then aggregate these metrics in order to propose a metric for the whole network. For example, we can propose the following metrics for the density:
• Intra-layer density for the layer i: d i = 2|E i | / (|V i |(|V i | - 1)).
• Inter-layer density for the bipartite component C ij : d ij = |C ij | / (|V i | |V j |).
Likewise, the degree centrality can be generalised to the inter-layer level and to the whole network. The degree centrality and connectivities of a vertex v belonging to the layer V i are given by:
• Intra-layer degree: the number of neighbours of v in E i .
• Inter-layer connectivity: we define the connectivity of a vertex in the bipartite component C ij as its number of neighbours in C ij .
• Multilayer connectivity: we propose to generalise the definition of the connectivity of a node to the whole network as the sum of its intra-layer degree and of its inter-layer connectivities over all bipartite components involving layer i.
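To make these measures concrete, the following is a minimal sketch (not the authors' code) assuming each layer is stored as a networkx.Graph and each bipartite component C ij as a list of (u, v) edge pairs; all names are illustrative.

```python
import networkx as nx

def intra_layer_density(layer: nx.Graph) -> float:
    # Classical density 2|E_i| / (|V_i| (|V_i| - 1)) of a single layer.
    return nx.density(layer)

def inter_layer_density(c_ij, layer_i: nx.Graph, layer_j: nx.Graph) -> float:
    # Bipartite density |C_ij| / (|V_i| * |V_j|) of the coupling edges.
    return len(c_ij) / (layer_i.number_of_nodes() * layer_j.number_of_nodes())

def multilayer_connectivity(v, i, layers, couplings):
    # Intra-layer degree of v in layer i plus its degree in every bipartite
    # part C_ij that involves layer i (one possible aggregation).
    intra = layers[i].degree(v)
    inter = sum(
        sum(1 for (a, b) in edges if v in (a, b))
        for (x, y), edges in couplings.items()
        if i in (x, y)
    )
    return intra + inter
```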
Multilayer egocentric networks
Given a complex network (and more particularly an online social network), the egocentric network defined around an ego node u is a sub-network containing the ego u and the alters (its neighbours), as well as the set of links of this ego-network. In the literature, two cases of online personal networks are identified depending on the distance of the alters from the ego: 1-level and k-level.
Let G = (V, E) be a graph and u a vertex; the 1-level egocentric network of u, G u = (V u , E u ), is the sub-network of G containing u and its neighbours, together with the links of G among them (see Fig. 2). We propose an extension of this definition to multilayer networks, which aims to reach the alters located in the same layer as the ego as well as those in the layers connected to it (see Fig. 3).
We also define the private multilayer network. Let PV be the set of private vertices; the private multilayer network N[PV] = (PV, PE, PC) is the subgraph of N induced by PV, where PV i = PV ∩ V i is the set of private vertices of layer i, PE i is the set of edges of layer i whose two endpoints belong to PV i , and PC ij is the set of inter-layer edges of C ij whose endpoints belong to PV i and PV j . In Fig. 4, the blue graph represents the multilayer network N extracted from the open data, red nodes represent the private data and the red graph illustrates the private multilayer network N[PV]. We now extend the definition of the egocentric network (which is defined around a given ego node (Marsden 2002; Djemili et al. 2017)) to an egocentric network around the private multilayer network.
We define the private egocentric network as follows: let N[PV] = (PV, PE, PC) be the private multilayer network. The private egocentric network N PV is the sub-network of N containing the private vertices PV together with their 1-level intra-layer and inter-layer neighbours and the corresponding edges. In Fig. 5, red nodes represent the private data and the graph containing red and yellow nodes and edges illustrates the private egocentric network N PV .
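As an illustration, the private network and the private egocentric network of one layer could be extracted as in the following hedged sketch, where `layer` is a networkx.Graph and `pv` is the set of privately identified nodes (both assumed).

```python
import networkx as nx

def private_layer(layer: nx.Graph, pv: set) -> nx.Graph:
    # Subgraph of the layer induced by the private nodes present in it.
    return layer.subgraph(pv & set(layer.nodes)).copy()

def private_egocentric_layer(layer: nx.Graph, pv: set) -> nx.Graph:
    # Private nodes of the layer plus their 1-level neighbours (the alters),
    # with the edges of the layer among them.
    ego_nodes = set()
    for v in pv & set(layer.nodes):
        ego_nodes.add(v)
        ego_nodes.update(layer.neighbors(v))
    return layer.subgraph(ego_nodes).copy()
```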
Layer and inter-layer reachability of a subnetwork
We define graph reachability for a given layer as follows: let N = (V, E, C) be a multilayer network containing l layers, G = (V , E) a subgraph of N and i a given layer.
• Reachability(G, i) is given by the subgraph G i = (V i , E i ) of layer i induced by the nodes of layer i that are reachable from G.
In order to appreciate the connection strength between private nodes across layers, we apply the reachability of the private egocentric network computed on a given layer i to another layer j. Let N PV i be the private egocentric network computed from the layer i and let Reachability(N PV i , j) be the corresponding reachable subgraph of layer j. precisionR(i, j) gives the ratio of private nodes belonging to layer j that are reachable from layer i to all reachable nodes in the layer j. recallR(i, j) is the ratio of private nodes of the layer j that are reachable from layer i to all private nodes belonging to layer j.
Fig. 6 Reachability from layer i to layer j: red nodes are private ones, yellow and orange nodes are egocentric ones computed from layer i, green nodes are the egocentric ones that belong to the private network of the layer j; precisionR(i,j)=1 and recallR(i,j)=0.4: this means that all reachable nodes from layer i are private ones but only 40% of private nodes of the layer j are reachable from layer i.
Fig. 7 Reachability from layer j to layer i: red nodes are private ones, yellow and orange nodes are egocentric ones computed from layer j, green nodes are the egocentric ones that belong to the private network of the layer i; precisionR(j,i)=0.5 and recallR(j,i)=1: this means that 50% of reachable nodes from layer j are private ones but all private nodes of the layer i are reachable from layer j.
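A possible implementation of precisionR and recallR, under the assumption that `reached_j` is the set of layer-j nodes reached from the private egocentric network of layer i (for example, the layer-j endpoints of the C ij edges leaving it) and `pv_j` is the set of private nodes of layer j:

```python
def precision_recall_reachability(reached_j: set, pv_j: set):
    hits = reached_j & pv_j
    precision = len(hits) / len(reached_j) if reached_j else 0.0
    recall = len(hits) / len(pv_j) if pv_j else 0.0
    return precision, recall

# Toy example reproducing Fig. 6: every reached node is private (precision 1.0)
# but only 40% of the private nodes of layer j are reached (recall 0.4).
print(precision_recall_reachability({"m1", "m2"}, {"m1", "m2", "m3", "m4", "m5"}))
```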
We also define a graph inter-layer reachability for a given bipartite part as follows. Let N = (V, E, C) be a multilayer network containing l layers, G = (V , E) a subgraph of N and C ij a given bipartite part.
Given a bipartite part C ij , we can apply the InterReachability from the private induced multilayer network or from the private egocentric one.
For example, let N[PV] be the private multilayer network and let InterReachability(N[PV], C ij ) be the resulting bipartite subgraph of C ij ; we can evaluate the reachable bipartite edges by computing the ratio of the number of edges of this subgraph to the total number of edges of C ij .
Related work
Recently, there have been increasingly intense efforts to investigate networks with multiple types of connections as well as the so-called "networks of networks". Variants of such systems have been examined decades ago in disciplines such as sociology and engineering, but only recently have they been unified, along with other nomenclature, within the framework of multilayer networks defined by Kivelä et al. (2014).
In Kivelä et al. (2014) a complete review of the field of multilayer networks is presented; the network types, the characteristics of nodes and layers, the notion of aspect, as well as the nature of coupling between layers are detailed.
Many studies are currently addressing themes related to multilayer networks, such as the structure and dynamics of multilayer networks (Boccaletti et al. 2014; Magnani and Rossi 2013; Aleta and Moreno 2019), community detection in multilayer networks (Liu et al. 2018) and visualisation.
Many works also show that experts in multiple domains, such as digital humanities (McGee et al. 2016), biology (Gosak et al. 2017) and techno-anthropology, present their data using multilayer networks and are aware of the strong need for tools to analyse such data (Kivelä et al. 2019).
In this paper, we propose a methodology for multilayer network analysis in order to provide domain experts with measures and methods to understand, evaluate and complete their private data by comparing and/or combining them with open data when both are modelled as multilayer networks.
This methodology uses a formalism based on a set of graphs: some of them represent layers (see "Notations, properties and metrics" section), while others are bipartite graphs representing the inter-layer connections. This formalism allows us to clearly separate three types of analysis: the intra-level one, the inter-level one and the global one that aggregates both (intra and inter) levels.
In Kivelä et al. (2014), a general formalism of the most general type of multilayer network was proposed, an underlying graph that represents this multilayer network is defined, where a node is represented by a tuple containing three identifiers: the node one, the layer one and the aspect one. In addition, two types of edges are proposed: intra-layer edges and inter-layer ones.
Our formalism for multilayer networks allows fine-grained analysis to be carried out by considering two levels (see "Notations, properties and metrics" section): the intra-layer level and the inter-layer one. We showed above examples of how we can extend the definition of global and local measures such as density and centralities to the inter-layer level. Measures for the whole network are then computed by aggregating both preceding measures.
In many other works (Battiston et al. 2014), a monoplex network is constructed by aggregating data from the different layers of a multilayer network; the classical definition of node degree is then applied to the resulting monoplex network. However, network aggregation leads to a loss of information. In some other works, the distinction of the layers is maintained and the degree of a node is represented by a vector. It is also possible to define degree and neighbourhood in terms of a focal node and any subset of the layers (Berlingerio et al. 2013).
On the other hand, we defined layer and inter-layer reachability metrics of a given sub-network; these measures are based on the private egocentric network and help to assess the connectivity strength of private data across layers (see "Layer and inter-layer reachability of a subnetwork" section).
In Kivelä et al. (2014) the measure of node interdependence is defined as the ratio of the number of shortest paths in which two or more layers are used to the total number of shortest paths. It is a measure that quantifies the value added by the multiplexity to the reachability of nodes. The interdependence of a multiplex is computed as the average node interdependence.
Biological application
The aim of this application is to study several sets of biological data collected from experimentally related samples (i.e. cannabis samples). Identified molecules (proteins and metabolites) are measured from the collected biological data in different "omics" experiments: transcriptomics, proteomics and metabolomics. In their experiments, biologists measured, at several time points, contigs (each one quantifying genes), spots (each one quantifying one or more proteins), and metabolites. Each gene typically produces one protein but sometimes more. The open data come from the STRING database (Szklarczyk et al. 2019), which is the main protein-protein (and so also gene-gene) interactions database, as well as from the STITCH (Search Tool for InTeractions of CHemicals) one. STITCH (Szklarczyk et al. 2016) is a twin database including edges between metabolites and metabolites, and also between proteins and metabolites (see Fig. 8). Each interaction in both databases is based on the presence of experimental evidence, coexpression (similar behaviour across several publicly available experiments), text mining (appearing in the same phrase) and pathway membership (participating in the same known biological network). A combined score aggregating all these types of interactions, whose value is between 0 and 1000, is added to both databases (see Tables 1, 2 and 3).
The open multilayer network is constructed from the open STRING and STITCH databases (see Fig. 8). Weighted interactions (edges) between protein-protein, metabolite-metabolite and protein-metabolite pairs are created according to the value of the combined score. The private data is the set of identified proteins and metabolites collected experimentally in the laboratory by biologists as mentioned above, and corresponds to a set of nodes in the whole open network as explained in Fig. 4. Once the regulatory network has been sketched, it shall be analysed. The complexity of the network shall be reduced by selecting significant interactions. Biologists need to identify key nodes (molecules) and shortest paths, rank them via centrality measures between given ends, and track the path from a receptor to transcription factors and vice versa.
From a biological point of view, we recall that: 1. Biologists are often interested in finding neighbours of molecules (and more particularly of proteins), hence the necessity to analyse the private egocentric network. 2. Biologists need to extract and analyse signal transduction and metabolic pathways from the network. Shortest paths are biologically meaningful, being energetically the most favourable, for detecting signal transduction interactions as well as metabolic pathways.
Signal transduction represents a series of interactions between different bioentities, such as proteins, chemicals or macromolecules, that shows how signal transmission is performed either from the outside to the inside of the cell or within the cell. Likewise, metabolic pathways are related to series of chemical reactions occurring within a cell at different time points, holding information about a series of biochemical events and the way they are correlated.
To analyse the biological network, we proceed as follows: 1. Analysis of each layer (the proteins and metabolites layers): (a) the layer is constructed from the open databases (STRING and STITCH); the private network is also constructed from the set of identified molecules (proteins and metabolites) collected from the cannabis samples experiment, where biological data are collected in different "omics" experiments: transcriptomics, proteomics and metabolomics; (b) the Louvain algorithm (Blondel et al. 2008) is applied in order to detect communities, and the distribution of the private (experimentally identified) molecules is studied according to the detected communities. The KEGG open data set (Kanehisa and Goto 2000) is also used to describe pathways.
Open biological databases are very big in relation to the average high-throughput biological experiment. In our case the proteins in the experiment represent only from 0.58% to 0.74% of the total number of proteins in the whole network. The metabolites in the experiment represent less than 0.05% of the total number of metabolites in the whole network. Table 4 shows the distribution of the combined score values. Combined scores express the strength of interactions between two proteins according to the open STRING database. Figure 9 shows that the distribution of the values of the combined scores is similar to scale-free network behaviour.
Network construction
We construct the proteins layer by considering the combined score as a threshold: if we take the minimum score (150), all the interactions are considered; otherwise a part of the network is considered according to the chosen percentile (see Fig. 10). When the combined score threshold is increased, some nodes become disconnected. These nodes are dropped from the network. The private network is also constructed (see Appendix A for more details). Identified proteins in the experiment represent only from 0.58% to 0.74% of the total number of proteins in the open network (see Table 5). Network densities do not vary a lot between the open and the private networks, which means that the identified proteins (in the experiment) are distributed almost evenly in the whole protein layer. We notice also that from the 75th percentile, certain identified proteins begin to be missed (see Appendix A for more details).
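The construction step could look like the following sketch, assuming the STRING interactions are available as (protein_a, protein_b, combined_score) tuples; the threshold value and variable names are placeholders, not the authors' code.

```python
import networkx as nx

def build_layer(edges, score_threshold=150):
    g = nx.Graph()
    g.add_weighted_edges_from(edges)  # full layer at the minimum score
    too_weak = [(a, b) for a, b, s in g.edges(data="weight") if s < score_threshold]
    g.remove_edges_from(too_weak)
    # Nodes disconnected by the higher threshold are dropped from the network.
    g.remove_nodes_from(list(nx.isolates(g)))
    return g

def private_network(layer, identified_molecules):
    # Private network induced by the experimentally identified molecules.
    return layer.subgraph(set(identified_molecules) & set(layer.nodes)).copy()
```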
Table 4 The distribution of the combined score according to the STRING database
Degree distribution:
Results show that the mean degree centrality values of the identified proteins do not vary a lot in comparison to the other proteins (see Appendix A for more details). This is coherent with the observation on the network densities that we mention above (see Table 5). 3. Communities detection: we apply the Louvain algorithm to the protein layer (Blondel et al. 2008). Eight communities are detected for the minimum score. We notice that the values of the precision in all communities do not vary a lot (see Appendix A for more details). This means that the identified proteins are distributed in an almost balanced way among communities. Table 5 shows a summary of results and observations concerning the protein layer analysis. Table 6 shows the distribution of combined score values. Combined scores express the strength of the interaction between two metabolites according to the STITCH database. Figure 11 shows that the distribution of the values of the combined score is similar to scale-free network behaviour.
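A hedged sketch of this community step, assuming networkx >= 2.8 (which provides louvain_communities; the python-louvain package could be used instead) and a set `pv` of experimentally identified nodes:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def private_share_per_community(layer: nx.Graph, pv: set, seed: int = 0):
    communities = louvain_communities(layer, weight="weight", seed=seed)
    report = []
    for k, nodes in enumerate(communities):
        n_private = len(nodes & pv)
        report.append((k, len(nodes), n_private, n_private / len(nodes)))
    # (community id, community size, number of private nodes, private rate)
    return report
```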
Metabolites layer analysis
1. Network construction: as for the proteins layer, we construct the metabolites layer by considering the combined score as a threshold; if we take the minimum score (2), the whole network is constructed, otherwise a part of the network is considered according to the chosen percentile (see Fig. 12). We notice that the identified metabolites in the experiment represent less than 0.05% of the total number of metabolites in the whole network. However, the private metabolites networks extracted from the experiment present a high density in comparison with the open ones (see Appendix B for more details). This means that the metabolites of the experiment are highly connected in pairs (see Table 7).
Degree distribution:
Results show that the identified metabolites have very high degree centralities (see Appendix B for more details). This means they are strongly connected in pairs according to the STITCH open database (see Table 7). 3. Communities detection: we apply the Louvain algorithm to the metabolites layer (Blondel et al. 2008) and obtain a modularity of 0.46. 39 communities are detected from the principal connected component for the minimum combined score (see Appendix B):
• 22 have cardinalities between 3 and 28
• 11 have cardinalities between 1000 and 10000
• 6 have cardinalities between 15000 and 30000
We notice that a majority of the metabolites are present in only two communities. This result is correlated with the high density value of the private metabolite network and means that the metabolites are strongly connected and form mainly two highly connected subnetworks. Table 7 shows a summary of results and observations concerning the metabolites layer analysis. Figure 13 shows that the distribution of the values of the combined scores extracted from the STITCH database is similar to scale-free network behaviour.
Fig. 12 Private metabolites networks: the left one corresponds to the minimum combined score (2), the right one corresponds to score values greater than 500. Warm colours for nodes indicate high degree centralities. Warm colours for edges indicate high weights.
Networks construction:
We construct the protein-metabolite bipartite part by considering the combined score as a threshold; if we take the minimum score, the whole network is constructed, otherwise a part of the network is considered according to the chosen percentile. The 2-layer network is then constructed by considering the proteins and metabolites layers. The private 2-layer network extracted from the experiment, as well as the private egocentric one, are also constructed.
Results (see Appendix C) show that:
• The ratio of identified molecules (proteins and metabolites) in the experiment is 0.1% of the open 2-layer network but rises to [0.9%, 1.42%] in the private egocentric network.
• The density of the private networks obtained from the experiment is 100 to 165 times larger than that of the open network, but only 1.65 times larger than that of the egocentric network.
Table 8 shows that the private egocentric metabolites network reaches (see "Layer and inter-layer reachability of a subnetwork" section) a set of proteins that contains 51% to 67% of the identified proteins, despite a very low precision. Likewise, Table 9 shows that the private egocentric proteins network reaches a set of metabolites that contains 53% to 84% of the identified metabolites, despite a very low precision. These results show that a majority of identified metabolites (in the experiment) are reachable from all the identified proteins and vice versa. This will help biologists to classify molecules into neighbouring ones and distant ones.
Proteins and metabolites reachabilities:
We present in the next sections two methods for protein pathway analysis: the first one is based on analysing the shortest paths between pairs of private proteins, and the second one is based on using the affiliation network extracted from the open KEGG database (Kanehisa and Goto 2000).
Proteins pathways analysis using shortest paths
A shortest path is biologically meaningful as it is energetically the most favourable. The private protein network is composed of 142 proteins (see Table 16); shortest paths are computed between 20164 pairs of proteins. Table 10 shows the number and percentage of shortest paths classified by their lengths. We can thus propose a classification of the obtained shortest paths into three classes according to their location in the whole protein network:
1. Shortest paths whose lengths are less than or equal to two: these pathways belong to the private protein network.
2. Shortest paths whose lengths are less than or equal to four: these can either reach the egocentric network or belong completely to the private one.
3. Shortest paths whose lengths are more than four: these can either reach the open network, or belong completely to the egocentric or the private one.
We notice that the majority of the found shortest paths belong to the egocentric network (or the private one), so they lie in the neighbourhood of the private network nodes. Only a few of them (3.37%) can be outside the egocentric network. These few long shortest paths can be isolated and studied by biologists in order to understand the molecule interactions along these paths. Table 11 shows two shortest paths of length 6 composed of proteins and metabolites: one has some nodes outside the egocentric network and the other one is completely inside it.
From the KEGG data set we extract 4692 proteins which are associated with pathway identifiers. Each identifier is also related to a pathway name that characterises the molecules (see Tables 12 and 13). There are 238 pathways.
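The classification above can be reproduced with a short sketch such as the following, assuming `G` is the whole protein layer (networkx.Graph) and `private_proteins` the experimentally identified proteins; here unordered pairs are used, so the pair count differs slightly from the one reported.

```python
import networkx as nx
from itertools import combinations

def classify_shortest_paths(G: nx.Graph, private_proteins):
    classes = {"private (length <= 2)": 0,
               "egocentric or private (length 3-4)": 0,
               "possibly outside egocentric (length > 4)": 0}
    for a, b in combinations(private_proteins, 2):
        try:
            length = nx.shortest_path_length(G, a, b)
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue
        if length <= 2:
            classes["private (length <= 2)"] += 1
        elif length <= 4:
            classes["egocentric or private (length 3-4)"] += 1
        else:
            classes["possibly outside egocentric (length > 4)"] += 1
    return classes
```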
Our goal is to use this data set in order to characterise the set of private proteins extracted from the experiment. We represent the affiliation of private proteins by an affiliation network extracted from the KEGG database and modelled by a bipartite graph containing the set of private proteins connected to pathway identifiers (see Fig. 14). Bipartite networks are a particular class of complex networks, whose nodes are divided into two sets X and Y, and only connections between two nodes in different sets are allowed. Bipartite networks can usually be compressed by one-mode projection. This means that the ensuing network contains nodes of only one of the two sets; X (or, alternatively, Y) nodes are connected only when they have at least one common neighbouring Y (or, alternatively, X) node (see Fig. 15).
We consider the pathway network obtained by one-mode projection onto the pathway set. Table 14 shows the characteristics of this network.
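The following minimal sketch shows how such an affiliation network and its one-mode projection onto the pathway set could be built with networkx; the affiliation edge list is a hypothetical stand-in for the KEGG extraction.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical affiliation edges: (protein, pathway identifier).
affiliations = [
    ("prot_A", "path_1"), ("prot_A", "path_2"),
    ("prot_B", "path_1"), ("prot_C", "path_3"),
]

b = nx.Graph()
proteins = {p for p, _ in affiliations}
pathways = {pw for _, pw in affiliations}
b.add_nodes_from(proteins, bipartite=0)
b.add_nodes_from(pathways, bipartite=1)
b.add_edges_from(affiliations)

# One-mode projection on the pathway set: two pathways are linked
# if they share at least one protein.
pathway_net = bipartite.projected_graph(b, pathways)
print(sorted(pathway_net.edges()))  # e.g. [('path_1', 'path_2')]
```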
In order to characterise the private protein data set we proceed as follows: we first apply the Louvain algorithm (Blondel et al. 2008) to the pathway one-mode projection network in order to detect communities. Pathways that belong to the same community are similar in the sense that they are associated with some proteins in common.
Each community can be described by a set of pathway names (see Table 13). Community 9 has the best Jaccard index with the set of private proteins. It has the following pathway description: ["Oxidative phosphorylation", "N-Glycan biosynthesis", "Porphyrin and chlorophyll metabolism", "Ribosome biogenesis in eukaryotes", "RNA transport", "RNA degradation", "Spliceosome", "Ubiquitin mediated proteolysis", "Protein processing in endoplasmic reticulum", "Circadian rhythm"]. We aim to study these pathways in order to determine whether the lists of metabolites and proteins found on them are biologically significant. We also aim to compare them to the shortest paths and their relations to the private egocentric network.
Results discussion
We discuss in this section results obtained from the application of our methodology to the above biological application. We present observations and results related to one layer and those obtained for the whole network.
Observations and results obtained from one layer analysis
Our analysis methodology allows biologists to compare and assess identified molecules and private networks against the open ones, as described below. • Degree values of identified molecules are compared to those of all molecules (see Tables 5 and 7). Comparing these values helps biologists to appreciate the strength of connections between the identified molecules and all the other molecules, in comparison with the strength of connections among all the molecules. In our case, the degree values of identified proteins do not vary much in comparison to other proteins. On the other hand, identified metabolites have very high degree centralities; this means they are strongly connected to other metabolites (see Tables 5 and 7).
• In order to get an idea of the distribution of the identified molecules in the network, we apply the Louvain algorithm (Blondel et al. 2008) to detect communities, and the distribution of private molecules (identified from the experiment) is studied according to the detected communities (a sketch of this computation is given below). In our case, eight communities are detected for the protein layer. The rate of identified proteins in these communities lies in [0.48%, 0.77%], which means the identified proteins are distributed in an almost balanced way among communities. Notice that these rates are comparable to the global rate of identified proteins in the whole network. On the other hand, 39 communities are detected for the metabolite layer. 80% of the identified metabolites belong to only 2 communities, which means that a majority of the identified metabolites are strongly connected and form two highly connected subnetworks. Biologists have confirmed these results by identifying two known categories of metabolites (see Tables 5 and 7).
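The sketch below illustrates, assuming the layer is available as a networkx graph, how Louvain communities can be detected and how the share of identified (private) molecules per community can be tabulated; the toy graph and the `identified` set are hypothetical.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def identified_share_per_community(graph: nx.Graph, identified: set) -> dict:
    """Louvain communities and the fraction of identified nodes in each."""
    communities = louvain_communities(graph, seed=0)
    shares = {}
    for idx, nodes in enumerate(communities):
        shares[idx] = len(nodes & identified) / len(nodes)
    return shares

# Toy layer with two obvious communities joined by a single bridge edge.
g = nx.Graph()
g.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),
                  ("x", "y"), ("y", "z"), ("x", "z"), ("c", "x")])
print(identified_share_per_community(g, {"a", "b", "x"}))
```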
Observations and results obtained from two layers analysis
• The computation of layer reachabilities from metabolites to proteins and from proteins to metabolites allows biologists to appreciate the ratio of immediate interactions between private molecules (identified in their experiment) in comparison with the open data. Table 8 shows that the private egocentric metabolite networks reach (see "Layer and inter-layer reachability of a subnetwork" section) a set of proteins that contains 51% to 67% of the identified proteins, despite a very low precision (0.6% to 0.64%).
Likewise, Table 9 shows that the private egocentric protein networks reach (see "Layer and inter-layer reachability of a subnetwork" section) a set of metabolites that contains 53% to 84% of the identified metabolites, despite a very low precision (0.56% to 0.7%). • Analysing biologically meaningful shortest paths between pairs of private proteins can be very helpful for biologists. We propose to classify them according to their location in the open network (private, egocentric or extra-egocentric). In our case, we notice that the majority of the found shortest paths belong to the egocentric network (or the private one), so they are in the neighbourhood of the private network nodes. Only a few of them (3.37%) can be outside the egocentric network. These few long shortest paths can be isolated and studied by biologists in order to understand the molecular interactions along these paths.
• By using the KEGG database (Kanehisa and Goto 2000), we proposed to characterise the set of private proteins identified from the experiment by a pathway description presented as a list of pathway names. We aim to study these pathways in order to determine whether the lists of metabolites and proteins found on them are biologically significant. We also aim to compare them to the found shortest paths and their relations to the private egocentric network.
Conclusion and perspectives
We presented in this paper a methodology including measures and methods which helps domain experts to understand, evaluate and complete their private data by comparing and/or combining them with open data, when both are modelled by multilayer networks.
We proposed a new formalism for multilayer networks that allows a fine-grained analysis to be carried out by considering two levels: the intra-layer level and the inter-layer one.
We introduced the notions of private multilayer network and private egocentric network which is defined around the private multilayer network. The private egocentric network is used to evaluate the connectivity strength between the different layers of private data in comparison to the open network. We showed how we can use these notions to define the layer and inter-layer reachability metrics of a given sub-network.
We illustrated our methodology through a biological application where interactions between molecules (proteins and metabolites) are extracted from open databases and modelled by a multilayer network. The private data is a set of proteins and metabolites collected experimentally and represented as a set of nodes in the whole multilayer network. Current experimental results are relevant from the biologists' point of view.
We showed that the application of this methodology allows biologists to compare and assess identified molecules and private networks with the open one.
Table 16
Global measures values of the protein-protein networks extracted from the STRING database and the private networks, according to combined score percentiles. Figure 17 shows the violin plots that allow the degree distributions of the private metabolite networks to be compared. Table 20 shows the degree distributions of the open network, of the nodes identified in the experiment, and of the private networks. We notice that the identified metabolites have very high degree centralities, which means they are strongly connected to other metabolites. Applying the Louvain algorithm (Blondel et al. 2008), we obtain a modularity of 0.46. 39 communities are detected from the principal connected component for the minimum combined score:
• 22 have cardinalities between 3 and 28
• 11 have cardinalities between 1000 and 10000
• 6 have cardinalities between 15000 and 30000
We show in Table 3 the distribution of the main identified metabolites.
Appendix C: Proteins-metabolites network analysis
• Networks construction: Table 21 shows global measures values of the bipartite component, the whole networks, the private one and the private egocentric network according to combined score percentiles.
Table 19
Global measures values of the metabolites networks extracted from the STITCH database and the private networks according to combined score percentiles Table 21 Global measures values of the proteins-metabolites networks according to the data base STITCH and to combined score percentiles | 8,220.8 | 2020-07-23T00:00:00.000 | [
"Biology",
"Computer Science",
"Environmental Science"
] |
Study of anisotropic variation of cosmic rays intensity with solar activity
The annual average values of the amplitudes and phases of the first two harmonics of cosmic ray anisotropy have been derived by using the harmonic analysis technique for the period 1989 to 2004, which covers most of the major period of solar cycles 22 and 23. In this paper we have taken the pressure corrected hourly data for the Kiel neutron monitor station (cut-off rigidity ≈ 2.29 GV) to derive the harmonic components of the cosmic ray daily variation and compared them with the data of the Haleakala neutron monitor (cut-off rigidity ≈ 13.2 GV) for the period 1991 to 2004. From the analysis it has been concluded that the diurnal amplitude and phase of the daily variation of cosmic rays are correlated with solar activity. However, the semi-diurnal amplitude and phase are inversely correlated with solar activity for both the stations.
INTRODUCTION
The anisotropic variations in cosmic ray intensity, which are observed only in the heliosphere, can be easily detected by ground-based detectors [1][2][3][4][5][6]. Among the various cosmic ray intensity variations, 27-day variations, Forbush decreases and solar daily variations have been widely investigated by a number of researchers [6][7][8]. The large differences in the diurnal and semi-diurnal variation of cosmic ray intensity indicate that large changes occur in interplanetary space for continuous periods, which are associated with the spatial distribution of cosmic ray intensity as well as geomagnetic disturbances. The amplitudes and phases of the first two harmonics of the cosmic ray daily variation and their average characteristics have been particularly emphasized in a series of papers [9][10]. Since the realization of "in situ" observations, the convection-diffusion mechanism, together with the interplanetary magnetic field (IMF) gradient and curvature drifts of galactic cosmic ray particles, manifests itself as a time variation in the count rate of the monitor, a phenomenon called the solar daily variation or cosmic ray anisotropy [11][12][13][14][15].
In this paper we have collected the diurnal and semi-diurnal amplitudes and phases of cosmic ray anisotropies for the period 1989-2004 from the Kiel neutron monitor (a high-latitude station) and for the period 1991-2004 from the Haleakala (a low-latitude) neutron monitor station, and correlated them with the sunspot number (Rz), covering the previous solar cycle 22 and the present solar cycle 23.
METHOD OF ANALYSIS
Generally, cosmic ray intensity shows significant anisotropic variation on a day-to-day basis, with a most probable amplitude of 0.4% to 0.5% at high- and low-latitude neutron monitor stations. During the period 1989 to 2004, covering the major portion of solar cycles 22 and 23, the amplitudes and phases of the first two harmonics of the daily variation of high energy cosmic rays have been obtained on a day-to-day basis by using the pressure corrected hourly data of neutron monitors, well distributed particularly in latitude, to cover different cutoff rigidities. Such data enable us to study rigidity-dependent variations. These observational results for the first and second (diurnal and semi-diurnal) harmonics have been compared with the solar and geomagnetic parameters. The hourly pressure corrected cosmic ray neutron monitor data of the Kiel (a high-latitude station with low cut-off rigidity) and Haleakala (a low-latitude station with high cut-off rigidity) neutron monitor stations have been obtained from the website www.cosmic ray neutron monitor data NGDC/WDC STP, Boulder-Cosmic Rays. The amplitudes and phases (time of maximum) of the anisotropic variation of cosmic rays have been derived from these data by simple harmonic analysis. The annual average is calculated from individual daily vectors after rejecting the days with universal time (UT) associated cosmic ray variations. The daily values of solar and geomagnetic parameters have been taken from Solar Geophysical Data Books.
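As an illustration of the harmonic-analysis step, the following sketch fits the first and second harmonics of the daily variation to 24 hourly pressure-corrected count rates by a least-squares Fourier fit and converts the coefficients to an amplitude (in %) and a time of maximum (in hours); the synthetic data are illustrative and not the actual neutron monitor records.

```python
import numpy as np

def harmonics(hourly_counts, n_harmonics=2):
    """Fit the first harmonics of a daily variation.

    Returns a list of (amplitude_percent, time_of_maximum_hours) tuples,
    one per harmonic, relative to the daily mean counting rate.
    """
    y = np.asarray(hourly_counts, dtype=float)
    t = np.arange(y.size)                      # hour of day, 0..23
    rel = (y - y.mean()) / y.mean()            # fractional deviation
    results = []
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / 24.0
        a = 2.0 * np.mean(rel * np.cos(w))     # Fourier coefficients
        b = 2.0 * np.mean(rel * np.sin(w))
        amplitude = 100.0 * np.hypot(a, b)     # in percent
        phase = (np.arctan2(b, a) * 24.0 / (2.0 * np.pi * k)) % (24.0 / k)
        results.append((amplitude, phase))
    return results

# Synthetic day: a 0.4% diurnal wave peaking at 15 h local time.
hours = np.arange(24)
counts = 1e4 * (1 + 0.004 * np.cos(2 * np.pi * (hours - 15) / 24))
print(harmonics(counts))   # roughly [(0.4, 15.0), (~0.0, ...)]
```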
DISCUSSION AND CONCLUSIONS
Solar activity plays a significant role in modulating the cosmic ray intensity. It modifies interplanetary and geomagnetic parameters. The cosmic ray daily variations, which are due to the spinning motion of the Earth, are particularly described in this analysis. In fact, the largest amplitudes are observed during the declining phase of solar activity (Figures 1 and 2). We also infer that the semi-diurnal amplitudes for the Kiel and Haleakala neutron monitor stations are negatively correlated with sunspot numbers (Figures 3 and 4), which is opposite to what is found for the diurnal amplitudes.
Nevertheless, the semi-diurnal phase, i.e. the time of maximum, for Kiel and Haleakala is positively correlated (Kiel r = 0.84, Haleakala r = 0.53) with sunspot number, as was also the case for the diurnal phase. The results are presented here for the recent periods.
1) Significant positive correlations of the amplitudes and phases for both stations with the solar parameter (Rz) have been found for the diurnal variation. From the analysis, it is observed that the diurnal amplitude and phase show a significant correlation with sunspot activity.
2) The amplitude as well as the time of maximum of the diurnal phase has been found to increase with increasing sunspot numbers, i.e. the diurnal amplitude as well as the phase is generally high during high solar activity periods. The negative correlations of the semi-diurnal amplitudes with sunspot number signify that during maximum sunspot activity periods, the semi-diurnal amplitudes have the smallest magnitudes.
3) The semi-diurnal amplitudes for the Kiel and Haleakala neutron monitor stations are negatively correlated with sunspot number, which is opposite to what is found for the diurnal amplitudes.
4) Nevertheless, the semi-diurnal phase, i.e. the time of maximum, for Kiel and Haleakala is positively correlated (Kiel r = 0.84, Haleakala r = 0.53) with sunspot number.
Figure 1. The crossplot between the first harmonic (diurnal variation) annual average amplitude (for Kiel as well as for Haleakala, in %) and sunspot numbers, for the interval 1989-2004 for Kiel and 1991-2004 for Haleakala. The best-fit lines are also shown.
Figure 2. The crossplot between the first harmonic (diurnal variation) annual average phase values (in hours) for the Kiel/Haleakala neutron monitors and sunspot numbers, for the interval 1989-2004 for Kiel and 1991-2004 for Haleakala. The best-fit lines are also shown.
Figure 3. The crossplot between the second harmonic (semi-diurnal variation) annual average amplitudes (in %) for the Kiel/Haleakala neutron monitors and sunspot numbers, for the interval 1989-2004 for Kiel and 1991-2004 for Haleakala. The best-fit lines are also shown.
Figure 4.The crossplot between the second harmonic (semidiurnal variation) annual average phase values (in hours) for Kiel as well as for Haleakala neutron monitor station with sunspot numbers for the interval 1989-2004 for Kiel and 1991-2004 for Haleakala.The best fit lines are also shown. | 1,485.4 | 2011-02-28T00:00:00.000 | [
"Physics"
] |
The Lake Petén Itzá Scientific Drilling Project
Polar ice cores provide us with high-resolution records of past climate change at high latitudes on both glacial-to-interglacial and millennial timescales. Paleoclimatologists and climate modelers have focused increasingly on the tropics, however, as a potentially important driver of global climate change because of the region’s role in controlling the Earth’s energy budget and in regulating the water vapor content of the atmosphere. Tropical climate change is often expressed most strongly as variations in precipitation, and closed-basin lakes are sensitive recorders of the balance between precipitation and evaporation. Recent advances in floating platforms and drilling technology now offer the paleolimnological community the opportunity to obtain long sediment records from lowland tropical lakes, as illustrated by the recent successful drilling of Lakes Bosumtwi and Malawi in Africa (Koeberl et al., 2005; Scholz et al., 2006).
Introduction
Polar ice cores provide us with high-resolution records of past climate change at high latitudes on both glacial-to-interglacial and millennial timescales. Paleoclimatologists and climate modelers have focused increasingly on the tropics, however, as a potentially important driver of global climate change because of the region's role in controlling the Earth's energy budget and in regulating the water vapor content of the atmosphere. Tropical climate change is often expressed most strongly as variations in precipitation, and closed-basin lakes are sensitive recorders of the balance between precipitation and evaporation. Recent advances in floating platforms and drilling technology now offer the paleolimnological community the opportunity to obtain long sediment records from lowland tropical lakes, as illustrated by the recent successful drilling of Lakes Bosumtwi and Malawi in Africa (Koeberl et al., 2005; Scholz et al., 2006).
Tropical lakes suitable for paleoclimatic research were sought in Central America to complement the African lake drilling. Most lakes in the Neotropics are shallow, however, and these basins fell dry during the Late Glacial period because the climate in the region was more arid than today. The search for an appropriate lake to study succeeded in 1999, when a bathymetric survey of Lake Petén Itzá, northern Guatemala, revealed a maximum depth of 165 m, making it the deepest lake in the lowlands of Central America (Fig. 1). Although the lake was greatly reduced in volume during the Late Glacial period, the deep basin remained submerged and thus contains a continuous history of lacustrine sediment deposition. A subsequent seismic survey of Lake Petén Itzá in 2002 showed a thick sediment package overlying basement, with several subbasins containing up to 100 m of sediment (Anselmetti et al., 2006). Previous studies showed that the region underwent profound climatic and environmental change from the arid Late Glacial period to the moist early Holocene, but the climate history on millennial or shorter time scales is not known for the last glacial period, and no paleoclimatic data exist beyond ~36 ka.
Objectives and Operations
The primary purpose of the Lake Petén Itzá Scientific Drilling Project (PISDP) was to recover complete lacustrine sediment sequences to study the following:
• the paleoclimatic history of the northern lowland Neotropics on decadal to millennial timescales, emphasizing marine-terrestrial linkages (e.g., correlation to the Cariaco Basin, Greenland ice cores, etc.)
• the paleoecology and biogeography of the tropical lowland forest, such as the response of vegetation to disturbance by fire, climate change, and humans
• the subsurface biogeochemistry, including integrated studies of microbiology, porewater geochemistry, and mineral authigenesis and diagenesis
Drilling operations were conducted in February-March 2006 by Drilling, Observation and Sampling of the Earth's Continental Crust (DOSECC), Inc., using the Global Lake Drilling platform, GLAD 800 (Fig. 2). All primary sites (PI-1, PI-2, PI-3, PI-4, PI-7, and PI-9) and one alternate site (PI-6) were drilled with an average core recovery of 93.4% (Table 1). A total of 1327 m of sediment was recovered, and the deepest site (PI-7) reached 133 m below the lake floor. Multiple holes were drilled at most sites, and cores were logged in the field for density, p-wave velocity, and magnetic susceptibility using a GEOTEK core logger provided by the International Continental Scientific Drilling Program (ICDP). Complete stratigraphic recovery was verified in nearly real time using Splicer, a software program developed by the Ocean Drilling Program that permits alignment of features among holes using core logging data. Downhole logging was conducted by the ICDP Operational Support Group (OSG) at five sites using their slimhole logging tools. Samples from at least one hole from most of the primary sites were squeezed for porewater geochemical analysis, and ephemeral properties such as alkalinity and pH were measured on site. Regional precipitation is related to the seasonal migration of the Intertropical Convergence Zone (ITCZ). The lake water today has a high pH (~8.0) and a low total ionic concentration (12.22 meq·l-1) dominated by calcium, magnesium, sulfate, and bicarbonate, and it is saturated for calcium carbonate. During the Late Glacial period, the lake volume was reduced by 87%, and the water was saturated for gypsum (Hillesheim et al., 2005).
The Petén Lake District has been a region of paleoenvironmental study for over thirty years, with most investigations focused on Holocene paleoecologic reconstruction, especially the impact of the Maya civilization on the lowland tropical environment. Four lithostratigraphic units are defined on the basis of preliminary core-catcher descriptions and the split cores of Site PI-6. The boundaries between Units I, II, III and IV also correspond to changes in the character of the bulk-density curve and to changes in the seismic profiles (Figs. 3 and 4). Uppermost Unit I, coinciding with seismic sequence T (Fig. 2), consists primarily of gray clay with abundant charcoal, and this unit has been recovered previously from the basin in numerous Kullenberg piston cores (Hillesheim et al., 2005). Unit I spans the entire Holocene, but the bulk of the clay was deposited in a relatively short period between ~3000 and 1000 yrs BP as a consequence of soil erosion brought about by deforestation of the watershed for Maya agriculture. Unit II coincides approximately with seismic sequences G and R (Fig. 2) and consists of interbedded dense gypsum sand, clay, and carbonate mud that were deposited during the latest Pleistocene. Smear-slides were prepared from core-catcher samples to describe lithologic changes at each site. Cores were stored onsite in a refrigerated container that was shipped to the National Lacustrine Core Repository (LacCore) at the University of Minnesota (U.S.A.), where initial core descriptions are under way. All data collected on the drilling platform and in the field laboratories were entered into the ICDP Drilling Information System (DIS), uploaded with daily reports and photos to the servers at the GeoForschungsZentrum, Potsdam, Germany, and made available online (http://peten-itza.icdp-online.org).
Preliminary Results
Two shallow sites (PI-9 and PI-7) were drilled in 30 m and 46 m water depth to a maximum depth below the lake floor of 16.4 m and 133.2 m, respectively (Table 1). The great thickness of sediment at PI-7 was surprising, as basement was thought to lie much shallower, at ~47 m (Fig. 2). The shallow sites were not expected to yield long, continuous lacustrine records because relatively short (<6 m) piston cores at these water depths contain paleosols, indicating subaerial exposure during the Late Glacial period (Hillesheim et al., 2005). Shallow-water facies consist primarily of carbonate-rich sediment with abundant shell material, gypsum sand, and indurated gypsum crusts. Deep-water facies consist of diatom-rich, gray to brown clay that was deposited during lake highstands.
Continuous lacustrine deposition was expected for the intermediate (PI-1, PI-2, and PI-6) and deep-water sites (PI-3 and PI-4), with lowstands represented by shallow-water facies (e.g., gypsum sand), especially at the sites of intermediate water depth. Intermediate Site PI-2 is located in the eastern basin that was separated from the central basin during times of greatly reduced lake level (Fig. 1), thereby providing an opportunity to study a semi-independent basin during lowstands. At the deepest sites (PI-3 and PI-4), we were concerned about potential downslope transport, and, indeed, clear evidence of slumping (tilted beds) and sediment disturbance was observed in parts of the section at both sites.
The lithostratigraphy is similar for the intermediate- and deep-water sites (see the four lithostratigraphic units defined above). The boundary between Units II and I coincides with the Pleistocene/Holocene boundary and reflects a transition from an arid climate during the Late Glacial period to a moist climate during the early Holocene (Hillesheim et al., 2005). The boundary corresponds to a sharp change in sediment density (Figs. 5 and 6).
High-frequency variations in bulk density occur throughout lithologic Unit II and can be correlated among sites in the deep basin (Fig. 5). These density changes reflect alternating beds of gypsum and clay-rich sediment, which represent lake-level lowstands (gypsum) and highstands (clay). Initial radiocarbon dates suggest an average sedimentation rate for the upper 35 mcd of about 1 m per thousand years. Unit III occurs below the gypsiferous deposits and correlates roughly to seismic sequence B (Fig. 2). It consists of a thick sequence of organic-rich carbonate clay and silt that is rich in diatoms and carbonate microfossils. The age of Unit III is not yet known, but radiocarbon and U/Th measurements are under way to date these deposits. At the bottom of the holes, Unit IV consists of gravels and angular pieces of indurated carbonate rock that likely represent bedrock.
One unexpected finding was the common occurrence of elemental sulfur nodules at several sites (Fig. 6). These nodules form post-depositionally, as they cut across bedding, and tend to occur at the transitions from gypsum to clay facies. We speculate that abundant sulfate, both in the water column and at depth in the sediment, promotes sulfate reduction and production of H2S. In the absence of abundant Fe, the H2S may then be oxidized to elemental S. An integrated program of subsurface microbiology and pore-water geochemistry is planned to study this process.
Downhole logging with slimline tools was conducted at five sites (PI-1, PI-2, PI-3, PI-4, and PI-7), both through the drill pipe and in the open borehole where conditions permitted. Natural and spectral gamma radiation tools were run through the cased hole at all sites. Similar natural gamma radiation measurements are being made on whole cores, and these measurements will permit core-log integration and construction of another depth scale (i.e., equivalent logging depth) that will correct for stretching or compression of the cores. Comparison of core and borehole logging data with seismic profiles will enable correlation of seismic reflections to lithologic changes and development of a seismic sequence stratigraphy for the entire lake basin.
Summary
The Petén Itzá Scientific Drilling Project achieved all of its field objectives and recovered 1327 m of high-quality core at seven sites. Preliminary results with respect to sediment lithology, density, magnetic susceptibility, and downhole natural gamma logs display a high degree of climate-related variability that can, in some cases, be correlated among sites. The overall post-drilling objective will be to place this variability in a firm chronologic framework and decipher the history of the northern Neotropic hydrologic cycle, its relation to changes in the position of the Atlantic ITCZ, and linkages to climate variability in the region (e.g., Cariaco Basin) and elsewhere (e.g., high-latitude North Atlantic).
"Engineering",
"Environmental Science",
"Physics"
] |
LP-based tractable subcones of the semidefinite plus nonnegative cone
The authors in a previous paper devised certain subcones of the semidefinite plus nonnegative cone and showed that satisfaction of the requirements for membership of those subcones can be detected by solving linear optimization problems (LPs) with O(n) variables and O(n^2) constraints. They also devised LP-based algorithms for testing copositivity using the subcones. In this paper, they investigate the properties of the subcones in more detail and explore larger subcones of the positive semidefinite plus nonnegative cone whose satisfaction of the requirements for membership can be detected by solving LPs. They introduce a semidefinite basis (SD basis) that is a basis of the space of n × n symmetric matrices consisting of n(n+1)/2 symmetric semidefinite matrices. Using the SD basis, they devise two new subcones for which detection can be done by solving LPs with O(n^2) variables and O(n^2) constraints. The new subcones are larger than the ones in the previous paper and inherit their nice properties. The authors also examine the efficiency of those subcones in numerical experiments. The results show that the subcones are promising for testing copositivity as a useful application.
Introduction
Let S n be the set of n × n symmetric matrices, and define their inner product as ⟨A, B⟩ := Σ i,j A ij B ij for A, B ∈ S n . Bomze et al. [7] coined the term "copositive programming" in relation to the following problem in 2000, on which many studies have since been conducted: Minimize ⟨C, X⟩ subject to ⟨A i , X⟩ = b i (i = 1, 2, . . ., m), X ∈ COP n ,
where COP n is the set of n × n copositive matrices, i.e., matrices whose quadratic form takes nonnegative values on the n-dimensional nonnegative orthant R n + : COP n := {A ∈ S n | x T Ax ≥ 0 for all x ∈ R n + }. We call the set COP n the copositive cone. A number of studies have focused on the close relationship between copositive programming and quadratic or combinatorial optimization (see, e.g., [7][8][15][33][34][13][14][20]).
Interested readers may refer to [21] and [9] for background on and the history of copositive programming.
The following cones are attracting attention in the context of the relationship between combinatorial optimization and copositive optimization (see, e.g., [21][9]).Here, conv (S) denotes the convex hull of the set S.
-The semidefinite cone S + n , i.e., the set of n × n symmetric positive semidefinite matrices.
-The copositive cone COP n .
-The semidefinite plus nonnegative cone S + n + N n , which is the Minkowski sum of S + n and N n , where N n denotes the cone of n × n symmetric componentwise nonnegative matrices.
-The union S + n ∪ N n of S + n and N n .
-The doubly nonnegative cone S + n ∩ N n , i.e., the set of positive semidefinite and componentwise nonnegative matrices.
-The completely positive cone CP n := conv {xx T | x ∈ R n + }.
Except for the set S + n ∪ N n , all of the above cones are proper (see Section 1.6 of [5], where a proper cone is called a full cone), and we can easily see from the definitions that the following inclusions hold: CP n ⊆ S + n ∩ N n ⊆ S + n ∪ N n ⊆ S + n + N n ⊆ COP n . While copositive programming has the potential of being a useful optimization technique, it still faces challenges. One of these challenges is to develop efficient algorithms for determining whether a given matrix is copositive. It has been shown that this problem is co-NP-complete [31][19][20], and many algorithms have been proposed to solve it (see, e.g., [6][12][30][29][39][36][10][16][22][37][11]). Here, we are interested in numerical algorithms which (a) apply to general symmetric matrices without any structural assumptions or dimensional restrictions and (b) are not merely recursive, i.e., do not rely on information taken from all principal submatrices, but rather focus on generating subproblems in a somehow data-driven way, as described in [10]. There are few such algorithms, but they often use tractable subcones M n of the semidefinite plus nonnegative cone S + n + N n for detecting copositivity (see, e.g., [12][36][10][37]). As described in Section 5, these algorithms require one to check whether A ∈ M n or A ̸∈ M n repeatedly over simplicial partitions. The desirable properties of the subcones M n ⊆ S + n + N n used by these algorithms can be summarized as follows: P1 For any given n × n symmetric matrix A ∈ S n , we can check whether A ∈ M n within a reasonable computation time, and P2 M n is a subset of the semidefinite plus nonnegative cone S + n + N n that at least includes the n × n nonnegative cone N n and contains as many elements of S + n + N n as possible.
The authors, in [37], devised certain subcones of the semidefinite plus nonnegative cone S + n + N n and showed that satisfaction of the requirements for membership of those cones can be detected by solving linear optimization problems (LPs) with O(n) variables and O(n 2 ) constraints.They also created an LP-based algorithm that uses these subcones for testing copositivity as an application of those cones.
The aim of this paper is twofold. First, we investigate the properties of the subcones in more detail, especially in terms of their convex hulls. Second, we search for subcones of the semidefinite plus nonnegative cone S + n + N n that have properties P1 and P2. To address the second aim, we introduce a semidefinite basis (SD basis) that is a basis of the space S n consisting of n(n + 1)/2 symmetric semidefinite matrices. Using the SD basis, we devise two new types of subcones for which detection can be done by solving LPs with O(n^2) variables and O(n^2) constraints. As we will show in Corollary 3.4, these subcones are larger than the ones proposed in [37] and inherit their nice properties. We also examine the efficiency of those subcones in numerical experiments. This paper is organized as follows: In Section 2, we show several tractable subcones of S + n + N n that are receiving much attention in the field of copositive programming and investigate their properties, the results of which are summarized in Figures 1 and 2. In Section 3, we propose new subcones of S + n + N n having properties P1 and P2. We define SD bases using Definitions 3.2 and 3.3 and construct new LPs for detecting whether a given matrix belongs to the subcones. In Section 4, we perform numerical experiments in which the new subcones are used for identifying the given matrices A ∈ S + n + N n . As a useful application of the new subcones, Section 5 describes experiments for testing copositivity of matrices arising from the maximum clique problem and standard quadratic optimization problems. The results of these experiments show that the new subcones are promising not only for identification of A ∈ S + n + N n but also for testing copositivity. We give concluding remarks in Section 6.
2 Some tractable subcones of S + n + N n and related work
In this section, we show several tractable subcones of the semidefinite plus nonnegative cone S + n + N n . Since the set S + n + N n is the dual cone of the doubly nonnegative cone S + n ∩ N n , the weak membership problem for S + n + N n can be solved (to an accuracy of ϵ) by solving the following doubly nonnegative program (which can be expressed as a semidefinite program of size O(n^2)),
where I n denotes the n × n identity matrix. Thus, the set S + n + N n is a rather large and tractable convex subcone of COP n . However, solving the problem takes a lot of time [36], [38] and does not make for a practical implementation in general. To overcome this drawback, more easily tractable subcones of S + n + N n have been proposed.
We define the matrix functions N, S : S n → S n such that, for A ∈ S n , we have [N(A)] ij := A ij if A ij > 0 and i ̸= j, and [N(A)] ij := 0 otherwise, together with S(A) := A − N(A). In [36], the authors defined the following set: H n := {A ∈ S n | S(A) ∈ S + n }. Here, we should note that A = S(A) + N(A) with N(A) ∈ N n . Also, for any A ∈ N n , S(A) is a nonnegative diagonal matrix, and hence, N n ⊆ H n . The determination of A ∈ H n is easy and can be done by extracting the positive elements A ij > 0 (i ̸= j) as N(A) ij and by performing a Cholesky factorization of S(A) (cf. Algorithm 4.2.4 in [26]). Thus, from the inclusion relation (2), we see that the set H n has the desirable P1 property. However, S(A) is not necessarily positive semidefinite even if A ∈ S + n + N n . The following theorem summarizes the properties of the set H n . Theorem 2.1 ([25] and Theorem 4.2 of [36]). H n is a convex cone and N n ⊆ H n ⊆ S + n + N n , while S + n ̸⊆ H n for n ≥ 3. The construction of the subcone H n is based on the idea of "checking nonnegativity first and checking positive semidefiniteness second." In [37], another subcone is provided that is based on the idea of "checking positive semidefiniteness first and checking nonnegativity second." Let O n be the set of n × n orthogonal matrices and D n be the set of n × n diagonal matrices. For a given symmetric matrix A ∈ S n , suppose that A = P ΛP T with P ∈ O n and Λ = Diag (λ 1 , λ 2 , . . ., λ n ) ∈ D n , as in (6). By introducing another diagonal matrix Ω = Diag (ω 1 , ω 2 , . . ., ω n ) ∈ D n , we can make the decomposition A = P (Λ − Ω)P T + P ΩP T . If ω i ≤ λ i (i = 1, 2, . . ., n), then the matrix P (Λ − Ω)P T is positive semidefinite. Thus, if we can find a suitable diagonal matrix Ω ∈ D n satisfying these conditions together with P ΩP T ∈ N n , then (7) and (2) imply A ∈ S + n + N n ⊆ COP n . We can determine whether such a matrix exists or not by solving the following linear optimization problem with variables ω i (i = 1, 2, . . ., n) and α: maximize α subject to λ i ≥ ω i (i = 1, 2, . . ., n) and [P ΩP T ] ij ≥ α (1 ≤ i ≤ j ≤ n). Here, for a given matrix A, [A] ij denotes the (i, j)th element of A.
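A minimal sketch of the H n membership test is given below: the positive off-diagonal entries are collected in N(A), the remainder S(A) = A − N(A) is formed, and a Cholesky factorization (with a small tolerance shift) is attempted; tie-breaking and tolerances in the actual implementation of [36] may differ.

```python
import numpy as np

def in_H_n(A: np.ndarray, tol: float = 1e-12) -> bool:
    """Sufficient test for A in S+_n + N_n via the set H_n.

    N(A) keeps the positive off-diagonal entries of A; S(A) = A - N(A).
    If S(A) is positive semidefinite, then A = S(A) + N(A) certifies
    membership in S+_n + N_n (and hence copositivity of A).
    """
    A = np.asarray(A, dtype=float)
    N = np.where(A > 0, A, 0.0)
    np.fill_diagonal(N, 0.0)                 # only off-diagonal entries
    S = A - N
    try:
        # Cholesky succeeds iff the (slightly shifted) matrix is PSD.
        np.linalg.cholesky(S + tol * np.eye(A.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False

# Example: a nonnegative matrix always belongs to H_n.
print(in_H_n(np.array([[1.0, 2.0], [2.0, 1.0]])))  # True
```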
Problem (LP) P,Λ has a feasible solution at which ω i = λ i (i = 1, 2, . . ., n) and α = min i,j [A] ij , since P ΛP T = A. For each i = 1, 2, . . ., n, the constraints λ i ≥ ω i keep the objective bounded above (indeed, α ≤ [P ΩP T ] ii ≤ [A] ii for every i). Thus, (LP) P,Λ has an optimal solution with optimal value α * (P, Λ). If α * (P, Λ) ≥ 0, there exists a matrix Ω for which the decomposition (8) holds. The following set G n is based on the above observations and was proposed in [37]: G n := {A ∈ S n | PL Gn (A) ̸= ∅}, where PL Gn (A) is the set defined in (12). When all the eigenvalues of A are distinct, the problem (LP) P,Λ is essentially unique for any possible P ∈ O n . In this case, α * (P, Λ) < 0 with a specific P ∈ O n implies A ̸∈ G n . However, if this is not the case (i.e., an eigenspace of A has at least dimension 2), α * (P, Λ) < 0 with a specific P ∈ O n does not necessarily guarantee that A ̸∈ G n .
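The following sketch shows one plausible form of (LP) P,Λ , namely to maximize α subject to ω i ≤ λ i and the requirement that every entry of P Diag(ω)P T is at least α, solved with scipy; the exact constraint set used in [37] may differ from this reconstruction, and a nonnegative optimal value certifies A ∈ S + n + N n .

```python
import numpy as np
from scipy.optimize import linprog

def alpha_star(A: np.ndarray):
    """Sketch of (LP)_{P,Lambda}: maximize alpha subject to
         omega_i <= lambda_i               (keeps P(Lambda-Omega)P^T PSD)
         [P Diag(omega) P^T]_{jk} >= alpha (entrywise nonnegativity goal)
    """
    lam, P = np.linalg.eigh(A)                # A = P Diag(lam) P^T
    n = A.shape[0]
    c = np.zeros(n + 1)                       # variables (omega, alpha)
    c[-1] = -1.0                              # linprog minimizes -alpha
    A_ub, b_ub = [], []
    for j in range(n):
        for k in range(j, n):
            # alpha - sum_i omega_i P[j,i] P[k,i] <= 0
            row = np.zeros(n + 1)
            row[:n] = -P[j, :] * P[k, :]
            row[-1] = 1.0
            A_ub.append(row)
            b_ub.append(0.0)
    bounds = [(None, lam_i) for lam_i in lam] + [(None, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[-1] if res.success else None

# A nonnegative PSD example: the optimal alpha is positive.
print(alpha_star(np.array([[2.0, 1.0], [1.0, 2.0]])))
```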
The above discussion can be extended to any matrix P ∈ R m×n ; i.e., it does not necessarily have to be orthogonal or even square.The reason why the orthogonal matrices P ∈ O n are dealt with here is that some decomposition methods for (6) have been established for such orthogonal P s.The property G n = com(S n , N n ) in Theorem 2.3 also follows when P is orthogonal.
In [37], the authors described another set G n that is closely related to G n .
where for A ∈ S n , the set PL Gn (A) is given by replacing O n in (12) by the space R n×n of n × n arbitrary matrices, i.e., PL Gn (A) := {(P, Λ) ∈ R n×n × D n | P and Λ satisfy (6) and α * (P, Λ) ≥ 0}.
If the set PL Gn (A) in ( 12) is nonempty, then the set PL Gn (A) is also nonempty, which implies the following inclusions: Before describing the properties of the sets G n and G n , we will prove a preliminary lemma.
Lemma 2.2. Let K 1 and K 2 be two convex cones containing the origin. Then conv (K 1 ∪ K 2 ) = K 1 + K 2 . Proof. Since K 1 and K 2 are convex cones, we can easily see that the inclusion K 1 + K 2 ⊆ conv (K 1 ∪ K 2 ) holds. The converse inclusion also follows from the fact that K 1 and K 2 are convex cones. Since K 1 and K 2 contain the origin, we see that the inclusion K 1 ∪ K 2 ⊆ K 1 + K 2 holds. From this inclusion and the convexity of the sets K 1 and K 2 , we can conclude that conv (K 1 ∪ K 2 ) ⊆ K 1 + K 2 . The following theorem shows some of the properties of G n and G n . Assertions (i) and (ii) were proved in Theorem 3.2 of [37]. Assertion (iii) comes from the fact that S + n and N n are convex cones and from Lemma 2.2. Assertions (iv)-(vi) follow from (i)-(iii), the inclusion (15) and Theorem 2.1.
A number of examples provided in [37] illustrate the differences between H n and G n . Moreover, the following two matrices have three distinct eigenvalues each, and their membership can be identified by solving the associated LPs. At present, it is not clear whether the set G n = com (S + n , N n ) is convex or not. As we will mention on page 18, our numerical results suggest that the set might not be convex.
Before closing this discussion, we should point out another interesting subset of S + n + N n proposed by Bomze and Eichfelder [10]. Suppose that a given matrix A ∈ S n can be decomposed as (6), and define the diagonal matrix Λ + by [Λ + ] ii = max{0, λ i }. Let A + := P Λ + P T and A − := A + − A. Then, we can easily see that A + and A − are positive semidefinite. Using this decomposition A = A + − A − , Bomze and Eichfelder derived the following LP-based sufficient condition for A ∈ S + n + N n in [10].
Theorem 2.4 (Theorem 2.6 of [10]). Let x ∈ R n + be such that A + x has only positive coordinates. Consider the following LP with O(n) variables, where f is an arbitrary vector and e denotes the vector of all ones, and define the set L n := {A ∈ S n | the condition of Theorem 2.4 holds for some feasible solution x of (17)}.
Then Theorem 2.4 ensures that L n ⊆ COP n . The following proposition gives a characterization of when the feasible set of the LP (17) is empty.
and hence, S + n ̸⊆ L n for n ≥ 2, similarly to the set H n for n ≥ 3 (see Theorem 2.1).
Semidefinite bases
In this section, we improve the subcone G n in terms of P2. For a given matrix A of (6), the linear optimization problem (LP) P,Λ in (10) can be solved in order to find a nonnegative matrix that is a linear combination of the rank-one matrices p i p i T (i = 1, 2, . . ., n). This is done by decomposing A ∈ S n into two parts, A = P (Λ − Ω)P T + P ΩP T , such that the first part is positive semidefinite and the second part is nonnegative. However, since the matrices p i p i T (i = 1, 2, . . ., n) are only n linearly independent matrices in the n(n + 1)/2-dimensional space S n , the intersection of the set of linear combinations of p i p i T and the nonnegative cone N n may not have a nonzero volume even if it is nonempty. On the other hand, if we have a set of positive semidefinite matrices (i = 1, 2, . . ., n(n + 1)/2) that gives a basis of S n , then the corresponding intersection becomes the nonnegative cone N n itself, and we may expect a greater chance of finding a nonnegative matrix by enlarging the feasible region of (LP) P,Λ . In fact, we can easily find a basis of S n consisting of n(n + 1)/2 semidefinite matrices from n given orthogonal vectors p i ∈ R n (i = 1, 2, . . ., n), based on the following result from [18].
independent positive semidefinite matrices.Therefore, the set V gives a basis of the set S n of n × n symmetric matrices.
The above proposition ensures that the following set B + (p 1 , p 2 , . . ., p n ) is a basis of n × n symmetric matrices.
Definition 3.2 (Semidefinite basis type I). For a given set of n-dimensional orthogonal vectors p
We call the set a semidefinite basis type I induced by p i ∈ R n (i = 1, 2, . . ., n). A variant of the semidefinite basis type I is as follows. Noting that the equivalence holds for any i ̸= j, we see that B − (p 1 , p 2 , . . ., p n ) is also a basis of n × n symmetric matrices.
Definition 3.3 (Semidefinite basis type II). For a given set of n-dimensional orthogonal vectors
We call the set a semidefinite basis type II induced by p i ∈ R n (i = 1, 2, . . ., n).
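A sketch of how such semidefinite bases can be generated from orthogonal vectors is given below; the rank-one forms built from p i + p j (type I) and p i − p j (type II), without scaling factors, are assumptions of this sketch and may differ in detail from the exact Definitions 3.2 and 3.3.

```python
from itertools import combinations

import numpy as np

def sd_basis(P: np.ndarray, kind: str = "I") -> list:
    """Rank-one matrices spanning S_n, built from orthogonal columns of P.

    Type I appends (p_i + p_j)(p_i + p_j)^T for i < j, type II appends
    (p_i - p_j)(p_i - p_j)^T, on top of the n matrices p_i p_i^T.
    """
    n = P.shape[1]
    basis = [np.outer(P[:, i], P[:, i]) for i in range(n)]
    sign = 1.0 if kind == "I" else -1.0
    for i, j in combinations(range(n), 2):
        v = P[:, i] + sign * P[:, j]
        basis.append(np.outer(v, v))
    return basis

# Sanity check: with orthogonal columns, the n(n+1)/2 matrices are
# linearly independent, hence a basis of S_n (here P = I_4).
n = 4
B = sd_basis(np.eye(n), kind="I")
vecs = np.array([M[np.triu_indices(n)] for M in B])
print(len(B), np.linalg.matrix_rank(vecs))   # 10 10
```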
Using the map Π + in (19), the linear optimization problem (LP) P,Λ in (10) can be equivalently rewritten. The problem (LP) P,Λ is based on the decomposition (18). Starting with (18), the matrix A can be decomposed using Π + (p i , p j ) in (19) and Π − (p i , p j ) in (21), as in (23) and (24). On the basis of the decompositions (23) and (24), we devise two linear optimization problems, (LP) + P,Λ and (LP) ± P,Λ , as extensions of (LP) P,Λ . Both problems are feasible and bounded by arguments similar to the one for (LP) P,Λ on page 5. Thus, (LP) + P,Λ and (LP) ± P,Λ have optimal solutions, with optimal values α + * (P, Λ) and α ± * (P, Λ), respectively.
If the optimal value α + * (P, Λ) of (LP) + P,Λ is nonnegative, then, by rearranging (23), the optimal solution ω + * ij (1 ≤ i ≤ j ≤ n) can be made to give a decomposition of A into a positive semidefinite part and a nonnegative part. In the same way, if the optimal value α ± * (P, Λ) of (LP) ± P,Λ is nonnegative, then, by rearranging (24), the optimal solution ω ± * ij (1 ≤ i ≤ j ≤ n) can be made to give such a decomposition. On the basis of the above observations, we can define new subcones of S + n + N n in a similar manner to (11) and (13).
For a given A ∈ S n , define the following four sets of pairs of matrices From the construction of problems (LP) P,Λ , (LP) + P,Λ and (LP) ± P,Λ , and the definitions ( 27) and (28), we can easily see that The corollary below follows from (iv)-(vi) of Theorem 2.3 and the above inclusions.
Corollary 3.4. (i)
(iii) The convex hull of each of these sets coincides with S + n + N n .
The following table summarizes the sizes of the LPs (10), (25), and (26) that we have to solve in order to identify, respectively, (P, Λ) ∈ PL Gn (A) (or (P, Λ) ∈ PL Gn (A)), (P, Λ) ∈ PL F + n (A) (or (P, Λ) ∈ PL F + n (A)), and (P, Λ) ∈ PL F ± n (A) (or (P, Λ) ∈ PL F ± n (A)).
In this section, we investigate the effect of using the sets G n , F + n and F ± n for identification of the fact that A ∈ S + n + N n . We generated random instances of A ∈ S + n + N n by using the method described in Section 2 of [10]. For an n × n matrix B with entries independently drawn from a standard normal distribution, we obtained a random positive semidefinite matrix S = BB T . An n × n random nonnegative matrix N was constructed using N = C − c min I n with C = F + F T for a random matrix F with entries uniformly distributed in [0, 1] and c min being the minimal diagonal entry of C. We set A = S + N ∈ S + n + N n . The construction was designed so as to maintain the nonnegativity of N while increasing the chance that S + N would be indefinite and thereby avoid instances that are too easy.
For each instance A ∈ S + n + N n , we used the MATLAB command "[P, Λ] = eig(A)" and obtained (P, Λ) ∈ O n × D n . We checked whether (P, Λ) ∈ PL Gn ((P, Λ) ∈ PL F + n and (P, Λ) ∈ PL F ± n ) by solving (LP) P,Λ in (10) ((LP) + P,Λ in (25) and (LP) ± P,Λ in (26)), and if it held, we identified A ∈ S + n + N n . Table 2 shows the number of matrices (denoted by "#A") that were identified and the average CPU time (denoted by "A.t.(s)"), where 1000 matrices were generated for each n. We used a 3.07GHz Core i7 machine with 12 GB of RAM and Gurobi 6.5 for solving LPs. Note that we performed the last identification of A ∈ S + n + N n as a reference, for which we used SeDuMi 1.3 with MATLAB R2015a to solve the semidefinite program (3). The table yields the following observations: • All of the matrices were identified as A ∈ S + n + N n by checking (P, Λ) ∈ PL F ± n . The result is comparable to the one in Section 2 of [10]. The average CPU time for checking (P, Λ) ∈ PL F ± n is faster than the one for solving the semidefinite program (3) when n ≥ 20.
• For any n, the number of identified matrices increases in the order of the set inclusion relation: G n ⊆ F + n ⊆ F ± n , while the result for H n ̸ ⊆ G n is better than the one for G n when n = 10.
• For the sets H n , G n and F + n , the number of identified matrices decreases as the size of n increases.
LP-based algorithms for testing A ∈ COP n
In this section, we investigate the effect of using the sets F + n , F + n , F ± n and F ± n for testing whether a given matrix A is copositive by using Sponsel, Bundfuss, and Dür's algorithm [36].
Outline of the algorithms
By defining the standard simplex ∆ S by ∆ S = {x ∈ R n + | e T x = 1}, we can see that a given n × n symmetric matrix A is copositive if and only if x T Ax ≥ 0 for all x ∈ ∆ S (see Lemma 1 of [12]).For an arbitrary simplex ∆, a family of simplices Such a partition can be generated by successively bisecting simplices in the partition.For a given simplex ∆ = conv{v 1 , . . ., v n }, consider the midpoint satisfies the above conditions for simplicial partitions.See [27] for a detailed description of simplicial partitions.
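A compact sketch of the bisection step is given below; the choice of the longest edge is one common rule that makes the diameters of the generated simplices shrink, and the matrix representation of a simplex by its vertex columns matches the notation V ∆ used below.

```python
from itertools import combinations

import numpy as np

def bisect_simplex(V: np.ndarray):
    """Bisect a simplex along its longest edge.

    V is an n x n matrix whose columns are the simplex vertices.  The
    midpoint of the longest edge replaces one endpoint in each of the
    two children, which yields a valid simplicial partition.
    """
    n = V.shape[1]
    i, j = max(combinations(range(n), 2),
               key=lambda e: np.linalg.norm(V[:, e[0]] - V[:, e[1]]))
    mid = 0.5 * (V[:, i] + V[:, j])
    left, right = V.copy(), V.copy()
    left[:, j] = mid
    right[:, i] = mid
    return left, right

# Usage on the standard simplex of R^3 (columns are the unit vectors).
children = bisect_simplex(np.eye(3))
print(children[0], children[1], sep="\n\n")
```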
Denote the set of vertices of partition P by Each simplex ∆ is determined by its vertices and can be represented by a matrix V ∆ whose columns are these vertices.Note that V ∆ is nonsingular and unique up to a permutation of its columns, which does not affect the argument [36].Define the set of all matrices corresponding to simplices in partition P as The "fineness" of a partition P is quantified by the maximum diameter of a simplex in P, denoted by The above notation was used to show the following necessary and sufficient conditions for copositivity in [36].The first theorem gives a sufficient condition for copositivity.
Theorem 5.1 (cf. [36]). Let M n be a subset of COP n and let P be a simplicial partition of ∆ S . If V T ∆ AV ∆ ∈ M n for all ∆ ∈ P, then A is also copositive.
The above theorem implies that by choosing M n = N n (see (2)), A is copositive if V T ∆ AV ∆ ∈ N n holds for any ∆ ∈ P. Theorem 5.2 (Theorem 2.2 of [36]). Let A ∈ S n be strictly copositive, i.e., A ∈ int (COP n ). Then there exists ε > 0 such that for all partitions P of ∆ S with δ(P) < ε, we have V T ∆ AV ∆ ∈ N n for all ∆ ∈ P. The above theorem ensures that if A is strictly copositive (i.e., A ∈ int (COP n )), the copositivity of A (i.e., A ∈ COP n ) can be detected in finitely many iterations of an algorithm employing a subdivision rule with δ(P) → 0. A similar result can be obtained for the case A ̸∈ COP n , as follows.
A / ∈ COP n
2. There is an ε > 0 such that for any partition P with δ(P) < ε, there exists a vertex v ∈ V (P) such that v T Av < 0.
The following algorithm, from [36], is based on the above three results.
If A is not copositive, i.e., A ̸ ∈ COP n , then Algorithm 1 terminates finitely, returning "A is not copositive."
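A schematic version of such a partition-and-test loop, with the simplest admissible choice M n = N n , is sketched below; it is a simplified reconstruction, not the authors' implementation, and it returns an undecided answer when an iteration budget is exhausted (as can happen near the boundary of COP n ).

```python
from itertools import combinations

import numpy as np

def _bisect(V: np.ndarray):
    """Split a simplex (columns of V) at the midpoint of its longest edge."""
    i, j = max(combinations(range(V.shape[1]), 2),
               key=lambda e: np.linalg.norm(V[:, e[0]] - V[:, e[1]]))
    mid = 0.5 * (V[:, i] + V[:, j])
    left, right = V.copy(), V.copy()
    left[:, j] = mid
    right[:, i] = mid
    return left, right

def is_copositive(A: np.ndarray, max_iter: int = 100000):
    """Simplicial-partition copositivity test with M_n = N_n.

    Returns True / False, or None when max_iter simplices were examined
    without a decision.
    """
    worklist = [np.eye(A.shape[0])]            # standard simplex: V = I_n
    for _ in range(max_iter):
        if not worklist:
            return True                        # every simplex was certified
        V = worklist.pop()
        if any(v @ A @ v < 0 for v in V.T):    # nonnegative x with x^T A x < 0
            return False
        if np.all(V.T @ A @ V >= 0):           # V^T A V in N_n: prune
            continue
        worklist.extend(_bisect(V))            # otherwise refine
    return None

# [[1, -1], [-1, 1]] is positive semidefinite and hence copositive.
print(is_copositive(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```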
In this section, we investigate the effect of using the sets H n from ( 5), G n from (11), and F + n and F ± n from (28) as the set M n in the above algorithm.
At Line 7, we can check whether V T ∆ AV ∆ ∈ M n directly in the case where M n = H n .In other cases, we diagonalize V T ∆ AV ∆ as V T ∆ AV ∆ = P ΛP T and check whether (P, Λ) ∈ PL Mn (V T ∆ AV ∆ ) according to definitions (12) or (27).If the associated LP has the nonnegative optimal value, then we identify A ∈ M n .
At Line 8, Algorithm 1 removes the simplex that was determined at Line 7 to be in no further need of exploration by Theorem 5.1.The accuracy and speed of the determination influence the total computational time and depend on the choice of the set M n ⊆ COP n .
Here, if we choose , as proposed in [37].
At Line 8, we need to solve an additional LP but do not need to diagonalize V T ∆ AV ∆ .Let P and Λ be matrices satisfying (6).Then the matrix V T ∆ P can be used to diagonalize V T ∆ AV ∆ , i.e., while V T ∆ P ∈ R n×n is not necessarily orthogonal.Thus, we can test whether (V T ∆ P, Λ) ∈ PL Mn by solving the corresponding LP according to the definitions (14) or (27).If (V T ∆ P, Λ) ∈ PL Mn holds, then we can identify If (V T ∆ P, Λ) ̸ ∈ PL Mn at Line 8, we proceed to the original step to identify whether V T ∆ AV ∆ ∈ M n at Line 12. Similarly to Line 7 of Algorithm 1, we diagonalize V T ∆ AV ∆ as V T ∆ AV ∆ = P ΛP T with an orthogonal matrix P and a diagonal matrix Λ.Then we check whether (P, Λ) ∈ PL Mn by solving the corresponding LP, and if (P, Λ) ∈ PL Mn , we can identify V T ∆ AV ∆ ∈ M n .
At Line 18, we don't need to diagonalize V T ∆ p AV ∆ p or solve any more LPs.Let ω * ∈ R n be an optimal solution of the corresponding LP obtained at Line 8 and let Ω * := Diag (ω * ).Then the feasibility of ω * implies the positive semidefiniteness of the matrix
Numerical results
This subsection describes experiments for testing copositivity using as the set M n in Algorithms 1 and 2. We implemented the following seven algorithms in MATLAB R2015a on a 3.07GHz Core i7 machine with 12 GB of RAM, using Gurobi 6.5 for solving LPs: As test instances, we used the two kinds of matrices arising from the maximum clique problem (Section 5.2.1) and from standard quadratic optimization problems (Section 5.2.2).
Results for the matrix arising from the maximum clique problem
In this subsection, we consider the matrix where E ∈ S n is the matrix whose elements are all ones and the matrix A G ∈ S n is the adjacency matrix of a given undirected graph G with n nodes.The matrix B γ comes from the maximum clique problem.
The maximum clique problem is to find a clique (complete subgraph) of maximum cardinality in G.It has been shown (in [15]) that the maximum cardinality, the so-called clique number ω(G), is equal to the optimal value of ω Thus, the clique number can be found by checking the copositivity of B γ for at most γ = n, n − 1, . . ., 1. Figure 3 shows the instances of G that were used in [36].We know the clique numbers of G 8 and G 12 are ω(G 8 ) = 3 and ω(G 12 ) = 4, respectively.
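The following sketch wires a copositivity oracle (for instance the is_copositive() sketch given earlier) into the clique-number computation; the form B γ = γ(E − A G ) − E is the standard de Klerk-Pasechnik construction and is assumed here because the display defining B γ is not reproduced above.

```python
import networkx as nx
import numpy as np

def clique_number_via_copositivity(G: nx.Graph, test) -> int:
    """Recover omega(G) by testing copositivity of B_gamma for
    gamma = n, n-1, ..., 1, with B_gamma = gamma * (E - A_G) - E
    (assumed form).  B_gamma is copositive exactly when
    gamma >= omega(G), so the first gamma reported non-copositive
    sits just below the clique number.
    """
    A_G = nx.to_numpy_array(G)
    n = A_G.shape[0]
    E = np.ones((n, n))
    for gamma in range(n, 0, -1):
        if test(gamma * (E - A_G) - E) is False:
            return gamma + 1
    return 1                                   # edgeless graph: omega = 1

# Toy example: a triangle with a pendant vertex has clique number 3.
g = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])
print(clique_number_via_copositivity(g, test=is_copositive))
```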
The aim of the implementation is to explore the differences in behavior when using H n , G n , F + n or F ± n as the set M n , rather than to compute the clique number efficiently. Hence, the experiment examined B γ for various values of γ at intervals of 0.1 around the value ω(G) (see Tables 3 and 4 on page 23).
As already mentioned, α * (P, Λ) < 0 (α + * (P, Λ) < 0 and α ± * (P, Λ) < 0) with a specific P does not necessarily guarantee that A ̸∈ G n (A ̸∈ F + n or A ̸∈ F ± n , respectively). Thus, it is not strictly accurate to say that we can use those sets for M n , and the algorithms may miss some of the ∆'s that could otherwise have been removed. However, although this may have some effect on speed, it does not affect the termination of the algorithm, as it is guaranteed by the subdivision rule satisfying δ(P) → 0, where δ(P) is defined by (29).
Tables 3 and 4 show the numerical results for G 8 and G 12 , respectively.Both tables compare the results of the following seven algorithms in terms of the number of iterations (the column "Iter.")and the total computational time (the column "Time (s)" ): The symbol "−" means that the algorithm did not terminate within 6 hours.The reason for the long computation time may come from the fact that for each graph G, the matrix B γ lies on the boundary of the copositive cone COP n when γ = ω(G) (ω(G 8 ) = 3 and ω(G 12 ) = 4).See also Figure 6, which shows a graph of the results of Algorithms 1.2, 2.1, 2.3, and 1.4 for the graph G 12 in Table 4.
We can draw the following implications from the results in Table 4 on page 24 for the larger graph G 12 (similar implications can be drawn from Table 3): • The lower bound of γ for which the algorithm terminates in one iteration and the one for which the algorithm terminates in 6 hours decrease in going from Algorithm 1.3 to Algorithm 3.1.The reason may be that, as shown in Corollary 3.4, the set inclusion relation • Table 1 on page 12 summarizes the sizes of the LPs for identification.The results here imply that the computational times for solving an LP have the following magnitude relationship for any n ≥ 3: On the other hand, the set inclusion relation G n ⊆ F + n ⊆ F ± n and the construction of Algorithms 1 and 2 imply that the detection abilities of the algorithms also follow the relationship described above and that the number of iterations has the reverse relationship for any γs in Table 4: It seems that the order of the number of iterations has a stronger influence on the total computational time than the order of the computational times for solving an LP.
• At each γ ∈ [4.1, 4.9], the number of iterations of Algorithm 2.3 is much larger than one hundred times those of Algorithm 1.4.This means that the total computational time of Algorithm 2.3 is longer than that of Algorithm 1.3 at each γ ∈ [4.1, 4.9], while Algorithm 1.4 solves a semidefinite program of size O(n 2 ) at each iteration.
• At each γ < 4, the algorithms show no significant differences in terms of the number of iterations.The reason may be that they all work to find a while their computational time depends on the choice of simplex refinement strategy.
In view of the above observations, we conclude that Algorithm 2.3 with the choices M n = F ± n and M n = F ± n might be a way to check the copositivity of a given matrix A when A is strictly copositive.
The above results are in contrast with those of Bomze and Eichfelder in [10], where the authors show the number of iterations required by their algorithm for testing copositivity of matrices of the form (30). On the contrary to the first observation described above, their algorithm terminates with few iterations when γ < ω(G), i.e., the corresponding matrix is not copositive, and it requires a huge number of iterations otherwise.
It should be noted that Table 3 shows an interesting result concerning the non-convexity of the set G_n, while we know that conv(G_n) = S_n^+ + N_n (see Theorem 2.3). Let us look at the result of Algorithm 2.1 at γ = 4.0. The multiple iterations at γ = 4.0 imply that we could not find B_4.0 ∈ G_n at the first iteration for a certain orthogonal matrix P satisfying (6). Recall that the matrix B_γ is given by (30), and note that E − A_G ∈ N_n ⊆ G_n together with the result at γ = 3.5 in Table 3. Thus, the fact that we could not determine whether the matrix lies in the set G_n suggests that the set G_n = com(S_n^+, N_n) is not convex.
Results for the matrix arising from standard quadratic optimization problems
In this subsection, we consider the matrix C_γ, where E ∈ S_n is the matrix whose elements are all ones and Q ∈ S_n is an arbitrary symmetric matrix, not necessarily positive semidefinite. The matrix C_γ comes from standard quadratic optimization problems of the form (32). In [7], it is shown that the optimal value of the standard quadratic problem is equal to the optimal value of (32).
The instances of the form (32) were generated using the procedure random qp in [32] with two quartets of parameters (n, s, k, d) = (10, 5, 5, 0.5) and (n, s, k, d) = (20, 10, 10, 0.5), where the parameter n gives the size of Q, i.e., Q is an n × n matrix. It has been shown in [32] that random qp generates problems for which we know the optimal value and a global minimizer a priori. We set the optimal value to −10 for each quartet of parameters.
Tables 5 and 6 show the numerical results for (n, s, k, d) = (10, 5, 5, 0.5) and (n, s, k, d) = (20, 10, 10, 0.5). We generated two instances for each quartet of parameters and ran the seven algorithms on these instances. Both tables compare the average values of the seven algorithms in terms of the number of iterations (the column "Iter.") and the total computational time (the column "Time (s)"); the symbol "−" means that the algorithm did not terminate within 30 minutes. In each table, we made the interval between the values of γ smaller as γ approached the optimal value, in order to observe the behavior around the optimal value more precisely.
From the results in Tables 5 and 6, we can draw implications that are very similar to those for the maximum clique problem listed above (we hence omit discussing them here). A major difference from the maximum clique problem is that Algorithm 1.2 using the set H_n is efficient for solving a small (n = 10) standard quadratic problem, while it cannot solve the problem within 30 minutes when n = 20 and γ ≥ −10.3125.
Concluding remarks
In this paper, we investigated the properties of several tractable subcones of S_n^+ + N_n and summarized the results (as Figures 1 and 2). We also devised new subcones of S_n^+ + N_n by introducing the semidefinite basis (SD basis) defined in Definitions 3.2 and 3.3. We conducted numerical experiments using those subcones for the identification of given matrices A ∈ S_n^+ + N_n and for testing the copositivity of matrices arising from the maximum clique problem and from standard quadratic optimization problems. We have to solve LPs with O(n^2) variables and O(n^2) constraints in order to detect whether a given matrix belongs to those cones, and the computational cost is substantial. However, the numerical results shown in Tables 2, 3, 4 and 6 indicate that the new subcones are promising not only for identification of A ∈ S_n^+ + N_n but also for testing copositivity.
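As a minimal illustration of the identification task (and not the LP-based tests used in this paper), a simple sufficient check for membership in S_n^+ + N_n splits A into a nonnegative off-diagonal part N and tests whether the remainder is positive semidefinite; the function name and the splitting heuristic below are assumptions made for this sketch only.

```python
import numpy as np

def naive_membership_check(A, tol=1e-9):
    """Sufficient (not necessary) test for A in S_n^+ + N_n.

    Heuristic split: put the positive off-diagonal entries of A into a
    nonnegative matrix N and test whether S = A - N is positive
    semidefinite.  If it is, then A = S + N certifies membership.
    A False result is inconclusive.
    """
    A = np.asarray(A, dtype=float)
    N = np.clip(A, 0.0, None)          # entrywise positive part
    np.fill_diagonal(N, 0.0)           # keep the diagonal in the PSD part
    S = A - N
    eigvals = np.linalg.eigvalsh(S)    # S is symmetric by construction
    return bool(eigvals.min() >= -tol)

# Example: a PSD matrix plus a nonnegative matrix is detected correctly.
S = np.array([[2.0, -1.0], [-1.0, 2.0]])
N = np.array([[0.0, 3.0], [3.0, 0.0]])
print(naive_membership_check(S + N))   # True
```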
Recently, Ahmadi, Dash and Hall [1] developed algorithms for inner-approximating the cone of positive semidefinite matrices, wherein they focused on the set D_n ⊆ S_n^+ of n × n diagonally dominant matrices. Let U_{n,k} be the set of vectors in R^n that have at most k nonzero components, each equal to ±1, and define the associated cone accordingly. Then, as the authors indicate, the following theorem has already been proven. Theorem 6.1 (Theorem 3.1 of [1], Barker and Carlson [3]).
From the above theorem, we can see that for the SDP bases (22) and the n-dimensional unit vectors e_1, e_2, ..., e_n, a corresponding set inclusion relation holds. These sets should be investigated in the future.
Figure 1 draws those examples and (ii) of Theorem 2.3. Figure 2 follows from (vii) of Theorem 2.3 and the convexity of the sets N_n, S_n^+ and H_n (see Theorem 2.1).
Figure 1: Examples of inclusion relations among the subcones of S_n^+ + N_n (I).
Figure 2: Examples of inclusion relations among the subcones of S_n^+ + N_n (II).
Consider A ∈ S_n. As stated above, if α_*(P, Λ) ≥ 0 for a given decomposition A = PΛP^T, we can determine that A ∈ G_n. In this case, we just need to compute a matrix decomposition and solve a linear optimization problem with n + 1 variables and O(n^2) constraints, which implies that it is rather practical to use the set G_n as an alternative to using S_n^+ + N_n. Suppose that A ∈ S_n has n distinct eigenvalues. Then the possible orthogonal matrices P = [p_1, p_2, ..., p_n] ∈ O_n are identifiable, up to permutation and sign inversion of {p_1, p_2, ..., p_n}, and (6) can be represented accordingly.
Table 2: Results of identification of A ∈ S_n^+ + N_n (1000 matrices were generated for each n).
Table 3: Results for B_γ arising from the graph G_8.
Table 4: Results for B_γ arising from the graph G_12. | 9,210.6 | 2016-01-26T00:00:00.000 | [
"Mathematics"
] |
Integrative analysis of chemical properties and functions of drugs for adverse drug reaction prediction based on multi-label deep neural network
Abstract The prediction of adverse drug reactions (ADR) is an important step of the drug discovery and design process. Different drug properties have been employed for ADR prediction, but the prediction capability of drug properties and drug functions in an integrated manner is yet to be explored. In the present work, a multi-label deep neural network and MLSMOTE based methodology is proposed for ADR prediction. The proposed methodology has been applied to SMILES Strings data of drugs, 17 molecular descriptors data of drugs, and drug functions data, individually and in an integrated manner, for ADR prediction. The experimental results show that the SMILES Strings + drug functions combination outperforms the other types of data with regard to ADR prediction capability.
Introduction
Drug discovery and design is a complex process and the backbone of the pharmaceutical industry. It involves a great deal of effort in terms of time and cost. Many drugs fail and are withdrawn from the market because of their adverse reactions. Adverse drug reactions (ADR) are defined as the undesired effects of a drug which occur even when the prescribed dose is consumed [1]. The timely prediction of ADR based on different drug properties can be very helpful in saving time and other resources. Machine learning (ML) can play a vital role in ADR prediction [2][3][4]. Deep learning is a sub-branch of ML which deals with deep neural networks (DNN) having multiple hidden layers. A DNN learns data abstractions, in the form of relevant features, automatically from the given data, which makes it well suited for ADR prediction.
The problem of ADR prediction is multi-label in nature, as more than one ADR can be associated with a single drug. Further, it faces the challenge of small data size, where some of the drugs (data samples) across different ADRs (labels) are under-represented. The objective of the present work is to analyse the prediction capability of two chemical drug properties, namely 17 molecular descriptors and SMILES Strings, and of drug functions, individually and in an integrative manner, towards ADR prediction. The underlying principle behind the ADR-inferring capability of SMILES Strings is that drugs with similar chemical structures induce similar drug–target binding profiles (control similar biological pathways) and hence lead to similar ADRs. Further, drugs with similar chemical structures can be deployed for different drug functions, and in that case drugs with the same chemical structure used for different drug functions can have different ADRs. Hence the ADR prediction capability of SMILES Strings and drug functions has been analysed individually and in an integrated fashion. For this purpose, a multi-label deep neural network (MLDNN) based methodology is presented in this work. It employs the Multilabel Synthetic Minority Over-sampling Technique (MLSMOTE) [5] to address the issue of data under-representation by data augmentation. Although there are other data augmentation techniques such as oversampling and SMOTE, these techniques are inapt for multi-label datasets, whereas MLSMOTE is designed specifically for multi-label datasets and incorporates the multiple labels while generating synthetic samples; it has therefore been considered appropriate for the data under study. The proposed methodology has been applied to and analysed on seven drug-property datasets, extracted from the PubChem database [6] and the SIDER database [7] and integrated by mapping on drugs, in terms of precision, recall, F1-score, ROC-AUC and Hamming Loss (HL). The main contributions of the present work are: -A multi-label deep neural network and MLSMOTE based methodology to predict ADRs.
-The analysis of the ADR prediction capability of 17 molecular descriptors of drugs, SMILES Strings of drugs and drug functions, individually and in an integrated manner, has been presented.
The rest of this paper is organised as follows: Section 2 details the related work and Section 3 describes the problem statement addressed in the current work. Section 4 explains the proposed methodology to predict ADRs. Section 5 discusses the experimental results. Finally, Section 6 concludes the work.
Related work
This section provides a brief survey of the works related to ADR prediction using different machine learning and deep learning techniques and highlights the research scope. Jamal et al. [8] used a support vector machine to predict neurological adverse drug reactions from phenotypic, chemical, and biological properties. The authors applied their methodology on single-level, two-level, and three-level datasets and found that the model's performance is better when phenotypic and chemical properties are combined than on other dataset combinations. Liu et al. [9] proposed a large-scale prediction of adverse drug reactions using phenotypic, biological and chemical-structure properties with support vector machine, random forest, k-nearest neighbor, naïve Bayes, and logistic regression classifiers. The authors evaluated their methodology on single-, two-, and three-level datasets and reported that it performed best with phenotypic properties when the support vector machine was used.
Jamal et al. [10] predicted adverse drug reactions for cardiovascular drugs using biological information, chemical information, therapeutic indication, and their two-level and three-level combinations. The authors used random forest, support vector machine, and sequential minimal optimization, and addressed the class imbalance issue using the SMOTE technique. Lee et al. [11] proposed a three-interval method to predict adverse drug reactions by integrating chemical and biological properties of drugs; the proposed method achieved better results than naïve Bayes, k-nearest neighbor, and random forest classifiers. In [12], the authors proposed a hybrid clustering-based methodology for determining quantitative relationships between adverse drug reactions and patient attributes. Das et al. [13] proposed a multi-label machine learning methodology utilizing drug functions to predict adverse drug reactions, with a class-imbalance handling technique named MLSMOTE.
Wang et al. [14] proposed a deep neural network model for predicting adverse drug reactions by combining 17 computed chemical and physical molecular properties, biological properties, and biomedical research-article information. Uner et al. [15] proposed a deep learning framework for predicting adverse drug reactions from gene expression, gene ontology, chemical structure, and META information. The authors evaluated the proposed deep neural network models (multi-layer perceptron, residual multi-layer perceptron, multi-modal neural network, and multi-task neural network) on each dataset. After that, they combined two-level and three-level drug properties and found that the combination of META data, gene expression, and chemical structure achieved better results than the others.
Overall, it can be said that different types of drug properties have been considered for ADR prediction, but the integration of drug properties and drug functions for ADR prediction is yet to be explored. This observation frames the scope for the present work.
Problem statement
The problem of predicting ADRs based on the integration of drug properties and Drug Functions (DF) is stated as follows: each feature set represents either a drug property, drug functions, or the features resulting from integrating one drug property with another drug property, with drug functions, or multiple drug properties together, and these are mapped to drug ADRs. Therefore, the prediction of ADRs for a drug is a multi-label classification problem. Here, F and ADR are represented by 1 and 0, where 1 indicates the presence and 0 indicates the absence of F and ADR for a specific drug. A diagrammatic view of the problem statement is shown in Figure 1.
Proposed methodology
The architecture of the proposed methodology for ADR prediction is given in Figure 2. It takes the 7 drug-property datasets as input and then performs data augmentation using the MLSMOTE technique. After that, MLDNN models are trained to predict ADRs based on the augmented data, and the ADRs are provided as output. The details of the datasets, MLSMOTE and MLDNN are given in the subsequent sections.
Dataset
This section initially gives the details of the 17 molecular drug descriptors data, SMILES Strings data, drug functions data and ADR data, and then discusses their integration to prepare 7 datasets for validation of the proposed methodology; a pictorial representation is given in Figure 3. -Drug Functions Data: Each drug sample is described with regard to 12 features (drug functions), namely anti-infective, anti-inflammatory, antineoplastic, cardiovascular, central nervous system agents, dermatologic agents, gastrointestinal, hematologic agent, lipid regulating, reproductive control, respiratory system agents, and urological, denoted as DF_1 ... DF_j in Figure 3. The occurrence of a drug function for a given drug is indicated as '1' and its non-occurrence as '0'. -Adverse Drug Reactions Data: The data related to adverse drug reactions has been extracted from the SIDER database [7]. It comprises 1430 drug samples, and for each sample the occurrence and non-occurrence of 6123 ADRs is provided, indicated as ADR_1 ... ADR_N in Figure 3. The resulting datasets are summarized in Table 1.
Multilabel synthetic minority over-sampling technique (MLSMOTE)
In the present work, MLSMOTE has been applied to address the issue of data under-representation. It augments the data by generating synthetic data samples. The reason for selecting MLSMOTE is that it generates the data samples considering the multi-label nature of the given data. Initially, it selects the minority labels. A minority label is a label for which, on average, the number of data samples is comparatively lower than for other labels. After that, all the samples corresponding to the minority labels are marked as minority samples. A data sample is selected randomly from the set of minority samples and its k-nearest neighbors are selected as the reference neighborhood. The attributes and labels of the new data sample are generated using an interpolation method and a majority-voting technique, respectively, considering the randomly selected minority sample and its reference neighborhood. In the present work, the value of k has been taken as 7.
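To make the procedure concrete, the sketch below follows the steps described above (minority-sample selection, feature interpolation, majority-vote labels). It is a simplified illustration rather than the reference MLSMOTE implementation; the minority-label selection rule and the helper names are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mlsmote_augment(X, Y, k=7, n_new=100, rng=None):
    """Simplified MLSMOTE-style augmentation for multi-label data.

    X : (n_samples, n_features) feature matrix
    Y : (n_samples, n_labels) binary label matrix
    Returns synthetic features and labels (X_new, Y_new).
    """
    rng = np.random.default_rng(rng)
    label_counts = Y.sum(axis=0)
    minority_labels = np.where(label_counts < label_counts.mean())[0]
    minority_idx = np.where(Y[:, minority_labels].any(axis=1))[0]

    nn = NearestNeighbors(n_neighbors=k + 1).fit(X[minority_idx])
    X_new, Y_new = [], []
    for _ in range(n_new):
        i = rng.choice(minority_idx)                     # random minority sample
        _, nbrs = nn.kneighbors(X[i].reshape(1, -1))
        nbrs = minority_idx[nbrs[0][1:]]                 # drop the sample itself
        j = rng.choice(nbrs)                             # random reference neighbor
        gap = rng.random()
        X_new.append(X[i] + gap * (X[j] - X[i]))         # feature interpolation
        votes = Y[np.append(nbrs, i)].sum(axis=0)        # majority vote over neighborhood
        Y_new.append((votes > (len(nbrs) + 1) / 2).astype(int))
    return np.array(X_new), np.array(Y_new)
```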
Multi-label deep neural network (MLDNN)
A deep neural network is a neural network comprising multiple hidden layers and an output layer. An MLDNN is a DNN whose output layer has the same number of nodes as the number of labels in the given dataset. In the present work, an MLDNN model has been trained for ADR prediction with 2 hidden layers, where the first hidden layer has 1024 nodes and the second has 2048 nodes, both using the ReLU activation function. The model is trained with a batch size of 64 for 15 epochs, using binary cross-entropy as the loss function and the Adam optimizer. Further, dropout with a rate of 0.4 is added between the hidden layers to reduce over-fitting of the DNN model. As the number of labels in the given datasets is 6123, the output layer is designed with 6123 nodes, each using the sigmoid activation function. The above-mentioned parameter values have been set by experimental analysis.
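A minimal Keras sketch of a network with this configuration is shown below. It reproduces only the hyper-parameters stated above (1024/2048 ReLU hidden layers, 0.4 dropout between them, 6123 sigmoid outputs, binary cross-entropy, Adam, batch size 64, 15 epochs); the input dimension and the variable names are placeholders, not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mldnn(n_features, n_labels=6123):
    """Multi-label DNN with the architecture described above."""
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(1024, activation="relu"),
        layers.Dropout(0.4),                           # dropout between the hidden layers
        layers.Dense(2048, activation="relu"),
        layers.Dense(n_labels, activation="sigmoid"),  # one node per ADR label
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="roc_auc")])
    return model

# Hypothetical usage: X_train, Y_train are the augmented feature/label matrices.
# model = build_mldnn(n_features=X_train.shape[1])
# model.fit(X_train, Y_train, batch_size=64, epochs=15, validation_split=0.1)
```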
Experimental results and analysis
This section presents the analysis of the performance of the proposed methodology on the SS, MD, DF, SS + MD, SS + DF, MD + DF and SS + MD + DF datasets in terms of precision, recall, F1-score, ROC-AUC and HL, as shown in Table 2. For this purpose, k-fold cross validation with k = 5 has been implemented as the training-testing strategy, as it helps in avoiding over-fitting. It can be observed from Table 2 that the performance of the proposed methodology is comparable on the DF and SS datasets and poorer on the MD dataset. The reason is that the discriminating capability of the molecular drug descriptors is comparatively low because of their similarity across drugs, due to which the DNN gives a higher misclassification error for molecular descriptors than for SMILES Strings and drug functions.
Further, on integrating SS with DF, the performance is enhanced, whereas on integrating SS and DF with MD, the performance degrades. Even on integrating MD with SS + DF, the performance degrades. The reason for this degradation is that the features of the MD data do not correlate well with the features of the SS and DF data and act as random noise, whereas the features of the SS and DF data correlate properly with each other and hence have more capability to differentiate between samples corresponding to different labels. Overall, the proposed methodology has achieved the highest precision, recall, F1-score and ROC-AUC and the lowest HL on the SS + DF dataset, which shows that the joint ADR prediction capability of SS and DF data is better than that of SS, MD and DF individually and of the other integrated datasets, viz. SS + MD, MD + DF and SS + MD + DF. The same observation can be made from the detailed view of the ROC-AUC for the fifth validation iteration of the proposed methodology for all seven datasets, as shown in Figure 4.
In conclusion, the seven datasets can be ranked as follows in terms of their prediction capability towards ADRs: MD < (MD + DF) < (MD + SS) < SS < DF < (SS + MD + DF) < (SS + DF). Further, the effectiveness of MLSMOTE in handling data under-representation has been analyzed by comparing the performance of the MLDNN with and without MLSMOTE in terms of precision, recall, F1-score, ROC-AUC and HL, as given in Table 3. It can be observed from Table 3 that there is a rise of 38.01% in precision, 78.28% in recall, 70.88 in F1-score and 11.08 in ROC-AUC, and a fall of 1.31 in HL. Based on the above discussion, it can be said that data under-representation is an acute challenge in ADR prediction and must be handled before training a DL model; MLSMOTE has performed fairly well in addressing this challenge.
Conclusions
In the present work, an MLSMOTE and DNN based methodology has been presented for analysing the ADR prediction capability of chemical drug properties, viz. SMILES Strings and 17 molecular descriptors, and of drug functions, individually and in an integrated manner. The MLSMOTE technique has been deployed for handling the issue of data under-representation, and DNN models have been trained for ADR prediction. The proposed methodology has been validated on seven datasets, namely SS, MD, DF, SS + MD, SS + DF, MD + DF and SS + MD + DF. Based on the validation results, the datasets are ranked as MD < (MD + DF) < (MD + SS) < SS < DF < (SS + MD + DF) < (SS + DF) with regard to their ADR prediction capability. Further, the validation results signify the effectiveness of the MLSMOTE technique in handling data under-representation for multi-label datasets.
"Chemistry",
"Computer Science",
"Medicine"
] |
Modeling of Ferrous Metal Diffusion in Liquid Lead Using Molecular Dynamics Simulation
Modeling of iron diffusion in liquid lead using molecular dynamics simulation has been performed. Molecular dynamics simulations are used to predict the values of physical quantities of interest based on the designed material model and on the input simulation data. In this research, the effect of different geometries of the material model on the diffusion coefficient was investigated. The material system was iron (Fe) in liquid lead (Pb). The material models are designed using the Packmol software to obtain the initial configuration of the atomic arrangement, by inputting the material's characteristics such as mass, density, volume, and number of atoms. This work examines the diffusion coefficient of iron in molten lead for various geometries of the simulation system: a box in a box, a ball in a box, and a ball in a ball. To design the simulated geometries we use the Packmol program, and to calculate the diffusion coefficient we use the molecular dynamics simulation method. To find out which geometry is most suitable, we compare the diffusion coefficient from the simulations with existing references. The diffusion coefficient of the spherical iron (Fe) system in spherical liquid lead (Pb) has the best value compared to the other two forms, with an accuracy of 99.94%, because it is influenced by the even distribution of atoms in each part.
Introduction
Computation is an activity to obtain solutions of complex problems following a certain mathematical model [1]. Computer simulation is a tool for studying macroscopic systems by applying microscopic models, especially for the prediction of material properties [2]. One example of research using computer simulations is the prediction of the diffusion coefficient of materials, which is important data for many applications. Knowing the diffusion coefficient can help us to study corrosion phenomena, as in the field of nuclear reactor design [3,4].
There have been many corrosion experiments in search of superior steels for nuclear applications and for determining appropriate methods for corrosion inhibition [5]. The high cost of installations for corrosion experiments and the need for a high safety level are the main constraints today. This is because the metal vapor produced is very toxic, and also not all experiments can be carried out in an operating reactor. Particularly in Indonesia, such activity seems not possible due to inadequate facilities [6]. Therefore, computation and simulation methods are solutions to overcome these obstacles. One of the computational approaches used is the molecular dynamics method [7][8][9].
When examining the corrosion of steel materials in a fast reactor, iron is the largest constituent of the steel composition. At the same time, molten lead is a suitable coolant material as a substitute for sodium. The most relevant property of lead is the large difference between its melting and boiling points, which are 601 K and 2022 K, respectively. These properties lead to higher reliability and safety for the reactor installation than the use of sodium [10]. The diffusion coefficient is obtained with a high degree of accuracy when compared with the experimental results.
This study uses the Packmol software to prepare the x, y, and z coordinates of the atoms of the materials. The initial configuration of the arrangement of iron (Fe) and lead (Pb) atoms from Packmol is then run in LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator: lammps.sandia.gov), an open-source code that has the advantage of being able to run large-scale computations with up to millions of atoms. The LAMMPS software is also widely used for material simulation, as can be seen from the number of journal articles published from LAMMPS-based computational research. In addition, this software is regularly updated [7].
In this study, calculations were carried out to determine the best Mean Square Displacement (MSD) value among the variations of the simulation system model for iron (Fe) in liquid lead (Pb) when calculated using the molecular dynamics software LAMMPS, and to find out which variation of the system model gives the best value of the diffusion coefficient (D).
Theoretical Background
The interaction model between molecules needed in the simulation is the law of intermolecular force, which is equivalent to the potential energy function between molecules. The selection of the potential energy function must be made before any simulation is carried out, and the choice of the interaction model largely determines the physical correctness of the simulation. Because the interactions are on the atomic scale, they must, in principle, be derived quantum-mechanically, where the Heisenberg uncertainty principle applies. However, a classical mechanical approach can be used in which the atom or molecule is treated as a point mass [11]. For N atoms in a simulation, the potential energy function is U(R^N), where R^N is the set of center-of-mass positions of the atoms or molecules, R^N = {R_1, R_2, R_3, ..., R_N}, which can be expressed as in Equation (1): the potential energy is the sum of the pairwise interactions between isolated molecules [12].
There are several potential energy models used in molecular dynamics simulations, including the Lennard-Jones potential. Its use in simulations provides a reasonably good level of accuracy in describing the interactions between atoms. The main characteristic of the Lennard-Jones potential is that it is strongly repulsive at small r and weakly attractive at large r [13,14]. This potential model can be formulated as

u(R_ij) = k ε [ (σ/R_ij)^n − (σ/R_ij)^m ],   (2)

where n and m are positive integers with n > m, i and j are the molecule indices, and R_ij = |R_i − R_j| is the distance between molecules i and j. Here σ is the distance parameter and ε is the parameter that states the strength of the interaction. The coefficient k is obtained from

k = n/(n − m) · (n/m)^(m/(n−m)).   (3)

Common choices for m and n are m = 6 and n = 12 [15], so that we get

u(R_ij) = 4 ε [ (σ/R_ij)^12 − (σ/R_ij)^6 ].   (4)

The Lennard-Jones potential can be used to determine the characteristics of gases, liquids, clusters, and polycrystalline materials [16]. The Lennard-Jones parameters for molecular dynamics simulations can be taken from experimental data.
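As a small illustration of Equation (4), the following sketch evaluates the 12-6 Lennard-Jones pair energy; the parameter values in the example are placeholders, not the Fe–Pb parameters used in this study.

```python
import numpy as np

def lj_potential(r, epsilon, sigma):
    """12-6 Lennard-Jones pair potential, u(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / np.asarray(r)) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Example with placeholder parameters (eV and Angstrom units assumed):
r = np.linspace(2.0, 8.0, 5)
print(lj_potential(r, epsilon=0.3, sigma=2.5))
```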
Preparation of Simulation Inputs
Based on the flow diagram of the simulation input preparation step, the initial simulated system consists of iron (Fe) in molten lead (Pb). The model contains 3527 iron atoms and 8000 lead (Pb) atoms. The temperature used is 1023 K (750 °C), because it is at this temperature that W. M. Robertson experimentally measured the diffusion coefficient of iron in pure lead; the diffusion coefficient from that experiment is 2.8 × 10^-9 m^2/s [17]. The density of the system is calculated using the temperature-dependent density equation, as shown by Equation (5), with T in Kelvin [18].
Determination of the Diffusion Coefficient (D)
The diffusion coefficient (D) of iron (Fe) is determined from the simulation output of the Mean Square Displacement (MSD), written in (.txt) format, which can be processed using Microsoft Office Excel. The MSD data for iron (Fe) are plotted against the simulation time, and the diffusion coefficient D is obtained from the gradient of the relationship between the MSD value and the simulation time. Once the diffusion coefficient D of the simulation has been determined, it can be compared with the experimental diffusion coefficient so that the accuracy of the simulation results can be assessed.
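A minimal sketch of this step is shown below: it fits a straight line to the linear part of the MSD-versus-time data and converts the slope to D via the three-dimensional Einstein relation D = slope/6. The file name, column layout, and unit handling are assumptions made for illustration.

```python
import numpy as np

def diffusion_coefficient(time, msd):
    """Estimate D from the Einstein relation MSD(t) ~ 6*D*t (3D diffusion)."""
    slope, _intercept = np.polyfit(time, msd, 1)   # linear fit of MSD vs time
    return slope / 6.0

# Hypothetical usage with a two-column text file: time [ps], MSD [A^2].
# data = np.loadtxt("msd_fe.txt")
# t, msd = data[:, 0], data[:, 1]
# D = diffusion_coefficient(t, msd) * 1e-8        # A^2/ps -> m^2/s conversion
# print(f"D = {D:.3e} m^2/s")
```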
Results and Discussion
The simulation using the molecular dynamics method basically begins by specifying the atoms of the material being studied, namely the iron atoms and the lead atoms. The atomic configuration is arranged after calculating the volume of the system to be built. The calculation of the volumes of the iron and liquid lead is carried out using the utilities available on the Packmol website. The input data provided include the density of the material to be built, the number of atoms it contains, and the molar mass of these atoms.
The first simulation system made in this study is iron (Fe) in the form of a cube with a BCC structure inside lead (Pb) in the form of a cube. This system consists of 3527 iron (Fe) atoms and 8000 lead (Pb) atoms; its visualization can be seen in Figure 2. Besides that, a spherical iron (Fe) simulation system with a BCC structure inside cubic lead (Pb) is also constructed, consisting of 3527 iron (Fe) atoms and 8000 lead (Pb) atoms; its visualization is shown in Figure 3. The third simulation system is a spherical arrangement of 3527 iron (Fe) atoms with a BCC structure inside spherical liquid lead (Pb) consisting of 8000 atoms.
Mean Square Displacement (MSD) System of Iron (Fe) in Liquid Lead (Pb)
The diffusion process can affect the crystal structure of a material and can even cause crystal defects. In the diffusion process, atoms move and change their positions. For each moving atom, the average of the squared atomic displacement is computed, commonly referred to as the MSD (Mean Square Displacement).
The MSD results from LAMMPS as a function of simulation time are output as MSD value data in (.txt) format, which can be processed using Microsoft Office Excel. The curve of MSD against simulation time for the iron (Fe) system in liquid lead (Pb) is shown in Figure 5. In this curve, the MSD ranges of the three different geometries can be seen. The blue curve shows the simulation results for the cubic iron (Fe) system with BCC structure in cubic liquid lead (Pb), the red curve shows the results for the spherical iron (Fe) system with BCC structure in cubic liquid lead (Pb), and the green curve shows the spherical iron (Fe) system with BCC structure in spherical liquid lead (Pb). The curve also shows that the MSD value of the spherical iron (Fe) system in spherical liquid lead (Pb) is the highest of the three forms. The nonlinearity at the start is typical of an MSD curve; it occurs because the atoms have not yet interacted with other atoms. After the nonlinear section at the start, the curve is followed by a straight line. The higher the slope of the MSD curve, the more random the atomic motion, and the slope of the MSD curve is directly proportional to the diffusion constant of the material.
Diffusion Coefficient (D) of the Iron (Fe) System in Liquid Lead (Pb)
The diffusion coefficient can be said to be the most precisely known physical quantity of such a system. When a system is disturbed by the influence of the environment, the atoms that make up the system move randomly, so that the coefficients of the system can be determined. The movement of the atoms produces atomic trajectories, which represent each atom's positions, and these atomic positions give rise to interaction forces between atoms. According to Newton's law, the interaction force is the negative gradient of the potential energy, so from the interaction force between atoms the potential energy between atoms can be obtained.
As explained above, the determination of the diffusion coefficient of a simulated system is also influenced by the potential model used. The potential model used here is the Lennard-Jones potential, which is recommended for a simulation system of iron (Fe) in liquid lead (Pb) by previous studies. The Lennard-Jones potential has two parameters, namely the distance parameter and the energy parameter, which significantly affect the simulation results and whose values vary from one atom to another. This Lennard-Jones potential is also an input for the simulation process using the LAMMPS software. The simulated diffusion coefficient values are shown in Table 1. The largest diffusion coefficient is obtained for the spherical iron (Fe) system with BCC structure in cubic liquid lead (Pb), and the lowest for the spherical iron (Fe) system with BCC structure in spherical liquid lead (Pb). This shows that the diffusion coefficient is linear with the MSD calculation results. Based on Table 1, it is also known that the accuracy of the spherical iron (Fe) system with BCC structure in spherical liquid lead (Pb) is higher than that of the other two systems. The simulation accuracy is determined by comparing the simulation results with the experimental results: W. M. Robertson carried out experiments to measure the diffusion coefficient of iron in pure lead, and the experimental value is 2.8 × 10^-9 m^2/s [13]. Thus, the spherical shape is the ideal form of the simulation system for simulating iron (Fe) in liquid lead (Pb). This is because the spherical iron (Fe) in spherical liquid lead (Pb) model provides better accuracy than the other forms.
Based on Table 1, it can also be seen that the geometry of the simulation system model for iron (Fe) in liquid lead (Pb) has an effect on the MSD value. This also affects the diffusion coefficient obtained because, in theory, the diffusion coefficient is directly proportional to the MSD. This is inseparable from the distribution of the atoms in each part of the system. The lowest accuracy is obtained for the spherical iron (Fe) system with a BCC structure in cubic liquid lead (Pb). This is because the lead (Pb) atoms in this system are not evenly distributed: the number of lead (Pb) atoms is greater in the corner direction than in the side direction. Table 2 gives the number of atoms distributed in each part of the simulation system. Figure 6 (spherical iron (Fe) with a BCC structure in cubic liquid lead (Pb), visualized with OVITO using the slice modification) shows that the distance between the corner points of the cube and the outer side of the sphere, labeled (α), is longer than the distance between the cube's side and the outer side of the sphere, labeled (β). This distance factor, together with the atomic distribution, causes the interaction force between the atoms to be uneven on each side of the system.
The accuracy of the cube-shaped iron (Fe) system with BCC structure in cubic liquid lead (Pb) is better than that of the spherical iron (Fe) system with BCC structure in cubic liquid lead (Pb). Figure 7 shows that the atomic distribution in the former is more even, because the atoms in the cube-shaped iron (Fe) system with BCC structure in cubic liquid lead (Pb) are more evenly distributed. Comparing the two, the α/β ratio for the spherical system in a cube is larger than that for the cube system in a cube.
The highest accuracy is obtained for the ball-in-ball model because, theoretically, the atoms and the energy in this system are distributed almost evenly in every part. This is also reinforced by the OVITO visualization results, which show that α = β, as shown in Figure 7. The comparison of the α and β distances for the three systems of iron (Fe) in liquid lead (Pb) can be seen in Table 3. Based on Table 3, the α/β ratio for the spherical system in a sphere is smaller than for the cube system in a cube and the spherical system in a cube. In simple terms, the α/β ratios for the three systems can be expressed by the relation below.
Here α is the distance from the corner of the liquid lead (Pb) region to the outer side of the iron (Fe), and β is the distance from the side of the liquid lead (Pb) region to the outer side of the iron (Fe).
Conclusions
Based on the research that has been done, it can be concluded that, first, building the ideal molecular dynamics simulation system for iron (Fe) in liquid lead (Pb) begins by calculating the volume of the system while ensuring that the mass density is maintained. The mass density must also be calculated using the temperature-dependent density equation so that the simulation results can be close to the experimental results. These data become the basis for building the initial configuration, in the form of x, y, z coordinates, which is made using credible software such as Packmol. Second, the mean square displacement (MSD) of the spherical iron (Fe) simulation system in spherical liquid lead (Pb) has the best value compared to the other two forms. Third, the diffusion coefficient of the spherical iron (Fe) system in spherical liquid lead (Pb) has the best value compared to the other two forms, with an accuracy of 99.94%, because it is influenced by the even distribution of atoms in each part.
"Materials Science",
"Physics"
] |
State-of-the-Art Congestion Control Protocols in WSN: A Survey
Wireless Sensor Networks (WSNs) are inherently resource-constrained in terms of available energy, bandwidth, processing power and memory space. In these networks, congestion occurs when the incoming traffic load surpasses the available capacity of the network. Various factors lead to congestion in WSNs, such as buffer overflow, varying transmission rates, a many-to-one communication paradigm, channel contention and interference. Congestion leads to depletion of the nodes' energy, deterioration of network performance, and an increase in network latency and packet loss. As a result, energy-efficient and reliable state-of-the-art congestion control protocols need to be designed to detect, notify and control congestion effectively. In this paper, we present a review of the latest state-of-the-art congestion control protocols. We analyze these protocols from various perspectives, such as their deployment environments, internal operational mechanisms, and their advantages and disadvantages. Depending on the inherent nature of their control mechanisms, these protocols are classified either as traffic-based or as resource-based congestion control. Based on our analysis, we further subdivide these protocols based on their hop-by-hop and end-to-end delivery modes.
Introduction
A wireless sensor network (WSN) is a collection of nodes capable of sensing, processing and communication in an autonomous manner. These miniature sensor nodes are typically deployed in hazardous and human-inaccessible terrains to sense and monitor various applications [1]. Some of these applications are seismic sensing, habitat monitoring, healthcare, intelligent transportation, home automation, industrial automation, agricultural monitoring and target tracking [2][3]. These networks require application-specific, context-oriented Quality of Service (QoS) support and reliability because they support a diverse range of applications.
The deployment of heterogeneous sensor nodes and their capability to support a wide range of applications generate a huge amount of data that is forwarded towards the sink. The data flow from a source node to the sink is application-specific and may be either periodic or continuous in nature [4]. In WSNs, the data flow from the sensor nodes toward the base station, i.e., upstream, in a many-to-one fashion.
The flow of packets from source nodes towards the sink may overload both the channel and the nodes, and may exceed their handling capacity. Irregular upstream traffic may result in increased delay, packet loss, energy utilization and number of retransmission attempts. As a result, the performance of the underlying network deteriorates, which adversely affects the reliability of the monitored application. The scarcity of resources, the overloading of nodes and the presence of error-prone communication links, coupled with irregular upstream traffic flows, lead to increased retransmission attempts and, ultimately, network congestion.
Congestion arises when the number of transmitted packets exceeds the packet handling capacity of the network [5][6]. This significantly decreases the performance of the network, which results in higher data losses at the node level. In a multi-hop environment, the intermediate nodes suffer from resource exhaustion due to the unfair distribution of traffic routed towards the base station via them. These nodes consume a considerable amount of resources compared to the source nodes. As a result, energy-efficient congestion control protocols need to be designed that effectively alleviate congestion and ensure fairness and reliability of the network. To be precise, congestion control mechanisms that balance the load, avoid packet drops and prevent network deadlock need to be designed.
In WSNs, congestion can be avoided using two different mechanisms, i.e., traffic-based and resource-based. In a traffic-based congestion control mechanism, the data rate of incoming flows from the downstream nodes is adjusted against the forwarding capacity of the upstream node. A resource-based mechanism, on the other hand, exploits idle network resources to balance the traffic load whenever congestion occurs.
The advantages and feasibility of these mechanisms may vary from one application to another. For example, a traffic-based congestion control mechanism is feasible in situations where transient overload occurs [7][8], while resource-based congestion control protocols achieve a much higher data rate without compromising the network lifetime, as discussed in detail in the following sections.
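As a generic illustration of the traffic-based idea (not a reproduction of any specific protocol surveyed below), the sketch models a node that issues a backpressure signal when its buffer occupancy crosses a threshold, and a child node that multiplicatively reduces its sending rate in response; all thresholds and factors are assumed values.

```python
class SensorNode:
    """Toy hop-by-hop, traffic-based congestion control (illustrative only)."""

    def __init__(self, buffer_size=64, threshold=0.8):
        self.buffer_size = buffer_size
        self.threshold = threshold        # occupancy ratio that triggers backpressure
        self.queue = []
        self.send_rate = 10.0             # packets per second (arbitrary units)

    def congested(self):
        return len(self.queue) / self.buffer_size >= self.threshold

    def enqueue(self, packet):
        if len(self.queue) < self.buffer_size:
            self.queue.append(packet)
            return True
        return False                       # packet dropped: buffer overflow

    def on_backpressure(self, decrease=0.5, min_rate=0.1):
        # Multiplicative decrease of the source rate, bounded below.
        self.send_rate = max(min_rate, self.send_rate * decrease)

# Hypothetical usage: a parent signals its child when congestion is detected.
parent, child = SensorNode(), SensorNode()
for pkt in range(60):
    parent.enqueue(pkt)
if parent.congested():
    child.on_backpressure()
print(child.send_rate)   # reduced rate after backpressure
```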
The rest of the paper is organized as follows. In Section II, we present a survey of the latest state-of-the-art congestion control protocols in WSNs. Finally, the paper is concluded, and future research directions are provided, in Section III.
CLASSIFICATION OF CONGESTION CONTROL PROTOCOLS
In this section, we present a detailed survey of the latest state-of-the-art congestion control protocols in WSNs. We categorize these protocols as either traffic-based, resource-based or hybrid. In the case of traffic-based control protocols, we further classify them based on their hop-by-hop and end-to-end communication paradigms. Hybrid congestion control protocols, on the other hand, combine the distinguishing features of traffic-based and resource-based protocols for network operation. We further explain these protocols in terms of their application-specific nature and their advantages and disadvantages.
Traffic-based Congestion Control Protocols
In this subsection, we discuss various protocols which control congestion based on the regulation of traffic flows in WSNs. We classify these protocols based on their traffic-control strategies, i.e., hop-by-hop and end-to-end, and on how they detect, notify and control congestion. Furthermore, their weaknesses and strengths are also discussed. A summary of these protocols is provided in Table 1.
PASCCC [9] The priority-based application-specific congestion control clustering (PASCCC) protocol was proposed for congestion detection in cluster-based hierarchical WSNs. PASCCC uses the mobility and heterogeneity of the sensor nodes for congestion detection and mitigation. Whenever the reading of a captured packet exceeds a predefined threshold value, each source node activates, senses the environment, collects the data and forwards it upstream towards the base station. During congestion, time-critical packets are prioritized to ensure their timely arrival at the base station. Simulation results show that PASCCC enhances network lifetime, energy efficiency and other QoS parameters. Despite these advantages, a limitation of PASCCC is that excessive delay occurs during the setup phase, because the position of a node changes at regular intervals. Furthermore, the dropping of humidity packets has an adverse effect on applications where no packets should be dropped, whether congestion is present or not.
PRRP [10]
The Priority Rate-based Routing Protocol (PRRP) was proposed for multimedia WSNs (MWSNs). The protocol assigns priorities to the data traffic based on its service requirements. PRRP arranges the nodes in a hierarchical tree structure in such a way that only a child node senses multimedia content and forwards it to a parent node, which in turn transmits it to the base station. PRRP classifies the data traffic into four separate queues: high-priority real-time traffic, high-priority non-real-time traffic, medium-priority non-real-time traffic and low-priority non-real-time traffic. The data in these queues are allocated bandwidth based on their priorities. There are three phases of the PRRP protocol, i.e., congestion detection, congestion notification and rate adaptation. A drawback of the PRRP protocol is that it does not select an optimal route from the source node to the sink. In addition, during congestion, all the queues except the high-priority queue may suffer due to minimal allocation of network resources; as a result, fair allocation of network resources among the queues must be ensured. PRRP sets the minimum active traffic sending rate to 0.1 for the lower queues, i.e., the sensor nodes will not reduce the transmission rate below this value. However, a better value could be obtained from experiments.
Table 1. Traffic-based congestion control protocols
WCCP [11]
The wireless multimedia congestion control protocol (WCCP) is a multi-layered protocol architecture proposed for alleviating congestion in multimedia WSNs. WCCP is a combination of two protocols, i.e., the Source Congestion Avoidance Protocol (SCAP) and the Receiver Congestion Control Protocol (RCCP). WCCP considers, at the transport layer, the features of the frames within the multimedia packets, organized as a group of pictures (GOP) at the application layer. A GOP contains three different types of frames in varying combinations, i.e., one I-frame and multiple P- and B-frames. These frame types have varying effects on the overall quality of the received video: I-frames are the key frames in terms of quality and are more important than the other two types. As a result, the loss of I-frames adversely affects the underlying multimedia applications. During feature selection, the WCCP protocol therefore favors I-frames over P- and B-frames. Once congestion is detected, the SCAP protocol at the source node is informed. WCCP delivers I-frames to the base station while discarding P- and B-frames during congestion. This improves the performance of the network and the quality of the video received at the base station. Despite these benefits, WCCP does not take into account the energy efficiency of the network.
Resource-based Congestion Control Protocols
In this subsection, we discuss various protocols which control congestion based on resource utilization in WSNs. The utilization of resources is controlled either by using dynamic alternative routes or by efficient allocation of the available bandwidth. A summary of these resource-based congestion control protocols is provided in Table 2.
PCCDC [12]
Priority-based Congestion Control Dynamic Clustering (PCCDC) is a novel application-specific congestion control protocol. It supports multi-class traffic by considering two application parameters, i.e., flooding and temperature, in ice-capped mountains. Flooding occurs whenever the temperature exceeds a predefined threshold value, and information about flooding is forwarded immediately to the base station without any further delay. In this case, flooding packets are real-time and continuous in nature. Temperature packets, on the other hand, are non-real-time and periodic in nature, and have lower priority than flooding packets. The real-time and non-real-time flows intersect each other and create a congestion hotspot, also known as forwarded congestion. It is difficult to identify the intersection points in this type of congestion due to network dynamics. One solution to this problem is dynamic clustering, which can handle forwarded congestion: a designated cluster head collects the traffic from its cluster members, aggregates it, and communicates it to the base station. One major drawback of PCCDC is that the base station may not be interested in every packet that meets the threshold value. This results in a vast number of unwanted packets at the base station and may cause degradation of network resources.
HTAP [13] Hierarchical Tree Alternative Path (HTAP) is a hop-by-hop protocol proposed for event-based sensor applications. The HTAP protocol is based on two algorithms, i.e., Alternative Path Creation (APC) and Hierarchical Tree Creation (HTC), and uses the network density to choose the optimal one of the two. There are four stages in the HTAP protocol, i.e., topology control, hierarchical tree creation, alternative path creation, and handling of powerless nodes. The main advantage of the HTAP protocol is that it is simple in its operational mechanism and as such incurs a much lower overhead. Its drawback is that the receiver node receives the same data from multiple sensors, resulting in network redundancy. This problem is solved by the Redundancy Aware Hierarchical Tree Alternative Path (RAHTAP) algorithm [14]. In RAHTAP, every node runs a redundancy detection technique: whenever a receiver node receives a packet, it checks its queue to see whether a packet with the same ID already exists, and if so, the packet is discarded. However, the HTAP protocol does not provide fairness results and, at the same time, is not energy efficient.
DAIPaS [15]
Dynamic Alternative Path Selection (DAIPaS) is an efficient congestion alleviation protocol which dynamically selects the shortest route from the source to the base station while keeping the overhead to a minimum. This protocol utilizes the remaining buffer of a node, its energy level and the channel interference for congestion detection. DAIPaS operates in three different phases: the setup phase, topology control, and the soft and hard stage. Backpressure messages are used to reduce the traffic rate and avoid further packet drops and congestion. The selection of an alternative route depends on the availability, buffer occupancy and energy level of the next-hop neighbor, and on the number of hops towards the sink.
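The route-selection criteria listed above can be illustrated with a small scoring function over candidate next hops; the weights and the scoring formula below are assumptions chosen for illustration, not the actual DAIPaS decision rule.

```python
def score_next_hop(candidate, w_buffer=0.4, w_energy=0.4, w_hops=0.2):
    """Score a candidate next hop; higher is better (illustrative weights)."""
    if not candidate["available"]:
        return float("-inf")
    free_buffer = 1.0 - candidate["buffer_occupancy"]      # fraction of free queue
    energy = candidate["energy_level"]                      # normalized 0..1
    hop_penalty = 1.0 / (1.0 + candidate["hops_to_sink"])   # prefer shorter paths
    return w_buffer * free_buffer + w_energy * energy + w_hops * hop_penalty

neighbors = [
    {"id": "A", "available": True, "buffer_occupancy": 0.9, "energy_level": 0.8, "hops_to_sink": 3},
    {"id": "B", "available": True, "buffer_occupancy": 0.3, "energy_level": 0.6, "hops_to_sink": 4},
]
best = max(neighbors, key=score_next_hop)
print(best["id"])   # "B": less congested despite one extra hop
```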
Flock-CC [16]
Flock-based Congestion Control (Flock-CC) is a scalable, robust, and self-adaptive congestion control protocol proposed for event-based applications. Flock-CC is based on the collective behavior of bird flocks and is inspired by swarm intelligence. The algorithm directs the packets, analogous to birds, to form flocks and forwards them towards the sink, i.e., a global attractor, while avoiding obstacles. In this fashion, congested areas and dead ends are bypassed. The movement of a group of packets, i.e., a packet flock, depends upon attraction and repulsion forces among neighboring packets, the field of view, and an artificial magnetic field guiding the packets towards the sink. As a result, idle network resources are efficiently utilized.
Hybrid Congestion Control Protocols
Hybrid congestion control protocols use an integrated approach, combining the desirable features of both traffic-based and resource-based congestion control. In most cases, these protocols use a traffic-based congestion control approach; however, if this technique is not feasible or optimal, they then use the resource-based approach. A summary of these protocols is provided in Table 3.
HRTC [17]
Hybrid Resource and Traffic Control (HRTC) is a hybrid congestion control algorithm for WSNs. HRTC combines the desirable features of the two congestion handling techniques and provides a suitable solution based on the network condition. During congestion, a congested node informs a source node over a hop-by-hop communication link to reduce its data rate, using a back-pressure message. While the back-pressure message traverses the affected downstream nodes on its way towards the source node, the HRTC protocol examines whether the resource control technique can be applied at the traversed nodes before the message reaches the source node. If that is the case, HRTC aborts the transmission of the back-pressure message. If HRTC is unable to find an alternative route, then the back-pressure message continues its journey towards its destination, i.e., the source node. Once the message reaches its destination, the traffic control technique is applied by altering the data rate of the source node. Moreover, the new data rate is adopted by nodes across the network.
HOCA [18]
Healthcare-aware Optimized Congestion Avoidance (HOCA) is a data-centric congestion control protocol proposed for healthcare applications of WSNs. It employs the concept of active queue management (AQM) [19]. In HOCA, the data is divided into two categories, i.e., sensitive and non-sensitive; the former requires a higher data rate while the latter requires a lower data rate. HOCA operates in four stages. In the first stage, a data dissemination request is issued by the sink (medical center) to all the nodes (patients) in the network. During the second stage, the occurrences of events are reported to the sink by the nodes located on the patient's body. During the third stage, routes are established by the sink node using multipath and QoS-aware routing techniques to mitigate congestion. During the final stage, data is forwarded while the source traffic rate is adjusted hop by hop; this adjustment occurs specifically during congestion. HOCA avoids congestion while lowering the end-to-end delay and maximizes the network lifetime through energy conservation. Furthermore, HOCA ensures fair use of the resources and links of the network.
HTCCFL [20]
Hierarchical Tree based Congestion Control using Fuzzy Logic for heterogeneous traffic in WSNs (HTCCFL) is a fuzzy-based congestion alleviation protocol. HTCCFL operates in three phases, i.e., hierarchical tree construction, fuzzy-based congestion detection and priority-based rate adjustment. During the first phase, a hierarchical tree is constructed using a topology control algorithm. During the second phase, congestion is detected using a fuzzy logic technique based on input parameters such as the packet service ratio, the number of contender nodes and the buffer occupancy. The state of congestion is predicted from the outcome of the fuzzy rules. A node experiencing congestion informs all its neighbors about its status using a control packet. For different classes of traffic, prioritized queues are maintained and weight values are assigned to each queue. Once congestion is detected, the next phase is congestion control: during this final phase, a dynamic rate adjustment is performed. If the rate adjustment is not possible, each source node selects an alternative route from the already existing hierarchical tree for congestion alleviation.
Conclusion
Wireless Sensor Networks (WSNs) are resource-constrained networks that suffer from various challenges. One such challenge is network congestion, which mainly arises when the reception rate of a node exceeds its transmission rate. Congestion also arises due to contention and interference at the link layer. As a result, the performance and reliability of the network, i.e., its QoS, deteriorate to a great extent. In WSNs, congestion needs to be detected by one or more nodes along the upstream path towards the base station. Once congestion is detected, it needs to be notified either explicitly or implicitly across the network in order to take precautionary measures to control it. In this paper, we presented a detailed and comprehensive analysis of the latest congestion control protocols in WSNs. We classified these protocols into two main categories, i.e., traffic-based and resource-based. The protocols in these categories have their own strengths and weaknesses, and are application-specific in nature. Traffic-based congestion control protocols are simple and cost-effective; however, they suffer from packet losses and delay, and as such are not suitable for real-time applications. Resource-based protocols, on the other hand, require local knowledge of the nodes along with their topological deployment and bandwidth demand. One major challenge faced by protocols in this category is the selection of an optimal, energy-efficient route from the source node towards the base station. We have also examined various metrics for congestion detection and conclude that a single metric does not precisely detect congestion; therefore, more than one metric needs to be used for precise congestion detection. In WSNs, energy-efficient congestion control protocols are needed that control congestion in order to ensure low energy consumption and fairness while achieving higher QoS. More sophisticated congestion control strategies may be explored that are based on automata, neural networks, fuzzy logic and machine learning. At present, many such solutions exist, but the effectiveness of these already proposed solutions should be further investigated for various applications and under various scenarios.
Table 2. Hybrid Congestion Control Protocols
Sinc Collocation Method for Finding Numerical Solution of Integrodifferential Model Arisen in Continuous Mixed Strategy
One of the newer techniques used to solve numerical problems involving integral equations and ordinary differential equations is the sinc collocation method. This method has been shown to be an efficient numerical tool for finding solutions. The continuous mixed strategies evolutionary game can be transformed into an integrodifferential problem. Properties of the sinc procedure are utilized to reduce the computation of this integrodifferential equation to some algebraic equations. The method is applied to a few test examples to illustrate the accuracy and implementation of the method.
Introduction
Evolutionary game dynamics is a fast-developing field, with applications in biology, economics, sociology, politics, interpersonal relationships, and anthropology. Background material and countless references can be found in [1][2][3][4][5][6][7][8]. In the present paper we consider a continuous mixed strategies model for population dynamics based on an integrodifferential representation. Analogous models for population dynamics based on the replicator equation with continuous strategy space were investigated in [9][10][11][12][13]. For the moment-based model, global existence of solutions has been proved, and the asymptotic behavior and stability of solutions have been studied in the case of two strategies [14].
In the last three decades a variety of numerical methods based on the sinc approximation have been developed. Sinc methods were developed by Stenger [15] and Lund and Bowers [16] and are widely used for solving a wide range of linear and nonlinear problems arising from scientific and engineering applications, including oceanographic problems with boundary layers [17], two-point boundary value problems [18], astrophysics equations [19], the Blasius equation [20], Volterra's population model [21], Hallén's integral equation [22], third-order boundary value problems [23], systems of second-order boundary value problems [24], fourth-order boundary value problems [25], heat distribution [26], elastoplastic problems [27], inverse problems [28,29], integrodifferential equations [30], optimal control [15], nonlinear boundary-value problems [31], and multipoint boundary value problems [32]. Very recently, the authors of [33] used the sinc procedure to solve linear and nonlinear Volterra integral and integrodifferential equations.
The content of this paper is arranged in seven sections. In Section 2, I discuss the modeling of the problem in an integrodifferential form. Section 3 introduces some general concepts concerning the sinc approximation. Section 4 contains some preliminaries on the collocation method. In Section 5, the method is applied to solve the problem. In Section 6, some numerical examples are provided. Finally, Section 7 provides the conclusion of this work.
Mathematical Model
The model we consider here is an integrodifferential model for continuous mixed strategies. In game theory, a dominant strategy is the one that gives a player the most benefit no matter what the other players do. A player's strategy in a game is a complete plan of action for whatever situation might arise; this fully determines the player's behavior. A player's strategy set defines what strategies are available for them to play. A pure strategy provides a complete definition of how a player will play a game. In particular, it determines the move a player will make for any situation he or she could face. A player's strategy set is the set of pure strategies available to that player. A mixed strategy is an assignment of a probability to each pure strategy. This allows a player to randomly select a pure strategy. Since probabilities are continuous, there are infinitely many mixed strategies available to a player, even if their strategy set is finite.
A payoff is a number, also called utility, that reflects the desirability of an outcome to a player, for whatever reason. When the outcome is random, payoffs are usually weighted with their probabilities. The expected payoff incorporates the player's attitude towards risk.
Assume that we have a game where there are $n$ pure strategies $e_1$ to $e_n$ and that the players can use mixed strategies: this consists of playing the pure strategies $e_1$ to $e_n$ with some probabilities $q_1$ to $q_n$, with $q_i \geq 0$ and $\sum_i q_i = 1$. A strategy corresponds to a point $\mathbf{q}$ in the simplex $S_n$. The corners of the simplex are the standard unit vectors $\mathbf{e}_i$, where the $i$th component is 1 and all others are 0, and correspond to the pure strategies $e_i$, $i = 1, \ldots, n$. Let us denote by $a_{ij}$ the payoff for a player using the pure strategy $e_i$ against a player using the pure strategy $e_j$. The matrix $A = (a_{ij})$ is called the payoff matrix. An $e_i$-strategist obtains the expected payoff $(A\mathbf{q}^*)_i = \sum_j a_{ij} q_j^*$ against a $\mathbf{q}^*$-strategist. The payoff for a $\mathbf{q}$-strategist against a $\mathbf{q}^*$-strategist is given by $\mathbf{q} \cdot A\mathbf{q}^*$. We consider a population of individuals as players of the game and denote by $f(t, \mathbf{q})$ the density of the population adopting the strategy $\mathbf{q}$ at time $t$; the evolution in time of $f$, due to the dynamics of the game, is driven by equation (3), in which the term $\pi(\mathbf{q}, f) = \int a(\mathbf{q}, \mathbf{q}^*)\, f(t, \mathbf{q}^*)\, d\mathbf{q}^*$ represents the payoff of the strategy $\mathbf{q}$ against all the other strategies, $a(\mathbf{q}, \mathbf{q}^*)$ being the interacting kernel between the $\mathbf{q}$-strategist and the $\mathbf{q}^*$-strategist. The last term of (3) is $\varphi(f) = \int \pi(\mathbf{q}, f)\, f(t, \mathbf{q})\, d\mathbf{q}$ and represents the average payoff of the population.
Since $\sum_{i=1}^{n} q_i = 1$, we can reduce the number of variables, considering $p_i = q_i$, $i = 1, \ldots, n-1$, and obtaining the $(n-1)$-dimensional model (3) on the simplex $S_{n-1}$, namely, with the kernel $a(\mathbf{p}, \mathbf{p}^*)$ and the average payoff defined accordingly. Remark 1 (see [14]). If we take an initial condition with $\int_{S_{n-1}} f_0(\mathbf{p})\, d\mathbf{p} = 1$, then it is easy to see that $f \geq 0$ for all $t > 0$, and if $f_0(\mathbf{p}) = 0$ for some $\mathbf{p}$, then $f(t, \mathbf{p}) = 0$ for all $t > 0$. We also know that the total mass $\int_{S_{n-1}} f(t, \mathbf{p})\, d\mathbf{p}$ is conserved; this follows by integrating (8) with respect to $\mathbf{p}$ and using (10) and (12). Let us introduce the moments of $f$, $M_{\mathbf{k}}(t) = \int_{S_{n-1}} p_1^{k_1} \cdots p_{n-1}^{k_{n-1}} f(t, \mathbf{p})\, d\mathbf{p}$, with $\mathbf{k} := (k_1, k_2, \ldots, k_{n-1})$. Using these moments, the payoff and the average payoff (10) can be written in terms of the moments $M_{\mathbf{e}_i}$, where $\mathbf{e}_i \in \mathbb{R}^{n-1}$ is the standard unit vector with the $i$th component equal to 1 and all others equal to 0. Moreover, in the final form of (8), which will be used later in this paper, the only integral terms are the first moments $M_{\mathbf{e}_i}$.
Two Strategies Games.
Assume there are two different strategies, whose interplay is ruled by a $2 \times 2$ payoff matrix $A$. In this case the simplex $S_1$ is just the interval $[0, 1]$, and so we have a population where individuals play the first strategy with probability $p \in [0, 1]$ and the second strategy with probability $1 - p$. The payoff (2) is given accordingly, and the one-dimensional Cauchy problem (17) reads with initial datum $f_0(p) \geq 0$ and $\int_0^1 f_0(p)\, dp = 1$. For more detail see [14].
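As a concrete illustration of the two-strategy case, the following Python sketch integrates a replicator-type density equation of the form ∂f/∂t = f·[π(p, f) − φ(f)] on [0, 1], using the trapezoidal rule for the integrals (the reference quadrature mentioned in the conclusion) and explicit Euler time stepping. The payoff matrix, grid, and initial density are hypothetical choices, and the replicator form is assumed from the standard continuous-strategy model of [14].

```python
# Illustrative reference solver for the two-strategy continuous mixed-strategy
# model, assuming the standard replicator form
#   df/dt (t,p) = f(t,p) * [ pi(p,f) - phi(f) ],
# with pi(p,f) = int_0^1 a(p,q) f(t,q) dq and phi(f) = int_0^1 pi(p,f) f(t,p) dp.
# Integrals use the trapezoidal rule; the payoff matrix is a hypothetical
# normalized Prisoner's Dilemma.
import numpy as np

A = np.array([[1.0, 0.0],
              [1.1, 0.001]])          # assumed payoff matrix (illustrative)

M = 201                                # grid points on [0, 1]
p = np.linspace(0.0, 1.0, M)
f = 2.0 * (1.0 - p)                    # hypothetical initial density, integral = 1

def payoff_kernel(p, q):
    """a(p, q) = (p, 1-p) A (q, 1-q)^T evaluated on all grid pairs."""
    sp = np.stack([p, 1.0 - p], axis=1)        # (M, 2)
    sq = np.stack([q, 1.0 - q], axis=1)        # (M, 2)
    return sp @ A @ sq.T                       # (M, M)

K = payoff_kernel(p, p)
dt, steps = 1e-3, 5000
for _ in range(steps):
    pi = np.trapz(K * f[None, :], p, axis=1)   # pi(p_i, f)
    phi = np.trapz(pi * f, p)                  # mean payoff of the population
    f = f + dt * f * (pi - phi)                # explicit Euler step

print("mass:", np.trapz(f, p))                 # conserved up to O(dt)
print("density near p=0:", f[0], "near p=1:", f[-1])
```

With a dominant defection strategy, the density concentrates near p = 0, which is consistent with the behavior reported for the Prisoner's Dilemma test later in the paper.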
Sinc Interpolation
The goal of this section is to recall notations and definitions of the sinc function that are used. The sinc approximation for a function $f(x)$ defined on the real line is given by
$$f(x) \approx \sum_{j=-N}^{N} f(jh)\, S(j, h)(x),$$
where $S(j, h)$ is the sinc function defined by
$$S(j, h)(x) = \operatorname{sinc}\!\left(\frac{x - jh}{h}\right), \qquad \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x},$$
and the step size $h$ is suitably chosen for a given positive integer $N$, with $m = 2N + 1$ interpolation points $x_j = jh$. Assuming that $f(x)$ is analytic on the real line and decays exponentially on the real line, it has been shown that the error of the approximation decays exponentially with increasing $N$. The approximation may be extended to approximate $f(x)$ on the interval $[0, 1]$ by selecting an appropriate transfer function to transform the interval onto the real line and impose the exponential decay. We denote such a variable transformation by $w = \phi(x)$ and its inverse by $x = \psi(w)$, such that $\phi(0) = -\infty$ and $\phi(1) = \infty$. We may write the sinc approximation employing the transformation for the function $f(x)$ as
$$f(x) \approx \sum_{j=-N}^{N} f(x_j)\, \bigl(S(j, h) \circ \phi\bigr)(x),$$
where the mesh size $h$ represents the separation between sinc points on the $w \in (-\infty, \infty)$ domain. In order to have the sinc approximation on the finite interval $(0, 1)$, the conformal map
$$\phi(z) = \ln\!\left(\frac{z}{1 - z}\right)$$
is employed. This map carries the eye-shaped complex domain $D_E$ onto the infinite strip $D_S = \{w = u + iv : |v| < d \leq \pi/2\}$. For the sinc method, the basis functions on the interval $(0, 1)$ are derived from the composite translated sinc functions $S(j, h) \circ \phi$, exhibiting Kronecker delta behavior on the grid points, $\bigl(S(j, h) \circ \phi\bigr)(x_k) = \delta_{jk}$. Thus we may define the inverse images of the real line and of the evenly spaced nodes $\{jh\}_{j=-\infty}^{\infty}$ as the interval $(0, 1)$ and the sinc points $x_j = \psi(jh) = e^{jh}/(1 + e^{jh})$, and quadrature formulas for $F(x)$ over $(0, 1)$ follow for functions $F$ analytic in $D_E$ and satisfying suitable growth conditions on the boundary of $D_E$ (denoted by $\partial D_E$). Interpolation for functions in this class is defined in the following theorem, whose proof can be found in [15].
where the constant $c_2$ depends only on the function and on the parameters of the approximation. The above expressions show that sinc interpolation converges exponentially [17]. We also require derivatives of the composite sinc functions evaluated at the nodes; the expressions required for the present discussion can be found in [25].
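A small Python sketch of the sinc machinery recalled in this section is given below: the cardinal function S(j, h), the conformal map φ(x) = ln(x/(1 − x)) carrying (0, 1) onto the real line, the sinc points ψ(jh), and interpolation of a test function. The values of N and h are illustrative and not tuned.

```python
# Sketch of the sinc basis on (0, 1), assuming the standard single-exponential
# map phi(x) = ln(x / (1 - x)); the step size h and N below are illustrative.
import numpy as np

def S(j, h, t):
    """Whittaker cardinal (sinc) function S(j, h)(t) = sinc((t - j h) / h)."""
    return np.sinc((t - j * h) / h)        # numpy's sinc is sin(pi t)/(pi t)

def phi(x):
    return np.log(x / (1.0 - x))           # conformal map (0, 1) -> real line

def phi_inv(t):
    return np.exp(t) / (1.0 + np.exp(t))   # sinc points x_j = phi^{-1}(j h)

N, h = 10, 0.5                              # illustrative choices
j = np.arange(-N, N + 1)
x_nodes = phi_inv(j * h)

def sinc_interp(f, x):
    """Approximate f on (0, 1) by sum_j f(x_j) S(j, h)(phi(x))."""
    t = phi(x)
    return sum(f(xj) * S(jj, h, t) for jj, xj in zip(j, x_nodes))

# Quick accuracy check on a smooth test function vanishing at the endpoints.
g = lambda x: np.sin(np.pi * x)
x_test = np.linspace(0.05, 0.95, 7)
print(np.max(np.abs(sinc_interp(g, x_test) - g(x_test))))
```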
Collocation Method
Here $\sigma_n = (t_n, t_{n+1}]$, and $\pi_m$ denotes the space of polynomials of degree not exceeding $m$. The collocation solution is determined by a piecewise polynomial $u_h$ that satisfies the given equation on a suitable finite subset $X_h$ of the interval, where $X_h$ contains the collocation points determined by the points of the partition $I_h$ and the given collocation parameters $\{c_i\} \subset [0, 1]$. The collocation solution is defined by the collocation equation. It will be convenient (and natural) to work with the local Lagrange basis representation of $u_h$. On $\sigma_n$ these polynomials can be written as
$$u_h(t_n + v h_n) = \sum_{j=1}^{m} L_j(v)\, U_{n,j}, \qquad v \in (0, 1],$$
where the Lagrange polynomials $L_j(v)$ belong to $\pi_{m-1}$. From (44) we can obtain the local representation of $u_h$ on $\sigma_n$. The unknown approximations $U_{n,i}$ ($i = 1, \ldots, m$) in (44) are defined by the solution of a system of (generally nonlinear) algebraic equations obtained by setting $t = t_{n,i} := t_n + c_i h_n$ in the collocation equation (42) and employing the local representation (44).
Construction of the Method
Let $\{0 = t_0 < t_1 < \cdots < t_N = T\}$ be a partition of $[0, T]$. On every interval $\sigma_n = (t_n, t_{n+1}]$ we assume that $f(t, p)$, $p \in [0, 1]$, the solution of the one-dimensional mixed strategy model, is approximated by a finite expansion of sinc basis functions in $p$ and Lagrange polynomials in $t$, where the Lagrange part is a polynomial of degree $m$; the initial value on each interval is prescribed accordingly. If we replace the approximation (51) in (21), and then substitute the collocation points for $t$ and $p$ and use the quadrature rule (32), a nonlinear system is obtained. After solving this system we compute the coefficients and finally $f(t, p)$ on $\sigma_n$, and also $f(t, p)$ at $t_{n+1}$, where $f(t_{n+1}, p)$ is used as the initial value for the next interval $\sigma_{n+1}$. After $N$ such steps, the solution is achieved.
Numerical Examples
Prisoner's Dilemma Game. One interesting example of a game is given by the so-called Prisoner's Dilemma game, in which there are two players and two possible strategies. The players have two options, cooperate or defect. If both cooperate, each receives the reward payoff; if both defect, each receives the punishment payoff; if one cooperates and the other defects, the cooperator gets $S$ (sucker's payoff) while the defector gets $T$ (temptation payoff). The payoff values are ranked $T > R > P > S$ and $2R > T + S$. We know that cooperators are always dominated by defectors.
For the numerical tests we fix a normalized payoff matrix with parameters $b = 1.1$ and $\varepsilon = 0.001$. In this case both coefficients of the reduced problem are negative, so their ratio is positive. This means that stationary solutions are expected to be given by concentrated Dirac masses. For a general perturbation we have that $p = 0$ is linearly stable.
In order to confirm the results above, three initial conditions are considered. For the implementation of the proposed method, I used Maple 15 and plotted the numerical results in Figures 1, 2, and 3. Figure 1 shows that the density tends to concentrate at the point $p = 0$, as we expected.
Conclusion
In this paper, the collocation method with sinc and Lagrange polynomials is employed to construct an approximation to the solution of the continuous mixed strategy model. It is found that the results of the present work agree well with the trapezoidal rule. Properties of the sinc procedure are utilized to reduce the computation of this integrodifferential equation to some nonlinear equations. There are several advantages to using approximations based on sinc numerical methods over classical methods. First, unlike most numerical techniques, it is now well established that they are characterized by exponentially decaying errors. Secondly, approximation by sinc functions handles singularities in the problem. Thirdly, due to their rapid convergence, sinc numerical methods do not suffer from the common instability problems associated with other numerical methods. In this case the advantages of the collocation method are also exploited. The method is applied to test examples to illustrate the accuracy and implementation of the method.
"Computer Science",
"Mathematics"
] |
Encapsulation of Bifidobacterium animalis subsp. lactis Through Emulsification Coupled with External Gelation for the Development of Synbiotic Systems
The aim of this work was the development of integrated and complex encapsulating systems that provide more efficient protection to the probiotic strain Bifidobacterium animalis subsp. lactis (BB-12) in comparison to the conventional plain alginate beads. Within the scope of this study, the encapsulation of BB-12 through emulsification followed by external gelation was performed. For this purpose, a variety of alginate-based blends, composed of conventional and novel materials, were used. The results demonstrated that alginate beads incorporating 1% carrageenan or 2% nanocrystalline cellulose provided great protection to the viability of the probiotic bacteria during refrigerated storage (survival rates of 50.3% and 51.1%, respectively), as well as during in vitro simulation of the gastrointestinal tract (survival rates of 38.7% and 42.0%, respectively). The incorporation of glycerol into the formulation of the beads improved the protective efficiency of the beads for the BB-12 cells during frozen storage, increasing their viability significantly compared to the plain alginate beads. Beads made of milk, alginate 1%, glucose 5%, and inulin 2% provided the best results in all cases. The microstructure of the beads was assessed through SEM analysis and showed the absence of free bacteria on the surface of the produced beads. Consequently, the encapsulation of BB-12 through emulsification in a complex encapsulating system proved successful and effective.
Introduction
Probiotic bacteria, specifically bifidobacteria, constitute a significant part of the human gut microflora. Additionally, they have been widely incorporated into various fermented foods and dairy products [1]. The consumption of bifidobacteria has been associated with certain health benefits conferred to the human host, including reduction of serum cholesterol levels, enhancement of immune function, diarrhea alleviation, decrease of lactose intolerance, modulation of the gut microflora, and allergy alleviation [2]. However, in order to exert their health benefits, probiotic bacteria should be able to survive during food processing and storage, as well as under the harsh conditions of the gastrointestinal (GI) system, in order to successfully colonize the colon [3]. Due to their high sensitivity to various environmental factors, such as heat, high acidity, oxidative stress, freezing, and moisture, probiotic bacteria are prone to cell wall deterioration, lipid oxidation, or undesirable alterations of the cell membrane [4]. Therefore, the protection of probiotics is necessary, and for this purpose their encapsulation in suitable carriers has been proposed. Various methods are reported for the encapsulation of probiotic bacteria, including extrusion, emulsification, coacervation, spray-drying, or freeze-drying [5]. Encapsulation of probiotic cells has been studied mainly through the application of the spray drying technology, using various materials such as alginate and chitosan [6], maltodextrin along with whey protein concentrate, skim milk powder or sodium caseinate, and/or trehalose or d-glucose [7], or even hydrolyzed black waxy rice flour [8]. In contrast, milder techniques such as emulsification have not been extensively examined.
Rodrigues et al. [9] encapsulated probiotic bacteria in alginate beads through the extrusion method and studied their viability during storage at 5 °C. Encapsulation had a beneficial effect, whereas the double coating with chitosan or dextran sulfate did not significantly enhance the viability of the cells. The extrusion method has also been extensively examined in a previous study [10], using a variety of encapsulating blends and providing satisfactory results in the protection of probiotic cells during storage or in vitro simulation of the GI tract. In the current study, the application of the emulsification method for the encapsulation of the probiotic strain Bifidobacterium animalis subsp. lactis (BB-12) was selected to be examined, as it involves mild conditions and presents low cost and high cellular retention [1].
The emulsification technique includes the dispersion of probiotic cells in a water-based polymer suspension (discontinuous phase), which is then added to an appropriate amount of oil (continuous phase) in order to form a water-in-oil emulsion; it is substantially based on the association and interactions between the discontinuous and continuous phase. The subsequent addition of a calcium chloride solution leads to the insolubilization of the water-soluble polymer and the formation of gel beads within the oil phase, thus encapsulating the probiotic bacteria. The beads produced by emulsification may be of a wide range of shapes and sizes, whereas their diameter may be sufficiently small, even below 300 μm [5]. This technique also presents the potential for large-scale production, due to bulk bead formation in a short time [11].
Biocompatible and non-toxic materials are investigated for the incorporation of encapsulated products in food matrices [12]. In particular, for encapsulation through emulsification, the use of sodium alginate as encapsulating agent has already been reported [1], as it is inexpensive, non-toxic, and compatible with most other materials [13]. Moreover, alginate is widely used as an encapsulating material due to its network developing ability under mild conditions [14]. However, the application of alginate alone is not effective enough, due to its instability in the presence of Ca 2+ chelating agents and monovalent ions or harsh conditions [14]. In order to improve the chemical and mechanical stability of alginate beads, the combination of alginate with other polymers has been proposed, such as gellan gum [15] or corn starch [16]. However, research on the combination of sodium alginate with a variety of materials for the reinforcement of the beads is still limited. Emulsification can be further combined with spray-drying [17] or freeze-drying [18,19], since extension of probiotics' shelf life can be achieved by reducing the moisture levels [16]. Intense drying conditions, however, have a detrimental effect on probiotics' viability. Thus, milder approaches are recommended so as to improve the existing drying systems. The incorporation of prebiotic substances (inulin or Hi-maize starch) into the alginate systems in order to stimulate the growth and activity of probiotic bacteria has also been studied [20][21][22][23].
In this work, the elaboration of integrated and complex encapsulating systems consisting of sodium alginate, other hydrocolloid materials (xanthan gum, carrageenan, pectin, and nanocrystalline cellulose-CNC), milk and/or milk proteins, glucose, and prebiotics (inulin) is investigated. Additionally, the incorporation of cryoprotectants (glycerol) or oxygen scavengers (l-cysteine-HCl) is examined. The occurring blends are evaluated and compared regarding their effectiveness, in terms of protecting BB-12 cells during refrigerated or frozen storage as well as their transition through a simulated gastrointestinal system.
Encapsulation of BB-12 Cells Through Emulsification
The solutions of the various encapsulating blends were prepared according to the formulations presented in Table 1, sterilized at 121 °C for 15 min, and cooled to 35-40 °C prior to the encapsulation procedure. The CaCl 2 solution was also sterilized at 121 °C for 15 min and cooled to ambient temperature. The glassware required for the encapsulation procedure was also sterilized and cooled under the same conditions. The probiotic strain BB-12 was incorporated at a concentration of 5% w/v into the encapsulating blend in order to form the aqueous phase. The oil phase was prepared by mixing Tween 80 (1.5% w/v) with olive pomace oil. Subsequently, the aqueous phase was dispersed into the oil phase at a ratio of 1:3. For the formation of the emulsion, the mixture was homogenized by a high-speed homogenizer (CAT Unidrive 1000; CAT Scientific, Paso Robles, CA, USA) at 1200 rpm for 5 min. Subsequently, a 0.5 M CaCl 2 solution was slowly titrated into the emulsion under magnetic stirring, in order to cross-link the water-soluble polymers and form particles within the oil phase. The formed beads were allowed to harden/cross-link for 30 min under magnetic stirring at room temperature and were then harvested by centrifugation (10,000 rpm, approx. 11,000 g, for 15 min), washed with sterilized distilled water, and stored in sterile conical tubes at 4 °C. The above-described procedure was performed under aseptic conditions.
Viable Count of the Probiotic Cells
In order to evaluate the survival of BB-12 during the encapsulation process, cell counts were determined after the emulsification. Cell counts were obtained by determining the number of cfu in 1 g of beads. For this purpose, 1 g of the produced beads was suspended in 9 mL of citrate-phosphate buffer (pH 7.0) and disaggregated in a stomacher until cells were completely released. Samples were 10-fold serially diluted in Ringer's solution and plated between 2 layers of modified MRS agar with 0.3% v/v l-cysteine hydrochloride and 0.5% v/v NNLP (neomycin sulfate, nalidixic acid, lithium chloride, and paromomycin sulfate). After 72 h of anaerobic incubation at 37 °C, cell counts were determined and expressed as log cfu g −1 .
Determination of the Encapsulation Yield of the Beads
The encapsulation yield (EY) is a measurement combining the entrapment efficacy and the survival of viable cells during encapsulation, and was calculated as follows [24]: where N is the number of the viable encapsulated cells released from the beads and N o is the theoretical number of cells estimated according to the number of probiotic cells added prior to the encapsulation.
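A short worked example of this definition, using hypothetical cfu counts, is:

```python
# Worked example of the encapsulation yield, EY = (log N / log N0) x 100,
# where N is the viable encapsulated cell count released from the beads and
# N0 the theoretical count added prior to encapsulation. The cfu values below
# are hypothetical.
import math

def encapsulation_yield(n_released_cfu_per_g, n_theoretical_cfu_per_g):
    return 100.0 * math.log10(n_released_cfu_per_g) / math.log10(n_theoretical_cfu_per_g)

# e.g. 10^8.6 cfu/g released from the beads versus 10^9.2 cfu/g added initially
print(round(encapsulation_yield(10 ** 8.6, 10 ** 9.2), 1))  # -> 93.5
```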
Survival of Encapsulated BB-12 During Storage
The beads containing the encapsulated BB-12 cells were stored at 4 °C and -18 °C for a 30-day period. Their survival was evaluated at 10-day intervals through microbiological analysis, as described in the "Viable Count of the Probiotic Cells" section.
Survival of Encapsulated BB-12 Under simulated Gastrointestinal Conditions
This analysis was conducted according to our previous study [10], based on the research of Holkem et al. [25]. Then, 0.5 g of beads was added to 5 mL of simulated gastric fluid (SGF) (0.025 g pepsin mL −1 in HCl 0.1 N, pH 2.0) and incubated at 37 °C for 90 min. Subsequently, 2.5 mL of simulated intestinal fluid (SIF) (12 g L −1 of bile extracts and
EY = (log N / log N o ) × 100
Table 1 Composition and nomenclature of the blends used for the encapsulation of BB-12
Surface Morphology and Bead Size Determination
A scanning electron microscope (QUANTA 200, Thermo Fisher Scientific, USA) at an accelerating voltage of 25 kV was used to characterize the shape and the external surface of the beads produced according to the various formulations. The beads were freeze-dried, fixed on stubs with double-sided copper tape, and coated with a thin gold layer (180 s at a current of 40 mA) using a Baltzer evaporator (Baltec SCD50, Liechtenstein, Austria) before being observed in the microscope.
Statistical Analysis
All experimental results were submitted to analysis of variance (ANOVA) using the Statistica Software version 12 (Statsoft Inc., Tulsa, OK, USA).When significant differences were observed, the Duncan's test was applied in order to compare means at a 5% significance level.The experiments were performed in triplicate, the measurements were replicated 3 times, and their mean values are presented.
Influence of the Various Encapsulating Blends on the Encapsulation Yield
A variety of encapsulating blends were used for the encapsulation of the probiotic strain BB-12 through emulsification, as described in the "Encapsulation of BB-12 Cells Through Emulsification" section. The blends were selected in order to reinforce the alginate beads by developing a denser or more stable grid. The conventional or novel materials used for this purpose were selected depending either on their gelation properties or on their ability to provide BB-12 cells a nutritive, cryo-protective, or more anoxic environment. The entrapment of a satisfactory number of live probiotic bacteria inside the beads is of high importance as it is directly related to the number of viable probiotic cells at the end of the storage period or after their transit through the gastrointestinal (GI) system. The emulsification process applied in this study obtained high encapsulation yield (EY) values, indicating high survival of cells under the specific processing conditions, as shown in Table 2. According to the EY results, although alginate can provide a satisfactory level of protection, when combined with other hydrocolloid materials it can be much more effective, leading to greater EY values. Hence, when plain alginate (A) or alginate with inulin (AI) was used, the lowest EY values were obtained (84.7-86.9%). Similar results were found by Song et al. [11], who achieved EY of about 77-80% by encapsulating yeast cells in alginate through emulsification and coating them with chitosan.
On the other hand, the incorporation of xanthan gum (AX), κ-carrageenan (AC), and nanocrystalline cellulose (ACNC) in the alginate beads, as well as whey in combination with pectin (AWP), provided significantly (p < 0.05) higher EY values, up to 98.9%. The utilization of milk for the development of milk-alginate beads (AM) provided the most satisfactory EY values, reaching up to 99.2%. The effectiveness of the combination of these materials is attributed to the development of a denser and more stable grid that is able to retain and protect a greater number of probiotic cells. The above results come in agreement with other researchers that attempted the reinforcement of the alginate system with other materials, such as starch. Martin et al. [16] examined the development of alginate and alginate-starch beads by applying the same technique and also achieved increased EY values, ranging between 74.4 and 97.3%, by incorporating starch. Similarly, Khosravi Zanjani et al. [26] encapsulated the probiotic strains Lactobacillus (L.) casei and Bifidobacterium (B.) bifidum in alginate-gelatinized starch beads, with or without chitosan coating, achieving very high EY values of 96.4-98.1%. Furthermore, in the current study, the additional incorporation of inulin or l-cysteine-HCl did not significantly enhance the encapsulation efficiency.
Stability of Encapsulated BB-12 Under Refrigerated and Frozen Storage
Refrigerated and frozen storage are widely used for food preservation in order to extend the shelf life by delaying the growth of microorganisms and the chemical reactions that cause spoilage or quality degradation in food products. Thus, the viability of encapsulated BB-12 cells was investigated under these two storage conditions in order to resolve the potential of their incorporation into a variety of food products. The encapsulating blends examined were those that led to satisfactory EY values, as described in the "Influence of the Various Encapsulating Blends on the Encapsulation Yield" section, and, thus, their ability to maintain the viability of BB-12 under frozen or refrigerated storage was investigated. The survival rates of the encapsulated BB-12 cells over storage at 4 °C or -18 °C were monitored at 10-day intervals during a 30-day period, and the results are illustrated in Figs. 1 and 2, respectively.
Survival Rates of Encapsulated BB-12 Under Refrigerated Storage (4 °C)
The BB-12 cells encapsulated in alginate alone or in alginate with inulin suffered significant reductions in their viability (Fig. 1a). The survival rates decreased to 38.1-43.3% during the first 10 days of storage, whereas no viable cells were detected in alginate beads by the end of the storage. The incorporation of inulin in the alginate blend slightly enhanced the bacterial viability, leading to the survival of 22.5% of their initial load by the end of the storage. Alginate on its own cannot provide efficient protection to the BB-12 cells; therefore, its combination with the examined conventional polymer materials, such as xanthan gum (AX) and κ-carrageenan (AC), was essential. This approach was successful, as the survival rates of the encapsulated cells were significantly increased (p < 0.05), exceeding 42.9% and 50.3%, respectively, by the end of the 30-day storage (Fig. 1b, e). The optimal protection was achieved in the case of AM beads, as the survival rates were above 52.7%, thus indicating that this formulation was effective in protecting BB-12 cells during refrigerated storage from external factors, such as moisture or oxygen. Whey is another material widely used for the encapsulation of probiotic strains through spray-drying due to its protective properties [27][28][29]. In the current study, it was combined with pectin and alginate (AWP), providing survival rates up to 48.1% by the end of storage at 4 °C (Fig. 1c). It must be taken into account that the heat denaturation of whey proteins occurring during sterilization may impact their emulsification properties and their encapsulation ability. The incorporation of glycerol (AGl) also provided increased protection to the BB-12 cells during refrigerated storage (Fig. 1d). Additionally, it must be noted that the utilization of the novel nanomaterial CNC for the development of probiotic beads (ACNC) contributed to the increase of the survival rates of BB-12, reaching 51.1% at the end of the 30-day storage period, indicating its reinforcing properties when combined with sodium alginate (Fig. 1f). Furthermore, the utilization of the prebiotic inulin significantly enhanced (p < 0.05) the BB-12 cells' survival in all cases, leading to 0.9-22.5% higher survival rates compared to samples containing alginate only. Our results are in agreement with other researchers that observed improved viability during storage by encapsulating various probiotic strains in particles containing inulin [30,31].
Although refrigerated storage is commonly recommended in order to maintain cells' viability, and encapsulation may provide the anaerobic conditions necessary for the oxygen-sensitive BB-12 strain, a significant decrease of the bacterial load was observed during the 30-day storage. To overcome the above, l-cysteine-HCl, which can function as both an oxygen scavenger and a nitrogen source for BB-12 cells, was incorporated in the specific encapsulating mixtures ACL-cys, ACNCL-cys, and AML-cys. Thus, the viability of the BB-12 cells was enhanced by up to 1.8-3.8%, coming in agreement with Sousa et al. [32], who observed improved storage stability of the same strain when l-cysteine-HCl was supplemented.
Survival Rates of Encapsulated BB-12 Under Frozen Storage (−18 °C)
Although frozen storage is widely applied for food preservation, it may have a negative impact on the viability of probiotic bacteria. The encapsulation of the probiotic strain BB-12 is expected to limit the damage commonly occurring during the freezing stage (freezing injuries) as well as during the entire storage period. The results presented in Fig. 2 indicate that the encapsulation of BB-12 in blends of alginate with other encapsulating agents significantly enhanced (p < 0.05) the survival of the probiotic cells during frozen storage compared to those encapsulated in plain alginate beads (A, AI). In all cases, a significant viability loss of 3.1-6.2 log cfu g −1 occurred during the first days of storage due to the sudden exposure of the BB-12 cells to the injurious low temperature. The formation of ice crystals provokes damage to the membrane structure of the probiotic cells and, thus, changes their physiological state, which can lead to cells' death [33]. In the current study, the reinforcement of the alginate beads with polymer materials such as xanthan gum (AX) and κ-carrageenan (AC) increased the survival of the encapsulated cells by 41.0-52.1% (Fig. 2b, e). The combination of alginate with whey and pectin (AWP, AWPI) led to survival rates of 48.7-49.2% at the end of the 30-day storage. Similarly, the addition of CNC (ACNC) to the alginate blend aided the maintenance of the viability of the BB-12 cells at a percentage of 49.6-52.1% (Fig. 2c, f). Moreover, the new approach of water replacement with milk for the production of alginate beads (AM) led to enhanced viability, with up to 55.2% survival rates by the end of the storage (Fig. 2g). All milk-based beads (AM, AMI, AML-cys) retained their bacterial load above the required minimum of 6 log cfu g −1 for up to 20 days of storage. As in the case of refrigerated storage, whey-pectin, carrageenan, CNC, and milk provided the highest protection of BB-12 viability during frozen storage as well. In the case of frozen storage, in particular, greater survival rates (57.1%) were achieved when the cryoprotectant glycerol (AGl) was included in the encapsulating mixture (Fig. 2d), in comparison to other water-based encapsulating blends. Sultana et al. [34] also found a 100-fold higher cells' survival when glycerol was incorporated into the alginate beads compared to alginate only or to the alginate-starch blend.
The addition of inulin in the alginate beads also improved the viability of BB-12 (p < 0.05) during frozen storage, increasing the survival by 0.6-24.4%. Raddatz et al. [19] also found that inulin had a protective effect on the probiotic cells during storage at -18 °C. Moreover, the beads containing l-cysteine-HCl demonstrated slightly increased stability during storage at -18 °C. Optimal results were achieved through the combination of milk, alginate, inulin, and l-cysteine-HCl (AML-cys), as even after 30 days of storage the bacterial load was maintained at 6.1 log cfu g −1 . Similar results are reported by Sousa et al. [32], who also observed improved behavior of the alginate beads during storage at -18 °C when l-cysteine-HCl was incorporated.
Probiotics' Survival During In Vitro Simulation of the GI Tract
For the assessment of the coating materials' efficacy, the encapsulated cells were further exposed to in vitro simulated gastrointestinal conditions as described in the "Survival of Encapsulated BB-12 Under simulated Gastrointestinal Conditions" subsection. Figure 3 shows the survival of encapsulated BB-12 cells after this treatment. Reduction of the BB-12 populations was observed in all cases of encapsulating blends; however, the protection provided through encapsulation varied depending on the blend used (p < 0.05). The alginate, alginate-xanthan gum, and alginate-glycerol beads with (AI, AXI, AGlI) or without incorporated inulin (A, AX, AGl) presented the lowest bacterial loads, with survival rates of 22.3-29.3% at the end of the in vitro GI simulation. The addition of xanthan gum or glycerol to the mixture did not significantly affect the viability of BB-12 during the in vitro simulation. Similar results regarding the protective effect of glycerol on probiotic bacteria under GI conditions were also found by Sultana et al. [34]. On the other hand, the combination of sodium alginate with specific polymers, such as carrageenan (AC, ACI), CNC (ACNC, ACNCI), or whey and pectin (AWP, AWPI), may lead to the development of stronger, thicker, and more rigid beads, thus limiting the diffusion rate of the gastric acids and providing survival rates up to 42.0%. Moreover, the utilization of milk for the development of alginate-milk beads (AM) leads to increased survival of BB-12 cells during GI simulation, probably due to its complex composition that creates a favorable and protective environment for BB-12 cells (survival rates up to 50.3%). The combination of different materials has also been attempted by other researchers to confer improved protection to probiotic bacteria. For example, Pankasemsuk et al. [22] encapsulated the probiotic strain L. casei 01, through emulsification, in alginate-starch blends and observed that the higher the percentage of starch incorporated into the alginate beads, the greater its survival under simulated GI conditions. The protective effect of the alginate-starch blend was also observed by Sabikhi et al. [35], who examined the survival of Lactobacillus acidophilus at different concentrations of bile salts (1%, 1.5%, and 2%). Gerez et al. [36] encapsulated the strain Lactobacillus rhamnosus CRL 1505 in pectin or whey protein-pectin beads through emulsification and coated the occurring particles with whey protein for enhanced protection, achieving significantly higher survival rates for the encapsulated cells than for the free cells when exposed to simulated GI conditions. Moreover, Zou et al. [37] reported an increase of 0.5 log cfu g −1 in the survival of B. bifidum F-35 by further addition of pectin to the alginate beads. However, no significant improvement was observed when starch was added to the alginate mixture. Furthermore, the incorporation of inulin into the encapsulating blends significantly enhanced (p < 0.05) the viability and increased the survival rates by up to 7.0%. Our results are in agreement with other researchers that reported the beneficial effect of inulin under GI conditions [26,30,31]. On the other hand, the addition of l-cysteine-HCl to the encapsulating blends provided only a slight increase of the survival rates (0.4-0.7%) when exposed to simulated GI conditions.
Scanning Electron Microscopy Analysis of the Produced Beads
The scanning electron microscopy (SEM) analysis micrographs for the beads produced with different encapsulating blends are presented in Fig. 4. The surface of the beads was examined at the same magnification of 500 × (Fig. 4) for comparison reasons.
However, for the needs of the SEM analysis, the beads were first subjected to freeze drying. This resulted in samples with irregular shape and size. Thus, the initially soft and smooth surface of the beads turned into a rough one with irregular concavities and wrinkles, due to the removal of water from the hydrogel. This sponge-like external structure occurs due to the fast sublimation of the frozen water from the beads, leading to pores formed in the place of the ice crystals [38]. According to Fig. 4, the AI and AGl samples have similar surface characteristics (Fig. 4b, e). The AC and AWP samples exhibit a more compact structure with a less porous surface (Fig. 4c, f), whereas the ACNC sample is characterized by a spongier structure (Fig. 4g). Moreover, the structure of the A and AM samples is quite similar, with fewer concavities (Fig. 4a, f).
In order to provide a more detailed view of the samples' microstructure, a randomly chosen sample is presented in Fig. 4k, l, m under different magnifications (100 ×, 1000 ×, and 2000 ×). The lack of homogeneity regarding the size and shape of the beads is clearly captured in Fig. 4k. Beads of various sizes and shapes are dispersed, whereas clusters of beads have been created. This formation can be attributed to the cohesive nature of the encapsulating agents used [39]. It must be noted that Fig. 4k indicates the absence of free bacteria, thus confirming the successful encapsulation of BB-12 cells.
Conclusions
The encapsulation of BB-12 cells through emulsification, in most cases, improved the survival of the BB-12 cells both during storage and during transit through the GI tract. Alginate on its own was not efficient in maintaining probiotics' viability. On the other hand, its combination with certain conventional (carrageenan) or novel (CNC) materials enhanced the protective properties of the occurring beads. The best results were provided when water was replaced by milk during the encapsulation process (AM). Interestingly, these materials, due to the dense structure of the beads produced, were effective not only in protecting BB-12 at low storage temperatures (4 °C and −18 °C) but also during the in vitro simulation of the GI tract. Consequently, emulsification with the use of the encapsulating blends proposed in this study may significantly maintain the viability of probiotic bacteria during both storage and simulated GI conditions in a simple and cost-effective manner. The proposed encapsulation systems can be studied for the enrichment of food products, as they are promising protective matrices for BB-12 cells. Thus, they will probably be able to provide viability enhancement of BB-12 cells during food manufacturing and storage, as well as during simulated GI conditions.
Table 2
Values shown are means ± standard deviations (n = 3). Values with different superscripts are significantly different. Small lettered superscripts are used to differentiate values between rows (different encapsulating agents), while capital lettered superscripts differentiate values between columns (different additives).
"Agricultural And Food Sciences",
"Engineering"
] |
Sound Events Localization and Detection Using Bio-Inspired Gammatone Filters and Temporal Convolutional Neural Networks
The auditory brain circuits are biologically constructed to recognize and localize sounds by encoding a combination of cues that help individuals interpret sounds. The development of computational methods inspired by human capacities has established opportunities for improving machine hearing. Recent studies based on deep learning show that using convolutional recurrent neural networks (CRNNs) is a promising approach for sound event detection and localization in spatial sound. Nevertheless, depending on the sound environment, the performance of these systems is still far from reaching perfect metrics. Therefore, this work intends to boost the performance of state-of-the-art (SOTA) systems by using bio-inspired gammatone auditory filters and intensity vectors (IVs) for the acoustic feature extraction stage, along with the implementation of a temporal convolutional network (TCN) block into a CRNN model, to capture long-term dependencies. Three data augmentation techniques are applied to increase the small number of samples in spatial audio datasets. The mentioned stages constitute our proposed Gammatone-based Sound Events Localization and Detection (G-SELD) system, which exceeded the SOTA results on four spatial audio datasets with different levels of acoustical complexity and with up to three sound sources overlapping in time.
surroundings [1]. The human auditory system processes sound arriving in our ears from sources distributed all over space. If we were to rely solely on our ears to recognize an unfamiliar environment, our auditory system would first recognize familiar sounds and then compare them with how those sounds were perceived in other familiar environments. This activity that seems so natural to us can be challenging for computers. Furthermore, considering that our natural listening is three-dimensional, why is it that most of the audio signals we usually listen to do not maintain the spatial information of the sound field? Based on this premise, the goal of spatial audio is to recreate the listener's perception in the real world, maintaining all the characteristics that allow our auditory system to process the content and direction of sound sources. In that sense, this area of machine hearing considers the use of spatial audio recordings, along with systems inspired by human hearing, to enhance the detection and localization of sounds. Several applications relate to machine hearing, such as intelligent meeting rooms [2], helping deaf people to know the sounds of their environment [3], [4], and acoustic monitoring of urban environments or wildlife [5], [6].
The sound events localization and detection (SELD) task implies multi-class sound events detection (SED) and sound source localization (SSL) of multiple directions of arrival (DOAs) with respect to the microphone. Regarding DOA estimation techniques, we found systems based on the time difference of arrival (TDOA) [7], the steered-response power (SRP) [8], the generalized side lobe canceller [9], and beamforming techniques such as compressive beamforming [10], and the minimum variance distortionless response (MVDR) beamforming [11]. These methods vary in algorithmic complexity, compatibility with microphone arrangements, and assumptions regarding the acoustic scenario. To overcome these complications and estimate the number of active sources directly from the input features, authors in [12] studied the use of deep neural networks (DNNs) for direction of arrival (DOA) estimation.
Recent studies have accomplished SELD with a multi-task perspective. In [13], the spectrogram is used as an intermediate representation of audio, which is processed by four convolutional layers and three fully-connected (FC) layers. The SELDnet system [14] also uses the spectrogram as input, but it extracts the phase and magnitude components as separate features. The SELDnet architecture comprises a convolutional neural network (CNN) with three convolutional blocks for feature extraction and dimensionality reduction. It also established the use of a recurrent neural network (RNN) based on gated recurrent units (GRUs) to learn temporal context information from the output of the convolutional blocks. Then, separate branches containing FC layers perform the classification and localization tasks. Based on SELDnet, an improved framework was presented as a baseline for Task 3 of the DCASE2021 Challenge [15], whose objective was the localization and detection of sound events in multichannel audio. This SELDnet-DCASE2021 version receives log-Mel spectrograms and intensity vectors (IVs) as intermediate audio representations. Instead of using separate branches for each task, it adopted the Activity-Coupled Cartesian Direction of Arrival (ACCDOA) representation [16] to unify both classification and localization losses.
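The ACCDOA idea can be sketched as follows: each class is assigned a three-dimensional Cartesian vector whose direction is the DOA and whose norm encodes activity, so a single regression output covers both tasks. The class count, the example annotation, and the activity threshold mentioned in the comments are illustrative assumptions.

```python
# Minimal sketch of building frame-wise ACCDOA targets: for each class, the
# target is a Cartesian DOA unit vector scaled by the class activity, so the
# vector norm encodes detection and its direction encodes localization.
import numpy as np

def accdoa_targets(annotations, n_frames, n_classes):
    """annotations: list of (frame, class_idx, azimuth_deg, elevation_deg)."""
    target = np.zeros((n_frames, 3 * n_classes), dtype=np.float32)
    for frame, cls, az_deg, el_deg in annotations:
        az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
        xyz = np.array([np.cos(el) * np.cos(az),      # unit vector on the sphere
                        np.cos(el) * np.sin(az),
                        np.sin(el)])
        target[frame, 3 * cls: 3 * cls + 3] = xyz     # norm 1 -> class active
    return target

# At inference, a class is typically declared active when the norm of its
# predicted 3-D vector exceeds a threshold (often around 0.5), and the DOA is
# read off the same vector.
y = accdoa_targets([(10, 2, 30.0, -10.0)], n_frames=50, n_classes=12)
print(np.linalg.norm(y[10, 6:9]))   # ~1.0 for the active class
```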
In contrast, concerned about an efficient implementation of these types of systems on embedded hardware, the SELD-TCN system [17] proposed to substitute the recurrent blocks of SELDnet with temporal convolutional network (TCN) blocks containing dilated convolutions that capture long-term dependencies of data. TCNs also avoid the sequential computing of the input by processing the whole sequence in parallel via convolutions. The SELD-TCN framework maintains the original SELDnet characteristics regarding the intermediate audio representations and the separate output branches.
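A minimal PyTorch sketch of such a TCN block built from dilated 1-D convolutions with residual and skip connections is shown below; the channel width, kernel size, and dilation rates are illustrative and do not reproduce the exact SELD-TCN configuration.

```python
# Sketch of a TCN stack with dilated 1-D convolutions in the spirit of
# SELD-TCN; all hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        pad = (kernel_size - 1) // 2 * dilation        # "same" padding
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.bn = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()
        self.skip = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):                              # x: (batch, channels, time)
        y = self.act(self.bn(self.conv(x)))
        return x + y, self.skip(y)                     # residual and skip paths

class TCN(nn.Module):
    def __init__(self, channels=128, kernel_size=3, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.blocks = nn.ModuleList(
            DilatedResidualBlock(channels, kernel_size, d) for d in dilations)

    def forward(self, x):
        skips = 0
        for block in self.blocks:
            x, s = block(x)
            skips = skips + s                          # aggregate skip connections
        return torch.relu(skips)

x = torch.randn(2, 128, 60)                            # (batch, features, time frames)
print(TCN()(x).shape)                                  # torch.Size([2, 128, 60])
```

The growing dilation rates let the receptive field cover long time spans while the whole sequence is still processed in parallel, which is the property motivating the replacement of recurrent blocks.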
A modified version of SELDnet was implemented in PyTorch as a baseline for Task 2 of the L3DAS21 Challenge [18], which also aims to achieve the SELD problem. The phase and magnitude components of the spectrogram were used as features. As in the SELDnet system, the phase is expected to contain information on location and the magnitude of detection and classification. Regarding the architecture, one additional convolutional block and one more recurrent block were added to augment the network's capacity, while the two branches' output structure of SELDnet was maintained. The ability to detect multiple sound sources of the same class that overlap in time was also implemented through an augmented output matrix.
Regarding intermediate representations of audio, the Mel auditory model has been used in Automatic Speech Recognition (ASR) [19] and SELD [15] tasks. However, it still presents limitations in the attempt to model the human ear. By contrast, gammatone filter impulse responses were obtained from measurements on the basilar membrane of small mammals. Moreover, applying a gammatone filter bank to the spectrogram has been shown to be more robust against ambient noise in acoustic event monitoring compared with Mel-scale filter bank representations [20], [21]. The gammatone filter bank has also shown good performance in automatic audio captioning systems [22] and active noise control systems [23]. For this reason, a gammatone filter bank is explored in this work to obtain a log-gammatone spectrogram that will be used as the intermediate audio representation, along with the IVs.
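One possible way to build such a log-gammatone spectrogram is sketched below: ERB-spaced center frequencies and the common 4th-order gammatone magnitude-response approximation define a filter-bank matrix that is applied to the power spectrogram, analogously to a Mel filter bank. The band count, frequency range, and normalization are illustrative assumptions rather than the exact front end used here.

```python
# Sketch of a gammatone filter bank applied to a power spectrogram, analogous
# to a Mel filter bank. Center frequencies are ERB-spaced and each band uses
# the common 4th-order gammatone magnitude-response approximation; the band
# count, frequency range, and normalization are illustrative assumptions.
import numpy as np

def erb(f):
    return 24.7 * (4.37 * f / 1000.0 + 1.0)             # Glasberg & Moore ERB width

def erb_space(f_low, f_high, n_bands):
    # ERB-rate scale E(f) = 21.4 * log10(4.37 f / 1000 + 1) and its inverse
    e = lambda f: 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)
    e_inv = lambda x: (10 ** (x / 21.4) - 1.0) * 1000.0 / 4.37
    return e_inv(np.linspace(e(f_low), e(f_high), n_bands))

def gammatone_fbank(n_fft, sr, n_bands=64, f_low=50.0):
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)             # (n_fft//2 + 1,)
    fc = erb_space(f_low, sr / 2.0, n_bands)
    fb = np.zeros((n_bands, freqs.size))
    for i, c in enumerate(fc):
        b = 1.019 * erb(c)                                # bandwidth of band i
        fb[i] = (1.0 + ((freqs - c) / b) ** 2) ** (-2.0)  # 4th-order magnitude response
        fb[i] /= fb[i].sum()                              # normalize each band
    return fb

# Usage: log-gammatone spectrogram from a power spectrogram S (freq_bins, frames)
sr, n_fft = 24000, 1024
S = np.abs(np.random.randn(n_fft // 2 + 1, 100)) ** 2     # placeholder spectrogram
log_gamma = np.log(gammatone_fbank(n_fft, sr) @ S + 1e-8)
print(log_gamma.shape)                                    # (64, 100)
```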
We also propose a novel deep learning architecture for the SELD task, which joins the independent improvements proposed by the state-of-the-art (SOTA) systems. First, we question the plain inclusion of new convolutional and recurrent blocks aiming to improve the performance of the SELD system.
Fig. 1. Flowchart of the G-SELD system. In the preprocessing stage, the four-channel spectrogram is processed into four log-gammatone spectrograms and three IVs; for the neural network architecture, the number of blocks is depicted.
Instead, we propose to include, in the middle of the CNN and RNN blocks,
a TCN block that captures long-term dependencies and, at the same time, continues the identification of core features by using dilated convolutions. Additionally, we adopt the single-branch ACCDOA representation and modify it to detect multiple sound sources of the same class overlapping in time. The mentioned stages constitute our proposed Gammatone-based Sound Events Localization and Detection system, which will be referred to as the G-SELD system.
Since creating labeled datasets of spatial audio for the SELD task is a demanding and maybe imprecise process, the datasets usually contain less than a thousand samples. That restriction hinders the generalized learning of supervised deep learning approaches that require as many data samples as possible to be trained. Therefore, three suitable methods of data augmentation for spatial audio are also explored in this work: frequency masking, channel swapping, and random magnitude.
This article is organized as follows: Section II explains each stage of the proposed methodology. In Section III, we present the results of our G-SELD system by evaluating it on different polyphony levels and under different sound scene conditions. We also present an ablation study for each stage of the G-SELD system. Finally, Section IV presents the conclusions of this work.
II. METHODOLOGY
A methodology overview of the proposed G-SELD system is shown in Fig. 1. First, each audio input channel is processed into a spectrogram, from which the log-gammatone spectrogram and the IVs are obtained. The metadata is preprocessed to support the detection of up to three simultaneous sound events of the same class. Later, in the data augmentation stage, two new feature samples are generated using three techniques: frequency masking, random magnitude, and channel swapping. Subsequently, the original and the synthetically augmented samples input the deep learning G-SELD architecture, formed by a single branch containing CNN, TCN, RNN and FC blocks. The predicted classes and locations are obtained by processing the network output vector. Finally, we adopt four metrics used in analogous works to evaluate the system's performance. Further details of each stage will be explained in the following sections.
A. Datasets
The spatial audio datasets used in this work are provided in the Ambisonics format, which relies on the spatial decomposition of the sound field in the orthogonal basis of spherical harmonic functions [24]. The First Order Ambisonics (FOA) B-format consists of four signals that encode the overall sound in terms of pressure and particle velocity components. This format contains a signal W that represents an omnidirectional pattern and three orthogonal signals X, Y, Z aligned with the Cartesian coordinate axes.
The FOA spatial audio datasets used to evaluate the G-SELD system were selected due to their inherent diverse acoustic characteristics. These include anechoic and reverberant audio scenes, synthetic and measured impulse responses (IRs), background noise, and interference sound. Our objective is to evaluate the performance of the G-SELD system across various levels of difficulty that correspond to the level of effort required by humans to detect and localize sounds. For instance, we expect that the performance of G-SELD will be better in an environment without background noise than in a noisy scenario. All datasets contain FOA B-format audio files in which the sound events are spatially positioned, accompanied by a set of accurate metadata that includes time-boundaries, DOA, and sound type. Moreover, all datasets contain at least one subset in which up to three sound events may overlap in time. In order to provide a better understanding of why each dataset plays a significant role in the robust evaluation of the G-SELD system, we will briefly describe the acoustical and technical characteristics of each dataset.
1) ANSYN:
The TUT Sound Events 2018-Ambisonic, Anechoic and Synthetic Impulse Response (ANSYN) dataset contains static point sources, each associated with a spatial coordinate described in terms of azimuth, elevation, and distance. The anechoic environment was synthesized using artificial IRs, and the individual sounds were extracted from Task 2 of the DCASE 2016 Challenge [25], whose objective was the detection of sound events in synthetic audio. The sounds were recorded in residential areas and home scenes, from which these 11 classes of sounds were selected: speech, laughter, cough, clear throat, door slam, page-turning, phone ringing, keyboard sounds, keys dropping, door knock, and drawing sound. This dataset is divided into three subsets: OV1, which consists of audio samples with no sound events overlapping in time, and the OV2 and OV3 subsets, in which up to two and three sound events overlapping in time can be found, respectively. Each subset contains three cross-validation splits, all with 240 development samples and 60 evaluation samples, summing to a total of 900 audio files sampled at 44.1 kHz for 30 s. The whole dataset containing the OV1, OV2, and OV3 subsets consists of 2700 audio samples with their corresponding metadata files.
2) REAL: The TUT Sound Events 2018-Ambisonic, Reverberant and Real-life Impulse Response (REAL) dataset contains static point sources positioned in a reverberant three-dimensional scene. The IRs were collected from a university corridor with classrooms. The isolated real-life sound events were extracted from the UrbanSound8K dataset [26], from which eight classes of urban environment sounds are used: car horn, dog barking, drilling, engine idling, gunshot, jackhammer, siren, and street music. Air conditioner and children playing sounds are used as background noises. The subsets' distribution is the same as in the ANSYN dataset, as are the sampling frequency of 44.1 kHz and the duration of 30 s.
3) L3DAS21: We use the data related to the 3D SELD task of the L3DAS21 dataset [18], whose IRs were recorded with two FOA microphones positioned in a small reverberant office environment equipped with typical office furniture. In this project, only the samples related to the microphone placed exactly in the center of the room are used. Fourteen clean types of sounds typical of an office environment were extracted from the Librispeech [27] and FSD50K [28] datasets (computer keyboard, drawer open/close, cupboard open/close, finger-snapping, keys jangling, knock, laughter, scissors, telephone, writing, chink and clink, printer, female speech, and male speech). Four background noises (alarm, crackle, mechanical fan, and microwave oven) were selected from FSD50K. The dataset contains four training splits, summing to 600 audio samples, and one evaluation split with 150 audio samples. The duration of each audio sample is 60 s, and the sampling frequency is 32 kHz. Each subset contains the same number of files associated with one, two, and three sound events overlapping in time.
4) DCASE2021:
The DCASE2021 dataset [15] was provided for Task 3 of the DCASE2021 Challenge. Besides containing time-overlapping sound events, this dataset includes directional interference events, moving sound sources, and an additional layer of background noise in all samples. The IRs were collected in 13 rooms with different reverberant conditions, in which circular and linear trajectories were recorded, varying the sources' heights, distances, and elevations. The ambient noise of each room was recorded for 30 min, and later, 1-min-duration segments were added to every spatial audio file with varying signal-to-noise ratios (SNR) ranging from 30 dB to 6 dB. Twelve classes of isolated sound events (alarm, crying baby, crash, barking dog, female scream, female speech, footsteps, knocking on the door, male scream, male speech, phone, piano) were extracted from the NIGENS general sound events database [29], from which two additional classes (running engine and burning fire) were used as interference events, outside the target classes. The available development dataset consists of six folds: four for training, one for validation, and one for testing. Each split contains 100 one-minute-long audio samples with a sampling rate of 24 kHz.
B. Preprocessing
Each channel of the audio wave files is scaled from a 16-bit pulse-code modulation (PCM) to a float vector with values ranging from -1.0 to 1.0. Then, a spectrogram is computed for each Ambisonic B-format channel with a 40 ms Hanning window, 20 ms hop length, and a 1024-point fast-Fourier transform (FFT) with 512 frequency bins. Two intermediate representations are extracted from the multichannel spectrogram: four log-gammatone spectrograms that yield frequency information at different time instances, and three acoustic IVs that express net acoustic energy flux (Fig. 2). The metadata files contain information about every sound source in the recording, such as onset and offset times in seconds, class, and localization, which can be expressed in spherical or Cartesian coordinates. The preprocessing stage of SOTA systems such as SELDnet and SELD-TCN is restricted when more than one sound event of the same class is overlapping in time. In contrast, inspired by the L3DAS21 framework, in the proposed G-SELD system, we overcome the location overwriting for the second or third sound event of the same class, bringing the possibility of, for example, localizing up to three people simultaneously speaking. We process one annotation every 100 ms, which also allows us to track moving sound sources.
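A minimal NumPy/SciPy sketch of this feature-extraction step is shown below. The gammatone filterbank matrix gt_fb (mapping linear STFT bins to 64 gammatone bands), the intensity-vector normalization, and the channel ordering are placeholders for illustration rather than the exact implementation used in G-SELD.

```python
# Minimal sketch of the feature extraction described above (assumption: a precomputed
# gammatone weight matrix `gt_fb` of shape (n_fft // 2 + 1, 64); the intensity-vector
# normalization is illustrative).
import numpy as np
from scipy.signal import stft

def extract_features(foa, sr, gt_fb, eps=1e-8):
    """foa: (4, n_samples) float B-format audio (W, X, Y, Z) scaled to [-1, 1]."""
    win = int(0.04 * sr)                          # 40 ms Hann window
    hop = int(0.02 * sr)                          # 20 ms hop length
    n_fft = max(1024, 2 ** int(np.ceil(np.log2(win))))
    _, _, spec = stft(foa, fs=sr, window="hann", nperseg=win,
                      noverlap=win - hop, nfft=n_fft)        # (4, n_bins, n_frames)
    # Four log-gammatone spectrograms: project the power spectra onto the filter bank
    power = np.abs(spec) ** 2
    log_gt = np.log(np.einsum("bf,cbt->cft", gt_fb, power) + eps)   # (4, 64, T)
    # Three acoustic intensity vectors (net acoustic energy flux) from the FOA channels
    W, XYZ = spec[0], spec[1:]
    iv = np.real(np.conj(W)[None] * XYZ)                            # (3, n_bins, T)
    iv /= (np.abs(W)[None] ** 2 + np.sum(np.abs(XYZ) ** 2, axis=0, keepdims=True) / 3 + eps)
    iv_gt = np.einsum("bf,cbt->cft", gt_fb, iv)                     # (3, 64, T)
    feat = np.concatenate([log_gt, iv_gt], axis=0)                  # (7, 64, T)
    return np.transpose(feat, (0, 2, 1))                            # (7, T, 64)
```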
C. Data Augmentation
Considering the reduced number of samples in spatial audio datasets, we use three data augmentation techniques on the spectral-domain features: frequency masking, FOA channel swapping, and random magnitude.
Frequency masking was proposed in [30] to be applied to one-channel Mel spectrograms for ASR. In this project, we adapt it to mask a maximum of F consecutive frequency bins of the log-gammatone spectrograms and the IVs every 100 ms, maintaining the same instantaneous mask for the seven channels of the feature map. We compute two augmented outputs: one with F = 16 and just one mask per time frame, and a second with two masks per time frame and F = 8, aiming to position two mask blocks in different frequency bins of the same time frame. The initial frequency bin, as well as the number of masked bins, are randomly selected. Fig. 3 shows an example of frequency masking with one mask per time frame. Note the different sizes of the masks in different time frames. In this technique, the annotations do not need to be modified.
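As an illustration, a hedged sketch of this masking scheme is given below; the tensor layout (channels × time × bands) and the grouping of frames into 100 ms blocks are assumptions about implementation details not fully specified above.

```python
# Hedged sketch of the frequency-masking variant described above.
import numpy as np

def freq_mask(feat, n_masks=1, max_width=16, frames_per_block=5, rng=None):
    """feat: (7, T, 64) log-gammatone spectrograms + intensity vectors; returns a masked copy."""
    rng = rng or np.random.default_rng()
    out = feat.copy()
    _, n_frames, n_bands = out.shape
    for start in range(0, n_frames, frames_per_block):      # one mask choice per ~100 ms
        block = slice(start, min(start + frames_per_block, n_frames))
        for _ in range(n_masks):
            width = int(rng.integers(1, max_width + 1))     # number of masked bins
            f0 = int(rng.integers(0, n_bands - width + 1))  # initial frequency bin
            out[:, block, f0:f0 + width] = 0.0              # same mask for all 7 channels
    return out

# Two augmented copies, as described in the text:
# aug1 = freq_mask(features, n_masks=1, max_width=16)
# aug2 = freq_mask(features, n_masks=2, max_width=8)
```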
The FOA channels swapping strategy was initially proposed in [31] for increasing the number of DOAs of the sound events contained in the dataset. As proposed in [32], the input feature channels that correspond to the X, Y, Z FOA signals can be randomly swapped, and their signs randomly reversed, in order to change the direction of the sound events. Due to its omnidirectional nature, the W channel is not modified with this technique. Considering the correlation between the log-gammatone spectrograms and the IVs, we apply the same transformation to both. Two modified feature samples are computed for each original sample, and the original annotations are transformed accordingly.
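A sketch of one such spatial transformation is shown below. The channel ordering (spectrogram channels 0-3 as W, X, Y, Z; intensity-vector channels 4-6 as X, Y, Z) and the label layout are assumptions; note that a sign flip of a raw signal leaves its log-magnitude spectrogram unchanged, so in this reading the sign reversals only affect the intensity vectors and the annotations.

```python
# Hedged sketch of FOA channel swapping applied jointly to the features and the labels.
import numpy as np

def swap_foa(feat, doa_xyz, rng=None):
    """feat: (7, T, 64); doa_xyz: (..., 3) Cartesian DOA annotations."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(3)                    # reorder the X, Y, Z axes
    signs = rng.choice([-1.0, 1.0], size=3)      # randomly reverse individual axes
    out = feat.copy()
    out[1:4] = feat[1:4][perm]                   # X, Y, Z spectrograms: permutation only
    out[4:7] = signs[:, None, None] * feat[4:7][perm]   # intensity vectors: permute + flip
    doa_aug = signs * doa_xyz[..., perm]         # transform the annotations accordingly
    return out, doa_aug                          # the W channel (index 0) is untouched
```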
The third data augmentation technique is inspired by the random magnitude technique proposed in [32], which modifies the overall volume of an audio sample by adding a random scalar value to the log-Mel spectrograms. We modify the magnitude of the log-gammatone spectrogram by adding random variables sampled from a normal distribution with a mean equal to 0 and a standard deviation of 0.02. For this technique, the intensity vectors and the annotations are not modified.
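A small sketch follows; whether a single scalar offset per sample or element-wise noise is intended is not fully specified above, so the scalar (volume-like) reading is shown and flagged as an assumption.

```python
# Hedged sketch of the random-magnitude augmentation (assumption: one scalar offset per sample).
import numpy as np

def random_magnitude(feat, std=0.02, rng=None):
    """feat: (7, T, 64); only the four log-gammatone channels are perturbed."""
    rng = rng or np.random.default_rng()
    out = feat.copy()
    out[:4] += rng.normal(0.0, std)      # intensity vectors and annotations stay unchanged
    return out
```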
D. G-SELD Model
As depicted in Fig. 4, the G-SELD model receives a feature map with dimension 7 × T × 64, which passes through three CNN blocks responsible for identifying translation-invariant patterns and reducing the dimension of the input data while retaining the most important features. The CNNs are expected to learn inter-channel features from the four gammatone spectrogram channels and the three channels of IVs. Each convolutional block contains 2D convolutions with 64 filters, whose kernel size is 3 × 3 and whose stride is 1 × 1. The 2D max-pooling sizes for the three convolutional blocks are 5 × 4, 1 × 4, and 1 × 2, respectively, and the dropout rate is 0.05. The first and second axes of the CNN blocks' output are permuted so that the time dimension T comes first, since the TCN block processes a sequence.
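A minimal Keras sketch of this convolutional front end is shown below; the activation functions, the use of batch normalization, and the final reshape are assumptions where the text above does not specify them.

```python
# Hedged sketch of the convolutional front end (TensorFlow/Keras).
import tensorflow as tf
from tensorflow.keras import layers

def cnn_front_end(t_frames=300, n_bands=64, n_feat_ch=7):
    inp = layers.Input(shape=(n_feat_ch, t_frames, n_bands))     # 7 x T x 64
    x = layers.Permute((2, 3, 1))(inp)                           # channels-last for Conv2D
    for pool in ((5, 4), (1, 4), (1, 2)):                        # three CNN blocks
        x = layers.Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding="same")(x)
        x = layers.BatchNormalization()(x)                       # assumed, not stated above
        x = layers.Activation("relu")(x)                         # assumed, not stated above
        x = layers.MaxPooling2D(pool_size=pool)(x)
        x = layers.Dropout(0.05)(x)
    x = layers.Reshape((t_frames // 5, -1))(x)                   # time axis first for the TCN
    return tf.keras.Model(inp, x)

# model = cnn_front_end(); model.summary()
```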
Originally proposed in [33] and later adapted for audio signals in [17], the use of dilated convolutions embedded in a residual block flexibly expands the receptive field. As shown in Fig. 5, the output of the stacked layers is added to the input mapping using a shortcut connection, which then passes to the next residual block. By using this residual learning framework, the layers stacked in the residual block are optimized to learn the residual mapping instead of an individual mapping after each layer [34], [35]. Each residual block has a dilated convolutional layer with 256 filters of kernel size equal to 3, followed by a batch normalization layer and the sigmoid and tanh activation functions. Regarding the purpose of the activation functions, the sigmoid function controls the flow of information through the input mapping, behaving like a gate, with its output ranging between 0 and 1. In contrast, the tanh function regulates the network values, preventing excessively large or small values that could hamper the network's learning process; it ensures that the values range between −1 and 1. The outputs of the two activation functions pass through an element-wise multiplication, and afterwards a dropout rate of 0.5 is applied. Lastly, to ensure the same dimension before adding the residual connection, a 1D convolutional layer is used to match the shapes of the input vector and the skip-connection vector.
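A hedged Keras sketch of one such gated residual block follows; the padding mode and the exact placement of the 1 × 1 convolution are assumptions.

```python
# Hedged sketch of one gated residual block of the TCN (TensorFlow/Keras).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, dilation_rate, n_filters=256, kernel_size=3, dropout=0.5):
    y = layers.Conv1D(n_filters, kernel_size, dilation_rate=dilation_rate, padding="same")(x)
    y = layers.BatchNormalization()(y)
    gate = layers.Activation("sigmoid")(y)     # gate: controls the information flow (0..1)
    value = layers.Activation("tanh")(y)       # keeps activations in [-1, 1]
    y = layers.Multiply()([gate, value])       # element-wise gated activation
    y = layers.Dropout(dropout)(y)
    y = layers.Conv1D(x.shape[-1], 1, padding="same")(y)   # 1x1 conv to match the input width
    return layers.Add()([x, y])                # shortcut connection to the next block

# Stacking, as described in the next paragraph, would use dilation rates 1, 2, 4, and 8.
```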
As shown in the TCN block of Fig. 4, we use four residual blocks with dilation factors in the range [2 0 , 2 1 , 2 2 , 2 3 ]. Then, the TCN output passes through two recurrent blocks with 128 GRUs, as used in SELDnet. Finally, a fully-connected layer reduces the dimension to a suitable prediction vector.
E. Prediction
The ACCDOA algorithm unifies the SED and SSL losses into a single weighted regression loss, avoiding the use of separate branches of dense layers for each subtask [16]. We adapted this algorithm to deal with three time-coincident sound events of the same class with different DOAs. The prediction vector contains up to three estimated locations in Cartesian coordinates for each possible class. However, it does not directly contain the probability of occurrence for each class. Therefore, class prediction is obtained from the vector norm (magnitude) of each location estimator, √(x² + y² + z²), and every magnitude greater than 0.5 is considered an active sound event of the corresponding class. Summing up, the predictor direction indicates the DOA, and its length constitutes the probability of occurrence of the corresponding sound class. Finally, the estimated DOAs are transformed from Cartesian into spherical coordinates to be consistent with the metrics computation.
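A minimal decoding sketch of this multi-track, class-wise output follows; the exact tensor layout (frames × classes × tracks × coordinates) is an assumption.

```python
# Hedged sketch of decoding the multi-track ACCDOA-style predictions.
import numpy as np

def decode_accdoa(pred, threshold=0.5):
    """pred: (n_frames, n_classes, 3, 3) Cartesian vectors, one per class and track."""
    norms = np.linalg.norm(pred, axis=-1)              # vector magnitude = class activity
    active = norms > threshold                         # detection decision per frame/class/track
    x, y, z = pred[..., 0], pred[..., 1], pred[..., 2]
    azimuth = np.degrees(np.arctan2(y, x))             # spherical DOA for the metrics
    elevation = np.degrees(np.arcsin(np.clip(z / (norms + 1e-9), -1.0, 1.0)))
    return active, azimuth, elevation
```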
F. Experimental Setup
We assessed three aspects of the G-SELD system, which include: 1) its ability to perform at varying levels of polyphony, 2) its ability to perform in different sound environments with varying levels of complexity, and 3) an ablation study to evaluate the individual contribution of each proposed improvement.
The evaluation of our G-SELD system followed the same experimental setup in all tested cases. The Adam optimizer [36] was used with an initial learning rate of 0.001, the batch size was set to 64, and the maximum number of training epochs was 100. The G-SELD system was developed using the TensorFlow framework and a computer with a 9th generation Intel Core i7 processor equipped with an NVIDIA Titan V GPU.
G. Evaluation Metrics
We adopted the metrics proposed in [37], which were used in the DCASE2021 Challenge [15]. This framework formulates location-sensitive detection metrics that evaluate sound event detection with a specific spatial error allowance, and class-sensitive localization metrics that measure the spatial error between sound events with the same classification. The spatial error is calculated as the angular distance between reference and predicted DOAs, for which a threshold of 20° is allowed. The Error Rate (ER) and the F score (F1) are location-sensitive detection metrics, whereas the Localization Recall (LR) and the Localization Error (LE) are class-sensitive localization metrics. A combination of the four metrics, the SELD score, is used as the early stopping parameter. The ideal metrics are ER = 0, F score = 100%, LR = 100%, LE = 0°, and SELD score = 0. Finally, the early stopping process monitors the SELD score with a patience of 30 epochs. The values presented in Tables I to V represent the average of each metric obtained from the cross-validation scheme.
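For reference, the sketch below shows how this aggregate is commonly computed in the DCASE SELD evaluation framework; the equal weighting of the four terms and the fractional scaling of F1 and LR are assumptions to be checked against [37].

```python
# Hedged sketch of the aggregate SELD score (assumption: the standard DCASE combination).
def seld_score(er, f1, le_deg, lr):
    """er, f1, lr as fractions in [0, 1]; le_deg in degrees."""
    return (er + (1.0 - f1) + le_deg / 180.0 + (1.0 - lr)) / 4.0

# The ideal case gives zero: seld_score(er=0.0, f1=1.0, le_deg=0.0, lr=1.0) == 0.0
```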
III. RESULTS AND DISCUSSION
In this section, we provide a detailed account of the results obtained from the polyphony evaluation and the assessment of the G-SELD system's performance in sound environments with increasing complexity. Additionally, we present the findings of an ablation study conducted on the proposed improvements of the G-SELD system.
A. Polyphony Evaluation
The performance of the G-SELD system was evaluated under different polyphonic levels in audio scenes, starting with a scene containing sources that do not overlap in time (OV1), followed by scenes with higher polyphonic levels (OV2, OV3). This experiment was performed on two datasets that simulate a free field condition and a reverberant environment.
1) Free Field Condition: For this evaluation, we used the ANSYN dataset, which represents an ideal free field condition with no reflections. We compare SELDnet [14], trained for a maximum of 1000 epochs, against our G-SELD system trained for 100 epochs, saving the last best model. Table I shows the results of this experiment, in which, as expected, the best performance of the G-SELD system was achieved with no overlapping sound events, followed by the performances related to two and three overlapping sounds of the same dataset. The SELD score gives us a general idea of all metrics, simplifying the overall performance comparison. Note that the arrows show whether the metric improves by increasing or decreasing its value. We show that the G-SELD polyphony evaluation metrics for a free field condition dataset surpass the equivalent SELDnet metrics.
This experiment can be partially compared with [38], in which the human listening ability to identify and localize the total number of simultaneous sound sources spatially distributed was studied. The estimation depends on the audio signal type (speech or tone stimuli) and the overlapped sounds. The percentages achieved by the listeners were in the range of 68 − 93% for a single sound source, 42 − 84% for two sounds overlapping in time, and 34 − 70% for three sounds overlapping in time. The metric that could be considered analogous is the LR, as it evaluates the number of sound events that were correctly located. Comparing the results obtained with human listeners with the performance of the G-SELD system, we noted that our machine hearing system surpasses by 6.6%, 11.7%, and 20.5% the best localization performances of the human auditory system in the aforementioned scenarios.
2) Reverberant Environment: The G-SELD system was also evaluated on the REAL dataset, which emulates a reverberant environment. Table II shows the results for the evaluation split using the SELDnet and G-SELD systems. We identified the same behavior observed for the G-SELD system on the ANSYN dataset, where the metrics worsen as the number of overlapping sound events increases. We also evinced that the SELD score and all individual metrics of G-SELD surpass those of SELDnet on this dataset.
B. Sound Environment Evaluation
In this section, we present the results for the G-SELD system evaluated on four spatial audio datasets which represent different sound scene conditions. The selected datasets, previously presented in Section II-A, are the ANSYN, REAL, L3DAS21, and DCASE2021 datasets. The system does not need to know beforehand the number of sound events in each sample, and we no longer focus on the polyphony level but rather on the sound environmental conditions. However, polyphony is limited to three sound sources, which is the maximum number of target sound sources present in all the considered datasets. We applied the k-fold cross-validation technique to the training samples of each dataset, whereas the testing splits were always kept fixed. Note that training, testing, and validation are always made within the same dataset.
The results obtained on each test set are summarized in Table III. We compute the mean of the metrics obtained from the cross-validation models of each dataset (G-SELD mean values). The metrics of the best model over the cross-validation process are also included in Table III to exhibit the best performance of G-SELD for each dataset (G-SELD best model). In the following sections we analyze our results, compare them with the reported SOTA approaches used as a baseline for each dataset, and analyze the learning curves.
1) Free Field Condition:
We compare the metrics of the G-SELD system with those reported in [17] for the SELDnet and SELD-TCN approaches, evaluated on the same dataset. As shown in Table III, the SELD score, which takes into account the joint performance of all four metrics, was surpassed by the G-SELD system. Considering the G-SELD mean values for each metric, the LR and LE metrics associated with the class-sensitive localization were exceeded by 13.67% and 9.22 points, respectively, compared with SELDnet. They surpassed the performance of the SELD-TCN system by 8.07% and 7.72%, respectively. The LR and LE metrics of our best model surpassed the SELDnet approach by 13.70% and 9.30 points and the SELD-TCN approach by 8.10% and 7.80 points. However, the location-sensitive detection metrics (F score and ER) did not surpass the values achieved by SELD-TCN, by 2% for the F score and 0.03% for the ER. Nevertheless, the improvement obtained for the SSL task allows our system to maintain the best performance according to the joint SELD score metric.
The learning curves, shown in Fig. 6, present the metrics' evolution on the ANSYN dataset's validation split. The solid lines represent the average value calculated from the metrics of each cross-validation model for each epoch. The colored shadow that wraps around each curve represents two standard deviations below and above the mean, giving us an idea of the values' variability across the cross-validation combinations. These learning curves show that the model fitted the data since the validation metrics reached almost optimal values. The standard deviation reaffirms that the performance of our models is consistent over the cross-validation splits. Additionally, it is possible to evince that in the validation subset, the model reached a better performance on the localization task, as exhibited for the test set results in Table III.
2) Reverberant Environment: According to the evaluation results for the REAL dataset, presented in Table III, we can corroborate that modeling this dataset is more challenging than modeling the ANSYN dataset, due to the nature of the acoustic scene, in which there are reflections produced by the use of real IRs captured in a reverberant space. However, our results surpassed all the metrics obtained with the SELDnet and SELD-TCN systems. The LR was the metric with the most significant improvement, exceeding the SELDnet LR by 30.58% for the G-SELD mean value and by 31.60% for the G-SELD best model. The SELD score also corroborates the overall best performance of our model compared with SOTA systems.
The learning curves are shown in Fig. 7, in which the validation curves suggest that the characteristics learned from the training data were not enough to perfectly generalize our model to unseen data. The learning curves' evolution does not show overfitting, and the reached values are comparable with the metrics of the testing split. Additionally, the colored shadows that represent two standard deviations show that the cross-validation models result in validation metrics close to the mean, indicating a reduced variability across models.
A plausible explanation for the drop in performance of the G-SELD system on REAL dataset, compared with ANSYN dataset, is that a substantial multi-path interference caused by room reverberation can significantly impact localization [39]. As concluded by [40], reverberation tends to smear the periodic components across time, and thus some time-frequency (T-F) samples in the reverberation tail are incorrectly assigned to the detected sources. However, our results are still competitive, considering the complexity of the scenario.
3) Reverberant Environment With Background Noise: Next in line, we use the L3DAS21 dataset, whose IRs contain reflections from room boundary surfaces and office furniture. Moreover, this dataset includes constant background noise. The L3DAS21 Challenge baseline system [18] computed two metrics, F score and LR, which can be compared with ours (Table III). As the baseline also reported a precision of P = 52.00%, we calculate and compare our average precision, P = 62.26%, demonstrating that all the comparable metrics were surpassed by more than 10% using the G-SELD system. As expected, the obtained metrics are lower than those achieved on the ANSYN and REAL datasets, since L3DAS21 is a more challenging scenario that combines reverberation with background noises. Authors in [41] demonstrated that early reflections produce phase misalignment that greatly decreases the ability to separate signals from noise. These facts help us understand the reasons for the decrease in the performance of our system in this environment. The mixed presence of reflections, background noise, and a 44% increment in the number of sound classes to be identified reduced the performance of our system on the SELD task.
The learning curves of the G-SELD system applied to the L3DAS21 dataset are presented in Fig. 8. All curves reached almost flat slopes before completing 100 epochs, which means that the models could extract and learn features from the original and slightly modified data within the allotted number of training epochs.
We also note the presence of some peaks on the learning curves, which could be caused by the mini-batch gradient descent method used in the Adam optimization. In other words, as training data is shuffled, the mini-batches may contain a more significant amount of unusual samples, causing a slight decrease in the metrics in a specific epoch. The colored shadows representing two standard deviations below and above the mean show that the cross-validation models produce slightly more variable validation metrics than those obtained for the ANSYN and REAL datasets. This can be explained by the increased number of sound classes contained in a smaller set of audio samples of the L3DAS21 dataset.
4) Reverberant Environment With Moving Sound Sources and Directional Interferences: DCASE2021 is the most challenging dataset in which our G-SELD system was evaluated, as it includes all the complexities presented in the ANSYN, REAL, and L3DAS21 datasets. Moreover, different challenging conditions were included to simulate difficult real-life situations. Moving sources were incorporated from about 500 sound event samples of 12 types, and an additional layer of directional interferences was selected from 400 sound events. The network is expected to learn to ignore interferences; if not, they will be considered false positives.
We compare our results with the metrics published for the baseline system of the DCASE2021 Challenge [15]. As shown in Table III, all metrics were surpassed. The F score and LR were exceeded by 12.42% and 15.32%, respectively, by the G-SELD mean values, and by 13.20% and 15.40% by the G-SELD best model. The ER was improved by 7% by both the mean value and the best model, and the LE was surpassed by 1.27 and 2 points by the mean value and the best model, respectively. Our system reached an overall improvement of 10% according to the SELD score. The LE shows that the inclusion of moving sources makes the DOA estimation more complex, such that the localization of several detected samples of sound events does not satisfy the threshold to be considered a correct localization prediction. However, the improvements are promising, considering that the G-SELD network architecture deals with the SELD problem without dividing it into specific branches for the localization and detection subtasks, maintaining conceptual simplicity in the implemented modifications.
The cross-validation models' variability, represented by the standard deviation shown in the learning curves of Fig. 9, increased compared with the previous datasets. However, we consider it tolerable, since this dataset includes a wider variety of sounds and DOAs in a few audio samples. We identified that the LR was the metric least affected by the increasing difficulty of the datasets. This demonstrates that the G-SELD model can extract a significant amount of information from the feature vectors, which leads to the detection of a significant quantity of samples that contain sound events, even in challenging scenarios.
The difficulty related to directional interference increases when the target sound is similar to the sound that should be considered as an interference (inter-class similarity problem). We identified the mentioned problem in the DCASE2021 database, in which engines and fire sounds are used as interferences. Then, considering that a characteristic sound related to fire is a fire alarm, the system learned from many samples that an alarm-like sound should be considered interference. Therefore, in case the system detects a sound with comparable characteristics, it will wrongly disregard this sound as an interference.
C. Ablation Study
We conducted an ablation study to analyze the individual contributions of the improvements proposed in the Gammatone-based SELD (G-SELD) system. However, we highlight that the G-SELD system as a whole encompasses all the proposed improvements, whose results were presented in Sections III-A and III-B. The k-fold cross-validation technique was also used to train different split combinations of each dataset. The results in this section represent the mean value of the metrics obtained from the test split of the cross-validation scheme.
1) Gammatone Vs. Mel Filter Banks: The SELD-DCASE2021 architecture proposed in [15] was used to evaluate our hypothesis that using a gammatone filter bank instead of a Mel filter bank yields a better performance on the SELD task. For this experiment, we changed the filter bank while keeping fixed all other parameters related to the preprocessing stage. The results presented in Table IV were obtained for the test fold of the ANSYN dataset. This dataset was selected due to the high metrics achieved by baseline systems such as SELDnet and SELD-TCN. We noted that as metrics reach near-perfect values, it becomes more difficult to obtain improvements. We therefore sought to prove that just changing the filter bank applied to the spectrogram in the preprocessing stage results in a performance improvement on a dataset that has already reached near-perfect metrics.
2) Inclusion of a TCN Block: As previously explained, the G-SELD architecture contains four types of blocks: CNN, TCN, RNN, and FC. In order to visualize each group of blocks' contribution to the SED task, we apply the t-SNE visualization technique, which reduces a high-dimensional feature vector into a two- or three-dimensional map [42]. In this experiment, we restricted our data to samples that contain just one sound event at a time, to simplify the clusters' visualization. The ANSYN or REAL datasets could be used for this experiment, since they provide a split of data containing audio with sound events happening one at a time. As in the previous experiment, we selected the ANSYN dataset because it is more challenging to get improvements on a database that has reached near-perfect metrics. Fig. 10 shows the t-SNE representations of the output vectors taken after each group of blocks in the G-SELD architecture, with a perplexity value of 50. It is possible to recognize a clustering process that begins with the CNN blocks and finishes with the FC layers. However, despite being close to each other, the class-coincident samples of the CNN output are better clustered after passing through the TCN block. This clustering evidences how valuable the use of the TCN block is in the G-SELD architecture. Then, the RNN and FC layers, as final stages of the network, are used for learning temporal dependencies of the data and reducing its dimensionality, respectively, which also contributes to the clustering evolution through the model.
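A hedged sketch of how such intermediate-layer embeddings can be obtained follows; the layer names are placeholders rather than the actual identifiers used in the G-SELD code.

```python
# Hedged sketch of the per-block t-SNE inspection (layer names are placeholders).
import tensorflow as tf
from sklearn.manifold import TSNE

def tsne_of_block(model, layer_name, features, perplexity=50, seed=0):
    """features: (n_samples, 7, T, 64); returns a (n_samples, 2) embedding."""
    tap = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    acts = tap.predict(features)
    acts = acts.reshape(len(acts), -1)         # flatten each sample's activations
    return TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=seed).fit_transform(acts)

# e.g. one scatter plot per block group (CNN, TCN, RNN, FC), colored by the event class.
```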
3) Data Augmentation: We also experimented with the data augmentation stage, aiming to demonstrate that our G-SELD system improves the SELD metrics even without it. The best-ranked results in Task 3 of the DCASE2021 challenge [15] showed that using data augmentation techniques applied to spectrograms results in a performance improvement on the DCASE2021 dataset [43], [44]. Therefore, we decided to conduct our experiment on a dataset that has not yet been used for this comparison. Concretely, we use the L3DAS21 dataset and the G-SELD architecture to explore the impact of data augmentation on the overall improvement of our model. The neural network was first trained without data augmentation and then using the three data augmentation techniques detailed in Section II-C. The results for the test fold are shown in Table V. It is possible to compare two metrics of our results with those published for the L3DAS21 Challenge baseline system: the F score = 45.0 and the LR = 40.0 were improved by the G-SELD system by 7.7% and 16.5%, respectively, without data augmentation, and by 13.75% and 19.60% with the use of data augmentation. In conclusion, the G-SELD system improves the metrics of the SELD task even without the data augmentation stage.
Based on the experiments presented in the last sections, we show that each proposed improvement in the G-SELD system is a valuable addition to the whole performance of the system.
IV. CONCLUSION
In this work, we used a deep learning approach to develop a system for sound event detection and localization in spatial audio.
A combination of acoustic features inspired by the human auditory system and IVs containing phase information was implemented to provide appropriate cues for estimating the location in time and the direction of arrival of a sound event. It was demonstrated that gammatone filters are a viable alternative to modify the linear frequency resolution of the spectrogram, since they model the tonotopic frequency distribution produced in the cochlea.
Based on a deep learning model that includes CNN and RNN layers, the architecture of our model is improved by incorporating a TCN block that is capable of learning core features in the structure of sequential data, due to its ability to capture long-term dependencies. This modification generates a deeper feature extraction, producing a more significant number of trainable parameters.
In summary, the G-SELD system was evaluated on four databases that provide different ambient conditions, from a controlled environment without reflections to various reverberant scenes. The G-SELD system maintains a good performance for polyphony up to level three in anechoic and reverberant environments. The performance decays when background noises and directional interferences are included in addition to the target classes because the system must learn to overlook those specific types of sounds. However, our results surpassed the ones obtained using the baseline systems proposed along with each dataset, maintaining a conceptual simplicity of the network architecture. | 9,595.6 | 2023-01-01T00:00:00.000 | [
"Computer Science"
] |
Syntheses, Structures and Properties of 3d-4f Heterometallic Coordination Polymers Based on a Tetradentate Metalloligand and Lanthanoid Ions
Based on the tetradentate metalloligand L ([Cu(2,4-pydca)2], 2,4-pydca = pyridine-2,4-dicarboxylate) and lanthanides (Sm, Dy), two 3d-4f heterometallic coordination polymers, namely {[Sm2(DMSO)4(CH3OH)2][L]3·7DMSO·2CH3OH}n 1 and {[Dy2(DMSO)3(CH3OH)][L(DMSO)]·4DMSO·CH3OH}n 2 (DMSO = dimethyl sulfoxide), have been synthesized and well characterized by elemental analysis, Fourier-transform infrared spectroscopy, thermogravimetric analysis and single-crystal X-ray diffraction analysis. Single-crystal X-ray analysis reveals that both 1 and 2 crystallize in the triclinic crystal system with the P-1 space group and possess 3D framework structures, which are constructed from metalloligands L connecting with {Sm2} and {Dy2} clusters, respectively. The 3D structure of 1 has a 6-connected single-nodal topology with the point symbol {4^9 × 6^6}, while 2 features a different framework with the point symbol {4^12 × 6^3}. Thermogravimetric analysis shows that the skeletons of both 1 and 2 collapse above 350˚C. The magnetic properties of 1 and 2 have also been investigated.
On the other hand, the molecular structures and properties of CPs are also highly influenced by several critical factors during the synthetic process, such as pH values, metal-ligand ratio, solvent polarity, auxiliary ligands and synthetic strategy. For example, several 3d-4f heterobimetallic CPs containing the L Cu structure have been reported [28]-[33], in which L Cu comes from the reactions between cupric oxide/cupric nitrate and pyridine-2,4-dicarboxylic acid via the hydrothermal route. However, the direct application of the metalloligand L Cu in the construction of 3d-4f heterometallic CPs has not been reported yet. In addition, the coexistence of 3d transition metal and 4f lanthanide ions in one molecule may lead to various structures and physical properties due to the rich coordination environments of lanthanide and transition metal ions, which will finally affect the spatial configurations and magnetic couplings [34] [35] [36].
Meanwhile, benefiting from the large spin values, single-ion anisotropy and large spin-orbit couplings of lanthanide ions, 3d-4f heterometallic CPs may exhibit fascinating and complicated magnetic behaviors [37] [38] [39]. Therefore, the metalloligand L Cu and lanthanide centers (Sm 3+ and Dy 3+ ) will be introduced for the construction of 3d-4f coordination polymers with magnetic properties through the inter-diffusion method.
Scheme 1. Structure of L Cu. Hydrogen atoms are omitted for clarity.
In this work, we have successfully synthesized two new 3d-4f heterometallic CPs, 1 and 2, from the metalloligand L Cu. Single-crystal X-ray crystallographic studies reveal that both CPs 1 and 2 exhibit 3D framework structures, which are constructed from metalloligands L Cu connecting with {Sm 2 } and {Dy 2 } clusters. 1 and 2 possess different 6-connected single-nodal topologies with point symbols {4^9 × 6^6} and {4^12 × 6^3}, respectively. According to the molecular formula of CP-2, which has been determined definitively from the crystal structure, one of the metalloligands L Cu was changed into L Cu (DMSO) during the reaction process. Further, the TGA behaviors of the two CPs have been measured in the temperature range of 25˚C - 800˚C, while the magnetic properties of 1 and 2 have also been investigated.
Materials and Physical Measurements
All the chemicals and solvents were reagent grade, purchased from commercial sources and used without further purification. Pyridine-2,4-dicarboxylic acid and the metalloligand L Cu were synthesized according to procedures already outlined in the literature [21]. Elemental analyses for C, H and N were performed with a PerkinElmer 240C elemental analyzer. Infrared spectra were obtained from sample powders pelletized with KBr disks on a Nicolet Nexus 470 spectrometer (Germany) over the range of 400 - 4000 cm −1 . Thermogravimetric analysis (TGA) measurements were carried out in the temperature range of 25˚C - 800˚C on a PerkinElmer Pyris 1 system under a nitrogen purge with a heating rate of 10˚C/min. The temperature dependence of the molar magnetic susceptibility was measured under an applied field of 1000 G in the form of χ m T versus T in the range of 1.8 - 300 K with a Quantum Design MPMS XL-5. The influence of the sample holder background was subtracted by the automatic subtraction feature of the software.
Single-Crystal Structure Determination
Sizeable and high-quality single crystals of the two compounds were selected carefully from small glass tubes and mounted on a glass fiber covered with epoxy resin. All measurements were obtained with a Rigaku Saturn 724+ CCD imaging plate diffractometer with graphite-monochromated Mo-Kα radiation (λ = 0.71073 Å) at room temperature. The two crystal structures were solved by direct methods, while the non-hydrogen atoms were subjected to anisotropic refinement on F 2 through full-matrix least-squares with the SHELX-97 package [40] [41] [42]. All the non-hydrogen atoms were refined with anisotropic thermal displacement coefficients. Hydrogen atoms were treated isotropically according to a riding model and located in idealized positions. The contribution of missing solvent molecules (DMSO, CH 3 OH) to the diffraction pattern was subtracted from the reflection data by the "SQUEEZE" method as implemented in PLATON [43]. Details of the crystal parameters, data collection and refinement of CPs 1 and 2 are listed in Table 1, while the selected bond lengths are listed in Table 2.
Synthetic Method
CPs 1 and 2 were crystallized from the reactions between the metalloligand [Cu(2,4-pydca) 2 ] and Sm(NO 3 ) 3 •6H 2 O/Dy(NO 3 ) 3 •6H 2 O, respectively. According to the literature, the reported pydca-based 3d-4f structures were obtained from rare earth hydrates, copper oxide/copper acetate hydrate and pyridine-2,4-dicarboxylic acid through the hydrothermal synthetic approach [28]-[33]. Compared to the hydrothermal syntheses of the above pydca-based 3d-4f structures, the inter-diffusion method was applied as a mild way for the crystallization of 1 and 2. In this work, our synthetic strategy uses DMSO as the buffer solution, which can slow the interactions between L Cu and the lanthanide ions. As a result, well-shaped crystals of 1 and 2 can be obtained from the buffer layer.
It is obvious that the use of a blank solvent as the buffer solution provides a stable condition for the reaction between the two different reactive components [21].
Compared with the conventional hydro-thermal/solvent-thermal synthetic approach, the inter-diffusion method here plays an important role in the crystallization process of CPs 1 and 2.
Crystal Structure of 1
The result of single-crystal X-ray structural analysis reveals that CP-1 crystallizes in the triclinic crystal system with the P-1 space group and exhibits a 3D framework in which the {Sm 2 } clusters occupy the vertexes of the geometry, while the sides are formed by L Cu metalloligands.
The selected bond lengths of CPs 1 and 2 are listed in Table 2. As shown in Table 2, the lengths of the Cu-N bonds range from 1.981(3) to 1.984(3) Å.
Crystal Structure of 2
The X-ray crystallography study identifies that CP-2 also crystallizes in the triclinic crystal system with the P-1 space group.
Thermogravimetric Analysis
Thermal properties of CPs 1 and 2 were examined by thermogravimetric analysis (TGA) from 25˚C to 800˚C in a nitrogen atmosphere with a heating rate of 10˚C/min. The thermogravimetric curves of CPs 1 and 2 are shown in Figure 9.
As shown in the Figure 9, the first weight loss of CP
Magnetic Properties
The temperature dependence of the magnetic susceptibility was recorded for crystalline samples of CPs 1 and 2 at an applied magnetic field of 1000 Oe in the temperature range of 1.8 - 300 K. The measurement results are shown in Figure 10 and Figure 11, respectively, in which χ m is the molar magnetic susceptibility. As in Figure 10, the χ m T value of CP-1 at room temperature is 1.62 cm 3 K mol −1 , which is a little smaller than the theoretical value (1.69 cm 3 K mol −1 ) for two isolated Sm 3+ ions (J = 5/2, g = 2/7) and three Cu 2+ ions (S = 1/2, g = 2) without magnetic interaction. Upon decreasing the temperature, the χ m T product drops slowly to a minimum of 0.47 cm 3 K mol −1 at 1.8 K. This decrease in χ m T may originate in the antiferromagnetic interaction between the metal centers. The magnetic data in the range of 50 - 300 K follow a Curie-Weiss law with a Curie constant of C = 1.8 cm 3 K mol −1 and a negative Weiss constant of θ = −57.7 K. As shown in Figure 11, the room temperature χ m T value of CP-2 is 29.36 cm 3 K mol −1 , which is a little smaller than the theoretical value of 29.48 cm 3 K mol −1 for three Cu 2+ ions (S = 1/2, g = 2) and two Dy 3+ ions (J = 15/2, g = 4/3), due to the thermally populated excited states of Dy 3+ [44]. Upon cooling the sample, the χ m T value decreases continuously to a minimum value of 17.55 cm 3 K mol −1 at 7 K.
After that, the χ m T value increases sharply to 18.49 cm 3 K mol −1 at 1.8 K. The trend of the χ m T curve below 7 K suggests the presence of a weak intramolecular ferromagnetic correlation. This magnetic difference is due to the different electronic spins of the two metal centers. The data in the range 100 -
As shown in Figure 3, CP-1 exhibits a porous structure with various channels traversing the framework. Due to the small steric hindrance of the DMSO molecules and the long lengths of the L Cu units, the 3D framework of 1 possesses big cavities in these channels. The solvent-accessible volume of CP-1 calculated by PLATON is 855.1 Å 3 (38.3%), which is large enough for hosting the solvent molecules (seven DMSO and two CH 3 OH). The network analysis based on the TOPOS program reveals that CP-1 can be simplified to a (6,6)-connected network with {4^9 × 6^6} topology, which is depicted in Figure 4. As in Figure 4, the {Sm 2 } clusters act as the nodes of the network.
Figure 3. Packing structure of CP-1 (a axis). Hydrogen atoms and DMSO molecules are omitted for clarity.
The Cu-O bond lengths fall into the range of 1.971(3) - 2.238(4) Å. Among the Cu-O bonds, the lengths between Cu atoms and O atoms from water molecules are larger than the distances between Cu atoms and O atoms from carboxylates. Bond lengths of Sm-O vary from 2.316(4) to 2.531(8) Å. In the {Sm 2 } cluster, the shortest distance between two Sm atoms is 4.491(1) Å.
Figure 5. (a) Coordination modes of Dy1 and Dy2 atoms; (b) Connecting mode of L Cu unit (Cu1) in CP-2; (c) Connecting mode of L Cu unit (Cu2) in CP-2; (d) Connecting mode of L Cu unit (Cu3) in CP-2.Hydrogen atoms are omitted for clarity.
Figure 7. Packing structure of CP-2 (c axis). Hydrogen atoms and DMSO molecules are omitted for clarity.
Figure 10. Plots of the temperature dependence of χ m T and χ m (inset) for CP-1.
Figure 11. Plots of the temperature dependence of χ m T and χ m (inset) for CP-2.
Table 1. Crystal data for 1 and 2 (empirical formulae: …Cu 3 N 6 O 42 S 11 Sm 2 for 1; C 60 H 60 Cu 3 N 6 O 36 S 8 Dy 2 for 2).
In the range of 273˚C - 341˚C, four coordinated and one free DMSO molecules (15.84%) are lost with the rise of temperature (calcd: 15.92%). After 350˚C, the organic groups of CP-1 start to decompose and the skeleton structure starts to crumble. As for CP-2, the first weight loss of 2.86% (calcd: 2.92%) is observed from 50˚C to 110˚C for two to 118˚C, corresponding to the loss of four methanol molecules (calcd: 5.22%). Further weight loss (19.05%) appears from 142˚C to 236˚C, corresponding to the loss of six free DMSO molecules (calcd: 19.10%). | 2,661.4 | 2018-04-12T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Efficient Optical Energy Harvesting in Self-Accelerating Beams
We report the experimental observation of energetically confined self-accelerating optical beams propagating along various convex trajectories. We show that, under an appropriate transverse compression of their spatial spectra, these self-accelerating beams can exhibit a dramatic enhancement of their peak intensity and a significant decrease of their transverse expansion, yet retaining both the expected acceleration profile and the intrinsic self-healing properties. We found our experimental results to be in excellent agreement with the numerical simulations. We expect further applications in such contexts where power budget and optimal spatial confinement can be important limiting factors.
Since their introduction in optics from the field of quantum mechanics 1-3 , Airy beams and, more generally, self-accelerating beams have received a significant amount of attention from the scientific community, from both a theoretical and an experimental point of view 4 . For instance, Airy beams have led to the generation of curved plasma channels 5 , electron self-accelerating beams 6 , photo-induced waveguides 7 and optical light bullets 8,9 . In parallel, their widespread applications range from material micro-processing 10 to optical trapping and manipulation 11 . The richness of the topic has been further highlighted in recent years by the possibility of generating self-accelerating beams propagating along any arbitrary convex trajectory, either by engineering the beam in real space [12][13][14][15] or its spectral counterpart in the Fourier domain 16 . In most of the work reported to date, the pattern of the generated two-dimensional (2D, i.e., in two transverse directions) accelerating beams occupies a large area filled by several sub-lobes. While accelerating beams with reduced sub-lobe expansion have already been demonstrated, their study has actually been restricted to only a few special trajectories [17][18][19] , in which the required pattern was obtained by directly solving the related wave functions. Although a peak intensity enhancement for these confined accelerating beams is not unexpected, the fundamental issue of optimal energy efficiency has not been directly addressed, particularly in the context of arbitrary trajectories.
In this paper, we show both theoretically and experimentally that an appropriate shaping of the spatial spectra is capable of leading to the generation, the "narrowing", and the peak intensity enhancement of 2D accelerating beams without significant degradation of either their propagation characteristics or their intrinsic properties. For instance, we show that the intensity localized in the main lobe of self-accelerating beams propagating along several types of trajectories can be increased by up to as much as 60%, provided that an optimal shaping of the initial beam is done beforehand. Moreover, these generated beams exhibit significantly reduced tails; nevertheless, they follow the original trajectories and, most importantly, they retain their self-healing properties.
The spectral density within the area dxdy, given as the inverse determinant of the Hessian matrix Hμ(k x , k y ), shows a singularity at each propagation distance z (accounting for the beam trajectory) when the determinant of the Hessian vanishes (Eq. (4)). Assuming that the imposed transverse phase modulation is a separable function, i.e., ρ(k x , k y ) = ρ x (k x ) + ρ y (k y ), Eq. (4) can be reduced to a pair of one-dimensional conditions (Eq. (5)), thus leading to the possibility of building a mapping relation between distance and frequency, while defining the key spatial frequencies associated with the acceleration trajectory (indicated as k xc (z) and k yc (z)). By solving Eq. (5), the key frequencies can be estimated and the beam trajectory can therefore be predicted as a parametric representation of the propagation distance z by means of Eq. (6). We note that the phase mask can also be designed for any desired convex beam trajectory, owing to this spectrum-to-distance mapping. In this case, starting from the trajectory, we can estimate the key frequencies from Eq. (6) and engineer the phase mask in a straightforward way from Eq. (5).
Experimental Setup. In the following, we consider three typical cases of convex trajectories: a parabolic (related to the well-known Airy beam), a cubic polynomial and an exponential trajectory, whose characteristics (calculated numerically) are presented in Table 1.
As seen from Table 1, we chose the same form of phase modulation along the x and y directions so that both the paths associated with the accelerating beams and the corresponding key spatial frequencies are expected to project onto a 45° diagonal line in the transverse plane. For the sake of clarity, we study the beam propagation characteristics along this 45° line, defining the radial position of the main hump as s = √(x² + y²) in real space and k s = √(k x ² + k y ²) in the spectral domain. The motivations for these design choices are in fact related to the main goal of this work. As will be discussed later on, this specific spectral structure offers the possibility to easily increase the peak intensity of a 2D self-accelerating beam by means of a practical method based on the reshaping of the input beam. In our experiment, an incident Gaussian beam (CW at λ = 633 nm, w 0 = 2.45 mm) directly illuminates a spatial light modulator (SLM) located in the Fourier plane of a spherical lens (f = 150 mm). By applying an appropriate phase mask ρ(k x , k y ) on the SLM, we generate a beam whose main intensity lobe follows the desired trajectory after the lens-induced Fourier transform.
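To make the generation step concrete, the sketch below numerically reproduces the parabolic-trajectory case: a separable cubic spectral phase applied to a Gaussian spectrum, followed by a Fourier transform and a paraxial angular-spectrum propagation. All parameter values are illustrative placeholders rather than the experimental settings, and the cubic form of ρ is specific to the parabolic case.

```python
# Illustrative sketch (Python/NumPy): a separable cubic spectral phase
# rho(kx, ky) = beta*(kx^3 + ky^3) on a Gaussian spectrum yields an Airy-like
# 2D beam whose main lobe shifts roughly parabolically with z.
import numpy as np

N, dx = 1024, 20e-6                     # transverse grid (placeholder values)
wavelength = 633e-9
k0 = 2 * np.pi / wavelength
beta = 2e-13                            # cubic-phase strength [m^3] (placeholder)
sigma_k = 4e4                           # spectral width [rad/m] (placeholder)

x = (np.arange(N) - N // 2) * dx
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, dx))
KX, KY = np.meshgrid(kx, kx)

spectrum0 = np.exp(-(KX**2 + KY**2) / (2 * sigma_k**2)) * np.exp(1j * beta * (KX**3 + KY**3))

main_lobe = []
for z in np.linspace(0.0, 0.5, 11):     # paraxial angular-spectrum propagation
    spec_z = spectrum0 * np.exp(-1j * z * (KX**2 + KY**2) / (2 * k0))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(spec_z)))
    iy, ix = np.unravel_index(np.argmax(np.abs(field)**2), field.shape)
    main_lobe.append((z, x[ix], x[iy]))  # main-lobe position vs. propagation distance
```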
As an illustrative example, the beam intensity experimentally obtained at z = 4.2 cm for the case of the parabolic path is presented in Fig. 1(a). One can easily see a main lobe of high intensity surrounded by side-lobes of decreasing intensity, i.e., a 2D Airy beam. To confirm the reliability of our setup, we additionally measured the main lobe displacement along propagation (i.e., the trajectory) of the considered accelerating beams. These results are shown in Fig. 1(b) and highlight the excellent agreement obtained between the predicted (lines) and measured (markers) convex trajectories. In the Fourier domain, the Airy beam spectrum maintains a Gaussian shape over the whole propagation range. Nevertheless, the spectral content associated with the main intensity lobe (i.e., after filtering out the side-lobes) corresponds to a spot of reduced size moving along the k s axis. This feature is illustrated in Fig. 1(c), where one can see that most of the energy associated with this spot (95% cut-off) is contained within a limited diagonal stripe bounded by white dashed lines. Using such an approach, we measured the locations of these spots along propagation for the three different accelerating beams, whose results are summarized in Fig. 1(d). Again, the spectral shifts of the main lobe obtained experimentally (markers) exhibit an excellent agreement with the longitudinal evolution of the key spatial frequencies (lines) predicted by Eq. (5).
Transverse Energy Confinement through Spectral Reshaping. Owing to the fact that only the spectral components surrounding the key spatial frequencies are associated with the main lobe, the remaining spectral components are therefore linked to the beam sub-lobes of the self-accelerating beams. In numerous applications where the exact beam shape has a limited interest compared to its main lobe propagating along a given trajectory, the energy stored in the sub-lobes might therefore be considered as unwanted or even wasted.
Moreover, several related applications of these beams may indeed require an optimal spatial confinement of the beam intensity (e.g., pump-probe measurements or optical mapping), an important requirement that is usually hampered by several experimental factors such as the numerical apertures or the spatial resolutions of the optical elements employed. A practical and straightforward method that can be used to achieve this intensity enhancement (i.e., confinement) is to reshape the incident circular (Gaussian) beam into an elliptical (Gaussian) beam, whose minor diameter (orthogonal to the k s -axis and noted ds') matches closely the stripe width determined by the spectrum of the main lobe (see Fig. 1(c)). This approach is illustrated in Fig. 2(a) where the initial circular Gaussian beam (blue shading) incident on the SLM phase mask has been reshaped into an elliptical Gaussian beam (red shading) by using a one-dimensional telescope (i.e., two conjugated cylindrical lenses of focal lengths f 1 = 200 mm and f 2 = 50 mm, respectively). In the second case, presented in Fig. 2(b), the experimentally recorded acceleration profiles for all studied convex trajectories (markers) remained almost unchanged and in excellent agreement with the numerical predictions (lines). The corresponding experimental transverse intensity maps generated from the elliptical beam considering the case of the parabolic trajectory are shown, as an illustrative example, in Fig. 2(c-d). The beam profile, similar to those of zero-order accelerating parabolic beams 17,18 , exhibits smaller side lobes compared to the uncompressed case presented in Fig. 1(a) and one would thus expect the beam peak intensity to be enhanced in terms of energy conservation. Interestingly, such a beam propagates without any significant deformation while following the expected acceleration profile.
Characterization of the Peak Intensity Enhancement.
In order to explore the effect of beam "squeezing" in relation to the peak intensity enhancement, we have performed numerical simulations for the case of the parabolic trajectory, which are shown in Fig. 3(a). According to these, we expect our experimental elliptical beam (red circle) to provide an enhancement of the peak intensity of about 60% when compared to the initial circular Gaussian beam (blue square). As illustrated in Fig. 3(a), the amount of "squeezing" of the input beam will determine the expected peak intensity enhancement and is intrinsically related to the associated numerical aperture of the system. For instance, considering the experimental setup described in this paper, one would expect the energy harvesting to be optimal whenever the beam spectrum exhibits a maximal overlap with the main lobe spectral components of the associated trajectory (i.e., for a beam minor diameter of 2 mm in this particular case). Obviously, further increasing the eccentricity of the beam shape would in fact be detrimental, as one would basically approach the case of a one-dimensional (1D) beam (when the value of the minor diameter decreases largely). In this framework, and to confirm the validity of our numerical predictions, we have also compared the peak intensity evolution along propagation for the uncompressed and the compressed cases. These results, obtained experimentally for the trajectories given in Table 1, are presented in Fig. 3(b-d). For instance, the case of the parabolic trajectory is shown in Fig. 3(b) by comparing the peak intensity for a circular (blue squares) and an elliptical (red circles) incident beam. We can see a very good agreement with the 60% intensity enhancement expected from the simulations, while the overall longitudinal evolution of the peak intensity shows a similar behavior in both cases. The cases of the cubic and exponential trajectories are presented in Fig. 3(c) and (d), respectively.
Intuitively, one may infer that the reduced sub-lobes would limit the self-healing over only a limited longitudinal range for the newly generated accelerating beams. Nonetheless, we did not observe significant changes in this important property before and after the compression of the spatial spectrum. In the experiment, we verified this issue by blocking the main lobe at the propagation onset (z = 0). The result obtained in this case is shown in Fig. 4 where one can directly observe the self-healing behavior of the beam associated to the parabolic trajectory illustrated in Fig. 2(c-d). This result further highlights the capability of our approach to provide energetically confined beam patterns while retaining the peculiar properties of 2D accelerating beams.
Discussion
The scheme discussed in this report demonstrates the possibility to significantly and efficiently enhance the peak intensity of the main lobe of 2D diffraction-free beams propagating along several convex trajectories while reducing their equivalent transverse expansion. This energy confinement can be readily obtained by reshaping the incident beam to properly match the profile of the spectra associated to the main lobe. Such a simple and realistic approach is very useful since power is always an important concern in various applications of nonlinear optics, micro or nano-manipulation, laser writing, and ultra-intense field optics, where accelerating beams targeting various applications have been recently implemented. By using our method, one can achieve the same results previously demonstrated but using much lower input powers (more than 60% less than those required when employing conventional self-accelerating beams). Further work will aim at extending this scheme to the case of accelerating beams propagating in a nonlinear medium as well as studying optimal energy confinement in their spatio-temporal analog (i.e., 2 + 1 D optical bullets) 20 .
Methods
Initial experimental measurements were performed using an incident Gaussian beam (CW at λ = 633 nm, w 0 = 2.45 mm) directly illuminating a phase-only Pluto Spatial Light Modulator (SLM) produced by Holoeye (Pluto − 1920 × 1080 pixels of 8 × 8 μ m 2 area, 8-bit grey phase levels). The SLM was located in the Fourier plane of a spherical lens (f = 150 mm), and a CCD camera (Sony XC-ST50 − 640 × 480 pixels of 8.4 × 9.8 μ m 2 area, 8-bit dynamic range) mounted on a translation stage was used to image the beam transverse patterns and corresponding spectral intensity distributions at different longitudinal distances [Fig. 1(a) and Fig. 1(c)]. In the latter case, an adjustable aperture slit was used to filter out the contributions of the beam side-lobes, and the residual components were imaged in the Fourier plane of a second spherical lens (f = 100 mm) also mounted on a translation stage. Beam trajectories and key spatial frequencies [in Figs. 1(b,d) and 2(b)] were extracted by numerical methods from the intensity beam patterns and spectral spot distributions, taking into account the magnification and the transverse spatial resolution of our imaging system. In the second set of measurements, the initial circular Gaussian beam was reshaped into an elliptical Gaussian beam by using a one-dimensional telescope (i.e., two conjugated cylindrical lenses of f 1 = 200 mm and f 2 = 50 mm) while ensuring that the overall transmitted power was the same in both cases. Note that all figures illustrating the transverse intensity distributions (and the associated spectral maps) are presented using a color scale normalized with respect to the maximal intensity measured. This is done to provide a more visual illustration of the reduced beam expansion obtained when the input beam has been reshaped. Furthermore, the peak intensity enhancement shown in Fig. 3(b-d) has been normalized (without any adjustment) to the maximal peak intensity detected on the CCD, by considering the propagation of each beam trajectory for the circular Gaussian input case. As the overall power for both input beams (respectively circular and elliptical) has been carefully characterized to be the same at the input and output of the imaging system (through both power measurements and transverse spatial integration of the CCD signals), this approach gives us a direct and straightforward measurement of the peak intensity enhancement obtained by our method. | 3,477.6 | 2015-08-24T00:00:00.000 | [
"Physics"
] |
Evaluation of Three Methods for CPR Training to Lifeguards: A Randomised Trial Using Traditional Procedures and New Technologies
Background and objectives: When the drowning timeline evolves and drowning occurs, the lifeguard tries to mitigate the event by applying the last link of the drowning survival chain with the aim of treating hypoxia. Quality CPR (Cardiopulmonary Resuscitation) and the training of lifeguards are the fundamental axes of drowning survival. Mobile applications and other feedback methods have emerged in recent years as strong methods for the learning and training of basic CPR; therefore, in this study, a randomised clinical trial has been carried out to compare the traditional method with the use of apps or manikins with a feedback system as training methods to improve the quality of resuscitation. Materials and Methods: Traditional training (TT), mobile phone applications (AP) and feedback manikins (FT) are compared. The three cohorts were subsequently evaluated through a manikin providing feedback, and a data report on the quality of the manoeuvres was obtained. Results: Significant differences were found between the traditional manikin and the manikin with real-time feedback regarding the percentage of compressions with correct depth (30.8% (30.4) vs. 68.2% (32.6); p = 0.042). Hand positioning, percentage of correct chest recoil and quality of compressions exceeded 70% of correct performance in all groups, with better percentages in the FT group (TT vs. FT; p < 0.05). Conclusions: In conclusion, feedback manikins are better learning tools than traditional models and apps for training chest compressions. Ventilation values are low in all groups, but improve with the feedback manikin.
Introduction
Drowning is a public health problem, and according to the World Health Organization, "every day and every hour, more than 40 people lose their lives to drowning in the world" [1]. Epidemiology is different depending on the country, although there are variables that seem to influence the number of deaths by drowning such as age, sex, and the location of the event [2]; drowning is common in inland waters and during summer months [3].
The drowning timeline sorts and reflects the triggers, actions, and interventions associated with the drowning process [4]. By the time the person must be rescued, the timeline has failed and event mitigation is required through the application of the last link, "provide care as needed", of the drowning survival chain [3,5]. This link reflects the basic life support (BLS) sequence in drowning, where treating hypoxia is critical [3].
Since oxygen deficiency is the main cause of death by drowning, oxygenation and ventilation are the priority actions in drowning situations, either in the aquatic environment or out of it. Resuscitation begins with ventilation; mouth-to-mouth ventilation is rapid and effective, but a bag-valve-mask, which is the most reasonable option when 2 rescuers are available [2], is preferable, together with 100% oxygen administration [2,3].
The lifeguard should know the physiopathology of drowning and the particularities of the drowned patient so as to provide the best care. Among the possible caring actions, quality CPR (Cardiopulmonary Resuscitation) with chest compressions and effective ventilations is essential, being a determining factor for the survival of drowned victims [2,3]. In addition, another determining element is effective training [6].
In Spain, lifeguards have seasonal work and they have other occupations the rest of the year. This particular situation results in a need for training at the beginning of the summer, as skills decrease at 3-6 months after training [7].
In recent years, different training and self-directed education methods have been developed, being alternatives to the training of health and lay staff [6]. Some of the new trends in CPR training are mobile applications (apps) that provide support during the early stages of the survival chain and/or during CPR compressions training by using the ability to detect and calculate the compression depth and the frequency of compressions per minute in real time. The main advantage of this tool is that it is a more universal, easy-to-manage and cost-saving method [8].
In recent years, manikins have been developed with feedback capability. They mainly report CPR skills: Compression frequency, depth, chest recoil, and hands positioning, being an important alternative in the training of non-experts and also healthcare professionals [6,9,10].
The objective of our study was to compare the traditional method with the use of apps or manikins with feedback system as training methods to improve the quality of resuscitation.
Materials and Methods
The study was designed as a non-blinded randomised trial, involving a convenience sample of 30 beach lifeguards during the months of June to September 2018.
Pre-Phase
The sample was randomised by distributing the 30 lifeguards into 3 groups with 3 different training methods: traditional training (TT), who received training with a manikin without a feedback system guided by the instructor; app training (AP), who received training using an app (Massage cardiaque et DSA); and, finally, feedback training (FT), who received training with a manikin with feedback (Figure 1). All lifeguards received the same amount of training: a 12-min session, during which each practised at least 6 min of CPR. The maximum lifeguard/teacher and lifeguard/manikin ratio was 4/1. The teacher was a nurse with training in emergency situations and teaching experience in BLS. All lifeguards had been trained in BLS in the previous 2 years.
Test Phase
Between 7 and 15 days after the training, each lifeguard performed an evaluation of a 3-min CPR simulation scenario. In the study, the figure of 70% was used as overall CPR quality, which some experts have proposed as a cut-off point for sufficient quality [11].
Instruments
The instruments used in the training were the app, the Resusci Anne QCPR Skillreporter and the Resusci Anne manikin without a feedback system (Figure 1).
The app was Massage cardiaque et DSA, designed by IMAOS SAS, free and downloaded on a smartphone. The app provided the following parameters related to the quality of compressions: total number of compressions, depth achieved (mm), % compressions at correct depth, % correct chest recoil, mean rate, and % compressions at a correct rate.
The final evaluation was conducted by the Resusci Anne QCPR Skillreporter (Laerdal, Norway).
Variables
The dependent variables of the study were those derived from the Resusci Anne QCPR Skillreporter; from these, the QCC (quality of compressions) score was calculated using the equation [12,13]: QCC = (% compressions at correct depth + % correct chest recoil + % correct rate compressions)/3.
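As a minimal illustration of how this score is computed, the short Python sketch below applies the equation above to one hypothetical set of percentages; the function name and the example values are ours and are not part of the Skillreporter output.

```python
def qcc_score(pct_correct_depth, pct_correct_recoil, pct_correct_rate):
    """Quality of compressions (QCC) as the mean of the three percentages."""
    return (pct_correct_depth + pct_correct_recoil + pct_correct_rate) / 3

# Hypothetical values for a single participant (percentages, 0-100).
print(qcc_score(68.2, 75.0, 70.5))  # -> 71.23...
```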
Randomisation
Lifeguards were randomised using the RAND function of the Excel® software for Windows 10 (Microsoft, Redmond, WA, USA).
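The study used the Excel RAND function; the hedged Python sketch below reproduces an equivalent allocation of 30 participants into three equally sized groups, purely as an illustration of the procedure (the identifiers and seed are illustrative).

```python
import random

participants = [f"lifeguard_{i:02d}" for i in range(1, 31)]  # 30 anonymised IDs
random.seed(2018)            # any seed; fixed here only for reproducibility
random.shuffle(participants)

groups = {
    "TT": participants[0:10],   # traditional training
    "AP": participants[10:20],  # app training
    "FT": participants[20:30],  # feedback-manikin training
}
```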
Statistical Analysis
For the study of the quantitative variables, normality was tested using the Shapiro-Wilk test. The quantitative variables were expressed by central trend and dispersion measures (mean (standard deviation or SD); median (interquartile range)) according to normality. The qualitative variables were expressed by using absolute and relative frequencies. Comparison of two means was done by the Mann-Whitney U test. Homogeneity of variances was assessed with Levene's test, and the multiple comparison of means was performed through the ANOVA test with Bonferroni correction (assuming equal variances) or the Games-Howell test (not assuming equal variances) for variables with a normal distribution, and through the Kruskal-Wallis test for those that did not meet this assumption. The data were processed and analysed using the SPSS v.21.0 statistical package (IBM, Armonk, NY, USA). A significance level of p < 0.05 was established.
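A hedged sketch of the same testing logic using SciPy is given below; it mirrors the Shapiro-Wilk, Levene, ANOVA/Kruskal-Wallis and Mann-Whitney U steps on made-up data and is not the actual analysis script (which used SPSS).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Made-up "% compressions with correct depth" for the three groups (10 per group).
tt = rng.normal(31, 30, 10).clip(0, 100)
ap = rng.normal(50, 30, 10).clip(0, 100)
ft = rng.normal(68, 30, 10).clip(0, 100)

# Normality (Shapiro-Wilk) per group.
normal = all(stats.shapiro(g)[1] > 0.05 for g in (tt, ap, ft))

# Homogeneity of variances (Levene).
_, p_levene = stats.levene(tt, ap, ft)

if normal:
    _, p_global = stats.f_oneway(tt, ap, ft)   # parametric multiple comparison
else:
    _, p_global = stats.kruskal(tt, ap, ft)    # non-parametric alternative

# Pairwise comparison (e.g., TT vs. FT) with the Mann-Whitney U test.
_, p_tt_ft = stats.mannwhitneyu(tt, ft)
print(p_levene, p_global, p_tt_ft)
```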
Ethical Considerations
All lifeguards obtained previous information on the purpose and procedure of this study and stated their desire to voluntarily participate through the informed consent. The participation in the study did not imply any risk or benefit for the participants. Data were anonymously collected and recorded, maintaining at all times the confidentiality of the information. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee for Research of the University of León (ETICA-ULE-022-2018) on 20 July 2018.
Results
Thirty lifeguards participated with a mean age of 26.9 (SD: 7.1), and 86.67% (26/30) of them were men. No difference was found between the groups regarding age (p = 0.179) or sex (p = 0.315).
Compressions
With regard to the mean depth variable, it was observed that the TT did not reach the 50 mm recommended by the ERC (European Resuscitation Council), with no difference found between the groups. The compression percentage with the correct depth showed differences between the groups (TT vs. AP vs. FT) (p = 0.047), being significant between the TT and the FT (p = 0.047) ( Table 1 and Figure 2).
The mean rate of chest compressions/minute in the TT was higher than the one recommended by the ERC, and 75% of lifeguards had a higher rate than recommended (Table 1).
As for the quality of compressions (QCC) ( Table 1), significant differences were observed, with FT obtaining better results, as compared to the TT and AP (p = 0.039), the difference between FT and TT was significant (p = 0.045).
The variables that exceeded 70% were % hands positioning, percentage of complete chest recoil and quality of compressions in all groups.
Ventilations
The FT achieved better results than the TT, with significant differences in the total number of ventilations (p = 0.043) ( Table 1 and Figure 2). The mean insufflation volume was higher than the one recommended by the ERC in the TT and AP (Table 1) although no significant differences were observed between groups.
CPR Quality
Differences were observed in the overall quality of CPR (QCPR), with a higher percentage in the FT than in the TT and AP (p = 0.027). Significant differences were found between FT and AP (p = 0.014) and between FT and TT (p = 0.028) ( Table 1).
Discussion
This study has shown how feedback manikin training improves results in parameters that determine the quality of compressions and ventilations, as compared to other training methods.
Apps are readily available and easy-to-use resources that are not regulated, as most of them cannot guarantee their validity and reliability [14]. Fernandez et al. [8] studied 3 different apps for CPR training without obtaining differences, but did find improvements in the rate of chest compressions per minute. Our study used one of the apps assessed in this mentioned study.
The decision to limit the duration of the CPR manoeuvres to this time period is based on the loss of compression quality after 2 min of resuscitation, as described in a study on the effect of physical fatigue on lifeguards performing four minutes of CPR, in which the authors observed differences with respect to non-fatigued lifeguards [15].
The measured parameters obtained a quality level similar to the one found in previous studies [6,15] that used similar training methods as regards the variables: Correct hands positioning, correct chest recoil, and rate of compression per minute.
The app achieved a different percentage of compressions with correct depth (no significant differences between AP vs. TT and AP vs. FT), as compared to the standard manikin and the feedback manikin, as in previous studies [16].
Regarding the overall quality of CPR, significant differences were only found between the manikin with feedback and the other 2 groups; although the differences did not reach significance for the individual variables that determine the quality of CPR, the results were consistently better with the feedback manikin.
Other CPR quality parameters that showed improvement without statistical differences were the mean volume and the percentage of adequate ventilations, and both were higher in the manikin with feedback as it provided real-time information. However, the data from our study do not match those by Zapletal et al. [16], where training with an app or a standard manikin obtained the mean volume within the suggested range (500-600 mL). Also, in our study, the percentage of adequate ventilations was very low in the three groups. This must be highlighted due to the importance of lifeguards performing appropriate ventilations at cardiac arrest situations where treating hypoxia is fundamental. Therefore, and from what has been observed in other studies, ventilation training with feedback can be an appropriate resource to improve the quality of ventilations.
At present, however, the practice of ventilation is considered unsafe and should be avoided, although resuscitation attempts with ventilation may be considered if safety conditions exist and if the lifeguard team has specific training and PPE. This is because, with the advent of COVID-19, there are extra complications during resuscitation due to the aerosol generation during compressions and the need to perform ventilations safely, using barrier materials, antiviral filters and personal protection [17].
In our study, we have compared feedback manikin training and standard manikin training. This goes in line with the study by Baldi et al. [18], where better results and significant differences were obtained in the percentage of compressions with the right depth when training with feedback manikin as compared to the standard manikin, just like in our study.
The availability of manikins with real-time feedback has improved the quality of training, as compared to other types of training in CPR skills [16]. Other tools such as mobile apps may be helpful in training parameters such as rate of compressions per minute and in training with limited resources.
Our study presents several limitations. On the one hand, some lifeguards had already received previous training; this was minimised with practical training prior to the evaluation, given by the same supervisor. On the other hand, the sample size was small, although it included a group of lifeguards with a very high degree of motivation. Also, the assessment was done in a simulated context which does not exactly represent a real CPR situation, so this may be a limitation when extrapolating results to real possible victims. Effectiveness of ventilations was not assessed; instead, ventilations in which an airflow was produced were measured, allowing the person in training to attempt these even in a non-effective way. It is possible that by increasing the time and/or modifying the training method, the obtained results may be better. Other relevant limitations may be motivational factors, stress, etc., which can only be evaluated in real events.
Finally, a further limitation of the study is that the intervention could not be repeated after a reasonable period of time (2 weeks), after which the retention of skills usually begins to decline. A new intervention is therefore proposed as a future line of investigation, to compare skill retention in all three groups some time after training.
Conclusions
It can be concluded that feedback manikins are better learning tools for chest compression training, as compared to manikins without feedback and the app, regarding parameters that determine the quality of compressions, such as the mean depth.
Ventilation values are low in all groups and improve in the feedback manikin, but without reaching the agreed quality value (70%). In this case, further studies focusing on the feedback about ventilation training may be needed. | 3,949.4 | 2020-10-30T00:00:00.000 | [
"Medicine",
"Engineering"
] |
A Dataset of Scalp EEG Recordings of Alzheimer’s Disease, Frontotemporal Dementia and Healthy Subjects from Routine EEG
: Recently, there has been a growing research interest in utilizing the electroencephalogram (EEG) as a non-invasive diagnostic tool for neurodegenerative diseases. This article provides a detailed description of a resting-state EEG dataset of individuals with Alzheimer’s disease and frontotemporal dementia, and healthy controls. The dataset was collected using a clinical EEG system with 19 scalp electrodes while participants were in a resting state with their eyes closed. The data collection process included rigorous quality control measures to ensure data accuracy and consistency. The dataset contains recordings of 36 Alzheimer’s patients, 23 frontotemporal dementia patients, and 29 healthy age-matched subjects. For each subject, the Mini-Mental State Examination score is reported. A monopolar montage was used to collect the signals. A raw and preprocessed EEG is included in the standard BIDS format. For the preprocessed signals, established methods such as artifact subspace reconstruction and an independent component analysis have been employed for denoising. The dataset has significant reuse potential since Alzheimer’s EEG Machine Learning studies are increasing in popularity and there is a lack of publicly available EEG datasets. The resting-state EEG data can be used to explore alterations in brain activity and connectivity in these conditions, and to develop new diagnostic and treatment approaches. Additionally, the dataset can be used to compare EEG characteristics between different types of dementia, which could provide insights into the underlying mechanisms of these conditions.
Summary
Alzheimer's disease (AD) and frontotemporal dementia (FTD) are both progressive neurodegenerative disorders that affect the elderly [1]. AD is the most frequently diagnosed dementia type, accounting for 60-80% of cases, while FTD is relatively rare, accounting for 5-10% of cases [2]. Both neurological conditions are characterized by cognitive decline and behavioral changes, and affect the brain in different ways, resulting in distinct (yet possibly overlapping) symptoms [3]. An initial AD sign is difficulty in recalling events related to short-term memory and it progresses to speech and orientation difficulties, lack of self-care, or behavioral alterations. The initial sign of frontotemporal dementia (FTD) can vary depending on which part of the brain is affected first. However, behavioral changes and personality changes are often the initial symptoms in the behavioral variant of FTD, which is the most common variant of the disease [3]. Currently, there is no cure for either condition, while available treatments provide limited symptomatic relief [4].
A combination of clinical evaluation, neurological testing, and neuropsychological testing is used to make the diagnosis of Alzheimer's disease and frontotemporal dementia. These disorders may also be diagnosed with the use of imaging tests such as positron emission tomography (PET) [5] or magnetic resonance imaging (MRI) [6]. The early symptoms of both frontotemporal dementia and Alzheimer's disease might be mild and overlap with other neurodegenerative illnesses or mental problems, making a diagnosis difficult in both cases. Better detection technologies are thus required in order to help in the early identification of these illnesses. A timely diagnosis is essential because early care can help postpone the emergence of symptoms that grow more severe and enhance quality of life [7]. A neurodegenerative condition can be challenging to live with, but an early diagnosis enables the installation of safety precautions, legal and financial planning, and emotional support services, which can help people and their families manage. Therefore, there is an urgent need for new detection methods that can help with the early detection of frontotemporal dementia and Alzheimer's disease, which can eventually lead to better outcomes for those who have these disorders.
Electroencephalography (EEG) has become a potential method for the identification and monitoring of Alzheimer's disease and frontotemporal dementia, in addition to clinical evaluation and imaging testing [8]. EEG measures brain electrical activity and can identify anomalies in brain waves linked to certain disorders [9][10][11][12]. Then, using machine learning techniques, these signals can be automatically analyzed to find patterns that might point to sickness. For instance, machine learning models can spot a slowing of brain waves in specific areas of the brain in AD, and they can spot alterations in connectivity between distinct brain regions in FTD [2]. The automatic identification of these illnesses using machine learning and EEG readings is still in its infancy and needs more study and validation. However, the promise of EEG and machine learning as non-invasive, affordable, and accessible detection methods emphasizes the necessity of ongoing study in this area, which may ultimately result in the improved and more prompt diagnosis of Alzheimer's disease and frontotemporal dementia.
The aim of this study was to collect the electrical activity of the brain of elderly patients with AD and FTD, and healthy age-matching controls, during the eye resting state using EEG. These recordings are structured in the Brain Imaging Data Structure (BIDS) format, which is a standardized format for organizing and describing neuroimaging data [13]. BIDS was developed to improve the consistency, compatibility, and ease of use of neuroimaging data across different research groups and institutions. Researchers focusing on these neurodegenerative disorders will find the released dataset of EEG recordings from people with Alzheimer's disease and frontotemporal dementia, and healthy controls (CN), to be an essential tool. Researchers will be able to examine the diseases' underlying mechanisms, find potential biomarkers for early identification, and test new treatments thanks to the dataset. The development and testing of machine learning methodologies, which can be used to automatically detect and categorize diseases based on EEG signals, require the availability of datasets like this. With the use of this dataset, researchers may test and refine their algorithms, advancing the area of machine learning-based neurodegenerative disease diagnosis. Overall, this dataset has the potential to significantly advance our understanding of Alzheimer's disease, frontotemporal dementia, and the role of EEG in their diagnosis and management. As a result of this work, EEG recordings from 88 subjects have been registered and cleared of artifacts and have been made available to the cognitive neuroscience research community. In total, 36 of them were diagnosed with AD, 23 with FTD, and 29 were CN. Prior to the publication of this dataset, two studies regarding machine learning methodologies for the classification or severity quantification of AD and FTD have been published, using a subset of participants [2,14].
Data Description
This dataset contains the EEG resting state-closed eyes recordings from 88 subjects in total. A total of 36 of them were diagnosed with Alzheimer's disease (AD group), 23 were diagnosed with frontotemporal dementia (FTD group), and 29 were CN. The cognitive and neuropsychological state was evaluated by the international Mini-Mental State Examination (MMSE) [15]. The MMSE score ranges from 0 to 30, with a lower MMSE indicating more severe cognitive decline.
Participants
All the recordings were acquired from routine EEG of patients of the aforementioned groups. The duration of the disease was measured in months and the median value was 25 with the IQR range (Q1-Q3) being 24-28.5 months. Concerning the AD group, no dementia-related comorbidities have been reported. The initial diagnosis for the AD and FTD patients was performed according to the criteria provided by the Diagnostic and Statistical Manual of Mental Disorders, 3rd ed., revised (DSM-IIIR, DSM IV, ICD-10) [16] and the National Institute of Neurological, Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) [17]. The average MMSE for the AD group was 17.75 (SD = 4.5), for the FTD group it was 22.17 (SD = 8.22), and for the CN group it was 30. The mean age of the AD group was 66.4 years (SD = 7.9), for the FTD group it was 63.6 (SD = 8.2), and for the CN group it was 67.9 (SD = 5.4). Table 1 presents a detailed description of each participant. Participants have been anonymized and personal information has not been disclosed, following GDPR restrictions.
Dataset Structure
This dataset was preprocessed and formed in its current structure in the Human Computer Interaction Laboratory of the Department of Informatics and Telecommunications, University of Ioannina, Greece. It is structured in the BIDS format. The BIDS format specifies the file organization structure and naming convention for all neuroimaging data, including structural and functional MRI and EEG. It also defines metadata that describe the data, in JSON format, such as subject and session identifiers, acquisition parameters, and task information. Making the dataset BIDS compatible ensures the ease of use for other researchers because open-source software (such as EEGLAB [18]) provide tools for analyzing and processing neuroimaging data of BIDS-compliant datasets. Figure 1 provides a description of the dataset structure.
The dataset consists of the following: (1) The dataset_description.json file, which provides information regarding the authors of the dataset, the acknowledgment of the research project that made this work possible, the DOI, the BIDS version of the dataset, the license under which it is published, and the ethics approval statement. (2) The participants.json file, which contains definitions regarding the attributes of the participants, as shown in Table 1. This metadata file is used by software such as EEGLAB to automatically group and label the EEG recordings to the participants. (3) The participants.tsv file, which is a tab-separated file containing the information of Table 1. (4) A system of folders named sub-0XX. Each folder is associated with one participant-id of the participant table. Additionally, each folder contains three files: (A) A sub-0XX_task-eyesclosed_eeg.json file, which contains all the necessary EEG recording information, such as the placement scheme (10-20), the reference (A1 and A2), the model of the device and amplifier used, the channel count, the sampling frequency, the recording duration, and more. (B) A sub-0XX_task-eyesclosed_channels.tsv file, which provides information about electrode location. (C) A sub-0XX_task-eyesclosed_eeg.set file, which contains the EEG recordings of the participant in a .set format, which is one of the four BIDS-allowed EEG formats (those being the European Data Format .edf, the BrainVision Core Data Format .vhdr or .eeg, the EEGLAB format .set, and the Biosemi format .bdf). The following two facts should be noted. First, the .set files contain all the necessary recording information; thus, they can also be accessed in a non-BIDS setting. Second, the sub-0XX_task-eyesclosed_channels.tsv and sub-0XX_task-eyesclosed_eeg.json files are the same for each participant, since the same recording setting has been used (except for the recording duration information, which differs); thus, users do not need to examine all of them. (5) The folder derivatives, which contains subfolders with the same structure described before, with the difference that the EEG recordings are preprocessed.
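As an illustration of reuse, the hedged Python sketch below shows one way to load a single participant's .set recording with MNE-Python; the path and subject identifier are only examples and should be checked against the folder layout of the downloaded dataset.

```python
import mne

# Illustrative path following the naming scheme described above.
fname = "derivatives/sub-001/eeg/sub-001_task-eyesclosed_eeg.set"

raw = mne.io.read_raw_eeglab(fname, preload=True)   # EEGLAB .set reader
print(raw.info["sfreq"], raw.ch_names)               # 500 Hz and 19 scalp channels expected
```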
Recording
The recordings of this dataset were collected to investigate functional differences in the EEG activity of AD versus CN, FTD versus CN, and even AD versus FTD. These recordings took place in a clinical routine setting. Recordings were acquired from the 2nd Department of Neurology of AHEPA General Hospital of Thessaloniki by an experienced team of neurologists. A clinical EEG device (Nihon Kohden 2100), with 19 scalp electrodes (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, and O2) and 2 electrodes (A1 and A2) placed on the mastoids for an impedance check and as reference electrodes, was used for the recording of the EEG signals. The electrodes were placed according to the 10-20 international system. Each recording was performed according to the clinical protocol with participants being in a sitting position with their eyes closed. The recording montage was referential using Cz for common mode rejection. The sampling rate was 500 Hz and the resolution was 10 uV/mm. This study was approved by the Scientific and Ethics Committee of AHEPA University Hospital, Aristotle University of Thessaloniki, under protocol number 142/12-04-2023. The investigations were carried out following the rules of the Declaration of Helsinki of 1975 (http://www.wma.net/en/30publications/10policies/b3/, accessed on March 2019), revised in 2008.
Preprocessing
Only the derivatives folder, where the preprocessed data is kept, is covered by this section. The following is the EEG signals' preprocessing pipeline. The signals were re-referenced to the average value of A1-A2 after applying a Butterworth band-pass filter with a frequency range of 0.5 to 45 Hz. The signals were then subjected to the ASR routine, an automatic artifact rejection technique that can eliminate persistent or large-amplitude artifacts, which removed bad data periods that exceeded the maximum acceptable 0.5 s window standard deviation of 17 (which is regarded as a conservative window). The ICA method (RunICA algorithm) was then used to convert the 19 EEG signals to 19 ICA components [19]. ICA components categorized as "eye artifacts" or "jaw artifacts" by the EEGLAB platform's automatic classification method "ICLabel" were automatically excluded. It should be mentioned that, even though the recording was done in a resting state with the eyes closed, eye movement artifacts were still identified in certain EEG recordings. Figure 2 represents a snapshot of the same signal in raw form, and in preprocessed form. It can be observed that severe high frequency artifacts have been removed and baseline correction has been applied.
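For readers who prefer a Python workflow, the sketch below reproduces the filtering, re-referencing and ICA steps of the pipeline with MNE-Python; the ASR step and the automatic ICLabel rejection were performed in EEGLAB and are only indicated here as comments, the file name is illustrative, and the mastoid channels are assumed to be present in the recording.

```python
import mne

raw = mne.io.read_raw_eeglab("sub-001_task-eyesclosed_eeg.set", preload=True)  # illustrative name

# 0.5-45 Hz band-pass filter (the paper used a Butterworth filter; MNE defaults to FIR).
raw.filter(l_freq=0.5, h_freq=45.0)

# Re-reference to the average of the mastoid electrodes A1 and A2 (assumed present in the data).
raw.set_eeg_reference(ref_channels=["A1", "A2"])

# (ASR-based removal of bad data periods was done in EEGLAB and is omitted here.)

# ICA decomposition into as many components as scalp channels.
ica = mne.preprocessing.ICA(n_components=19, random_state=0)
ica.fit(raw)

# Components labelled as eye/jaw artifacts (via ICLabel in EEGLAB) would be excluded here.
ica.exclude = []                 # indices of artifact components, determined by inspection/ICLabel
clean = ica.apply(raw.copy())
```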
Classification Benchmark
In order to benchmark the classification performance of the EEG dataset on the classification of AD vs. CN and FTD vs. CN, a variety of relatively simple feature extraction and classification techniques that can be easily reproduced and extended by other researchers were applied. While more complex algorithms (such as deep learning) and feature extraction techniques may provide better performance, the goal was to establish a basic benchmark for the dataset that could be easily validated and reproduced.
Feature Extraction
One of the most commonly extracted features for EEG classification tasks is the Relative Band Power (RBP) of the five frequency bands of interest of the brain activity. The five frequency bands are the conventional delta, theta, alpha, beta, and gamma bands, which together span the 0.5-45 Hz range of interest. Moreover, according to the literature, AD patients exhibit changes in the RBP, such as reduced alpha power and increased theta power.
In this paper, the EEG signals were first epoched to 4 s time windows with 50% overlap to create the population of the dataset that would be used for classification. Each epoch was labeled as AD, FTD, or CN.
In order to obtain the RBP, the Power Spectral Density (PSD) of the time-windowed signal for each frequency band was obtained using the Welch method [20], which splits the signal into overlapping segments and calculates each segment's squared magnitude of the discrete Fourier transform. A final estimate of the PSD is then created by averaging the obtained values. Finally, the relative ratio of PSD of each band for each epoch was calculated, resulting in the feature matrix that consisted of 5 features for each row. To calculate the relative ratio of PSD of a band, the PSD of the band is calculated and then divided by the PSD of the whole frequency range of interest, namely 0.5-45 Hz.
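The following sketch shows, under the same definitions, how the relative band power of one 4 s epoch could be computed with SciPy's Welch estimator; the band limits written here are conventional values chosen for illustration and should be replaced by the exact limits used in the paper.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 500                          # sampling rate of the recordings (Hz)
epoch = np.random.randn(4 * fs)   # placeholder for one 4 s single-channel epoch

# Illustrative band limits; the full range of interest is 0.5-45 Hz.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 25), "gamma": (25, 45)}

f, psd = welch(epoch, fs=fs, nperseg=fs)   # Welch PSD estimate (1 s segments)

mask_total = (f >= 0.5) & (f <= 45)
total = trapezoid(psd[mask_total], f[mask_total])
rbp = {name: trapezoid(psd[(f >= lo) & (f <= hi)], f[(f >= lo) & (f <= hi)]) / total
       for name, (lo, hi) in bands.items()}
print(rbp)   # five relative band power features for this epoch
```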
To illustrate the differences between the PSD of each group for each frequency band, Figure 3 is provided, which consists of heatmaps describing the PSD across the scalp, averaged across the AD, FTD, and CN groups.
Classification
The most used machine learning algorithms were used for the classification of AD-CN and FTD-CN to benchmark this dataset. The Leave-One-Subject-Out (LOSO) validation method has been used for the performance evaluation of the algorithms [2]. In this validation methodology, all the epochs of one subject are left out as the test set while all the other epochs comprise the training set. This is repeated iteratively for every subject, and then the averaged performance metrics are calculated from the confusion matrix and presented. The performance metrics that were calculated were accuracy (ACC), sensitivity (SENS), specificity (SPEC), and the F1 score (F1). The machine learning algorithms used were LightGBM (hyperparameter optimized by Hyperopt [21]), Multilayer Perceptron (MLP) (1 hidden layer of 3 neurons), Random Forests, Support Vector Machine (SVM) (polynomial kernel), and kNN (k = 3). The results for the AD-CN classification are presented in Table 2 and the results for the FTD-CN classification are presented in Table 3.
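A minimal, hedged sketch of the LOSO protocol with scikit-learn is shown below; it uses a Random Forest as a stand-in for the full set of benchmarked classifiers, and the toy arrays X (relative band power features), y (labels) and groups (subject id per epoch) are placeholders rather than the actual dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

# Toy stand-ins: 10 subjects x 20 epochs, 5 RBP features each; labels alternate CN (0) / AD (1).
rng = np.random.default_rng(0)
n_subjects, epochs_per_subject, n_features = 10, 20, 5
X = rng.random((n_subjects * epochs_per_subject, n_features))
y = np.repeat(np.array([0, 1] * (n_subjects // 2)), epochs_per_subject)
groups = np.repeat(np.arange(n_subjects), epochs_per_subject)

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = RandomForestClassifier(random_state=0)   # stand-in for the benchmarked classifiers
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(np.mean(accuracies))   # averaged LOSO accuracy
```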
User Notes
We encourage researchers to use the preprocessed data found in the derivatives folder. Moreover, when publishing a work based on this dataset, please check the "How to | 4,523.4 | 2023-05-27T00:00:00.000 | [
"Computer Science"
] |
The Hardest Paradox for Closure
According to the principle of Conjunction Closure, if one has justification for believing each of a set of propositions, one has justification for believing their conjunction. The lottery and preface paradoxes can both be seen as posing challenges for Closure, but leave open familiar strategies for preserving the principle. While this is all relatively well-trodden ground, a new Closure-challenging paradox has recently emerged, in two somewhat different forms, due to Backes (Synthese 196(9):3773–3787, 2019a) and Praolini (Australas J Philos 97(4):715–726, 2019). This paradox synthesises elements of the lottery and the preface and is designed to close off the familiar Closure-preserving strategies. By appealing to a normic theory of justification, I will defend Closure in the face of this new paradox. Along the way I will draw more general conclusions about justification, normalcy and defeat, which bear upon what Backes (Philos Stud 176(11):2877–2895, 2019b) has dubbed the ‘easy defeat’ problem for the normic theory.
These kinds of points have I think been somewhat neglected in discussions of Closure 1 -but ~P100. Perhaps the fact that I'm fallible and that carefully checked claims occasionally turn out to be false gives me justification for believing that it is very likely that the book contains falsehoods, but does not give me justification for believing that the book does contain falsehoods. Even if we pursue this route, however, the preface paradox has a sting in the tail; if Closure is correct then not only will I lack justification for believing that P1 ∧ P2 ∧ … ∧ P100 is false, I will have justification for believing that P1 ∧ P2 ∧ … ∧ P100 is true -justification for believing that the book is completely falsehood-free. This is a serious cost, as many have rightly emphasised (Christensen, 2004, chap. 3). Whether accepting this is more costly than abandoning Closure is I think less clear -but a topic for another occasion.
These two familiar paradoxes are, in a way, opposite sides of the same coin. In each case we have a logically inconsistent set of 101 propositions: {P1, P2, …, P100, (~P1 ∨ ~P2 ∨ … ∨ ~P100)}. In the lottery paradox, it is stipulated that I have justification for believing ~P1 ∨ ~P2 ∨ … ∨ ~P100 (justification for believing that some ticket has won) and it is assumed that I must have justification for believing each of P1, P2, …, P100 (justification for believing, of each ticket, that it has lost). In the preface paradox, it is stipulated that I have justification for believing each of P1, P2, …, P100 (justification for believing each claim in the book) and it is assumed that I must have justification for believing ~P1 ∨ ~P2 ∨ … ∨ ~P100 (justification for believing that some of the claims in the book are false). These stipulations should be acceptable to all non-sceptics, but the assumptions provide an avenue of response for the defenders of Closure.
Put differently: In the lottery paradox, ~(P1 ∧ P2 ∧ … ∧ P100) is meant to be certain, but the evidence that I have for believing each of P1, P2, …, P100 -the evidence that there is one winning ticket and 99 losers -leaves each of these propositions with a distinctive kind of uncertainty. By denying that belief can be justified on the basis of such 'purely statistical' evidence one can avoid the paradox and preserve Closure (see, for instance, Ryan, 1996, Nelkin, 2000, Smith, 2010, 2016, section 3.1, Smithies, 2012). In the preface paradox, it is effectively left open what kind of evidence I have for believing each of P1, P2, …, P100, allowing us to fill in the details as we see fit, but the evidence I have for believing (~P1 ∨ ~P2 ∨ … ∨ ~P100) is fixed -it consists in the fact that I am fallible and that comparably ambitious books have always turned out to contain falsehoods. By denying that belief can be justified on the basis of this 'pessimistic inductive' evidence, one can avoid the paradox and preserve Closure (see, for instance, Ryan, 1991, Kaplan, 2013, Kim, 2015, Smith, 2016.1).
In each paradox, a substantial assumption about epistemic justification is needed -it isn't possible to disprove Closure using stipulations alone. Or is it? In recent work, a new Closure-challenging paradox has been set out, in somewhat different forms, by Marvin Backes (2019a) and Francesco Praolini (2019). This paradox, which combines elements of the lottery and the preface, relies on no obvious assumptions about epistemic justification and appears to resist the familiar Closure-preserving strategies.
II THE HYBRID PARADOX
Following Praolini, suppose again that I have secured justification for believing each of 100 logically independent factual claims P1, P2, …, P100 which I compile in a book. Suppose I then send my book manuscript to the Perfectly Omniscient Press for consideration for publication, and am informed by their perfectly omniscient and truthful referee that there is one false claim in the book, providing me with justification for believing ~P1 ∨ ~P2 ∨ … ∨ ~P100. If Closure holds, I must have justification for believing an outright contradiction -P1 ∧ P2 ∧ … ∧ P100 ∧ (~P1 ∨ ~P2 ∨ … ∨ ~P100). As in the preface paradox, it is simply stipulated that I have justification for believing each of P1, P2 … P100 -my evidence in favour of these propositions is left unspecified and can be filled in as we wish. As in the lottery paradox, it is simply stipulated that I have justification for believing ~P1 ∨ ~P2 ∨ … ∨ ~P100 -this proposition is meant to be certain. As a result, this 'hybrid' paradox does not appear to rely on any substantial assumptions 3 .
In fact, ~P1 ∨ ~P2 ∨ … ∨ ~P100 doesn't quite exhaust the content of the referee's report, as Praolini describes it. In Praolini's example, the referee reveals not just that there is at least one falsehood in the book, but that there is exactly one falsehood in the book -which entails, but is not entailed by, ~P1 ∨ ~P2 ∨ … ∨ ~P100. The extra content of the report could be captured by making this an exclusive rather than an inclusive disjunction 4 . The extra content, however, is inessential to the paradox, and makes no difference for what follows -with one possible exception which I will note. As Praolini points out, when I receive the report from the perfectly omniscient referee, one might think that this serves to defeat my justification for believing the claims in the book. In this case, when I acquire justification for believing ~P1 ∨ ~P2 ∨ … ∨ ~P100, I would lose justification for believing P1, P2, …, P100 and there would be no single time at which I have justification for believing all 101 propositions. While this is one possible way of blocking the paradox, according to Praolini it is implausible to think that the referee's report would defeat my justification for everything that I've written in the book. Rather than imagining a book with a mere 100 claims, Praolini considers a book which contains 'all and only logically independent propositions that you are justified to believe' (Praolini, 2019, p720). For any set of propositions, there may be many subsets with mutually logically independent members, and none of these need be maximal -but suppose we imagine a book detailing some suitably large set of logically independent propositions that I believe. Now the stakes are even higher. Defeating all of the claims in the book would be tantamount to defeating my justified beliefs en masse -and surely the referee's report would not have this effect.
Backes (2019a) describes a case that is structurally similar: Suppose I am slipped a pill that ensures that some small proportion of the justified beliefs that I form, during a certain period, will be false. When the period has elapsed, I learn of the pill and its effects, putting me in much the same situation as in Praolini's example. Backes is also aware that Closure could be saved if the information about the pill is taken to defeat all of the justified beliefs that I formed during this period -but he too regards it as implausible that this information could have such a devastating defeating effect 5 . For Backes, the lesson of this paradox is that Closure fails. For Praolini, the lesson of the paradox is that either Closure fails or justification is factive -one can only have justification for believing true propositions (see for instance Sutton, 2007, Littlejohn, 2012). As Praolini points out, if justification were factive, then the very set-up of the paradox would be impossible. If I had justification for believing each of P1, P2, …, P100, and justification were factive, then the omniscient, truthful referee couldn't inform me of ~P1 ∨ ~P2 ∨ … ∨ ~P100 -for this would be false.
3 There are some anticipations of this paradox in earlier literature. For instance, the paradox bears a relation to Ryan's 'third version' of the preface paradox (Ryan, 1991, p304) in which one receives a defeasible report to the effect that one's book contains a falsehood. She concludes that, if one has genuinely secured justification for believing each claim in the book, then one would not be justified in believing the report and should simply dismiss it. This closure-preserving strategy is not available in Praolini's case. In the 'homogeneous preface paradox' described by Easwaran and Fitelson (2015, section 3) one writes a well-researched ambitious factual book arguing precisely that all well-researched ambitious factual books contain at least one error. In this case, Easwaran and Fitelson suggest that one has something much stronger than the usual 'pessimistic inductive' evidence to the effect that one's book contains an error -which, if correct, would give the paradox something like the same character as Praolini's. In fact, I think it's far from clear that this suggestion is correct -something I will return to in n9.
In the next section, I will set out a rather different response to the paradox -one that revives the idea that the referee's report may function as a kind of defeater. The first thing to observe is that, in order to block the paradox, it is not necessary that the report defeat all of the claims in the book -it is enough that it defeat some. Since the report is so general, and fails to single out any individual claims, the idea that it would serve to defeat some claims and not others may look like a non-starter. I will argue that this idea is more promising than it first appears.
III THE PRINCIPLE OF DIFFERENTIAL DEFEAT
Suppose I've been invited to a drinks reception, and I know that a very eminent, world-leading primatologist will be in attendance. In preparation for the event, I arm myself with three 'primate facts' that I can casually drop into the conversation in case the primatologist and I are introduced. First, I read in the current edition of Encyclopaedia Britannica that bonobos are capable of passing the mirror self-recognition test (P1). Second, I read in a newspaper article that Madagascar was once home to lemurs that were larger than humans (P2). Finally, a few days before the reception, I hear in conversation that the barbary macaque is the only species of old world monkey that lacks a tail (P3). I come to believe each of P1, P2 and P3 and, plausibly, I am justified in doing so. At the reception I am introduced to the primatologist and, over-eager to impress, I blurt out all three 'facts' in quick succession. The primatologist furrows her brow and says 'I'm afraid something that you just said there is wrong'. Before she can elaborate further, however, she is quickly whisked away to meet another guest.
How, at this point, should I revise my beliefs? If I accept what the primatologist says, it's clear that I can't just continue to believe each of P1, P2 and P3 -that would be irrational. One thing I could do is to give up all of these beliefs, and suspend judgment on P1, P2 and P3. This may be a permissible response to my new evidence -but, in a way, it seems like an overreaction. After all, my evidence for these three propositions is not equal -it's natural to think that my justification for believing P1 is stronger than my justification for believing P2 which, in turn, is stronger than my justification for believing P3. While the current edition of Encyclopaedia Britannica is a highly reliable source for information about primates, a newspaper is somewhat less reliable and a snippet of conversation is less reliable still. Another permissible response to the new evidence, I suggest, is to retain my beliefs in P1 and P2 and to give up my belief in P3. If this suggestion is right, then it is only my justification for P3 that is defeated by the primatologist's remark -my justification for P1 and P2 survives.
5 Dutant and Littlejohn (2020, section 2) discuss two further examples which have the same structure as those described by Praolini and Backes. First, imagine a person undergoing an eye test who is asked to identify various letters and numbers on a series of slides. While she is able to answer all of the questions easily and forms a series of justified perceptual beliefs over the course of the test, she is subsequently told by the optometrist that she made one error. Second, imagine a judge who has convicted many defendants over her career. Suppose that, in each such case, there was strong incriminating evidence and she justifiably believed the defendant to be guilty. Suppose she is then reliably informed that one of the people she has convicted is innocent. Dutant and Littlejohn also consider, and dismiss, the possibility that the new evidence from the optometrist or the informant could defeat all of the beliefs that fall within its scope -all of the eye-test beliefs or guilt beliefs. Though I will focus on Praolini's example in the main text, I will have a bit more to say about these examples in n7 -and I hope to discuss them at length elsewhere.
Turning back to Praolini's example, if we imagine a book containing most of the propositions that I justifiably believe, this will include claims such as 'Two and two is four' and 'I am not a turnip' through to claims such as 'Edinburgh Waverley has the highest annual footfall of any train station in Scotland' and 'Oswald acted alone'. When I discover that the book contains a falsehood, it's highly implausible that all of these claims suddenly become equally doubtful. Rather, the claims that are made most doubtful by this discovery are the ones that were the most doubtful to begin with.
Suppose one has justification for believing each proposition in a set {P1, P2, …, Pn}. According to what I will call the Principle of Uniform Defeat, if one learns ~P1 ∨ ~P2 ∨ … ∨ ~Pn then this will defeat one's justification for every proposition in {P1, P2, …, Pn}. The Principle of Uniform Defeat does have an initial appeal and may be a consequence of principles that some philosophers have explicitly endorsed -such as Ryan's 'avoid falsity principle' (Ryan, 1996) 6 . I propose instead a Principle of Differential Defeat: If one learns ~P1 ∨ ~P2 ∨ … ∨ ~Pn then this will defeat one's justification for all and only those propositions in {P1, P2, …, Pn} that were the least justified, prior to the discovery. According to the Principle of Differential Defeat, when I learn ~P1 ∨ ~P2 ∨ … ∨ ~Pn this serves to defeat my justification for believing all and only those claims in the bottom tier. In this case, when I receive the referee's report, I lose justification for believing P64, P39, P7 and P90 and the paradox is blocked.
The Principle of Differential Defeat offers a way of resolving the hybrid paradox without abandoning Closure or embracing the claim that justification is factive. Praolini is aware of this potential solution to the paradox and offers four replies (see Praolini, 2019, pp722-723), which I will consider in turn. First, Praolini suggests that, far from defeating some of the claims in the book, the referee's report should actually increase my justification for each of the claims. Recall that, in Praolini's example, the referee informs me that there is exactly one falsehood in the book and no more -and, according to Praolini, this should come as good news. To motivate this, he reiterates some reasoning that is familiar from the preface paradox; given that I'm fallible, and there are so many claims in the book, we should expect it to contain numerous falsehoods. As a result, to discover that there is only one false claim in the book is to discover that things have turned out much better than expected, and the discovery should boost my justification for each individual claim. As discussed above, this reasoning does have a certain appeal -but it is reasoning that a defender of Closure is already committed to rejecting. If Closure holds then, prior to receiving the referee's report, I have justification for believing that there are no false claims in the book. As a result, there is at least one sense in which I should not 'expect' the book to contain numerous falsehoods, and in which the report does not represent better news than expected. Praolini's first reply should, then, leave a defender of Closure unmoved -it effectively takes it for granted that Closure fails.
Second, Praolini suggests that, since the referee report fails to specify any particular claims, it is counterintuitive that it would defeat some claims and not others. That is, it is counterintuitive that the referee report should defeat only P64, P39, P7 and P90 -even if these do happen to be the least justified in the book. As noted above, there is something attractive about this suggestion, but it fails to stand up to scrutiny. Even if a defeater weighs equally against each of a set of justified beliefs, the beliefs themselves will typically vary with respect to how vulnerable they are to defeat -as the above examples illustrate. Broadly speaking, the stronger one's justification for a proposition, the more resistant it is to defeat. The reason that P64, P39, P7 and P90 buckle under the strain of the report is not that it weighs extra heavily against them (it doesn't) -it is because these are the claims that are least able to bear the weight.
Third, according to Praolini, it is ad hoc to maintain that the referee's report only defeats the least justified claims in the book, when a conjunction of other claims in the book may have an even lower level of justification. That is, it would be ad hoc to maintain that the referee's report defeats the least justified claims P64, P39, P7, P90 if a conjunction of further claims -say, P19 ∧ P71 ∧ P4 ∧ P96 -was less justified still. I am inclined to think that Praolini's third reply, like the first, effectively begs the question against those who would defend Closure.
Consider the following principle which we might call Comparative Closure: One's justification for believing P1 ∧ P2 ∧ … ∧ Pn is no weaker than one's justifications for believing each of P1, P2, …, Pn.
Comparative Closure is a stronger principle than Closure, but the most common motivations for accepting the latter would seem to carry over to the former. This would certainly seem to be so for the motivation that I sketched at the outset; if one automatically counts as believing P1 ∧ P2 ∧ … ∧ Pn whenever one believes P1, believes P2, … and believes Pn then this gives us reason to deny, quite generally, that one's epistemic standing with respect to P1 ∧ P2 ∧ … ∧ Pn could be worse than one's epistemic standing with respect to each of P1, P2, …, Pn. In any case, defenders of Closure are under strong pressure to accept Comparative Closure, which effectively rules out the kind of possibility that Praolini envisages.
To pursue this line of thought a little further, it is also very plausible that my justification for P1 ∧ P2 ∧ … ∧ Pn cannot be any stronger than my justification for the least justified member of {P1, P2, …, Pn}. If we combine this with Comparative Closure, we can derive the following: The degree of one's justification for believing P1 ∧ P2 ∧ … ∧ Pn is equal to the degree of one's justification for believing the least justified member of {P1, P2, …, Pn}.
We might call this the Minimum Conjunct Rule. Using this rule, conjunctions could be added to the above ranking of claims. I will return to this in the next section.
Finally, Praolini points out that, if the referee report defeats the least justified claims in the book, and the claims all happened to be equally justified, then they would all be defeated. This result is, indeed, unavoidable -if the propositions in a set are equally justified, then the Principle of Differential Defeat and the Principle of Uniform Defeat will make exactly the same predictions. Such a case would, however, be very different from what Praolini initially asks us to imagine. Suppose I'm looking at a row of cereal boxes on a supermarket shelf. Presumably I'm justified in believing, of each box, that it contains cereal. Suppose I then learn that one of the boxes is empty and has been placed on the shelf by mistake. This appears to be a case in which I learn that one amongst a set of equally justified propositions is false. After all, I have no more reason to think that any one box contains cereal than any other -I can't tell this just by looking. In this case, though, it is plausible that the new evidence would serve to defeat my justification for believing each proposition in the set. That is, it's plausible that I would no longer be justified in believing, of any one box, that it contains cereal -not without picking it up or looking inside. Even if we find the idea of en masse defeat implausible when it comes to Praolini's original example, we should not assume that this intuition will persist when the example has been adjusted in such a way as to ensure that all of the beliefs in question are equally justified 7 .
In any event, defenders of Closure may have a particular reason for tolerating en masse defeat in cases of this kind. If I have equal justification for each of a series of claims and I learn that exactly one of them is false, then the situation that I confront is very similar to that presented by the lottery paradox. Both situations involve a large set of propositions, each of which is very likely to be true but one of which is sure to be false. In both situations, the propositions are on a par, in that any one could be the false proposition just as easily as any other. Indeed, if we wrote each of the claims down on slips of paper, then there would be one 'winning ticket', which featured the one false claim, and a multitude of 'losing tickets', which featured the true claims. To believe any particular claim would be tantamount to believing, of one particular ticket, that it's a loser. For a defender of Closure, accustomed to denying that one can justifiably believe, of a single ticket, that it has lost a fair lottery, embracing en masse defeat in this situation is, I think, a small step. I will have a bit more to say about these sorts of cases in the final section 8 .
In this section I have provided a prima facie motivation for the Principle of Differential Defeat, and argued that this principle will allow us to preserve Closure in the face of the hybrid paradox. It remains to be shown that there is a viable theory of justification that will vindicate both Closure and the Principle of Differential Defeat. I turn to this next.
IV THE NORMIC THEORY
In both the lottery and the preface paradoxes, we are invited to infer that I have justification for believing a proposition from the premise that it is highly likely, given my evidence. In the lottery paradox the proposition in question is that ticket #72 has lost, while in the preface paradox the proposition in question is that the book contains falsehoods. The inference is a very tempting one. After all, most epistemologists are fallibilists who agree that one can have justification for believing a proposition even if one's evidence doesn't make it completely certain. But what else can we require, then, except that one's evidence make the proposition likely? What else could the evidence do?
On reflection, I think that there is something else that the evidence might do. Sometimes our evidence in favour of a proposition P is such as to make the falsity of P abnormal in the sense of requiring special explanation. Suppose I wander into a room I've never been in before and notice that the wall before me appears to be red. Clearly this evidence makes it very likely that the wall before me really is red -but this is not its only effect. If the wall appears to me to be red, but it isn't red, then there would have to be some explanation as to how this came to be -I'm undergoing a colour hallucination, the wall is illuminated by hidden red lights, I've suddenly been struck by colour blindness etc. Whatever the case, there has to be more to the story -it can't 'just so happen' that the wall appears to me to be red, but isn't red.
In contrast, the fact that there are 99 losing tickets and only one winner doesn't generate the need for special explanation in the event that ticket #72 is the winner. If ticket #72 were to win, then I may be surprised and delighted (it is my ticket after all) -but I wouldn't seek some special explanation as to how this could possibly have happened. Some ticket has to win the lottery and it might just as well be ticket #72 as any other. Although it would be very unlikely, there is a sense in which there would be nothing abnormal about this ticket being the winner (Vogel, 1999).
More controversially, the fact that I'm fallible and that comparably ambitious books have always contained falsehoods in the past does not generate the need for a special explanation in the event that my book turns out to be falsehood-free. Once again, if this were the case then I may be surprised and delighted -but I wouldn't demand an explanation as to how this could possibly have happened. Recall that every claim in the book has been thoroughly researched and checked -and it could just turn out that my research has delivered the right result every time. Why shouldn't it? Although it would be very unlikely for every claim in the book to be true there is, once again, a sense in which there would be nothing abnormal about this turn of events.
Sometimes when we describe a situation as 'normal' or 'abnormal' we are simply making a claim about frequencies -a normal situation is one that frequently arises, while an abnormal situation is one that is infrequent. If this is our understanding of normalcy, then we should say that it would be abnormal for ticket #72 to win or for my book to be error free. This is not the only way that we use these terms however. If the lights in my house suddenly start to flicker, or my car fails to start when I turn my key in the ignition and I remark 'that's not normal', I'm not just pointing out that this is something rare or infrequent -part of what I'm saying is precisely that there needs to be some special explanation for what is occurring.
Say that evidence E normically supports a proposition P just in case, given E, the situation in which P is false would be abnormal in the sense of requiring special explanation (Smith, 2010, 2016, 2018). The evidence that the wall appears to be red normically supports the proposition that the wall is red. The evidence that there are 99 losing tickets and one winning ticket does not normically support the proposition that my ticket has lost. The fact that I'm fallible and that comparable books have always turned out to contain falsehoods in the past does not normically support the proposition that my book contains a falsehood. According to the normic theory of justification, one has justification for believing a proposition P just in case one's evidence normically supports P. In the lottery paradox, the normic theory predicts that I lack justification for believing, of any ticket, that it has lost the lottery - I lack justification for believing any of P1, P2, …, P100. In the preface paradox, the normic theory predicts that I lack justification for believing that the book contains a falsehood - I lack justification for believing ~P1 ∨ ~P2 ∨ … ∨ ~P100 9 . Not only does the normic theory offer a way of preserving Closure in the face of the lottery and preface paradoxes - it would appear to deliver a general validation of the principle: Suppose one has justification for believing P1, justification for believing P2, …, justification for believing Pn. According to the normic theory, given one's evidence E, there would have to be a special explanation if P1 were false and there would have to be a special explanation if P2 were false … and there would have to be a special explanation if Pn were false. What about P1 ∧ P2 ∧ … ∧ Pn? If P1 ∧ P2 ∧ … ∧ Pn were false, then at least one of P1, P2, …, Pn would have to be false. Therefore, given one's evidence E, there would have to be a special explanation if P1 ∧ P2 ∧ … ∧ Pn were false and, according to the normic theory, one has justification for believing P1 ∧ P2 ∧ … ∧ Pn (Smith, 2018, p. 3870). The claim that the normic theory validates Closure will also permit of a more rigorous proof - given a certain formal development of the notion of normic support - which I will outline in the next section. 9 In the 'homogeneous preface paradox' discussed in n3, we are asked to imagine an author who writes a well-researched ambitious factual book in which he argues precisely that all well-researched ambitious factual books contain falsehoods (Easwaran and Fitelson, 2015, section 3). Though we are not told precisely how the author proceeds, it's natural to imagine this book as listing a series of well-researched ambitious factual books, along with the falsehoods that they have been found to contain. If that's right, then it's clear that this evidence does not normically support the conclusion that all well-researched, ambitious factual books contain at least one falsehood or that the author's own book contains at least one falsehood - it doesn't generate the need for a special explanation in the event that the author's book is falsehood-free. From the perspective of the normic theory, the homogeneous preface paradox presents no greater challenge than the original.
The normic theory can be easily extended to justification comparisons. Say that E normically supports proposition P more strongly than proposition Q just in case, given E, the situation in which P is false is less normal, in the sense of requiring more explanation, than the situation in which Q is false. According to the normic theory, one has more justification for believing P than Q just in case one's evidence normically supports P more strongly than Q. Given this, it is plausible that the normic theory will serve to validate Comparative Closure and the Minimum Conjunct Rule: To explain the falsity of P1 ∧ P2 ∧ … ∧ Pn one must explain either the falsity of P1 or the falsity of P2, … or the falsity of Pn. Given one's evidence E, the amount of explanation required by the falsity of P1 ∧ P2 ∧ … ∧ Pn is equal to the amount of explanation required by the falsity of P1 or the falsity of P2, … or the falsity of Pn, whichever is least. According to the normic theory, the degree of justification I have for believing P1 ∧ P2 ∧ … ∧ Pn is equal to the degree of justification I have for believing the least justified of P1, P2, …, Pn. This somewhat casual demonstration can, once again, be replaced by a more formal proof, which I will detail in the next section.
Normic support is defeasible. Just because a given body of evidence provides normic support for a proposition, it doesn't automatically follow that an expanded body of evidence will do so. Suppose again that I wander into a room and notice that the wall before me appears to be red. Given that the wall appears to be red, there would have to be a special explanation in the event that the wall is not red. Suppose I then discover that the wall is illuminated by hidden red light such that it would appear to be red even if it were white. Given that the wall appears to be red and is illuminated by hidden red light, there would not need to be a special explanation in the event that the wall is not red - the new evidence, in effect, removes the need for explanation in this case. If E normically supports P, we can say that D defeats the normic support for P just in case E ∧ D does not normically support P.
We can now pose the following question: Suppose a body of evidence E provides normic support for each proposition in a set {P1, P2, …, Pn}. What does E ∧ (~P1 ∨ ~P2 ∨ … ∨ ~Pn) normically support? That is, what is the defeating effect of learning ~P1 ∨ ~P2 ∨ … ∨ ~Pn? If the normic theory is to deliver the Principle of Differential Defeat, then learning ~P1 ∨ ~P2 ∨ … ∨ ~Pn must defeat the normic support for all and only those propositions in {P1, P2, …, Pn} that were the least normically supported by E. It is far from obvious, however, that this is so. On the contrary, one might think that learning ~P1 ∨ ~P2 ∨ … ∨ ~Pn should serve to defeat the normic support for all of the propositions in {P1, P2, …, Pn} (giving us instead the Principle of Uniform Defeat) as it would remove the need to explain the falsity of any one of P1, P2, …, Pn. In the next section, I will argue that this hasty reasoning is mistaken. In order to do so, we will need to start thinking about normic support in a more formal way.
V NORMAL WORLDS
Suppose propositions can be ordered according to their normalcy - according to how much explanation their truth would require. Normic support can be analysed in terms of comparative normalcy relations amongst propositions: E normically supports P just in case E ∧ P is more normal than E ∧ ~P, and E normically supports P more strongly than Q just in case E ∧ ~Q is more normal than E ∧ ~P. Given these definitions, the formal features of normic support will be determined by the formal features of the normalcy ordering of propositions.
Consider a set of propositions F, which is partially ordered by entailment, closed under disjunction and negation and which contains a 'maximal' proposition which is entailed by all propositions in the set and a 'minimal' proposition which entails all propositions in the set. The maximal proposition can be thought of as a tautology or logical truth and the minimal proposition as a contradiction or logical falsehood. If the set of propositions F is infinite, we suppose also that it is closed under infinite disjunction. I assume that all propositions in F can be compared for their normalcy -for any two propositions, either one is more normal than the other or they are equally normal. That is, I assume that, for any two propositions in F, either the truth of one requires more explanation than the truth of the other, or their truth requires the same amount of explanation.
The maximal proposition should count as maximally normal and the minimal proposition as maximally abnormal. The truth of a tautology never requires explanation, and nothing could require more explanation than the truth of a contradiction. A disjunction will be as normal as its most normal disjunct. The only way in which P ∨ Q can be true is if either P is true or Q is true. To explain the truth of P ∨ Q is to explain either the truth of P or the truth of Q, and the amount of explanation demanded by P ∨ Q will be equal to the amount demanded by P or by Q, whichever is less.
If we are considering an infinite F, I also assume that there are no infinite ascending chains of increasingly normal propositions. In this case, any set of propositions will be guaranteed to have maximally normal members and we can extend the above principle to infinite disjunctions -the disjunction of any (potentially infinite) set of propositions is as normal as its most normal members. With this assumption in place it will be possible to rank propositions according to how normal they are: The rank 1 propositions will be the most normal ones in F. Once these are removed, the rank 2 propositions will be the most normal ones amongst those that remain, and so on. The least normal propositions in F -those that are just as abnormal as the minimal proposition -might be assigned an infinite rank 10 .
Given a few further assumptions, a proposition P can be modelled as a set of possible worlds -namely, the set of possible worlds at which P is true. In this case, F will be modelled by the subsets of a set of possible worlds W, with W itself serving as the maximal proposition and the empty set serving as the minimal proposition 11 . Disjunction will be modelled as set theoretic union and negation will be modelled as complementation in W. Given a normalcy ranking of propositions, we can derive a normalcy ranking of worlds: Let the normalcy rank of a world w be equal to the normalcy rank of the least normal proposition of which it is a member. That is, let the normalcy rank of w be equal to the normalcy rank of the least normal proposition that is true at w. The rank 1 worlds will be those that are members of only rank 1 propositions. The rank 2 worlds will be those that are members of only rank 1 and rank 2 propositions, and so on. Given this definition, as can be checked, the rank of a proposition must be equal to the rank of the most normal worlds within it -the rank of the most normal worlds at which it is true 12 .
E normically supports P just in case E ∧ P is more normal than E ∧ ~P. This is just to say that the most normal E ∧ P-worlds are more normal than the most normal E ∧ ~P-worlds or, more simply, that P is true at the most normal E-worlds. Normic support can, then, be analysed in terms of variably strict quantification over possible worlds: E normically supports P just in case P is true at all of the most normal worlds at which E is true. Imagine the worlds at which E is true as points arrayed in space, with proximity to a central point serving as a metaphor for normalcy. We can visualise worlds arranged in a series of concentric spheres radiating from that central point, as in figure 1 below: the innermost sphere represents the E-worlds that are most normal, the next sphere incorporates the E-worlds that rank next in terms of normalcy, and so on. The diagram depicts a situation in which E provides normic support for P2 but fails to provide normic support for P1. This analysis of normic support offers a new perspective on Closure and the normic theory. If E normically supports each proposition in the set {P1, P2, …, Pn} then E normically supports P1 ∧ P2 ∧ … ∧ Pn. Proof. Suppose E normically supports each proposition in the set {P1, P2, …, Pn}. In this case, P1 is true in all of the most normal E-worlds and P2 is true in all of the most normal E-worlds … and Pn is true in all of the most normal E-worlds. It follows immediately that P1 ∧ P2 ∧ … ∧ Pn is true in all of the most normal E-worlds, in which case E normically supports P1 ∧ P2 ∧ … ∧ Pn. QED. The normic theory of justification validates Closure. 12 Once propositions are modelled as sets of possible worlds, the assumption that there cannot be infinite ascending chains of increasingly normal propositions is equivalent to the assumption that any set of worlds must have maximally normal members. David Lewis (1973, section 1.4) famously considers and rejects a corresponding assumption for world similarity - which he terms the 'limit assumption' - though his reasons don't straightforwardly carry over to the case of world normalcy. Without this assumption, neither propositions nor worlds could be assigned numerical normalcy ranks - but we would retain the capacity to make normalcy comparisons. This looser framework would in fact still suffice for the core aims of this section - namely, to establish that the normic theory of justification will deliver both Closure and the Principle of Differential Defeat - but the details are omitted here. For further discussion of these issues, see Smith (2016, chap. 8).
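This possible-worlds picture lends itself to a direct computational check. The following is a minimal sketch in Python of a toy model, with worlds and normalcy ranks invented purely for illustration (none of it is drawn from the book example above): propositions are sets of worlds, and E normically supports P just in case P holds at all of the most normal E-worlds. On any such model the Closure property falls out exactly as in the proof just given.

```python
# Toy model of normic support: worlds with normalcy ranks, and propositions
# modelled as sets of worlds. All names and ranks here are invented.

ranks = {"w0": 1, "w1": 1, "w2": 2, "w3": 3, "w4": 4, "w5": 1}
W = set(ranks)                     # the maximal proposition

def most_normal(prop):
    """The most normal worlds at which a proposition is true."""
    if not prop:
        return set()
    best = min(ranks[w] for w in prop)
    return {w for w in prop if ranks[w] == best}

def supports(E, P):
    """E normically supports P iff P holds at all of the most normal E-worlds."""
    return most_normal(E) <= P

E  = W - {"w5"}                    # the evidence rules out only w5
P1 = W - {"w2"}                    # false only at a rank-2 world
P2 = W - {"w3"}                    # false only at a rank-3 world
P3 = W - {"w4"}                    # false only at a rank-4 world

assert all(supports(E, P) for P in (P1, P2, P3))
# Closure: the conjunction (set intersection) is supported as well.
assert supports(E, P1 & P2 & P3)
```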
E normically supports P more strongly than Q just in case E ∧ ~Q is more normal than E ∧ ~P. This is just to say that the most normal E ∧ ~Q-worlds are more normal than the most normal E ∧ ~P-worlds. If figure 2 represents, once again, the normalcy ranking of E-worlds, it depicts a situation in which E provides normic support for each of P1, P2 and P3, but stronger normic support for P3 than P2 and stronger normic support for P2 than P1. The most normal E-worlds in which P1 is false are in the second sphere, the most normal E-worlds in which P2 is false are in the third sphere and the most normal E-worlds in which P3 is false are in the fourth sphere.
We might define the degree to which E normically supports a proposition P as the number of normalcy spheres of E-worlds throughout which P holds. Given this definition, E normically supports P more strongly than Q just in case it normically supports P to a higher degree than Q. In the above diagram, E normically supports P1 to degree 1, P2 to degree 2 and P3 to degree 3 13 .
The degree to which E normically supports a conjunction P1 ∧ P2 ∧ … ∧ Pn is no lower than the degree to which it supports each of P1, P2, …, Pn. Proof. Any world in which P1 ∧ P2 ∧ … ∧ Pn is false is a world at which either P1 is false or P2 is false … or Pn is false. Let w be one of the most normal worlds at which E is true and P1 ∧ P2 ∧ … ∧ Pn is false. There must be some Px ∈ {P1, P2, …, Pn} such that Px is false at w. It follows that the degree to which E normically supports P1 ∧ P2 ∧ … ∧ Pn is no lower than the degree to which it normically supports Px. QED. The normic theory of justification validates Comparative Closure. As can be easily checked, if Px ∈ {P1, P2, …, Pn} is false at some of the most normal worlds in which E is true and P1 ∧ P2 ∧ … ∧ Pn is false, then the degree to which E normically supports P1 ∧ P2 ∧ … ∧ Pn will in fact be equal to the degree to which E normically supports Px and, more generally, equal to the degree to which E supports the least supported members of {P1, P2, …, Pn}. As a result, the normic theory of justification validates the Minimum Conjunct Rule 14 .
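The sphere-counting definition of degrees can be illustrated on the same kind of toy model used in the sketch above (again, the worlds and ranks are invented and carry no weight beyond illustration). Counting the nested normalcy spheres of E-worlds throughout which a proposition holds makes it easy to check the Minimum Conjunct Rule directly.

```python
# Degrees of normic support in the toy model: the degree to which E supports
# P is the number of nested normalcy spheres of E-worlds throughout which P
# holds. Worlds and ranks are invented for illustration.

ranks = {"w0": 1, "w1": 1, "w2": 2, "w3": 3, "w4": 4, "w5": 1}
W = set(ranks)

def spheres(E):
    """Nested spheres of E-worlds, from the most normal outwards."""
    levels = sorted({ranks[w] for w in E})
    return [{w for w in E if ranks[w] <= r} for r in levels]

def degree(E, P):
    """Number of spheres of E-worlds throughout which P holds."""
    d = 0
    for s in spheres(E):
        if not s <= P:
            break
        d += 1
    return d

E  = W - {"w5"}
P1, P2, P3 = W - {"w2"}, W - {"w3"}, W - {"w4"}

assert [degree(E, P) for P in (P1, P2, P3)] == [1, 2, 3]
# Minimum Conjunct Rule: the conjunction is supported exactly as strongly
# as its least supported conjunct.
assert degree(E, P1 & P2 & P3) == min(degree(E, P) for P in (P1, P2, P3)) == 1
```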
If E normically supports P, D defeats this normic support just in case E ∧ D does not normically support P. In this case, while P is true in all of the most normal E-worlds, P is false in some of the most normal E ∧ D-worlds. D, in effect, forces us further from the most normal worlds in which E is true, and into a region in which the connection between E and P is disrupted. If the following diagram, once again, represents the normalcy ranking of E-worlds, it depicts a situation in which E provides normic support for P that is defeated by D. With evidence E, I have normic support for all and only those propositions that contain the red region - including P. Once I learn D, I then have normic support for all and only those propositions that contain the yellow region 15 .
13 (1) r(W) = ∞ (2) r(∅) = 0 (3) r(P ∧ Q) = min{r(P), r(Q)}. In case F is infinite, (3) may be strengthened to: (4) for any set of propositions Γ, r(⋀Γ) = min{r(P) | P ∈ Γ}. A positive ranking function that satisfies (4) is referred to as 'completely minimative'. The degrees of normic support conferred upon propositions by a body of evidence will conform to these axioms (Smith, 2016, chap. 8; 2018). If, as discussed in n10, we assign propositions to transfinite ordinal normalcy ranks, degrees of normic support should also be capable of taking such values. In this case, the above axioms can stand, with ∞ interpreted as a kind of 'absolute infinity' greater than any ordinal, giving us something close to Spohn's ordinal conditional functions (Spohn, 2012, pp. 72-73).
When I learn ~P1 ∨ ~P2 ∨ ~P3 this serves to defeat my normic support for the least normically supported of the three propositions - P1. When I learn ~P2 ∨ ~P3 this serves to defeat my normic support for the least normically supported of the two remaining propositions - P2. Suppose my evidence E provides normic support for each proposition in the set {P1, P2, …, Pn} and suppose I learn ~P1 ∨ ~P2 ∨ … ∨ ~Pn. We prove in two stages the general claim that this serves to defeat the normic support for all and only those propositions in {P1, P2, …, Pn} that are least normically supported by E. First, for any proposition Px ∈ {P1, P2, …, Pn}, if there is another proposition in {P1, P2, …, Pn} that is less normically supported by E, then ~P1 ∨ ~P2 ∨ … ∨ ~Pn does not defeat the normic support for Px. Proof. Suppose Px ∈ {P1, P2, …, Pn} and there exists another proposition Py ∈ {P1, P2, …, Pn} that is less normically supported than Px by evidence E. In this case, there is a world in which E is true and Py is false which is more normal than the most normal worlds in which E is true and Px is false. Since Py ∈ {P1, P2, …, Pn}, there is a world in which E and ~P1 ∨ ~P2 ∨ … ∨ ~Pn are true which is more normal than the most normal worlds in which E is true and Px is false. Ipso facto, Px is true in the most normal worlds in which E and ~P1 ∨ ~P2 ∨ … ∨ ~Pn are true and E ∧ (~P1 ∨ ~P2 ∨ … ∨ ~Pn) normically supports Px.
QED
We can also prove the converse claim: For any proposition Px ∈ {P1, P2, …, Pn}, if there is no other proposition in {P1, P2, …, Pn} that is less normically supported by E, then ~P1 ∨ ~P2 ∨ … ∨ ~Pn does defeat the normic support for Px. Proof. Suppose Px ∈ {P1, P2, …, Pn} and there exists no other proposition in {P1, P2, …, Pn} that is less normically supported than Px by evidence E. In this case there are no worlds at which E and ~P1 ∨ ~P2 ∨ … ∨ ~Pn are true and which are more normal than the most normal worlds at which E is true and Px is false. Ipso facto, Px is false at some of the most normal worlds at which E and ~P1 ∨ ~P2 ∨ … ∨ ~Pn are true and E ∧ (~P1 ∨ ~P2 ∨ … ∨ ~Pn) does not normically support Px. QED. If one's evidence E provides normic support for each proposition in the set {P1, P2, …, Pn} then ~P1 ∨ ~P2 ∨ … ∨ ~Pn defeats the normic support for all and only those propositions in {P1, P2, …, Pn} which were the least normically supported by E. According to the normic theory of justification, if one has justification for each proposition in the set {P1, P2, …, Pn} and one learns ~P1 ∨ ~P2 ∨ … ∨ ~Pn, this new evidence serves to defeat all and only those propositions in {P1, P2, …, Pn} that were the least justified by E. The normic theory of justification delivers the Principle of Differential Defeat. In the hybrid paradox, the normic theory predicts that, when I learn that the book contains an error - when I learn ~P1 ∨ ~P2 ∨ … ∨ ~P100 - this will serve to defeat all and only those claims in the book that were the least justified - the least justified members of {P1, P2, …, P100} 16 . 16 What if one receives only a defeasible report to the effect that ~P1 ∨ ~P2 ∨ … ∨ ~P100, as in Ryan's third version of the preface paradox discussed in n3? Such cases are more complex, and I provide only a preliminary treatment here: Suppose one has evidence E that normically supports a proposition P and acquires evidence F that normically supports ~P. Given certain conditions, this new evidence will defeat one's normic support for P just in case the degree to which F normically supports ~P is greater than or equal to the degree to which E normically supports P. The conditions in question are that E and F be equally normal propositions, that F hold in some of the most normal worlds in which E and ~P hold, and that E hold in some of the most normal worlds in which F and P hold. What this ensures, in a way, is that there is no extraneous normic interaction between E and F beyond that which is mandated by their respective levels of normic support for the conflicting propositions P and ~P. Ryan's third version of the preface paradox arguably fits this pattern, with one's prior evidence normically supporting P1 ∧ P2 ∧ … ∧ P100, and the report normically supporting ~P1 ∨ ~P2 ∨ … ∨ ~P100. As mentioned in n3, Ryan suggests that, if one has evidence that justifies every claim in the book, one would not be justified in believing the report and ought to dismiss it. According to the normic theory, this diagnosis may be correct provided that the degree to which the report provides justification for ~P1 ∨ ~P2 ∨ … ∨ ~P100 is lower than the degree to which one's prior evidence provides justification for the least justified of {P1, P2, …, P100}. Otherwise, the report will succeed in defeating one's justification for some of the claims in the book.
When accommodating new evidence, according to the normic theory, one is only required to take seriously the most normal ways in which the new evidence could be true, given one's prior evidence. The result proved is, in effect, a special case of this more general principle. When one learns ~P1 ∨ ~P2 ∨ … ∨ ~Pn one is obliged to take seriously only those disjuncts that would be the most normal, given the existing evidence. That is just to say that one is obliged to take seriously only the falsity of those members of {P1, P2, …, Pn} for which the existing evidence provides the least normic support.
The Principle of Differential Defeat is also an instance of another more general principle, validated by the normic theory, regarding the effect of learning ~P1 ∨ ~P2 ∨ … ∨ ~Pn upon the degree of justification enjoyed by each proposition in {P1, P2, …, Pn}. The degree to which a proposition Px ∈ {P1, P2, …, Pn} is normically supported by evidence E is equal to the number of spheres of E-worlds throughout which Px holds. The degree to which a proposition Px ∈ {P1, P2, …, Pn} is normically supported by E ∧ (~P1 ∨ ~P2 ∨ … ∨ ~Pn) is equal to the number of spheres of E ∧ (~P1 ∨ ~P2 ∨ … ∨ ~Pn)-worlds throughout which Px holds. If m is the degree to which E normically supports the least normically supported propositions in {P1, P2, …, Pn}, then m is the number of spheres of E-worlds throughout which P1 ∧ P2 ∧ … ∧ Pn is true, and the number of spheres that disappear when we move from evidence E to evidence E ∧ (~P1 ∨ ~P2 ∨ … ∨ ~Pn). In this case, the effect of learning ~P1 ∨ ~P2 ∨ … ∨ ~Pn is to lower the degree of normic support for each proposition in {P1, P2, …, Pn} by m. According to the normic theory of justification, learning that one member of a set of justified propositions is false will uniformly lower the degree of justification for each proposition in the set. In one sense, the new evidence serves to 'partially defeat' the justification for each proposition, but will only (completely) defeat - that is, lower to 0 - the justification for those propositions which were the least justified to begin with. It is only these propositions that, as it were, completely buckle under the weight of the new evidence 17 .
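Both the complete defeat of the least supported propositions and the uniform partial loss of support can be checked on the same kind of toy model used in the earlier sketches (worlds and ranks again invented purely for illustration). Learning ~P1 ∨ … ∨ ~Pn is modelled by intersecting the evidence with the union of the propositions' complements.

```python
# Differential and partial defeat in the toy model. Worlds and ranks are
# invented for illustration only.

ranks = {"w0": 1, "w1": 1, "w2": 2, "w3": 3, "w4": 4, "w5": 1}
W = set(ranks)

def spheres(E):
    levels = sorted({ranks[w] for w in E})
    return [{w for w in E if ranks[w] <= r} for r in levels]

def degree(E, P):
    d = 0
    for s in spheres(E):
        if not s <= P:
            break
        d += 1
    return d

E = W - {"w5"}
props = [W - {"w2"}, W - {"w3"}, W - {"w4"}]      # P1, P2, P3
before = [degree(E, P) for P in props]             # [1, 2, 3]

D = set().union(*(W - P for P in props))           # ~P1 v ~P2 v ~P3
E_new = E & D
after = [degree(E_new, P) for P in props]          # [0, 1, 2]

# Differential Defeat: only the least supported proposition is defeated ...
assert [b > 0 and a == 0 for b, a in zip(before, after)] == [True, False, False]
# ... and every proposition loses the same amount of support (m = 1).
assert all(b - a == min(before) for b, a in zip(before, after))
```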
As well as validating Closure and offering viable Closure-preserving solutions to the lottery and preface paradoxes, the normic theory of justification validates the Principle of Differential Defeat and offers a viable Closure-preserving solution to the hybrid paradox. While this completes the primary aim of the paper, in the final section I turn briefly to a related topic, which also stems from the work of Marvin Backes (2019b): an objection to the normic theory of justification which focusses on the way in which it handles defeaters.
17 Consider again the situation described in n8 and n13 in which one has equal justification for believing each of a series of propositions P1…Pn, and 'slightly' more justification for believing a further proposition Pn+1. If one learns ~P1 ∨ ~P2 ∨ … ∨ ~Pn+1 then, as discussed, the normic theory will predict that one will lose justification for believing each of P1…Pn, but will retain justification for believing Pn+1. In light of the observation made in the main text, however, the normic theory will also predict that the new information will not affect the difference in one's degree of justification for P1…Pn and for Pn+1. Thus, if one's initial justification for P1…Pn were just one degree lower than one's justification for Pn+1 then, while the latter justification won't be defeated by the new information, it will be subject to partial defeat which reduces its degree of justification to 1. For more on partial defeat and the normic theory see Smith (2016, section 8.3).
VI THE PROBLEM OF 'EASY DEFEAT'
Suppose Helen has a peanut allergy. One day she goes into a café and orders a brownie that is labelled 'peanut free'. Helen believes that the brownie is peanut free, and this proposition is normically supported by her evidence. Suppose Helen then reads in the newspaper that a flour supplier has just announced that a bag of peanut-contaminated flour mistakenly made it into circulation. According to Backes (2019b), this will serve to defeat the normic support for Helen's belief because it provides a possible explanation as to how the brownie might contain traces of peanut, in spite of the label. If this is right, then the normic theory predicts that, once Helen reads the newspaper report, she is no longer justified in believing that the brownie is peanut free. Backes claims that this is a counterintuitive result -after all, it is extremely unlikely that any of the contaminated flour would have made its way into Helen's brownie. Backes goes on to outline several further cases of 'easy defeat', in which the normic theory allegedly makes it too easy for one's justified beliefs to be defeated. I will consider two more of his cases here: Suppose Helen believes that she will see her friend Bob when she travels to Oxfordshire next weekend. Suppose Bob has said that he will meet her and that he is usually very reliable and trustworthy and, as a result, her evidence provides normic support for her belief. Suppose Helen then reads in the newspaper that a man in Oxfordshire has been fatally struck by lightning. According to Backes, this serves to defeat the normic support for Helen's belief, and for a similar reason to the preceding case; the new information could offer a possible explanation as to how she could fail to see Bob, in spite of her present evidence. Once again, if this is right then the normic theory predicts that, after reading the newspaper, Helen would no longer have justification for believing that she will see Bob next weekend.
Finally, suppose that Helen has, for several years, owned an apartment in New York, when she reads in the newspaper that a New York apartment was recently gutted by fire. Plausibly, Helen would still be justified in believing that she has an apartment in New York. And yet, according to Backes, the normic theory predicts otherwise -after all, this report offers a possible explanation as to how she could fail to have an apartment, despite her evidence, and thus deprives the belief of normic support.
One thing that we might observe right away is that these examples all appear to be somewhat similar to the example that drives the hybrid paradox. In the hybrid paradox, recall, I have secured evidence which normically supports each of 100 independent factual claims P1, P2 … P100 and I then learn that one of these claims is false - ~P1 ∨ ~P2 ∨ … ∨ ~P100. If we focus on one particular claim in the book - P57 say - the new evidence might be thought to offer a kind of explanation as to how this claim could be false in spite of my existing evidence. And yet, as we have seen, it is not inevitable that the new information will defeat my normic support for P57. By the result proved in the previous section, my normic support for P57 will only be defeated on the assumption that it was one of the least normically supported of the 100 claims.
In Backes's examples, Helen doesn't literally learn that one amongst a set of justified beliefs is false - but she does acquire evidence which can be accommodated in a number of different ways, some of which may count as more normal, given her existing evidence, than others. Suppose one's evidence E provides normic support for P and one then learns D. If there is a proposition Q that entails D and which would be more normal, given E, than ~P, then D will not defeat one's normic support for P. We might say, in this case, that Q insulates P from defeat by D. Proof. Suppose E normically supports P and suppose there is a world in which E and Q are true which is more normal than the most normal worlds in which E is true and P is false. Suppose finally that Q entails D. In this case, there is a world in which E and D are true which is more normal than the most normal worlds in which E is true and P is false. Ipso facto, P is true in the most normal worlds in which E and D are true and E ∧ D normically supports P. QED. Consider again Backes' apartment example. On the normic theory, the information that an apartment in New York has been gutted by fire will only defeat Helen's justification for believing that she has an apartment in New York on the assumption that it would be just as normal, given her evidence, for her apartment to be gutted by fire as any other. But we have no reason to accept this - and it would take little to make it false. There are many factors that determine an apartment's vulnerability to fire. Some apartments are fitted with smoke alarms and sprinkler systems while others will lack them. Some apartments will have old, deteriorated electrical wiring, while others will have new wiring that has passed rigorous safety checks. Some apartments will have open fireplaces, while others won't, and so on. For any apartment to be gutted by fire may require explanation, but more explanation is required in the case of some apartments than others.
It would be natural to suppose that Helen is aware that her apartment has certain fire safety measures that are not present in every apartment in New York. While this evidence does leave open the possibility that Helen's apartment is the one that burned, it generates the need for further explanation in this case. Upon reading that a New York apartment has been gutted by fire, it would be natural for Helen to reassure herself with the thought that she has fire safety measures in place - a sprinkler system, wiring that has undergone safety checks etc. In this case, there is a proposition that insulates Helen's belief from defeat - the proposition that there is an apartment in New York that lacks appropriate fire safety measures and was recently gutted by fire. It would be more normal, given Helen's evidence, for this proposition to be true than for her belief to be false.
Similar remarks may apply to the Bob example (though matters are admittedly less clear-cut). Upon reading that a man in Oxfordshire has been fatally struck by lightning, it would be natural for Helen to reassure herself with the thought that Bob is relatively safety conscious and not the sort of person who would venture outside during a thunderstorm etc. The proposition that someone in Oxfordshire, more reckless than Bob, has been fatally struck by lightning would plausibly insulate Helen's belief from defeat. Once again, it would be more normal, given Helen's evidence, for this proposition to be true than for her belief to be false.
We could, of course, adjust these examples in such a way that there is no proposition that will insulate Helen's beliefs from defeat. We could stipulate that, given Helen's evidence, one of the most normal ways in which the fire report could be true is for her own apartment to have burned, and one of the most normal ways in which the lightning report could be true is for Bob to have been struck. Perhaps Helen knows that her apartment is highly vulnerable to fire -perhaps she has reason to believe that it is amongst the most vulnerable in New York. Perhaps Helen knows that Bob is well and truly reckless enough to venture out during a thunderstorm -perhaps she has reason to believe that he is amongst the most reckless, in this regard, in all of Oxfordshire. In this case, the normic theory would predict, as Backes claims, that the justification for Helen's beliefs is indeed defeated by the reports -but such a prediction is not obviously wrong.
With the examples fleshed out in this way, there is no available thought with which Helen could reassure herself -she would be forced to concede that her apartment is one of those that could most easily have burned, and that Bob is one of those who could most easily have been struck. Such realisations would sit very uneasily alongside the beliefs that she has an apartment in New York and that she will be meeting Bob next weekend, if she persists in holding them. Consider the tension involved in uttering 'Someone in Oxfordshire has been fatally struck by lightning and, knowing Bob, it could just as easily have been him as anyone else, but I'll be meeting Bob next weekend'. In such a case, it's natural to think that Helen ought to fall back upon a probabilistic belief -it's very likely that she'll be meeting Bob next weekend or some such -until she has had an opportunity to gather more information, by contacting Bob or reading more details in the paper etc.
The problem with these two examples -a hazard of thought experiments more generally -is that the predictions and the intuitions that are supposedly being compared both turn, in part, upon details that are not explicitly supplied. In each example, Helen will have further background evidence that is potentially relevant to her belief, and to the interpretation of the new information that she receives. In each case, the kind of background evidence that we would naturally assume to be at Helen's disposal will be enough to ensure that the normic support for her belief is insulated and is not defeated by this new information.
This leaves only Backes's first example. In this case, I am inclined to think that the normic support for Helen's belief genuinely is defeated. When Helen reads about the bag of peanut-contaminated flour, she learns that, in all likelihood, some small proportion of baked goods in the city will contain this flour. Given her limited evidence, it would be just as normal for the brownie before her to be part of this group as any other baked good in the city. Backes claims that Helen should still be justified in believing that her brownie is peanut free - but is this really so clear? After reading the report, it would be understandable if Helen decided not to eat the brownie. And, if Helen did go ahead and eat the brownie as planned, there is some temptation to see this as a rash decision - particularly if her allergy is severe. It would also be irresponsible for Helen to assert that the brownie is peanut free or to offer some to a friend who also has a peanut allergy. Helen's new evidence does, then, make a significant difference as to how she ought to behave - no longer should she blithely act as though the brownie is peanut free. A natural explanation for this is that Helen's new evidence also makes a difference as to what she should believe - she should no longer believe that the brownie is peanut free.
Backes describes Helen's new information as 'negligible' (p2885) -but many, I suspect, would be inclined to take such information very seriously. If the café owners were made aware of the report, one could easily imagine them strongly advising Helen not to eat the brownie -offering her a refund or an exchange for a flour-free item. In real food contamination or tampering scares, the proportion of products affected is typically very small -but the measures taken are often drastic, including mass product recalls and dire public health warnings 18 .
In any case, it is not just the normic theorist who must accept that the justification for Helen's belief is defeated in this case -any defender of Closure is committed to this. Suppose that Helen were still justified in believing that her brownie is peanut free, and contains no peanut-contaminated flour. Helen should also be justified in believing that the brownie sitting unpurchased on the shelf does not contain peanut-contaminated flour, and believing the same about the blueberry muffin next to it and the cinnamon swirl at the café across the street and so on. If Helen encountered every baked good in the city, she could justifiably believe, of each one, that it does not contain peanut-contaminated flour. Given Closure, Helen would be justified in believing that none of the baked goods in the city contains peanut contaminated flour. Given the newspaper report, it is clear that she does not have justification for believing this.
VII CONCLUSION
This paper has, in one way, been an extended exercise in exploring the consequences of preserving the Closure principle for justification. The lottery and the preface paradoxes both highlight certain commitments that any defender of Closure must be prepared to undertake -and so it is with the new hybrid paradox described by Praolini and Backes. I have argued that the primary lesson of the new paradox is that a Closure defender is under significant pressure to accept what I have called the Principle of Differential Defeat. The formal framework that I have set out demonstrates one way in which this principle could be embedded within a broad and systematic approach to epistemic justification -an approach which brings a range of further principles, such as Comparative Closure and the Minimum Conjunct Rule, as well as principles regarding partial defeat and insulation. For a Closure denier, the new paradox represents a powerful addition to one's arsenal. For a Closure defender, the paradox is valuable in another way -for revealing more of the rich network of principles of which Closure is but one part. | 16,346.4 | 2020-09-14T00:00:00.000 | [
"Philosophy"
] |
Dielectric Optical-Controllable Magnifying Lens by Nonlinear Negative Refraction
A simple optical lens plays an important role in exploring the microscopic world in science and technology by refracting light with tailored spatially varying refractive indices. Recent advancements in nanotechnology enable novel lenses, such as the superlens and the hyperlens, with sub-wavelength resolution capabilities by specially designing materials' refractive indices with meta-materials and transformation optics. However, these artificially nano- or micro-engineered lenses usually suffer high losses from metals and are highly demanding in fabrication. Here, we experimentally demonstrate, for the first time, a nonlinear dielectric magnifying lens using negative refraction by degenerate four-wave mixing in a plano-concave glass slide, obtaining magnified images. Moreover, we transform a nonlinear flat lens into a magnifying lens by introducing transformation optics into the nonlinear regime, achieving an all-optically controllable lensing effect through nonlinear wave mixing, which may have many potential applications in microscopy and imaging science.
A traditional optical lens refracts light with designed spatially varying refractive indices to form images; such images can be magnified or demagnified according to the laws of geometrical optics on both surfaces of the lens by linear refraction, e.g. a plano-convex or plano-concave lens. Images formed by optical lenses have a limited resolution due to the well-known diffraction limit, caused by the inability to detect the near-field evanescent waves at the far field 1 . In order to overcome this limit, a slab-like flat lens, namely the "superlens" 2,3 , has been demonstrated with sub-diffraction-limited resolution imaging capability in the near field by exploiting the idea of negatively refracted evanescent waves in some carefully engineered meta-materials or photonic crystals 4,5 . However, images can only be formed by a superlens in the near field without any magnification. To mitigate these constraints, the concept of the hyperlens was introduced later to convert the near-field evanescent waves into propagating ones, providing magnification at the far field with the help of transformation optics 6,7 to enable negative refraction near some hyperbolic dispersion surfaces [8][9][10][11][12] . Besides optics, various forms of these sub-diffraction-limited resolution lenses have recently been realized in many other fields including microwaves and acoustics [13][14][15] . One major drawback of these lenses is associated with high losses from metallic materials, which are the essential elements bringing in negative permittivity and artificial permeability to enable negative refraction 16 . Meanwhile, fabrication of such nano- or micro-structures raises additional obstacles for their practical applications.
To address this problem, alternative approaches have been proposed in nonlinear optics to achieve the nonlinear version of negative refraction using phase conjugation, time reversal and four wave mixing (4 WM) [17][18][19] , where negative refractions can be attained by exploring nonlinear wave mixings with right angle matching schemes. In contrast to those artificially engineered methods, i.e., meta-materials and photonic crystals in linear optics, ideally only a thin flat nonlinear slab is required to enable this nonlinear negative refraction [20][21][22][23][24][25] . Such negative refractions using nonlinear wave mixing have been demonstrated in some thin films with high nonlinearity such as the metal and graphite thin films 20,21 . Moreover, a flat lens utilizing negative refraction of nonlinear mixing waves has successfully shown its 1D and 2D imaging ability 25 . However, this lens still lacks the magnification capability, which is crucial for imaging applications.
Here we experimentally demonstrate a new type of dielectric magnifying lens based on nonlinear negative refraction by degenerate four-wave mixing with a thin glass slide. A multi-color imaging scheme is realized at the millimeter scale by converting the original infrared beams into negatively refracted visible ones, and the spatial refractive index of the lens is carefully designed to ensure magnification. By doing so, we surprisingly turn a demagnifying plano-concave lens in linear optics into a magnifying one in nonlinear optics. Moreover, inspired by transformation optics, we successfully transform a non-magnified nonlinear flat lens into a magnifying one by controlling the divergence of the pumping beams, effectively creating, for the first time, a magnifying lens controlled by another optical beam. This new imaging scheme may offer a new platform for novel microscopy applications.
Results
Magnifying lens by nonlinear negative refraction. Negative refraction can occur in a nonlinear degenerate four-wave mixing scheme 17,19 as shown in Fig. 1a, where a thin slab of third order nonlinear susceptibility χ (3) can internally mix an intense normally incident pump beam at frequency ω 1 with an angled-incident probe beam at frequency ω 2 , generating a 4 WM wave at frequency ω 3 = 2ω 1 − ω 2 , which is negatively refracted with respect to the probe's incidence 20,25 . Such nonlinear negative refraction arises from the momentum requirement of the phase matching condition k 3 = 2k 1 − k 2 during 4 WM, needed to ensure efficient wavelength conversion. This phase matching condition can be further translated into a Snell-like angle dependence law, creating an effective negative refractive index n e (Supplementary Section 1): n e = sin θ 2 /sin θ 3 = −λ 2 /λ 3 (1), where the ratio of the sines of the probe's incident angle θ 2 and the 4 WM's refraction angle θ 3 (Fig. 1a) is negatively proportional to the ratio of their wavelengths λ 2 and λ 3 . The negative sign indicates reversed angles with respect to the central pump's axis, effectively creating a "negative refraction" between the probe and the 4 WM wave. Meanwhile, the phase matching condition in three-dimensional wave vector space (Fig. 1b) exhibits a double cone shape around the central pump's axis, where the joint points between the incident probe's wave vector k 2 and the 4 WM's wave vector k 3 compose a ring in the transverse plane. Physically, this means that all the incident probe beams with angles parallel to k 2 , emitted from a point source, will be negatively refracted into 4 WM waves and focused on the other side of the slab. This builds the foundation for imaging using such negative refraction by nonlinear four-wave mixing with a thin nonlinear flat slab 25 . Both 1D and 2D images can be obtained by a nonlinear flat lens in such a manner. However, due to the one-to-one correspondence between the object points and the image points, the images' sizes are the same as the objects' without any magnification, similar to the case with a superlens 2 . In order to overcome this magnification issue, a negative diverging lens, e.g., a plano-concave lens, can be combined with the nonlinear negative refraction to reduce the converging angles of the 4 WM beams, such that a real magnified image can be obtained, as shown in Fig. 1c. By contrast, such a plano-concave lens in linear optics only forms a demagnified virtual image with the same color; our nonlinear plano-concave lens can magnify the image with another color through nonlinear 4 WM.
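As a rough numerical illustration of this Snell-like law, the short Python sketch below computes the negatively refracted 4 WM angle from the transverse phase-matching condition. The wavelengths and probe angle are taken from the experiment described below; linear refraction at the glass surfaces is ignored for simplicity, so the numbers are indicative only.

```python
# Sketch of the Snell-like negative-refraction law implied by the transverse
# phase-matching condition k3 = 2*k1 - k2. Illustrative values only.
import math

lam2 = 1300e-9                      # probe wavelength (m)
lam3 = 578e-9                       # 4 WM wavelength (m)
theta2 = math.radians(7.4)          # probe incidence angle

# Transverse momentum conservation: k2*sin(theta2) = k3*sin(theta3), with the
# 4 WM beam emerging on the opposite side of the pump axis, i.e.
# sin(theta2)/sin(theta3) = -lam2/lam3 (an effective negative index).
sin_theta3 = (lam3 / lam2) * math.sin(theta2)
theta3 = math.degrees(math.asin(sin_theta3))
n_eff = -lam2 / lam3

print(f"theta3 ~ -{theta3:.2f} deg")    # roughly -3.3 deg
print(f"n_e ~ {n_eff:.2f}")             # roughly -2.25
```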
To elaborate on this idea, we consider a four-wave mixing process in a plano-concave lens as shown in Fig. 2a. An intense normally incident pump beam can nonlinearly mix with a probe beam whose incidence angle matches the 4 WM phase matching condition in Fig. 1b to generate a 4 WM beam. In a nonlinear flat lens (i.e., a double plano-surface slab) 25 , such 4 WM beams can be negatively refracted with respect to the probe, as shown by the dashed lines in Fig. 2a, according to the nonlinear refraction law in Equ. (1). With a plano-concave lens, this nonlinear negative refraction can be weakened by the linear Snell's refraction law on the concave surface (solid green lines in Fig. 2a), giving a magnified image. Therefore, by combining both the nonlinear refraction law and the linear Snell's law, we can obtain the magnification (Equ. (2); Supplementary Section 2), where θ 2 and θ 3 are the probe's incident angle and the 4 WM's refraction angle, f is the focal length of the plano-concave lens, and u and v are the object distance and the image distance from the lens.
In our experiment, the pump beam with a pulse duration of ~75 fs and central wavelength λ 1 = 800 nm is delivered by a Ti:Sapphire femtosecond laser source, while another optical parametric amplifier provides pulses of similar duration at wavelength λ 2 = 1300 nm as the probe beam. A plano-concave lens made of BK-7 glass with focal length f = −13.5 cm is used as our nonlinear lens, whose third order nonlinear susceptibility χ (3) is around 2.8 × 10 −22 m 2 /V 2 (ref. 26). The incident angle of the probe beam θ 2 is set to 7.4°, close to the phase matching condition inside the BK-7 glass, in order to ensure a nonlinear wave conversion efficiency of about 10 −5 . In a non-collinear configuration, a USAF resolution card is placed in the probe's path at a distance u from the lens, while the images formed with the 4 WM beams at a wavelength around 578 nm are captured by a CCD camera. Figure 2b shows such images with different magnifications obtained by varying the object distance. The measured magnification is linearly proportional to the ratio between the image and object distances, as shown in Fig. 2c: the linear fitting slope reads 0.473, similar to the 0.468 calculated from Equ. (2). Figure 2d further proves the validity of Equ. (2) by varying only the object distance u, showing good agreement between experimental measurements and the theory. It is also worth mentioning that the rainbow colors in the images result from multicolor 4 WM processes, which are enabled by the slight phase mismatching inside the nonlinear glass due to the finite spectral spread of the incoming beams and the glass slide's thickness (Supplementary Sections 4 and 7). Figure 3 shows the 2D magnified images formed by the nonlinear magnifying lens in a non-collinear configuration. It is noticeable that the horizontal features are much clearer than the vertical ones. This is because the incident pump and probe beams both lie in the same horizontal plane, where only one small portion of the phase matching ring near the horizontal plane in Fig. 1b is exploited, giving better phase matching for 4 WMs in that plane than for 4 WMs in the vertical one (Supplementary Sections 3 and 4). Hence, 4 WMs can be better generated and focused in the horizontal plane, giving a finer resolution. To overcome this limitation, we implement a collinear configuration, shown in Fig. 4, to access the full phase matching ring in 3D vector space in Fig. 1b (Supplementary Section 5), where a normally incident pump beam combined with probe beams scattered off the imaged object can fulfill the phase matching condition around the full ring geometry (Fig. 1b) to generate 4 WMs. Unlike the non-collinear configuration, both vertical and horizontal lines are now clear in Fig. 4d,e, with a magnification around ~1.87 given by Equ. (2).
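A quick consistency check, sketched below in Python, recovers the reported 4 WM wavelength from the quoted pump and probe wavelengths using only the energy conservation relation ω 3 = 2ω 1 − ω 2 ; no other experimental detail is assumed.

```python
# Consistency check of the 4 WM wavelength for degenerate four-wave mixing.
lam1 = 800e-9      # pump wavelength (m)
lam2 = 1300e-9     # probe wavelength (m)

# Energy conservation: w3 = 2*w1 - w2  ->  1/lam3 = 2/lam1 - 1/lam2
lam3 = 1.0 / (2.0 / lam1 - 1.0 / lam2)
print(f"lam3 ~ {lam3 * 1e9:.1f} nm")   # ~577.8 nm, i.e. the ~578 nm observed
```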
Transforming a nonlinear flat lens into a magnifying one. Inspired by the development of transformation optics [6,7], we can transform a non-magnifying nonlinear flat lens [25] into a magnifying one by connecting the spatially varying index in a plano-concave nonlinear magnifying lens to the 4WM phase matching conditions (effective negative refractive index) in a non-magnifying nonlinear flat lens. Figure 5 illustrates this idea: with the nonlinear plano-concave magnifying lens described above, the pump beam is usually normally incident on the front facet of the lens and is diverged by the plano-concave lens due to linear refraction (Fig. 5a). This behavior can be mimicked by a point-like divergent pump beam passing through a flat slide (Fig. 5d). Meanwhile, the 4WMs in Fig. 5d no longer fulfill the same phase matching uniformly along the transverse plane as in Fig. 1b. (Setup of Fig. 5: the pump beam at λ1 = 800 nm is normally incident on the plano-concave lens, reflected by a dichroic mirror (900 nm long pass). The probe beam at λ2 = 1300 nm, modulated by a "grating", is transformed and forms an "object" in front of the lens by a 4f system. The focal lengths of "L1" and "L2" are 4 cm and 6 cm, respectively. The zero-order diffraction beam of the grating is blocked because this beam cannot fulfill phase matching. The focal length of the plano-concave lens used in this setup is −9 cm.) This leads to Equ. (3), where M2 is the magnification of a nonlinear flat lens with divergent pumping. In Fig. 5d, θ2′ is the probe's incident angle and F is the distance between the pump's divergence point and the lens. Technically, θ2′ differs from θ2 in Equ. (2), the probe's incident angle in the nonlinear plano-concave lens of Fig. 5a, because the two cases have to fulfill different phase matching conditions due to the pump's incidence. In our case, this difference is only ~0.7°, which is within the allowed 4WM angle spread due to the multicolor spectra of the pump, and Equ. (3) reduces to Equ. (2) if f = F. Experimentally, we confirm this by transforming a nonlinear plano-concave lens with f = −13.5 cm into a nonlinear flat lens with a divergent pump 13.5 cm away from the lens, obtaining 2D images with similar magnification (~1.26) in both cases, as shown in Fig. 5b,c,e,f with a collinear configuration.
Optically controlling a nonlinear magnifying lens. Finally, we show the most interesting feature of this transformed nonlinear lens: optically controlled magnification. Note that, compared to Equ. (2), Equation (3) contains the effective focal length F, which can be tuned by shifting the divergence point of the pump beam, effectively controlling the focal length of the nonlinear lens optically. By varying this effective focus, we can control the magnification of the formed images. For example, we experimentally increase the magnification from 1.31 to 1.58 in Fig. 6b,c,e,f by decreasing F from −10 cm to −6 cm. This creates the first example of an optically controllable lens, as previous works mostly involved liquid crystals, thermal effects, or deformable liquid lenses [27-29], which can have slow response times. Such optically controllable devices may trigger new applications in imaging science.
Conclusion
In summary, we have experimentally demonstrated a dielectric nonlinear magnifying lens based on nonlinear refraction through four-wave mixing in a thin glass slide. Our method explores the possibility of using a dielectric's nonlinear properties for negative refraction as an alternative to metamaterials, thereby overcoming the loss problem. We extend transformation optics into the nonlinear regime, creating an optically controlled nonlinear magnifying lens. The new nonlinear optical lens design reported here may open up many applications in microscopy and imaging science in the near future. | 3,374 | 2014-11-24T00:00:00.000 | [
"Physics"
] |
Diagnosis of machine vision of an unmanned vehicle
There is an increase in the number of cars using artificial intelligence. Therefore, it is necessary to provide quality maintenance of artificial intelligence components, such as machine vision (MV). The paper considers a general approach to the diagnosis of the unmanned vehicle. Based on the analysis of the use of existing systems, general requirements for the diagnosis of unmanned vehicle MV were formulated, diagnostic parameters were proposed. To solve the problem of testing, debugging and diagnostics of the MV, it is proposed to use virtual polygons built using the methods of procedural computer graphics. After diagnosing the MV video cameras, if necessary, they are calibrated according to the images of a special test object.
Introduction
The automotive industry is a leader in the application of machine vision (MV) technology and is its largest consumer. According to analysts [1,2], the automotive industry forms 23% of the market of MV products in Germany; according to VDMA, for Europe this figure is 21%. It is therefore not surprising that MV algorithms gradually began to be used in the cars themselves, as autopilots, and not only at the stages of their production.
Practice shows that the autopilot works correctly in 80% of road situations. Modern unmanned vehicles have a maximum of the third level of autonomy. Autopilot only helps the driver. One of the reasons for the imperfection of the autopilot is the imperfection of the MV. And until an optimal solution to this problem is found, the level of autonomy of unmanned vehicles will not rise above 3+ [3].
Among the advantages of MV, the following should be noted. Continuity: due to the absence of fatigue, the MV system can be used in continuous operation mode without stopping. Repeatability: control is carried out under fixed and uniform conditions, which guarantees the adequacy of the assessment [5].
Consistency: automation of the process avoids human subjectivity and therefore makes it possible to maintain a constant level of quality, while also reducing costs [6].
Among the disadvantages, the following should be noted: the need for constant and uniform lighting; the need to calibrate the camera [7]; difficulties in recognizing objects that overlap or whose color is close to the background; and a higher final cost of the work [8].
Basic material
Based on the analysis, a diagnostic system of MV for use in automotive services is proposed.
The MV diagnostic system is a set of tools and methods for diagnosing the MV that makes it possible to detect a faulty MV element in the most rational way [9]. The troubleshooting procedure is expected to be performed automatically. First, the MV can be in two mutually exclusive and distinguishable states (operational and inoperable). Second, it is possible to single out elements (blocks), each of which is also characterized by distinguishable states determined as a result of checks [10,11,12].
The main components of the MV are conditionally presented in Fig. 1. This figure shows all possible combinations of MV elements at once. In practical tasks, these components can be used in various combinations.
The object of interest of the MV is an object of the external world, information about which is needed for a practical task. The scale and size of the objects of interest can vary widely, as can the information required to be obtained through the MV. Lighting and the reflective properties of objects are the processes and phenomena that allow contactless acquisition of information about the illuminated objects; lighting can be controlled (specially organized) or external to the MV. The optical system is a system of lenses through which the light flux from the area of interest is projected onto the "light-signal" converter; most often, such a system uses standard photographic, micro and television lenses. The light-signal converter is a device that converts the energy of the light incident on it into an electrical signal; traditionally, in such devices the voltage of the output signal is proportional to the number of photons incident on the corresponding surface of the converter. The computer or special-purpose computer is the equipment that implements the algorithms for collecting and processing visual data. The mathematical software is a set of mathematical models, algorithms and programs. The analog data input device converts analog visual data into digital form; its main part is a high-speed analog-to-digital converter and a bus connected to computer memory. There are options both for built-in boards (frame grabbers or video capture cards) and for external devices built into digital video cameras. When using this device, it is necessary to take into account the real-time scale set by the light-signal converter. External devices are devices external to the MV that affect the object of interest and the processes taking place in real time. Actuators are devices that are connected to the MV and with which one can influence the observed scene and, in particular, the object of interest.
Analysis of literature sources and exploratory research revealed the main factors influencing the accuracy of the MV: -when forming the image: the type of television camera and video card, the equipment settings, the distance between the optical center of the television camera and the object, the lens aperture, the image magnification, the MV calibration, the position and location of light sources, the object illuminance, the background-object contrast ratio, the shape, size and material of the object of interest, the ratio of the linear dimensions and areas of the object and the frame, the position of the object on the background plane, and the time factor; -when processing the image: the type of operator used for selecting the object boundaries, the allowable error when approximating the contour of the part, and the number of additional points when approximating the edges. Fig. 1. The main components of the MV: 1 - object of interest; 2 - lighting; 3 - optical system; 4 - "light-signal" converter; 5 - computer; 6 - mathematical software; 7 - analog data input device; 8 - external devices; 9 - actuators; 10 - the driver.
Diagnostic parameters
Diagnostic parameters (DP) are parameters whose values indirectly characterize the technical condition of the object. As DPs we choose: the pixel; the focal length / distance to the lens; the field of view (FOV), the area that can be seen by the MV device; the working distance (WD), the distance between the lens and the object; and the depth of field (DOF).
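Several of these parameters are related through simple geometric optics. The sketch below is a rough pinhole-model illustration; the sensor width, focal length and working distance are purely assumed values, not data from the paper:

```python
import math

# Pinhole-model estimates of diagnostic parameters (FOV at the working distance).
sensor_width_mm = 6.4          # assumed sensor width
focal_length_mm = 8.0          # assumed lens focal length
working_distance_mm = 2000.0   # WD: distance between the lens and the object

# Angular field of view (horizontal)
afov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
# Linear field of view at the working distance (thin-lens approximation, WD >> f)
fov_mm = working_distance_mm * sensor_width_mm / focal_length_mm

print(f"angular FOV ~ {afov_deg:.1f} deg, FOV at WD ~ {fov_mm:.0f} mm")
```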
The MV requires high reliability and high-quality operation in a variety of conditions. Development and further use of the MV is impossible without debugging, testing and diagnostics of the compliance of MV parameters and characteristics with the operating conditions. At this stage, a serious problem is obtaining test data and organizing the testing and diagnostic process.
To solve the problems of testing, debugging and diagnostics of the MV, it is proposed to use virtual test sites built using the methods of procedural computer graphics [13]. The virtual test site (VL) for testing consists of three main parts: the VL generator, the interface between the MV and the virtual unmanned vehicle (UV), and the UV simulator (Fig. 2). The VL generator is the most important subsystem. Its task is to build and texture a synthetic model of the road. The generation process is controlled by the operator by setting a small number of parameters that determine the appearance of the road. The road is built by combining three levels of detail: a low-resolution global elevation map (built procedurally on the basis of the parameters), a three-dimensional mesh (built from the global elevation map and three-dimensional "smart noise" using the marching cubes algorithm), and a high-frequency fractal noise component (generated and added during rendering). The VL is textured procedurally, on the basis of the roughness characteristics and average slopes of individual sections of the road surface. The generation produces no repetitive texture patterns and yields a variety of generated surfaces that resemble real ones.
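A minimal sketch of such a high-frequency fractal noise component is given below. The VL generator itself is not described in implementation detail, so this is an illustrative value-noise octave stack with assumed parameters, not the authors' code:

```python
import numpy as np

def fractal_heightmap(size=256, octaves=5, persistence=0.5, seed=0):
    """Sum of progressively finer random-noise octaves (a simple fractal 'smart noise')."""
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    amplitude, frequency = 1.0, 4
    for _ in range(octaves):
        # coarse random grid, upsampled (nearest-neighbour) to the full map size
        coarse = rng.random((frequency, frequency))
        idx = np.linspace(0, frequency - 1, size).astype(int)
        height += amplitude * coarse[np.ix_(idx, idx)]
        amplitude *= persistence   # finer octaves contribute less
        frequency *= 2             # and vary faster
    return height / height.max()

road_surface_noise = fractal_heightmap()  # added on top of the coarse elevation map and mesh
```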
The interface between the MV and the virtual UV is provided by transmitting control messages (requests for photos from cameras and UV control signals) from the MV and receiving information messages (photos with debugging information, data on the actual position and orientation of the UV) from the virtual UV. Fig. 2. The structure of the elements of the VL: 1 - VL generator; 2 - interface; 3 - UV motion simulator; 4 - device for entering evaluation criteria; 5 - simulator (processor-computer); 6 - device for entering parameters and characteristics; 7 - output device for the obtained results.
The UV simulator provides a simple physical model of UV movement and the ability to visualize the test site from the positions of the cameras located onboard the UV. This takes into account the parameters and characteristics of the system elements and their importance on the basis of the selected criteria.
The use of a virtual test site provides the following advantages over real sites: 1. Speed and low cost of obtaining test data.
2. The ability to obtain test data on various types of road surfaces, from asphalt to unpaved ground. 3. The ability to determine the accuracy of the MV, because the exact road surface is known. 4. The ability to interrupt the operation of the MV at the moment of an error and quickly find its cause.
5. Repeatability of test results. Development of a VL for testing and diagnostics of the MV makes it possible to estimate the influence of random errors of external environmental parameters and internal parameters of MV elements on the quality of MV operation, and also to model the shape of the diagnostic parameter space, determine its dimensions and analyze the characteristics of the error distribution [14].
In passive-type systems, light energy from third-party sources that are not functionally related to the measurement system is used to transmit measuring information. In passive optoelectronic systems, machine vision cameras (MVC) are used as receivers of measuring information; the accuracy of their perception of measuring information depends significantly on the quality of the optical part. Although modern lenses are designed using computers for complex calculations and modeling, even with such technology it is impossible to completely eliminate all distortions. Works [15,16] are devoted to measuring the geometric distortions of imaging devices. These publications can be conditionally divided into two groups.
The first group of works is devoted to the diagnosis of optical systems in order to determine distortions in the design of optical systems and to assess their quality. To compensate for distortions, hardware compensation methods are used, which complicates optical systems and significantly increases their cost.
The second group of works is devoted to the calibration of the MVC to determine the distortions and their further compensation (including compensation of image distortion).
The most promising method is the method of calibrating the camera from images of a special test object [17]. The essence of the method is to obtain calibration coefficients that take into account the effects of all systematic distortions existing in real shooting, followed by software compensation of the distortions based on a mathematical model describing them.
This calibration method is currently the most widespread because it is easily implemented in practice.
When calibrating the camera, both flat and spatial test objects are used. From the distorted images of the test objects, arrays for measuring the camera distortion are formed [16]. The test objects most often used in calibration are presented in Fig. 3.
Test object A is used to correct perspective distortions and distortion; during calibration, binding is carried out to the corners of the squares. The test object in Figure B is used to correct distortion, with binding to the centers of the points; there are modifications of this test object that differ in the shape of the points. The test object in Figure C is used to calibrate systems that measure angular variables; binding during calibration is carried out to the corners of the triangles. Test object D is an array of squares that can be used to obtain coefficients of several types of distortions (perspective distortions, distortion).
On test objects A and B it is much easier to carry out the binding process; on test object B, binding is performed to the centers of the points, so the quality of the calibration process depends on the accuracy of determining the centers of the points.
Despite the variety of polynomials, all of them are based on two basic ideas. The first is that the integral systematic error DX, DY described by the polynomial is represented as a sum of terms corresponding to individual systematic distortions (distortion, deformation, etc.), for example, in works [18,19].
The second idea is that the polynomial representation of the integral error is not tied to individual types of distortions; the integral error is described by a static polynomial, for example, in works [20,21].
The most appropriate calibration method for the MV video camera is a method based on the classical Tsai method [22]. This calibration algorithm involves performing several operations: -conducting internal calibration to find the matrix АIC, the transformation matrix from the image coordinate system into the camera coordinate system; -correcting linear deviations and distortion; -external calibration to find the matrix АWC, the transformation matrix from the camera coordinate system into the UV coordinate system; -solving the direct kinematics problem of the UV movement to find the matrix АBC, the transformation matrix from the UV coordinate system into the base coordinate system.
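For reference, the internal-calibration step (recovering the focal length, principal point and distortion coefficients from several shots of a planar test object such as object D in Fig. 3) is commonly implemented in practice with OpenCV's calibration routine. The sketch below shows that generic, widely used approach, not the Tsai-based algorithm of the paper; the file pattern and board size are assumptions:

```python
import glob
import cv2
import numpy as np

# Planar test object: an inner-corner grid of a chessboard-like target (cf. Fig. 3d)
pattern = (9, 6)                                    # assumed number of inner corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # square side = 1 unit

obj_points, img_points = [], []
for fname in glob.glob("calib_shots/*.png"):        # multiple template shots (assumed path)
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns camera matrix K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] and distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("focal lengths:", K[0, 0], K[1, 1], "principal point:", K[0, 2], K[1, 2])
```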
To find the matrix АIC, it is necessary to determine the location of the image coordinate system relative to some structural element of the camera [23].
The advantage of this method is the possibility of calibration with the camera installed on the UV body. The use of the UV body as the measurement base increases the accuracy of camera calibration by eliminating the errors of additional kinematic fastening chains (rotations, translations).
In the process of internal calibration, a calibration base is selected as the origin of the base coordinate system XBYBZB. The origin of the camera coordinate system XCYCZC is tied to it. The image coordinate system is bound to the calibration field (Fig. 4).
The origin of the image coordinate system is the projection of the central mark of the calibration field onto the plane A [26]. When calibrating, the distance from the center of the lens to the central mark of the template is chosen equal to the distance from the center of the lens to an object of interest (for example, a roadside object). Next, it is necessary to determine the internal camera parameters: the focal length f and the coordinates of the principal point (u0, v0) [27].
By the second calibration phase, the mutual location of the image coordinate system and the camera coordinate system is known, which makes it possible to compare the image model with the real image received by the video camera.
A calibration algorithm for diagnosis is proposed: the sequence of execution of the checks included in the diagnostic test, and the rules for processing the results of the checks in order to obtain a diagnosis, i.e. information about the object of diagnostics that makes it possible to localize the system's malfunction (assess its technical condition) or to identify the reason for its inoperability, based on the analysis of diagnostic parameters or symptoms (forms of manifestation of the deviation of a diagnostic parameter from its permissible values).
The algorithm of internal calibration of the camera contains the following elements (Fig. 5): 1. Using the calibration device pins, the camera is installed above the calibration template so that the optical axis of the camera coincides with the central mark of the template.
2. Multiple template shots are made.
3. The calibration template marks are recognized in the resulting image and their pixel coordinates in the sensor coordinate system are calculated. The angular orientation of the image coordinate system relative to the object coordinate system is described by a matrix of direction cosines of the elements of the exterior orientation of the image, which is determined by formula (2). Provided that the axes of the image coordinate system and the picture coordinate system are collinear, the orientation matrix B takes a simplified form, and the collinearity equation can then be written. Calibration is carried out by the least squares method; the minimum is achieved when the projection center of the camera, the points of the object and the corresponding image points are brought onto common collinear rays (Fig. 6).
From equation (4) it follows that the arguments of the functions ex and ey are the values u0e, v0e and fe, and that the functions ex and ey tend to zero when the projected and actual values of u0, v0 and f coincide. Thus, the internal calibration algorithm reduces to the problem of finding the minimum of the functions ex and ey by the least squares method. The vector of corrected parameters includes u0, v0 and f. The estimated location of (u0, v0) is in the center of the image and is defined in the image coordinate system. The coordinates of the projected points of the calibration field are translated into the image coordinate system using a dependence in which x, y are the coordinates of points in the shooting coordinate system obtained from the recognition of the calibration field points, and lp is the pixel length in μm.
Further minimization of the ex and ey functions ensures that the average deviation of the control point positions from the specified ones stays within the required accuracy.
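The least-squares minimization over the corrected parameters (u0, v0, f) can be sketched with a generic nonlinear solver. The projection model below is a simplified pinhole assumption standing in for the paper's collinearity equations, which are not reproduced here; all variable names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, xyz_world, uv_pixels, lp_um):
    """Residuals ex, ey between projected calibration-field points and the recognized ones."""
    u0, v0, f = params
    lp_mm = lp_um * 1e-3
    X, Y, Z = xyz_world.T
    # simplified pinhole projection (mm), converted to pixels through the pixel length lp
    u_proj = u0 + (f * X / Z) / lp_mm
    v_proj = v0 + (f * Y / Z) / lp_mm
    return np.concatenate([u_proj - uv_pixels[:, 0], v_proj - uv_pixels[:, 1]])

def calibrate_internal(xyz_world, uv_pixels, image_size, lp_um, f0_mm=8.0):
    # initial estimate of the principal point: the center of the image
    u0_init, v0_init = image_size[0] / 2.0, image_size[1] / 2.0
    result = least_squares(residuals, x0=[u0_init, v0_init, f0_mm],
                           args=(xyz_world, uv_pixels, lp_um))
    return result.x  # refined u0, v0 (pixels) and f (mm)
```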
Conclusions
A generalized approach to the diagnosis of unmanned vehicle MV has been considered, general requirements for diagnosing the UV MV have been formulated, and diagnostic parameters have been proposed to assess the serviceability of the MV system. To solve the problems of testing, debugging and diagnostics of the MV, it is proposed to use virtual test sites constructed using procedural methods of computer graphics. After the diagnosis of the MV video cameras, they are calibrated using images of a special test object. | 4,127.2 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Noncovalent Modulation of Chemoselectivity in the Gas Phase Leads to a Switchover in Reaction Type from Heterolytic to Homolytic to Electrocyclic Cleavage
: In the gas phase, thermal activation of supramolecular assemblies such as host–guest complexes leads commonly to noncovalent dissociation into the individual components. Chemical reactions, for example of encapsulated guest molecules, are only found in exceptional cases. As observed by mass spectrometry, when 1-aminomethyl-2,3-diazabicyclo[2.2.2]oct-2-ene (DBOA) is complexed by the macrocycle β-cyclodextrin, its protonated complex undergoes collision-induced dissociation into its components, the conventional reaction pathway. Inside the macrocyclic cavity of cucurbit[7]uril (CB7), a competitive chemical reaction of monoprotonated DBOA takes place upon thermal activation, namely a stepwise homolytic covalent bond cleavage with the elimination of N2, while the doubly protonated CB7·DBOA complex undergoes an inner-phase elimination of ethylene, a concerted, electrocyclic ring-opening reaction. These chemical reaction pathways stand in contrast to the gas-phase chemistry of uncomplexed monoprotonated DBOA, for which an elimination of NH3 predominates upon collision-induced activation, as a heterolytic bond cleavage reaction. The combined results, which can be rationalized in terms of organic-chemical reaction mechanisms and density-functional theoretical calculations, demonstrate that chemical reactions in the gas phase can be steered chemoselectively through noncovalent interactions.
Introduction
[10-12] Previously, we have demonstrated that covalent bond cleavage in a supramolecular complex can become favored over its noncovalent dissociation and that certain chemical reactions can be effectively suppressed by macrocyclic encapsulation [13]. In detail, for a series of bicyclic azoalkanes, host-guest dissociation, which generally prevails upon thermal excitation of noncovalent complexes in the gas phase, could be suppressed in favor of an electrocyclic cleavage of the guest, namely a retro-Diels-Alder reaction, taking place inside the macrocyclic cavity. This observation has opened a new playground to explore how chemical reactivity can be directed in the gas phase through noncovalent interactions, which we are now entering.
Specifically, we have pursued the possibility not only to suppress certain reactions but also to switch on new reaction pathways, to control the cleavage of certain bonds over that of others and, thereby, to mimic the aim of mode-selective chemistry through a supramolecular approach. Our efforts have led us to a showcase where not only can different covalent reactions be affected in the inner phase of a macrocycle, but the entire spectrum of organic-chemical reaction types (heterolytic, homolytic, and electrocyclic bond cleavage) can be covered.
Results and Discussion
Stoichiometric (1:1) complexes of β-CD or CB7 with DBOA are spontaneously formed by self-assembly in aqueous media. The complexes are of the inclusion type (endo), which can be demonstrated by several techniques, conventionally by 1H NMR. The host-guest complexation is driven by a combination of electrostatic interactions and hydrophobic effects, resulting in host-characteristic aqueous binding constants of Ka = 250 M⁻¹ for β-CD and Ka = 2.2 × 10¹⁰ M⁻¹ for CB7 [14,15]. Both (protonated) supramolecular complexes could be transferred in their intact form to the gas phase by using electrospray ionization in a mass spectrometer under ultrahigh vacuum conditions (see Supporting Information Section 1 for experimental details). The chemical reactivity of the encapsulated DBOA was investigated by using collision-induced dissociation (CID) [13] of the isolated ions of interest. CID is an ergodic process which allows energy randomization such that scission of the weakest bond is expected. For noncovalent host-guest complexes the "weakest link" is the noncovalent one, because intermolecular (supramolecular) interactions are typically 1-2 orders of magnitude weaker than covalent or ionic chemical bonds [18,19]. However, the complexes of bicyclic azoalkanes with CB7 have been found to be a notable violation of this rule [13], which we have now exploited to unveil an exciting gas-phase chemistry.
Qualitative insights into the reaction pathways can be obtained by analyzing the CID data of DBOA and its complexes with the macrocyclic hosts (Figure 2). In the absence of a macrocycle, an aqueous solution of DBOA affords [DBOA•H]+ as the base peak in its mass spectrum; no doubly protonated ions were observed even in the presence of strong acid (Figure S2). CID experiments establish the elimination of NH3 as the major reaction pathway (ca. 90%), while the elimination of C2H4 (ca. 10%) is observed as a minor route (Figure 2a,e and Figure S4). The elimination of NH3 requires the amino group to be protonated (DBOA-NH3+ in Figure 1) to allow for a heterolytic cleavage mechanism. Computational methods (Supporting Information Section 5) predict the protonation of the azo group to be energetically favorable in the gas phase (Figure 3a), contrary to the intuitively expected aqueous pKa-based situation, but a fast shuttling of the proton between the neighboring nitrogen atoms becomes feasible through a quasi-hydrogen-bonded geometry (Figure 3a). Vice versa, protonation of the azo group facilitates a retro-Diels-Alder reaction [20], which leads to an electrocyclic elimination of C2H4 as a minor reaction pathway of uncomplexed [DBOA•H]+. The CID results of the host-guest complexes varied depending on the macrocycle. For the [β-CD•DBOA•H]+ complex, only dissociation of the supramolecular complex into its components, [DBOA•H]+ and β-CD, was observed. This corresponds to a noncovalent bond dissociation, the standard pathway for dissociation of host-guest complexes (Figure 2d,h). A different result was expected for CB7 as macrocycle, because it is known for its constrictive binding [13]. Indeed, CID of the [CB7•DBOA•H]+ complex afforded not only host-guest dissociation, but also chemical reactions of the guest under formation of host-product complexes. Surprisingly, while two bond cleavage pathways with two different reaction types (heterolytic vs electrocyclic) were found to be immanent for uncomplexed [DBOA•H]+, both routes are shut down upon CB7 inclusion complex formation. Instead, when the host-guest complex [CB7•DBOA•H]+ is thermally activated under CID conditions, it eliminates N2 (Figure 2b,f). Although the fragments C2H4 and N2 are isobaric and have the same nominal mass (28 u), the latter was confirmed by accurate mass analysis (Table S1). Ion mobility mass spectrometry (IM-MS) unambiguously established that all investigated CB7-guest/product complexes were of the inclusion type (Table S2), that is, the observed processes are inner-phase reactions. Additionally, the doubly protonated complex, [CB7•DBOA•2H]2+, can be observed and isolated from the profile mass spectrum. CID experiments showed that the fragmentation of [CB7•DBOA•2H]2+ follows yet another route (Figure 2c,g), exclusively the electrocyclic C2H4 elimination pathway. These observations pointed to a fascinating switch-over in chemoselectivity, which we investigated in depth.
The modulation of the gas-phase reactivity of [DBOA•H]+ by CB7-encapsulation can be formally explained by classical organic-chemical reaction mechanisms and reactivity-selectivity principles (Figure S15). The preferential elimination of NH3 from free [DBOA•H]+ can be mechanistically rationalized as an intramolecular anti elimination reaction (Figure 2a), in which the departure of NH3 as a leaving group is assisted by a heterolytic ring-opening reaction under formation of a diazonium cation, which subsequently eliminates N2, as demonstrated by the MS/MS spectra (Figure 2e). Upon encapsulation by CB7, the NH3-elimination pathway becomes unfavorable because the NH3 moiety is now noncovalently stabilized by electrostatic interactions with the electron-rich carbonyl portal of CB7 (Figure 2b), that is, it becomes a poorer (more basic) leaving group. In this case, a homolytic elimination of N2 takes place via a radical mechanism, driven by the high stability of molecular dinitrogen. This observation is consistent with previous experimental and computational studies on the thermal deazatization of neutral DBO derivatives, which occurs with significant rates at temperatures between 200-250 °C [21,22]. The elimination is expected to occur via a stepwise homolytic cleavage involving a 1,4-cyclohexanediyl diradical intermediate, which subsequently forms bicyclo[2.2.0]hexane upon radical recombination and eventually rearranges to yield 1,5-hexadiene, the more flexible and, thus, entropically favored product under thermally activated conditions (Figure S15b) [23]. Protonation of the azo moiety, in the doubly protonated [CB7•DBOA•2H]2+, effectively blocks the homolytic C−N bond cleavage pathway because a highly unstable [N2H]+ species (instead of N2) would be formed. Instead, a third pathway is switched on, the C2H4-elimination via a concerted retro-Diels-Alder mechanism (Figure 2c), which had also been observed in our previous study of bridgehead-unsubstituted azoalkanes [13]. This pathway subsequently follows a characteristic reaction cascade involving 6 intermediates (all confirmed by accurate mass analysis) through a 1,3-H shift, a second cycloelimination, and a β-elimination reaction, before ultimately dissociating back to [CB7•2H]2+ and [CB7•H]+ (Figure 2g, Figure S12, Figure S15c). The kinetic stability of the [CB7•fragment•2H]2+ complexes is remarkable. In fact, it constitutes the longest discrete inner-phase reaction cascade in the gas phase reported to date (> 5 steps).
Organic-chemical bond cleavage reactions are categorized, textbook-wise, according to three fundamental reaction types (heterolytic, homolytic, and electrocyclic), and the chemical reactivity of protonated DBOA in the gas phase features all of them. The reaction types govern the way we classically write out chemical reaction mechanisms with electron-pushing arrows (double-barbed, single-barbed, or connected in a circle as in Figure 2), and they govern the way we rationalize chemical reactivity through the types of involved frontier molecular orbitals (σ bonds for heterolytic vs SOMO-SOMO for homolytic vs π HOMO-LUMO for electrocyclic reactions) or the energy gaps (Δɛ) between the interacting orbitals of the two involved reaction partners or products (large Δɛ for heterolytic and small Δɛ for electrocyclic reactions) [24]. In detail, the thermal decomposition of DBOA•H+ can occur through multiple competitive pathways that involve three distinct reaction types and the cleavage of different sets of covalent σ bonds: (1) heterolytic cleavage of the exocyclic C−NH3 bond under elimination of NH3, (2) (stepwise) homolytic bond dissociation of the endocyclic C−N=N bonds under elimination of N2, and (3) an electrocyclic reaction involving a concerted breakage of the endocyclic C−C bonds with elimination of C2H4. The third pathway predominates in the doubly protonated [CB7•DBOA•2H]2+ complex. The first pathway is the predominant pathway for uncomplexed [DBOA•H]+, in competition with the third pathway, while the second pathway is exclusive for the [CB7•DBOA•H]+ complex. As an added layer of mechanistic complexity, the system also features a competition between covalent and noncovalent bond cleavage pathways (Figure 2a-c vs Figure 2d), which has been scrutinized before [13] and is therefore omitted from the chemical reactivity discussion.
In order to rationalize the noncovalent differentiation of the covalent chemoselectivity in detail, we carried out density-functional theoretical (DFT) calculations for the lowest-energy states (LES) and transition states (TS) of the different reaction pathways (Figures 3, 4). We used the wB97XD/6-31G* level of theory, since this dispersion-corrected hybrid functional is known to capture reasonably well intramolecular and supramolecular interactions between non-bonded moieties, such as those within host-guest complexes [25]. In an attempt to obtain a holistic view of the energy surface, which, as we found later, is essential for the understanding of the observed chemoselectivity, we performed calculations around the dihedral angle θ (as depicted in Figure 3) of the bridgehead C−CH2NH2 bond; it is the only non-terminal, rotatable bond in the DBOA molecular framework. This dihedral angle θ can serve as a simple measure of the relative positions of the two distinctive functional groups (−NH3+ and −N=N−), and scanning around it can effectively sample the chemical reactivity of the competing pathways across the entire conformational space. Note that the energetic reference point of all calculated intermediates at all dihedral arrangements in Figure 4 is the global lowest-energy conformation (near 30°, marked as LES in Figure 3). In the interpretation of the computational data, we focus on relative energetic trends, because the accuracy of the selected method cannot be fully validated. Specifically, the only available experimental benchmarks, the activation enthalpy and entropy for thermal extrusion of N2 from DBO in the gas phase, ΔH‡ ≈ 45 kcal mol⁻¹ and ΔS‡ ≈ 10 cal K⁻¹ mol⁻¹ [22], indicate a good agreement with the computed enthalpy for homolytic C−N=N bond cleavage, ΔH‡ = 47.0 kcal mol⁻¹, but a larger variation for the computed entropy, ΔS‡ = 5.4 cal K⁻¹ mol⁻¹ (Table S4). Cross-validation of the transition-state energies using higher levels of theory shows excellent consistency (Table S3), which supports the choice of wB97XD/6-31G* as default.
As can be seen from Figure 4, the activation barriers for the three reaction types are dramatically and selectively affected by the supramolecular encapsulation (compare left and right graphs in Figure 4). In particular, the pathways for elimination of NH3 and C2H4 (grey and blue) become energetically disfavored in the [CB7•DBOA•H]+ complex while the elimination of N2 is energetically favored (red), especially at elevated temperature. Even though the precise temperature under the CID conditions is unknown, the sensible temperature window between 300-700 K (indicated as dashed lines in Figure 4c,d) nicely explains the experimental results in regard to the competitive heterolytic and electrocyclic pathways for free DBOA and the predominance of the homolytic pathway for the singly protonated CB7 complex.
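The temperature dependence plotted in Figure 4c,d follows from ΔG‡(T) = ΔH‡ − TΔS‡, so a pathway with a larger positive ΔS‡ can overtake one with a lower ΔH‡ above a crossover temperature. The minimal numerical sketch below takes only the homolytic values quoted in the text (ΔH‡ = 47.0 kcal mol⁻¹, ΔS‡ = 5.4 cal K⁻¹ mol⁻¹); the other two parameter sets are purely illustrative assumptions chosen to mimic the ~600 K crossover described for free [DBOA•H]+, not computed values from the paper:

```python
import numpy as np

def dG(T, dH_kcal, dS_cal):
    """Gibbs free energy of activation in kcal/mol: dG = dH - T*dS (cal converted to kcal)."""
    return dH_kcal - T * dS_cal / 1000.0

T = np.array([300.0, 500.0, 600.0, 700.0])  # relevant CID temperature window (K)

pathways = {
    # computed values quoted in the text for homolytic N2 loss from [DBOA.H]+
    "homolytic N2":                 (47.0, 5.4),
    # assumed parameters: heterolytic NH3 loss has the larger positive dS,
    # electrocyclic C2H4 loss the lower dH
    "heterolytic NH3 (assumed)":    (41.0, 15.0),
    "electrocyclic C2H4 (assumed)": (35.0, 5.0),
}

for name, (dH, dS) in pathways.items():
    print(name, np.round(dG(T, dH, dS), 1), "kcal/mol")
# With these inputs the NH3 pathway overtakes the C2H4 pathway near ~600 K,
# qualitatively matching the crossover described for free [DBOA.H]+ in Figure 4c.
```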
The computational results in Figure 4 can be semiquantitatively interpreted in regard to the described structure-activity relationships and organic-chemical reaction mechanisms. As expected, although [DBOA•H]+ is most stable when the amino and azo groups are adjacent (Figure 3a), the elimination of NH3 occurs from the anti-coplanar configuration (i.e., θ = 180°, grey graph in Figure 4a).
Complexation of the NH3 group by the carbonyl portal of CB7 electrostatically deteriorates its leaving-group propensity, which is reflected in a more than 10 kcal mol⁻¹ increase in the activation energy of this pathway, according to DFT calculations (compare grey data points in Figure 4a,b). The extrusion of C2H4 in free DBOA occurs from the tautomer with a protonated azo group, which activates the retro-Diels-Alder reaction. In contrast to the elimination of NH3, the electrocyclic reaction (blue entries in Figure 4a,b) takes place from an approximately cis coplanar conformation of the NH2 group with the protonated azo group, where an intramolecular hydrogen bond contributes (Figure 3a). The calculations predict that this electrocyclic pathway is certainly competitive with (Figure 4a), if not preferred over (Figure 4c), the heterolytic pathway for free [DBOA•H]+, and it is also found to be at least a minor pathway in the CID experiments (Figure 2e). Equally important, the calculations show that the electrocyclic pathway is also energetically disfavored by more than 10 kcal mol⁻¹ in the [CB7•DBOA•H]+ complex, because the protonated azo group is electrostatically stabilized by the CB7 carbonyl rim; this deactivates the electrocyclic reaction relative to that of uncomplexed [DBOA•H]+ because the dienophilic fragment becomes less electron-deficient.
From a catalytic point of view, the computational results show that the homolytic reaction pathway (red data points in Figure 4a,b), which is the energetically least favorable and experimentally elusive one for uncomplexed [DBOA•H]+, becomes favorable in relative and absolute terms in the [CB7•DBOA•H]+ complex. In relative terms, because the other two competitive reaction types are strongly disfavored in the noncovalent assembly, and in absolute terms, because the reaction has a positive volume of activation and the packing coefficient of the corresponding transition state (52%) is more favorable, being closer to the ideal value (55%), than the packing in the loose substrate complex (48%). Packing coefficient arguments, which empirically reflect a balance of enthalpic and entropic effects in host-guest complexes, have been previously used to account for the favorable elimination of C2H4 from the parent [DBO•H]+ [13], and the same arguments will obviously apply for the elimination of any other molecular fragment as well, N2 in the case of the homolytic bond cleavage of [DBOA•H]+. Notably, based on the DFT calculations, the lowering of ΔG‡ upon CB-encapsulation is due to the large and more positive ΔS‡ value rather than a more negative ΔH‡ (see Supporting Information Section 5.5 for further discussion of the assumptions). The latter does not change significantly inside the cavity of CB7 (47.0 kcal mol⁻¹ for [DBOA•H]+ and 48.2 kcal mol⁻¹ for [CB7•DBOA•H]+, data points at T = 0 K in Figure 4c,d), while the former can be deduced from the different slopes of the red graphs in Figure 4c,d. We speculate that this entropic effect is due to the decreased steepness of the potential well [27], where weakening of the C−N bonds is counterbalanced by the enhanced dispersion interactions with the cavity wall of CB7 as the bond elongates. In the relevant temperature range between 300-700 K, the absolute stabilization due to this effect accounts for 2-4 kcal mol⁻¹, which falls in the typical range of stabilization of the transition state for a fragmentation reaction due to increased dispersion interactions [13,28,29].
Moreover, there appears to be an additional entropic effect that favors the homolytic pathway in the CB7 cavity, because the dihedral-angle dependence of the deazetization largely disappears when complexed inside CB7 (red data points in Figure 4b), while in the uncomplexed form a preferential cleavage near the cis conformation was observed (Figure 4a). In the latter, the ammonium group is stabilized by the neighboring azo group (Figure 3, grey data points), which leads to a favorable homolytic cleavage from this conformation (ground-state effect), while in the former, the ammonium group is immersed in electrostatic interactions with the carbonyl corona of CB7, which essentially allows for a free rotation of the bicyclic residue and a homolytic reaction from essentially any dihedral angle [32]. Note that the protonation of the azo group in the CB7 complex, which requires a double protonation to form [CB7•DBOA•2H]2+, results experimentally in an exclusive retro-Diels-Alder reaction, that is, the electrocyclic pathway as the third covalent reaction pathway is "switched on" (Figure 2c,g). Calculations further rationalize that the second protonation, that of the azo group, takes place at the distal azo nitrogen, the one more remote from the already protonated ammonium group (Figure S21). This geometry is not only expected on account of intramolecular charge repulsion, but also on account of the complementary stabilization that the protonation of the distal azo nitrogen can receive by electrostatic interactions with the second carbonyl portal of CB7, the one that is not involved in already stabilizing the ammonium group. This "bi-dentate" stabilization of [CB7•DBOA•2H]2+ is unique for the noncovalent assembly, which also accounts for the failure to observe the uncomplexed doubly charged [DBOA•2H]2+ species experimentally, by mass spectrometry. Consequently, the peculiar electrocyclic reactivity of the [CB7•DBOA•2H]2+ complex is also the genuine result of a noncovalent reaction control, allowing an otherwise disfavored double protonation.
It should also be noted that the electrocyclic reaction is the only sensible and computationally verified (Figure S22) covalent pathway for [CB7•DBOA•2H] 2 + because homolytic N 2 elimination cannot occur due to the azo group protonation, while the heterolytic pathway (which could be computationally described for the singly charged [CB7•DBOA•H] + ) is disfavored, because the mechanistically important conjugatively electron-donating effect by the distal azo nitrogen (Figure 2a, Figure S15) is blocked by protonation.
Conclusion
[35-37] However, in the gas phase, when thermally activated, host-guest complexes generally undergo noncovalent bond dissociations. Constrictive binding in host-guest complexes can exceptionally activate covalent bond dissociation as a competitive reaction pathway. We have now established, by employing a functionalized azoalkane as guest, that noncovalent interactions can be deliberately used to modulate covalent reactions such that the full spectrum of organic-chemical reaction types can be covered, ranging from heterolytic bond cleavage (dominant for the uncomplexed monoprotonated guest) to homolytic bond cleavage (exclusive for the protonated host-guest complex) to a concerted cyclo-reversion reaction (specific for the doubly protonated host-guest complex). These gas-phase results, which can be comprehensively rationalized computationally and mechanistically, can be viewed as an example of "mode-selective supramolecular chemistry". They show how a noncovalent approach can be employed to gain control over chemoselectivity and how to steer multiple pathways of thermally activated reactions of small molecules. These results are expected to inspire our vision of how encapsulating supramolecular catalysts can be used regardless of the surrounding phase of matter, and how a differential stabilization of transition states inside host molecules, the arrangement of functional groups in confined environments, the targeted occupation of charges, and constrictive binding in nano-space can be used in the rational design of functional synthetic materials as well as the understanding of enzymatically active proteins.
Figure 2 .
Figure 2. Major reaction pathways of DBOA under CID conditions in the gas phase. a) Free DBOA eliminates NH3 (green) upon thermal activation as the major pathway (heterolytic covalent bond cleavage). b) Inside the cavity of CB7, and with the NH3 moiety being stabilized by the CB portal, it eliminates N2 (orange) upon activation (homolytic covalent bond cleavage). c) An additional protonation (red) of the azo group inside CB7 blocks the N2-elimination pathway, such that DBOA undergoes a retro-Diels-Alder reaction (electrocyclic ring opening) to eliminate C2H4 (blue). d) When complexed by β-CD, collision-induced dissociation into the free host and guest takes place (noncovalent bond cleavage). See Supporting Information Sections 3 and 4 for a detailed analysis. Corresponding CID MS/MS spectra of e) [DBOA•H]+, f) [CB7•DBOA•H]+, g) [CB7•DBOA•2H]2+, and h) [β-CD•DBOA•H]+, where precursor ions are marked with an asterisk.
Figure 3 .
Figure 3. Molecular models of a) [DBOA•H]+ and b) [CB7•DBOA•H]+ for different protonation states, at the azo or amino group, with the lowest-energy state (LES) indicated, and associated dihedral energy profiles. The dashed lines indicate hydrogen bonds. Curly arrows denote the dihedral angle θ of the rotatable N−C−C−N bond. Note that the encapsulation of DBOA inside CB7 energetically favors protonation of the amino group due to enhanced electrostatic interactions with the CB portal. Note also that the protonation of the azo nitrogen that is distal to the amino group (not shown) results in higher-energy structures.
Figure 4 .
Figure 4. Variation of the Gibbs free energy of activation ΔG‡ of the three reaction pathways for free [DBOA•H]+ (left) upon inclusion complexation by CB7 (right). ΔG‡ of a) [DBOA•H]+ and b) [CB7•DBOA•H]+ against dihedral angle at 650 K. For [DBOA•H]+, the global minimum of ΔG‡ occurs in the NH3-elimination pathway at θ = 180° (dark grey), followed by that of the C2H4-elimination at θ ≈ 30° (blue). For [CB7•DBOA•H]+, N2-elimination (red) is favorable across all θ values. Boltzmann-weighted values for ΔG‡ of c) [DBOA•H]+ and d) [CB7•DBOA•H]+ of the different dihedral-angle dependent structures against temperature, where the y-intercept represents ΔH‡ and the slope represents −ΔS‡. Room temperature (300 K) is marked as a green dotted line and the onset temperature of thermal decomposition of CB7 (ca. 700 K) [26] is marked as the orange dotted line. In the case of free [DBOA•H]+, NH3-elimination becomes kinetically more favorable above a critical temperature of 600 K due to the larger positive ΔS‡. For [CB7•DBOA•H]+, N2-elimination is kinetically favorable across the entire temperature range. See Figure S16 for lowest energy transition state structures (TS). Note that the energetic reference point of all calculated TS at all dihedral arrangements in Figure 4 is the global lowest-energy conformation (LES, θ ≈ 30°) in Figure 3. Shown in panels (e) and (f) are the conventional reaction coordinates connecting the reactant R to the (first) transition state TS for the heterolytic (black), homolytic (red), and electrocyclic (blue) cleavages of free [DBOA•H]+ and [CB7•DBOA•H]+ at T = 650 K.
| 5,997.6 | 2023-05-10T00:00:00.000 | [
"Chemistry"
] |
The Main Directions of the Humanization of Industrial Objects in Urban Environment
The tendency to transform old industrial areas began in the 1950s-1960s in Europe and America. By the end of the twentieth century, with the development of the world economy, the transformation of industrial infrastructure had become a comprehensive phenomenon. Currently, in the economies of developed countries, forms of transformation such as global mergers, takeovers, re-equipment and re-functioning are being intensively implemented. Based on the analysis of positive foreign experience, the main directions of humanization of the urban environment through the transformation of industrial facilities are considered. The transformation of industrial facilities and their territories with a change in functionality becomes the main direction of humanization of the urban environment in the XXI century. Numerous architectural and compositional techniques make it possible to adapt any industrial facility to the dynamic infrastructure of the city.
INTRODUCTION
Processes for the transformation of industrial areas have been going on for a long time. The tendency to transform old industrial areas began in the 1950s-1960s in Europe and America, when old industrial areas within cities with access to highways regained their attractiveness due to the lack of free areas in the suburbs, as well as the presence of buildings and infrastructure suitable for infill development. By the end of the twentieth century, with the development of the world economy, the transformation of industrial infrastructure became a comprehensive phenomenon. Currently, in the economies of developed countries, forms of transformation such as global mergers, takeovers, re-equipment and re-functioning are being intensively implemented.
Because of socioeconomic problems in the Commonwealth of Independent States, production turned out to be neither competitive nor efficient. Many unused industrial facilities and territories appeared, entailing urban issues that have made the urban environment anti-humane. Under these conditions, it is necessary to determine the main directions of humanization of the urban environment through the transformation of industrial facilities.
Only certain aesthetic and compositional aspects of the formation of industrial facilities are considered in research [1,2,3].
Analysis of the Literature Data and the Formulation of the Problem
The analysis of recent research and publications has shown that the issues of improving the architectural environment of industrial objects in the city and various aspects of the formation of the urban environment are considered in the following research papers: 1) The historical development of industrial architectural objects was considered in the works of Agranovich & Mamleev [4], Vershinin [5], etc.
2) The typological formation of the architectural environment of production objects was researched by Demidov & Khrustalev [6], and Votinov [7]. However, the design methods discussed in these works do not cover many of the humanistic aspects of the modern development of industrial objects. 3) Issues of urban planning of industrial objects, their interaction with residential and other areas of cities were considered in their works: Avdotin et al. [8,9], Savarenskaya [10], Biryukov [11], and Daun [12]. 4) Energy saving and ecologization of the urban environment -scientific works of Sullivan & Krieger [13], Voskresenskiy [14], Côté et al. [15], and Gibson et al. [16]. 5) General theoretical studies on the state, current problems, prospects for the development of industrial architecture, including the influence of innovations, were reflected in their works by Vershinin [5], Kim [17], Demidov & Khrustalev [18], Getun [19], and Semenova [20].
Research of the materials on this matter indicates that the issues of improving the formation of the architecture of production objects are still insufficiently studied. Under the ecological crisis, the problem of renovation of industrial areas is transformed into the problem of the discrepancy between the modern urban environment and the ecological and aesthetic requirements of comfort.
Examination of the regulatory documents [21] has shown that their content does not sufficiently reflect the peculiarities of the formation of industrial objects, taking into account the growing demands of the population for functionally, environmentally and aesthetically comfortable conditions for work and production. Taking this analysis into account, a holistic methodological approach to the problem of humanization and greening of industrial objects in the urban environment has not yet appeared. However, positive experience in solving certain aspects of this problem can be traced in theoretical and design developments.
Purpose and Objectives of the Research
The purpose of the article is to determine the main directions of humanization of the urban environment through the transformation of industrial facilities.
Objectives of the study: 1) To determine the main approaches to the humanization of the industrial environment with full or partial preservation of its functions. 2) To identify the main directions of humanization of the urban environment through the elimination of the industrial function.
Research Methodology and Approaches to Optimizing and Greening of Industrial Objects in Urban Environment
Analysis of the literature data and regulatory documents of the industrial facilities formation in the urban environment made it possible to determine the main methodology and approaches to the research of this problem.
To formulate the research strategy, positions of the system-ecological and environmental approaches were used. They were the methodological basis for the development of scientifically theoretical principles and directions for the industrial facilities humanization in the urban environment.
The system-ecological approach to the solution of town-planning problems assumes the consideration of various objects of town-planning activity as elements of the human environment. It is aimed at improving the formation of the urban environment and preserving its historical basis, developing and enriching its ecological and aesthetic potential, and optimally solving contemporary problems in the living environment. Such an approach to designing urban environment objects is necessary in connection with the anthropogenic pollution of the biosphere, since the consumption of natural resources is becoming more and more dangerous.
The environmental approach is a methodology for researching the working environment as a combination of elements: economic (the industrial enterprise and the organization of production), human (the worker and their needs), and public (society's attitude to production and its employees). The environmental approach used when researching these objects puts forward certain requirements for their formation, and its methods are common in professional architectural practice in such concepts as the objective environment (the situational structure of the environment and the functional typology of environmental situations). The environmental approach involves considering the environment as the result of a person's mastering of their life environment. Accordingly, the activity and behavior of a person are accepted as the determining factor that binds the individual elements of the environment into a whole. The main goal of modern project thinking is the principle of forming the objective and spatial environment as an organic unity of the visual-sensual system and the functional conditions of the place. The methodology for developing issues of environmental comfort includes: -Analysis of projects and field surveys of domestic and foreign industrial enterprises and industrial areas located in urban areas, with respect to creating a comfortable working environment; -Surveys of the working premises of industrial enterprises adapted for the work of disabled people [22]; -Systematization of the factors affecting the comfort level of the working environment, taking into account the adverse effects of the working environment on people.
The methodology for developing issues of improving the aesthetic quality of the environment of human labor activity at industrial enterprises includes: -Generalization of the results of scientific research on architectural improvement, the artistic level of industrial enterprises, and the development of the main industries; -The concept of the formation and development of the industrial enterprise, considering the possible preferences of workers located in the conditions of an established technical environment. It is necessary to take into account the trends of shaping in world industrial architecture formed under the influence of human environmental engineering, the systematization of technical objects created outside architectural and design activities that negatively affect the architectural level of industrial enterprise development, and the analysis of the patterns of formation and directions for the improvement of industrial objects.
MAIN APPROACHES TO THE HUMANIZATION OF INDUSTRIAL OBJECTS IN THE URBAN ENVIRONMENT
Industrial construction, performing a city-forming function, actively influences the formation of the architectural appearance of cities. Due to its parameters and the specific typological characteristics of its architectural forms, it has an emotional impact on a person and introduces additional diversity into the architectural composition of streets and squares. Industrial objects of historical and cultural value undoubtedly have a positive emotional impact on a person.
As a rule, these objects have a high level of architectural and artistic qualities. In most cases, these are buildings with carefully designed facades, precisely calibrated in style and proportions, with an already established, interconnected and high-quality environment, to a person-scale. In this regard, such objects always have a positive visual impact on the person.
Industrial facilities that no longer function, with their destroyed facades and abandoned areas resembling landfills, become unsafe and have a negative impact on the psycho-physiological state of a person, especially in large and major cities, where they occupy large areas. The researcher of ecological psychology M. Chernoushek writes about the relevance of the problem of researching the influence of the architectural-spatial environment on a person: "While the physical, chemical and biological influence of the environment on a person is relatively well studied and documented, we know much less about the psychological influence of the environment on its creator, man. Nevertheless, the psychological impact on the person of the environment created by him is significant, despite the fact that we do not even realize it." In perspective, this aspect appears to be the key means of humanizing the urban environment [23].
Of great importance for the psychology of human perception in the urban environment are the nature of the placement of buildings and structures and their large-scale characteristics, color, the state of preservation of facades, and outdated equipment and technology. This leads to certain contradictions between man, production, and the city. Often it is proposed to solve such problems by eliminating even profitable production. At the same time, the social and economic advantages of locating industrial facilities in the structure of the city, including direct connection with residential areas, are lost, and the uniqueness of the existing architectural environment is disturbed. Many industrial facilities are an integral part of historic buildings, which are intertwined with the environment. In their own way, they are a naturally formed historical layering of the environment and continue to exist in a certain abstracted space outside of time.
At the same time, most of the industrial buildings, especially in the central historical part of the city, are monuments of architecture or culture and form the architectural and artistic image of the city. However, since most industrial facilities have not functioned for a long time, under the influence of natural factors many buildings have become dilapidated, and facade decoration elements have been lost. It is also very important to note the compositional and artistic features of industrial buildings. The architectural, artistic, and aesthetic qualities of many industrial buildings are low as a result of an unacceptable, excessive subordination of architectural issues to technical tasks and a limited search for new ways to achieve architectural expressiveness. A natural and pressing problem arises: the need to adapt the industrial zones of cities to modern conditions and the needs of society. This process does not involve the destruction of an already established organism: it implies a change and transformation of its infrastructure.
There are three main approaches to the transformation of industrial facilities and their territories in order to humanize the urban environment ( Fig. 1): 1) With full preservation of the production function; 2) With partial preservation of the production function; 3) With the elimination of the production function.
In order to humanize the industrial infrastructure, it is necessary to improve the formation of industrial enterprises and their territories while maintaining the production function through reorganization, reconstruction, restoration, adaptation, and modernization.
Reorganization is the transformation of the organizational structure and management structure of the enterprise while maintaining the fixed assets and production potential of the facilities. The term "reorganization" has several meanings depending on the scope of application. In this context, it is a kind of radical complex innovation consisting in the restructuring of the object's organizational structure (system, goals, relationships, and norms). The reorganization of industrial buildings and structures makes it possible to effectively control the spatial environment of the city's development [24]. One of the approaches to the reorganization process in the West is based on eliminating the opposition between industrial objects and the architecture of residential and public buildings.
Reconstruction (lat.) is a radical reorganization, improvement, streamlining of something. Reconstruction in architecture is the restructuring of the city, architectural complex, and buildings, caused by new living conditions [25].
Objects of reconstruction in the field of industrial architecture can be the following: industrial zone of the city, including all industrial areas and individual enterprises; industrial area (node); industrial enterprise; separate functional zones of an industrial enterprise (pre-factory, warehouse, engineering structures, etc.); industrial building; interior production workshop. The named objects correspond to different levels of the spatial industrial production organization.
In modern practice, the reconstruction of industrial facilities uses a number of concepts that reflect either individual aspects of the reconstruction process or specific approaches to carrying out reconstructive measures. These include technical re-equipment, that is, the updating and qualitative improvement of the characteristics of the technological equipment [26].
Technical re-equipment includes a set of measures to improve the technical and economic level of individual technological processes, replace the worn-out equipment of the main production and auxiliary services.
At the same time, not only the replacement of outdated equipment, machine tools, machines and mechanisms often occurs, but also the introduction of new promising technologies. When carrying out activities of architectural and construction industrial facilities reconstruction, it is also expected to replace outdated equipment and introduce new equipment, but, as a rule, to a lesser extent and with preservation of the existing technological process.
Therefore, in the process of reconstruction, different proportions of reorganization of the active and passive parts of the basic production assets are applied. The active part of the production assets includes machines and equipment, whereas the passive part includes the factory territory with its industrial buildings and structures. Reconstruction, first of all, involves the restructuring of existing cost-effective facilities whose functioning is budget-forming for the city and provides a large number of jobs. In order to humanize the production environment, such facilities should provide for the reconstruction of industrial areas with the creation of ergonomic spaces for recreation (short rest) and the improvement of the environmental and aesthetic indicators of the environment. Methods of humanization of industrial areas should be applied, first of all, taking into account the analysis of the impact of production on the environment and the development of the most effective measures to reduce negative factors (harmful gases in the atmosphere, dust, odors, noise propagation, etc.).
Restoration is used to improve the aesthetic characteristics of the production environment. Basically, the restoration of facades is carried out if the architecture of the industrial building is of historical value and is an architectural monument.
Adaptation is the reorganization of an industrial facility for its use with a partial change in the functional process.
With regard to industrial buildings or complexes, measures are proposed for the placement of a technological process related to another industry, as a rule, with less stress on the environment [19].
Figure 1 The main directions of humanization of industrial objects in an urban environment
Complete modernization of existing manufacturing (this refers to high-tech and environmentally friendly production) is the reconstruction of buildings and structures, technical re-equipment, landscaping, and more efficient use of the available space with the introduction of modern technologies. Through this approach, the city does not lose the taxpayer (the company and the place of employment of citizens).
The second method of converting industrial facilities with partial preservation of the production function is most effective. This technique is appropriate in the socio-cultural terms, as it allows saving the production function and at the same time improving the aesthetic characteristics of the environment by combining the production function of the object and the function of the city. In this case, incomplete re-functionalization allows expanding the social infrastructure of the city and transforming the industrial territory to consider the new requirements.
A part of the industrial area with appropriate architectural and landscape transformations can be used to change the function. This part of the territory can be used for museum, recreational, residential and other functions. Thus, manufacture remains, but the industrial territory receives a new urban development.
The third method of converting industrial facilities with a complete change of production functions is carried out in the process of conservation, revitalization, renovation, environmental rehabilitation, and complete refunctionalization.
Preservation and industrial archaeology are activities that include cultural and historical aspects aimed at the research and preservation of industrial objects that are part of the world material culture. Industrial archaeology is the identification, certification and research of monuments of industrial architecture and technology, the development of proposals for their safety and functioning. In the world practice, monuments of material culture after holding the relevant events function as museum recreational complexes, administrative, exhibition, trade and other objects.
Revitalization is the revival of urban space in which an industrial facility exists. Depending on the urban parameters of the object, this may be the space of the pre-plant zone, the street, the embankment, an industrial facility, a city block with industrial buildings or an industrial area. International practice has shown that it is revitalization that makes it possible to find new, more efficient and cost-effective ways of transforming former industrial facilities. Revitalization requires significantly smaller investments in contrast to renovation (redevelopment) with large-scale changes to the facility and significant investments. The lack of capital works makes it possible to noticeably shorten the period from the start of work on revitalization to the commissioning of a facility with a slightly updated interior and exterior environment. In addition, revitalization allows solving social and cultural problems, landscaping the territory, preserving the monuments of industrial architecture, reducing the load on the environment and improving the image of the city.
Renovation is a set of measures aimed at changing the functional purpose of an industrial facility. Renovation is a collective concept. Renovation is a transformation of an architectural object in which special zones of stability of the architectural space in the urban environment are created on the basis of psychological, historical, and aesthetic factors. This approach prevents users of the space from perceiving its significant changes negatively when a separate industrial building, enterprise, or district is transformed. Recently, conflict situations have arisen because of people's personal attitude towards the architectural space where they live and work. Renovation allows the problem of continuity in the development of the urban environment to be solved.
Renovation as a method of humanization is usually used when changing the functional purpose of an industrial object. The industrial object environment often involves adjusting the existing urban planning environment. The renovation process should be understood as measures aimed at removing the production function while preserving the industrial nature of the building and recreating the new function. Inactive or inefficient production facilities, as well as industrial areas that impede the full further development of urban infrastructure, are subject to renovation.
Ecological rehabilitation, most often, involves the use of an industrial site for recreational purposes through the creation of parks, squares, a system of recreational avenues, etc. In the process of ecological rehabilitation, measures are taken to reclaim industrial areas that have fallen into the contaminated zone by returning the landscape to its original or close-to-original state. This process can be done by recreating the original natural components of the environment (soil, relief, vegetation, and water). In many cases, the professional use of landscape design allows to create a unique landscape environment with high emotional impact on the person.
Full re-functionalization is carried out on dilapidated industrial facilities. In this case, there is a complete demolition of dilapidated industrial facilities and the use of the territory for the new facility construction.
CONCLUSIONS
As a result of the research, the following conclusions were formulated: 1) In order to humanize the urban environment, it is necessary to improve the formation of industrial facilities and their territories while retaining the production function through reorganization, reconstruction, adaptation, and modernization. These activities will create a more comfortable environment for the work processes of human life and improve the ecological and aesthetic indicators of any city. 2) To improve the social and aesthetic characteristics of the urban environment and to humanize it, it is necessary to transform industrial facilities with a partial or complete change of the production function through conservation, revitalization, renovation, environmental rehabilitation, and complete re-functionalization.
The transformation of industrial facilities and their territories with a change in functionality is becoming the main direction of the humanization of the urban environment in the 21st century. Numerous architectural and compositional techniques allow any industrial facility to be adapted to the dynamic infrastructure of the city.
In further research, it is advisable to consider the main techniques for the formation of alternative objects in nonfunctioning industrial facilities. | 4,862.2 | 2020-03-20T00:00:00.000 | [
"Economics"
] |
Cabozantinib Following Immunotherapy in Patients with Advanced Hepatocellular Carcinoma
Simple Summary Management of hepatocellular carcinoma is a rapidly evolving field, with atezolizumab-bevacizumab recently becoming standard of care after showing survival benefit over sorafenib. However, all clinical trials evaluating drugs approved in the second line setting, such as cabozantinib, have been evaluated following progression on sorafenib, not immunotherapy. We sought to determine if cabozantinib is a viable option for patients after progression on immunotherapy. We conducted a retrospective analysis of patients seen at our institution who had disease progression on immunotherapy and subsequently received cabozantinib, reporting patient survival and tolerance of treatment. We found that patients had a median progression free survival of 2.1 months and median overall survival of 7.7 months, and most patients had a manageable side effect profile, suggesting that cabozantinib is a viable treatment option following progression on immunotherapy. Abstract (1) Background: Cabozantinib, a multikinase inhibitor, is approved by the Food and Drug Administration (FDA) for the treatment of advanced hepatocellular carcinoma (HCC) following progression on sorafenib. Recently, atezolizumab plus bevacizumab has been approved in the first line setting for advanced HCC and has become the new standard of care. Whether cabozantinib improves outcomes following progression on immunotherapy remains unknown. We describe the clinical outcomes following treatment with immunotherapy in patients with advanced HCC who received cabozantinib. (2) Methods: We conducted a multicentric, retrospective analysis of patients with advanced HCC diagnosed between 2010–2021 at Mayo Clinic in Minnesota, Arizona, and Florida who received cabozantinib. Median overall survival and progression free survival analyses were performed using the Kaplan–Meier method. Adverse events were determined using Common Terminology Criteria for Adverse Events (CTCAE). (3). Results: We identified 26 patients with advanced HCC who received cabozantinib following progression on immunotherapy. Median progression free survival on cabozantinib therapy was 2.1 months (95% CI: 1.3–3.9) and median overall survival from time of cabozantinib initiation was 7.7 months (95% CI: 5.3–14.9). (4) Conclusion: The optimal sequencing of therapy for patients with advanced HCC following progression on immunotherapy remains unknown. Our study demonstrates that patients may benefit from treatment with cabozantinib following progression on immunotherapy.
Introduction
Sorafenib was the first systemic therapy receiving Food and Drug Administration (FDA) approval for advanced hepatocellular carcinoma (HCC) after it demonstrated improved median overall survival (mOS) when compared to best supportive care (10.7 months vs. 7.9 months, hazard ratio (HR) 0.69, p < 0.001), and remained the only first line therapy for greater than a decade [1]. Lenvatinib, an antiangiogenic tyrosine kinase inhibitor (TKI), was later evaluated in a non-inferiority study versus sorafenib in the first line setting and reached its primary end point of non-inferior mOS (13.6 months vs. 12.3 months, HR 0.92) [2]. Since then, as our understanding of the pathogenesis of HCC has improved, treatment strategies and targets of intervention have begun to shift.
The pathogenesis of HCC is largely driven by chronic inflammation, often secondary to viral hepatitis or alcohol use, however, tumor cells are often able to evade immune recognition through generation of an immunosuppressive microenvironment, garnering interest in the role of immunotherapy in targeting HCC [3,4]. Recently, the IMbrave 150 study evaluated the combination of atezolizumab and bevacizumab versus sorafenib in the first line setting for treatment of advanced HCC, and demonstrated a mOS benefit (OS at 12 months 67.2% vs. 54.6%, HR 0.58), becoming the new standard of care for advanced HCC [5]. Since then, multiple studies have demonstrated survival benefit in the treatment of advanced HCC. The HIMALAYA trial, evaluating the combination of tremelimumab and durvalumab versus sorafenib demonstrated an OS benefit (HR 0.78), and durvalumab monotherapy was non-inferior to sorafenib (HR 0.86) [6]. A phase 3 study evaluating the combination of camrelizumab and apatinib, an anti-angiogenic TKI, demonstrated OS benefit when compared to sorafenib in the first line setting (22.1 months vs. 15.2 months, HR 0.62, p < 0.0001) [7]. RATIONALE-301 evaluated tislelizumab in the first line setting compared to sorafenib and demonstrated non-inferior OS (15.9 months vs. 14.1 months, HR 0.85, p = 0.0398) [8].
Cabozantinib is a multikinase inhibitor, targeting vascular endothelial growth factor (VEGF), mesenchymal-epithelial transition factor (MET), and the anexelekto receptor tyrosine kinase (AXL) [17,18]. The CELESTIAL trial was a randomized, double-blinded, phase 3 trial which compared cabozantinib to placebo in patients with advanced HCC who had disease progression on sorafenib [9]. Cabozantinib demonstrated improved overall survival (10.2 months vs. 8.0 months, HR 0.76, p = 0.005) and subsequently received FDA approval for use in the second line setting.
As with other agents approved for second line therapy in advanced HCC, there is a paucity of data evaluating outcomes of patients who received cabozantinib following progression on immunotherapy in the first line setting. Considering increased use of immunotherapy in the first line setting following the results of the IMbrave 150 and HIMALAYA trials, it is imperative to evaluate optimal sequencing of treatment following progression on immunotherapy. In this study, we sought to characterize the outcomes of patients with advanced HCC who received cabozantinib after progression on immunotherapy.
Methods
We conducted a retrospective study of patients with radiologically and/or pathologically confirmed diagnosis of HCC treated at the Mayo Clinic Enterprise involving three sites at Rochester, MN, Scottsdale, AZ and Jacksonville, FL between 1 January 2010, and 31 December 2021. Patients and their clinical data were identified and obtained via an electronic medical record survey using key search terms. Demographic characteristics, including age at diagnosis, sex, body mass index (BMI), body surface area (BSA), clinical history, tumor stage and grade at diagnosis, and systemic treatments received, were recorded. The study was reviewed and approved by the Mayo Clinic institutional review board and deemed not to require informed consent. The primary endpoints analyzed were progression-free survival (PFS) and OS following treatment with cabozantinib in patients who had previous disease progression on immunotherapy. PFS was defined as the time from initiation of cabozantinib until disease progression or death. OS was defined as the time from initiation of cabozantinib until death due to any cause. Secondary outcomes were objective response rate (ORR) and disease control rate (DCR). ORR was defined as achieving complete response (CR) or partial response (PR) per Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 [19]. DCR was defined as the proportion of subjects achieving CR, PR, or stable disease (SD) while on therapy. The distributions of OS and PFS were estimated using the Kaplan-Meier method. Median values were estimated along with 95% confidence intervals (CI). Two-sided log-rank testing was used to compare OS and PFS between subgroups. ORR and DCR were estimated within each subgroup and compared between groups using a Chi-Square test or Fisher's Exact test for proportion. Adverse events were determined using Common Terminology Criteria for Adverse Events (CTCAE).
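The survival analysis workflow described above (Kaplan-Meier estimation of OS and PFS with two-sided log-rank comparisons between subgroups) can be sketched in a few lines of Python using the lifelines library. The DataFrame layout, column names, and toy numbers below are illustrative assumptions, not the study's actual data or code.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Assumed layout: one row per patient, with time-to-event in months,
# an event indicator (1 = progression/death observed, 0 = censored),
# and a subgroup flag (here, Child-Pugh class A vs. B).
df = pd.DataFrame({
    "pfs_months":   [2.1, 1.3, 3.9, 0.9, 5.2, 1.5],   # toy values only
    "progressed":   [1, 1, 1, 1, 0, 1],
    "child_pugh_a": [True, True, False, False, True, True],
})

kmf = KaplanMeierFitter()
kmf.fit(df["pfs_months"], event_observed=df["progressed"])
print("median PFS (months):", kmf.median_survival_time_)

a, b = df[df["child_pugh_a"]], df[~df["child_pugh_a"]]
result = logrank_test(a["pfs_months"], b["pfs_months"],
                      event_observed_A=a["progressed"],
                      event_observed_B=b["progressed"])
print("two-sided log-rank p-value:", result.p_value)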
Results
One hundred and thirty-one patients were identified who received cabozantinib for HCC, and among these, 26 received immunotherapy prior to cabozantinib. The median age of patients was 61 years (range, 39-81 years). Twenty-two (85%) patients were male and 20 (77%) were Caucasian. The median AFP at diagnosis was 29 U/mL (range, 1-71,929), with four patients having AFP ≥ 400 at diagnosis. Nineteen (73%) patients had cirrhosis at the time of starting cabozantinib. Seventeen (65%) received prior embolization, 9 (35%) received prior ablation, and 8 (31%) received prior radiation therapy. Eighteen patients (72%) had Child Pugh score A at diagnosis. Baseline patient demographics and clinical characteristics are summarized in Table 1. With regard to prior immunotherapy treatment, 13 (50%) patients received atezolizumab/bevacizumab, 12 (46%) received nivolumab, and 1 (4%) received durvalumab prior to receiving cabozantinib. Median PFS on treatment with cabozantinib was 2.1 months (95% CI: 1.3-3.9) (Figure 1). The DCR was 27% (7 patients) and the objective response rate was 4% (1 patient). Among the 7 patients with disease control, 4 had Child Pugh score A at initiation of cabozantinib, 1 had B7, 1 had B8, and 1 was unknown. Six of these received cabozantinib in the third line and 1 received it in the second line. No patient had a complete response to cabozantinib. The one patient with an objective response had Child Pugh score A at initiation of cabozantinib and received it in the second line setting following first line atezolizumab plus bevacizumab. There were no statistically significant differences in PFS on cabozantinib therapy when stratified by hepatitis C infection status (p = 0.50), cirrhosis (p = 0.30), or line of therapy in which cabozantinib was received (2nd line vs. 3rd line and beyond, p = 0.39). When stratifying by Child Pugh status at the time of initiation of cabozantinib, patients with Child Pugh class A liver function had a mPFS of 2.1 months (95% CI: 1.5-4.0) whereas those with Child Pugh class B liver function had a mPFS of 1.3 months (95% CI: 0.9-NE), although these were not significantly different (p = 0.55) (Figure 2). Median OS from initiation of cabozantinib therapy was 7.7 months (95% CI: 5.3-14.9) (Figure 3). At the time of data collection, 19 patients (73%) had died from any cause, primarily due to disease progression. Common adverse events reported while on cabozantinib included fatigue (50%), anorexia (35%), AST elevation (35%), diarrhea (31%), hypertension (27%), abdominal discomfort, dyspepsia, ALT elevation (23% each), stomatitis, weight loss, rash, peripheral edema (15% each), and constipation (12%) (Table 2). Seven patients experienced grade 3 or greater adverse events (one patient experienced two grade 3 or greater toxicities), including hypertension, diarrhea, anorexia, stomatitis, bowel obstruction, palmar-plantar erythrodysesthesia, and rectal abscess/fistula. Notably, 2 patients discontinued the medication due to side effects, including one with blood pressure elevation and one patient who self-discontinued due to intolerance.
Discussion
In this study, we sought to determine PFS and OS of patients receiving cabozantinib who had disease progression on immunotherapy. Within our retrospective study, patients receiving cabozantinib after immunotherapy had mPFS of 2.1 months and mOS of 7.7 months, whereas in the phase 3 CELESTIAL trial following progression on sorafenib, patients receiving cabozantinib had a mPFS of 5.2 months and mOS of 10.2 months [9]. With regard to response, patients receiving cabozantinib following immunotherapy had a DCR of 27% versus 64% seen in the CELESTIAL trial following first line sorafenib [9]. A study conducted in Italy evaluated 96 patients receiving cabozantinib in the real-world setting, with 79.1% receiving cabozantinib in the third line setting, and 90% receiving sorafenib in the first line setting [20]. Among these patients, mPFS was 5.1 months and mOS was 12.1 months, which is comparable to the CELESTIAL trial. However, the study did not account for patients receiving prior immunotherapy.
While patients in the reported retrospective cohort receiving cabozantinib following progression on immunotherapy seemingly had poorer outcomes when compared to patients enrolled in the CELESTIAL trial receiving cabozantinib following progression on sorafenib, it is important to note that this retrospective cohort of patients was different than that enrolled in the CELESTIAL trial. While the CELESTIAL trial only included patients with Child-Pugh class A liver function who had received no more than 2 prior lines of systemic therapy, within our retrospective cohort, 7 patients (28%) had Child-Pugh class B liver function and 22 (84%) had received 3 or more prior lines of systemic therapy. These differences could significantly impact PFS and OS, and therefore, make direct comparison challenging. When looking at 7 patients with disease control, 4 had Child Pugh score A liver function and 6 received cabozantinib in the third line. Finkelmeier et al. conducted a retrospective analysis of patients who received cabozantinib for HCC, including 26% with Child-Pugh class B or worse liver function, and demonstrated mPFS of 3.4 months and mOS of 7.0 months, which is more consistent with that seen in our retrospective study, which supports the impact of liver function on patient outcomes [21]. Notably, the side effect profile for patients receiving cabozantinib was similar to previous studies suggesting that cabozantinib can be safely prescribed following treatment with immunotherapy.
Overall, this study does suggest a role for use of cabozantinib for HCC in the second line setting following progression on immunotherapy. As shown, adverse events were largely grade 1 or 2, and cabozantinib was generally well tolerated in patients who had previously received immunotherapy. Additionally, outcomes were comparable to those seen in other real world data that included patients with Child Pugh B liver function [21].
With atezolizumab plus bevacizumab now used as the first line standard of care, and durvalumab plus tremelimumab expected to receive FDA approval in the near future, it will be important for future studies to evaluate other second line agents following immunotherapy. We have begun by evaluating cabozantinib, but future studies ought to evaluate other options, such as lenvatinib, regorafenib, or ramucirumab. Additionally, it is of utmost importance to compare second line agents in order to determine the ideal sequence of therapy going forward. Furthermore, it will be important to evaluate the combination of cabozantinib and other second line agents with immunotherapy. Cabozantinib was studied in combination with immunotherapy in COSMIC-312, an open label, randomized, phase 3 trial comparing the combination of cabozantinib with atezolizumab versus sorafenib in the first line setting, which ultimately demonstrated improved PFS (6.8 vs. 4.2 months, HR 0.63, p = 0.001) but failed to confer an OS benefit (15.4 months vs. 15.5 months, HR 0.90, p = 0.44) [22]. However, the combination of cabozantinib with immunotherapy has shown promising results in the management of other malignancies, such as renal cell carcinoma, and therefore further evaluation is merited [23].
This study has several limitations including small sample size, diversity of immunotherapy agents used, and its retrospective nature. However, to the best of our knowledge, this is the largest retrospective study to report outcomes of patients with advanced HCC treated with cabozantinib following progression on immunotherapy. In this multicenter study, patients were included from different geographical locations accounting for diverse patient population. The results from this study provide data for treatment of patients with HCC after progression on immunotherapy.
Conclusions
With the approval of immunotherapy for treatment of advanced HCC in the first line setting, there has been increased ambiguity as to the ideal sequence of treatment in the second line setting and beyond. This retrospective study suggests a role for cabozantinib in patients who have had disease progression on immunotherapy, however, further studies | 3,295.6 | 2022-10-22T00:00:00.000 | [
"Medicine",
"Biology"
] |
An Image Recognition Framework for Oral Cancer Cells
Oral squamous cell carcinoma (OSCC) is a common type of cancer of the oral cavity. Despite their great impact on mortality, sufficient screening techniques for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate recognition of OSCCs would lead to an improved curative result and a reduction in recurrence rates after surgical treatment. The introduction of image recognition technology into the doctor's diagnosis process can significantly improve cancer diagnosis, reduce individual differences, and effectively assist doctors in making the correct diagnosis of the disease. The objective of this study was to assess the precision and robustness of a deep learning-based method to automatically identify the extent of cancer on digitized oral images. We present a new method that employs different variants of convolutional neural network (CNN) for detecting cancer in oral cells. Our approach involves training the classifier on different images from the ImageNet dataset and then independently validating on different cancer cells. The image is segmented using multiscale morphology methods to prepare for cell feature analysis and extraction. The method of morphological edge detection is used to more accurately extract the target, cell area, perimeter, and other multidimensional features, followed by classification through CNN. For all five variants of CNN, namely, VGG16, VGG19, InceptionV3, InceptionResNetV2, and Xception, the training and validation losses are less than 6%. Experimental results show that the method can be an effective tool for OSCC diagnosis.
Introduction
With the development of modern society, the incidence of oral cancer is increasing year by year in the world. The latest worldwide census showed that malignant tumors of the oral cavity and throat accounted for sixth place among all neoplastic lesions [1]. As a fatal disease, oral cancer [2] has a 5-year survival rate of only 30-40% (lip cancer can reach 80%) [3]. Owing to the continuous efforts of professionals, modern oral malignant tumor treatment technology has been dramatically improved [4]. Still, in terms of the survival rate of such patients, there has been no significant improvement in the past few decades. This is mainly due to the lack of general understanding of oral cancer, leading to early oral cancer often failing to attract enough attention from patients and delaying the best treatment opportunity, resulting in irreversible consequences. Therefore, the early detection, prevention, and treatment of oral diseases, especially oral cancer, are important for improving the cure rate of cancer and curing the tumor [5].
In the pathological diagnosis of cancer tumors, it is common to observe and qualitatively describe cell morphology. The primary method to investigate cell morphology is to keep ultrathin sections under a microscope and examine cell structures. However, this traditional analysis method is mainly based on a large number of observations and qualitative descriptions [6]. On the one hand, this method requires a large amount of inspection work and provides low inspection efficiency. It can also lead to misidentification and affect the accurate diagnosis of the disease. On the other hand, the analysis and recognition of pathological images are limited by the doctor's experience and visual resolution of images [7]. It is easy to produce subjective factors and lacks a scientific and objective quantitative basis.
Digital image processing [8] refers to using digital computers and other related digital techniques to apply certain operations and processing to images to achieve a specific intended purpose. For example, to make faded photos clear, extract meaningful cell features from medical micrographs, and so on, digital image processing can be divided into several aspects: image digitization, image information storage, image information processing, image information output, and display [9]. The acquisition of image information in digital image processing is the study of how to represent an image as a set of numbers (digital photos) and input them into a computer or digital device for analysis and processing.
This image conversion process is called digitization [10]. At each pixel location of the image, the image's brightness is sampled and quantized to obtain an integer value representing its brightness and darkness on the corresponding point of the image [11]. After the conversion is completed for all pixels, the image is represented as an integer matrix as the object of computer processing. In an image, each pixel has two attributes: position and grayscale. The position is determined by the sampling point coordinates in the scan line, also called row and column, and the integer that represents the brightness and darkness of the pixel position is called grayscale. The most commonly used digital instruments for processing images are digital cameras, flying spot scanners, and microdensitometers.
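As a small, purely illustrative sketch of the sampling-and-quantization step just described, the following Python lines map sampled brightness values onto 8-bit integer gray levels; the array values and the choice of 256 levels are assumptions for demonstration, not taken from the cited sources.

import numpy as np

# Assume `brightness` holds sampled intensities in [0.0, 1.0] at each pixel position.
brightness = np.array([[0.02, 0.47],
                       [0.83, 1.00]])

# Quantize each sample to an integer gray level in 0-255; the whole image
# then becomes an integer matrix, as described in the text.
gray = np.clip(np.round(brightness * 255), 0, 255).astype(np.uint8)
print(gray)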
Digital pathology is the process of digitization of tissue images and slides. This process of digitization could enable more efficient storage, visualization, and pathologic analysis of tissue and could potentially improve the overall efficiency of routine diagnostic pathology workflow. The main processing techniques for cancer cell digital pathology include effective preprocessing, image segmentation, feature extraction, and classification methods. It is based on the morphology, structure, texture, and other characteristics of the cells reflected in the pathological pictures under the microscope to determine the standard for distinguishing benign and malignant tumors, and thereby distinguish the cancer cells from the normal cells [12]. Extracting effective feature parameters and improving the recognition rate is the focus of cell digital image processing. In the analysis of microscopic cell images [13], mathematical morphology [14] is one of the commonly used methods. Mathematical morphology is suitable for digital image analysis. It is easy to obtain information such as the size, shape, direction, and connectivity of the target, from which morphological features can be extracted. The effect of opening and closing operations to cut peaks and fill valleys is very suitable for denoising and segmenting cell images. The rest of the paper is ordered as follows. In Section 2, different oral cancer recognition methods are described. Section 3 provides a detailed discussion of the proposed method.
The results are presented in Section 4, and the conclusion is given in Section 5.
Related Works
With the development of quantitative image analysis and the advent of image recognition techniques and their wide application in medical diagnosis, pathology has produced a new branch of computer image recognition research [15]. The image recognition techniques are introduced into the doctor's diagnosis process. Computer image analysis and recognition technology are used to study the characteristics of pathological morphology and structure of related tissues and discuss its application in diagnosis, classification, and prognosis judgment [16]. Image processing techniques can improve the recognition accuracy rate and reduce the labor intensity and workload of the person, eliminate the misdiagnosis and missed diagnosis caused by the psychological adaptability of manual detection and fatigue, and assist the doctor in making the correct diagnosis to a considerable extent.
Recently, with the gradual development of computerized tumor pathology recognition technology, foreign experts have pointed out the limitations of selecting fixed thresholds when extracting units from complex irregular backgrounds in the segmentation of pathological microscopic images and proposed the threshold theory of change which has achieved good results. German experts proposed a binary spatial organization classification method [17] in 2003, using stochastic geometric processes for nonlinear deterministic analysis and artificial neural networks (ANN) to assist in diagnosing breast cancer, pancreatic cancer, and prostate cancer and a series of prominent results have been achieved.
In terms of products, Image-Pro Plus [18], developed by US Media Cybernetics, is a complete 32-bit image processing and analysis system software that represents the latest international level. The software is suitable for professional image processing systems in medicine, scientific research, industry, and other fields. However, because it is general-purpose commercial software, it can only provide corresponding data indicators for doctors' reference.
In recent years, artificial neural network expert diagnosis systems have become a research hotspot at home and abroad. The application of this technology in the field of stomatology has also achieved good results. Hung et al. [19] used an artificial neural network to predict the incidence of oral cancer in high-risk populations. In this study, 2027 adults received a questionnaire about smoking, drinking, and other bad habits and a professional dentist's examination to determine their final diagnosis. The data of 1,662 adults were used as training data for a 3-layer feedforward backpropagation neural network, and the data of the remaining 365 people were used to test the effectiveness of the trained neural network. The sensitivity and specificity of manual screening were 74% and 99%, respectively, while the sensitivity and specificity of the neural network detection results were 80% and 77%, respectively. Therefore, the higher sensitivity of the neural network has practical value when screening high-risk groups for oral cancer, but its low specificity needs to be further studied to reduce the false-positive rate. In 2005, Campisi et al. [20] applied fuzzy neural networks to study cytokine expression in oral cancer and precancerous lesions. They detected the expression of BCL-2, survivin, and proliferating cell nuclear antigen in the lesions of 8 human papillomavirus-positive oral leukoplakia patients and applied a fuzzy neural network to determine the correlation between these cytokines and human papillomavirus infection. The results showed that survivin is related to the expression of proliferating cell nuclear antigen (PCNA) and human papillomavirus infection in leukoplakia lesions. In addition, the fuzzy neural network can be used as a credible and highly accurate research tool in studies with a small sample size. Jaremenko et al. [21] proposed an automatic image recognition method based on Confocal Laser Endomicroscopy images of the oral cavity, using traditional pattern recognition methods with several local binary patterns and histogram statistics, and used random forest (RF) and support vector machine (SVM) classifiers. Rodner et al. [22] showed that segmentation-based image recognition has the potential to be applied to cancer recognition in Confocal Laser Endomicroscopy images of the head and neck region. Tanriver et al. [23] explored the applications of image processing techniques in the detection of oral cancer. A two-stage deep learning model was proposed to detect oral cancer and classify the detected region into three types (benign, oral potentially malignant disorders, and carcinoma) with a second-stage classifier network. Kim et al. [24] developed a survival prediction method based on deep learning for oral squamous cell carcinoma (SCC) patients and validated its performance.
The proposed method was compared with the random survival forest (RSF) and the Cox proportional hazards model (CPH), and the proposed model showed the best performance among the three models. Tseng et al. [25] applied machine learning to oral cancer prediction in 674 patients. Although the method ignored the time element, it was based on the largest oral cancer patient dataset to date and is a prominent early effort to apply machine learning to oral cancer survival prediction.
In this study, an image recognition model is developed for the accurate prediction of oral cancer. Morphological analysis and calculation were performed on the segmented cell regions using machine learning techniques such as convolutional neural networks to extract distinct and prominent features of the cancer cells, followed by prediction of cancer in these cells. Experimental results show that the proposed technique is effective in the prediction of cancer and can be an effective tool for the diagnosis of oral cancer.
Classification and Recognition of Cancer Cell Images.
Image segmentation refers to the process of dividing an image into regions with various characteristics and extracting the target of interest. It is a key step of image processing and further image analysis. It is a low-level computation technique and the most basic and important research process in computer vision. The quality of image segmentation results directly affects the quality of subsequent analysis, recognition, and interpretation. Based on an efficient image segmentation technique, feature extraction and parameter measurement of the target image can be performed, making higher-level image analysis possible. Therefore, research on image segmentation is of great significance in the field of image processing. In this article, the main morphological characteristics of the cell nucleus are used to determine whether a cell is malignant or normal. Therefore, the primary problem of cancer cell identification is to separate the nucleus from the background through segmentation and then process and identify the nucleus.
We use the concept of a set to give the following definition of image segmentation: let the set R represent the entire area of the image; the segmentation of R can be accomplished by dividing R into N nonempty subsets R1, R2, …, RN. Suppose a given uniformity measure P is a binary logic function: if a certain area meets a certain uniformity, its P value is TRUE; otherwise, it is FALSE. These N nonempty subsets satisfy the five conditions given in Table 1.
The above conditions not only define segmentation but also guide how to perform it. Image segmentation is always carried out according to some segmentation criteria. Conditions 1 and 2 indicate that the correct segmentation criteria should be applicable to all regions and all pixels, while Conditions 3 and 4 indicate that reasonable segmentation criteria should help determine the representative features of pixels in each image region, and Condition 5 indicates complete segmentation; the criteria should directly or indirectly place certain requirements or restrictions on the connectivity of pixels in each area.
After removing the image noise, the integrity and connectivity of the target are well maintained. At this time, the microscopic cell image only has two parts: the background and the nucleus, and the gray values of these two parts are quite different. The target can be segmented with a relatively simple threshold segmentation method, and the objective is to effectively determine the threshold. There are many threshold segmentation methods; this study uses the most typical maximum between-class variance threshold method. The basic principle is to divide the image histogram into two groups at a certain threshold and determine the threshold when the variance between the divided two groups takes the maximum value. The algorithm is briefly introduced below (k is the threshold).
Using the steps of Algorithm 1, the result of image segmentation can be obtained as shown in Figure 1. We can see that the processed image maintains the basic shape of the nucleus target, and the nucleus is extracted by an automatic threshold. After removal, the suspicious cell nucleus can be extracted more accurately. The normal nucleus, cytoplasm, and cytoplasm background are all treated as nontarget areas and discarded. Suspicious nuclei and cell clusters are better preserved, but there are a few holes in the preserved area. The following will simply fill in the cavities to facilitate the individual processing of the cell nucleus and extract appropriate characteristic parameters.
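As a concrete illustration of the maximum between-class variance criterion used for this automatic threshold, the following Python sketch scans all candidate thresholds of an 8-bit grayscale image and keeps the one that maximizes the variance between the two gray-level groups. The NumPy implementation, function name, and placeholder image are assumptions for illustration and are not the paper's Algorithm 1 code.

import numpy as np

def between_class_threshold(gray):
    # Return the threshold k that maximizes the between-group variance.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                        # gray-level probabilities P_i
    levels = np.arange(256)
    best_k, best_var = 0, -1.0
    for k in range(1, 256):
        w1, w2 = p[:k].sum(), p[k:].sum()        # group probabilities
        if w1 == 0 or w2 == 0:
            continue
        m1 = (levels[:k] * p[:k]).sum() / w1     # mean gray level of group 1
        m2 = (levels[k:] * p[k:]).sum() / w2     # mean gray level of group 2
        var_between = w1 * w2 * (m1 - m2) ** 2   # between-class variance
        if var_between > best_var:
            best_k, best_var = k, var_between
    return best_k

# Usage: binarize a grayscale nucleus image (uint8) with the selected threshold.
gray = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder image
k = between_class_threshold(gray)
binary = (gray >= k).astype(np.uint8) * 255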
To completely extract the nucleus and accurately calculate the parameters, these holes need to be filled. We employ the black-area principle to fill the voids in the nuclei. The specific method is to first binarize the image and then perform inverse processing so that the original hollow areas become black and the nucleus areas become white. The area of each black region in the inverted image is then calculated; since the hole areas are generally very small, we can select an area threshold CS (a number of pixels). When the area of a black region is smaller than CS, we keep that region and finally fill the kept regions in the backup image (the one with holes) with a similar color. Similar colors are selected based on empirical values so that the cell nucleus more completely approximates its original shape. After experiments, the value of CS was set to 200. The filling result obtained is shown in Figure 2.
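The hole-filling step described above can be sketched as follows: invert the binary nucleus mask, label its connected components, and fill back every component whose area is below the threshold CS (200 pixels in the paper). The use of OpenCV connected components, the mask polarity, and the toy example are illustrative assumptions rather than the authors' exact implementation.

import cv2
import numpy as np

CS = 200  # area threshold in pixels, as reported in the text

def fill_small_holes(binary, cs=CS):
    # binary: uint8 mask with nuclei as 255 and background/holes as 0.
    inverted = cv2.bitwise_not(binary)  # holes inside nuclei become separate blobs
    num, labels, stats, _ = cv2.connectedComponentsWithStats(inverted, connectivity=8)
    filled = binary.copy()
    for lbl in range(1, num):                  # label 0 is the background of `inverted`
        if stats[lbl, cv2.CC_STAT_AREA] < cs:  # small regions are treated as holes
            filled[labels == lbl] = 255        # paint them with the nucleus color
    return filled

# Usage with a placeholder mask: a filled "nucleus" with an artificial hole.
mask = np.zeros((64, 64), np.uint8)
cv2.circle(mask, (32, 32), 20, 255, -1)
cv2.circle(mask, (32, 32), 4, 0, -1)
repaired = fill_small_holes(mask)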
Deep Learning Model.
Deep learning is a type of machine learning, and machine learning is the necessary path to realize artificial intelligence. Deep learning technology is widely used in tasks such as image and speech processing. Traditional machine learning methods extract image features using global feature descriptors such as histograms of oriented gradients, local binary patterns, and color histograms. These are hand-crafted features that require domain-level expertise. Instead of using hand-crafted features, deep neural networks implicitly extract features from images in a hierarchical manner. Lower layers learn low-level features such as edges and corners, whereas middle layers learn color, shape, and so on, and higher layers learn high-level features representing the object in the image.
Table 1: Uniformity measure conditions.
Algorithm 1 (threshold selection; the candidate threshold is denoted n): (1) The probabilities of background and target appearance are computed as w1 = Σ (i = 0 … n−1) Pi and w2 = 1 − w1. (2) The average gray level in each cluster is defined as E1 = Σ (i = 0 … n−1) i·Pi / w1 and E2 = Σ (i ≥ n) i·Pi / w2. (3) The variances of the two clusters (σ1, σ2) and the variance between the clusters (σ12) are then computed; to satisfy the minimum difference within clusters and the maximum difference between clusters, we set L = σ12 / (σ1 + σ2) and find the n that maximizes L.
Among the deep neural networks, the convolutional neural network is the most famous network structure for processing images in deep learning. A convolutional neural network (CNN) can perform representation learning and can classify input information according to its hierarchical structure, so it is also called a "translation-invariant artificial neural network." We employed different CNN models for image prediction, namely the Visual Geometry Group network with 16 layers (VGG16), VGG19, InceptionV3, Xception, and InceptionResNetV2. VGGNet is one of the earlier excellent neural networks. VGGNet was originally used to analyze and compare ovarian cancer images with AlexNet, GoogLeNet, and others, and a new network with higher accuracy was proposed on this basis [25], which shows the feasibility of this type of model for cancer-related image recognition. In this study, we employed CNNs for oral cancer cell recognition. The proposed VGG convolutional neural network model is composed of 13 convolutional layers and 3 fully connected layers. It has three main features: small convolution kernels, small pooling kernels, and the conversion of fully connected layers into convolutions. The required input image size is 224 × 224 × 3, which reduces the number of parameters and the complexity of the model. The multilayer convolution structure can perform more nonlinear transformations than a single convolutional layer, which is conducive to extracting high-level image features. Not only is InceptionV3 a well-known CNN, but it has also been used many times in intelligent recognition research on other cancers. InceptionV3 consists of 5 convolutional layers, 3 pooling layers, 1 fully connected layer, and 11 Inception modules. InceptionV3 has three main features: first, it uses convolution kernels of different sizes, which can extract different features and fuse them; second, the convolution kernels of different sizes use different padding to produce the output feature maps; third, convolution is used to fuse the different channels of the feature maps. The core idea is to increase the depth and width of the network to improve the performance of the CNN while avoiding excessive loss of extracted image features. InceptionV3 addresses the shortcomings of the VGG series well: it widens the network and uses convolution kernels of different sizes.
Both ResNet and Xception are well-known networks in the field of deep learning and are widely used in detection, segmentation, recognition, and other fields. ResNet is a residual network model with excellent performance. It constructs a deep neural network through residual connections, which avoid the gradient vanishing and gradient explosion caused by deep stacking, and can effectively address the situation in which the accuracy plateaus in the later stage of training while the training error grows. Xception is a CNN architecture based entirely on depthwise separable convolutional layers. Since its architecture is a linear stack of depthwise separable convolutional layers with residual connections, the architecture is convenient to define and modify. InceptionResNetV2 is an early modification of the InceptionV3 model combined with some ideas of ResNet; due to the shortcuts in its model, deeper networks can be trained, and the Inception module can be simplified. The accuracy of this model is more advantageous than that of InceptionV3, ResNet152, and so on.
Experimental Results
Since the current databases for oral cancer recognition do not offer a complete or authoritative version, to better provide a benchmark for the study of this problem, this article augmented the data of a similar project database on GitHub [26] and used it as the project's dataset. The original dataset sample includes two categories of images: normal and cancerous. In the original dataset, Class0 contains normal oral sample images, numbering 150 before data augmentation, and Class1 contains diseased oral cancer sample images, numbering 25 before data augmentation. Figures 3 and 4 show normal and diseased images of oral cell samples. It can be observed that the two types of images show nearly similar patterns when observed with the naked eye. Therefore, it is all the more important to use intelligent assisted diagnosis methods to differentiate normal and diseased images.
In this experiment, we employed a network pretrained on the ImageNet dataset for transfer learning, followed by several fully connected layers. Transfer learning is a machine learning method that transfers knowledge from one domain to another so that the target domain can achieve better learning results. Since deep learning models require a large amount of training data, this experiment uses transfer learning to make up for the small number of training samples and to improve the accuracy of the results. The activation function used in the fully connected layers of the proposed CNN is the rectified linear unit (ReLU); the final classification layer has 2 neurons and uses the softmax activation function. ReLU is a piecewise linear function that outputs the input directly if it is positive and outputs zero otherwise. In this experiment, the weights pretrained on ImageNet were frozen, so they no longer change with subsequent training, and only the newly added fully connected layers are trained. Activation functions are used because they introduce nonlinear characteristics into the network so that the neural network can be applied to many nonlinear problems.
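The setup described above can be expressed compactly in Keras. The sketch below is a minimal illustration of a frozen ImageNet-pretrained backbone (VGG16 is shown as one of the five models used) with newly added fully connected layers; the width of the hidden dense layer (256) is an illustrative assumption, as the paper does not state it.

```python
# Minimal transfer-learning sketch: frozen ImageNet-pretrained base, new ReLU/softmax head.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # frozen: pretrained weights do not change during training

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # newly added fully connected layer (width assumed)
    layers.Dense(2, activation="softmax"),  # two classes: normal vs. cancerous
])
```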
Next, we employed a learning rate scheduler to dynamically adjust the learning rate, coping with the gradually smaller step sizes required as the number of training epochs increases. Its input is a function that takes the current epoch number and returns the corresponding learning rate. In addition, this experiment also sets ReduceLROnPlateau to reduce the learning rate when training stagnates, avoiding oscillation around the optimal solution caused by an excessively high learning rate. We selected the "Adam" optimizer, the "categorical_crossentropy" loss function, and 50 epochs. The training results of the five models are shown in Figures 5-9 for VGG16, VGG19, InceptionV3, InceptionResNetV2, and Xception, respectively.
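Continuing the previous sketch, the training configuration described here might look as follows. The decay schedule, initial learning rate, and ReduceLROnPlateau parameters are illustrative assumptions; the paper only names the callbacks, the optimizer, the loss function, and the epoch count.

```python
from tensorflow.keras.callbacks import LearningRateScheduler, ReduceLROnPlateau

def schedule(epoch, lr):
    # Illustrative decay: the paper only states that the step size shrinks with the epoch number.
    base_lr = 1e-3                      # assumed initial learning rate
    return base_lr * 0.9 ** epoch

callbacks = [
    LearningRateScheduler(schedule),
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3),  # parameters assumed
]

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# history = model.fit(train_data, validation_data=val_data, epochs=50, callbacks=callbacks)
```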
Conclusion
It can be observed in Figures 5-9 that the two losses (loss and val_loss) decrease while the two accuracies (acc and val_acc) increase. For all five models, VGG16, VGG19, InceptionV3, InceptionResNetV2, and Xception, the training and validation losses are below 6%, indicating that the models are trained well. The val_acc measures how good the predictions of a model are. In this paper, the proposed models were trained well after 50 epochs, so further training is not necessary.
Data Availability
The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 5,583.2 | 2021-10-14T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Low Friction at the Nanoscale of Hydrogenated Fullerene-Like Carbon Films
: Friction force microscopy experiments at the nanometer scale are applied to study low friction of hydrogenated fullerene-like carbon films. The measured friction coefficients indicate that lower hydrogen concentration during preparation is beneficial to enter the low friction regime, especially in combination with only methane as precursor. Furthermore, two regions are found with distinct friction coefficients and surface roughnesses related to different surface structures. One is rich in amorphous carbon and the other is rich in fullerene-like carbon, dispersed on the same surface. Transmission electron microscopy and Raman spectroscopy images verify this observation of the two separated structures, especially with the extracted fullerene-like structures in the wear debris from macro friction experiments. It is speculated that hydrogen may tend to impair the growth of fullerene-like carbon and is therefore detrimental for lubricity.
Introduction
Carbon-based layers with different structures have an excellent protective function for efficient control of the heat transfer rate at the solid-liquid interface and for better viability of living matter [1]. Owing to their high hardness, low friction coefficients and chemical inertness, these materials are also applied in the design and manufacture of nanoelectromechanical systems (NEMS) [2]. Previous studies have shown that graphene can help to enter the superlubricity regime [3]. Even under relatively harsh conditions after fluorination or oxidation, polycrystalline graphene deposited on Ge(111) still exhibits ultralow friction forces [4]. Diamond-like carbon (DLC) films are another group of carbon materials with a high content of sp 3 C−C bonds [5]. They show quite distinct lubricity in dry environments, with lower friction coefficients (µ = 0.05-0.2) for hydrogenated DLC and higher values (µ = 0.4-0.7) for hydrogen-free DLC films [6]. The main cause of low friction in dry atmospheres is the dangling C−H bonds of the hydrogenated DLC films, which lead to complete passivation and can prevent charge accumulation at the interface [7]. However, these special bonds gradually disappear as the hydrogen termination escapes after a considerable number of sliding cycles [8][9][10]. Apart from DLC films, disordered solid interfaces in fullerene-like nanoparticles (such as MoS 2 and WS 2 ) have shown great potential for low friction in recent years [11,12]. During friction, these particles are modified by the contact pressure and broken into individual sheets under stress [13,14]. Furthermore, hydrogenated carbon layers can also be prepared with fullerene-like structures, which possess high hardness and elasticity at the same time [15,16]. Raman spectroscopy has been applied to such systems to study detailed structural and chemical properties [16,17]. However, a systematic study of the friction and lubrication of these special carbon films at the micro- and nanoscale, including key preparation conditions, has not yet been carried out. Friction force microscopy (FFM) is an ideal technique for precisely determining the friction force at the nanometer scale. It allows tiny forces to be measured with a sensitivity of nanonewtons (nN) or even piconewtons (pN) [18,19]. With the tip of a soft cantilever (the spring constant of a typical contact cantilever is approximately 0.2 N m −1 ) in contact with the substrate, the lateral force is obtained from the torsional signal, while the flexural forces are determined from the normal bending of the cantilever [20]. With the help of FFM, considerable efforts have been made to explain the basic mechanisms of friction and lubricity at the micro- and nanoscale [21].
In this work, fullerene-like hydrogenated carbon (FL-C:H) films were fabricated by plasma enhanced chemical vapor deposition (PECVD) and subsequently analyzed by FFM under controlled atmosphere in a nitrogen glovebox and ambient air separately. By the comparison of the friction coefficient and the surface roughness, two regions on the films were identified and related to different carbon structures. One of them performs more elastically with less energy dissipation during sliding, thus leading to a better lubrication behavior.
Sample Preparation
The FL-C:H films were grown on Si(001) substrates by PECVD, with an average thickness of around 100 nm. Prior to the deposition, the chamber was pumped to 10 −6 mbar, followed by the introduction of a mixed gas of constant methane (30 SCCM) and argon as carrier gas (10 SCCM), as well as variable contents of hydrogen gas: 0 SCCM, 5 SCCM, 10 SCCM, 15 SCCM and 20 SCCM. For convenience, these are referred to as 0% H, 5% H, 10% H, 15% H and 20% H in the following text and constitute the only variable in this work. The total pressure increased from 27 Pa for pure CH 4 to 44 Pa for the mixtures with hydrogen gas. The deposition parameters were set to 880 V and 0.16 A, with a 50% duty cycle and a 59.7 Hz pulsed power supply. After the preparation, the samples were vacuum encapsulated for transportation. Notably, we will focus here on the 5% H sample to show some intrinsic regularities between hydrogen concentration and nanoscale friction properties, but similar effects have also been observed for the other hydrogen samples.
Scanning Probe Microscopy
The experiments were performed with an atomic force microscope (Nanosurf FlexAFM C3000, Basel, Switzerland) in a nitrogen glovebox (O 2 < 0.1 ppm, H 2 O < 0.1 ppm). Comparative experiments were carried out in ambient air (humidity ≈ 40%) with the same samples, the results of which are listed in the supporting information. The friction force mode was used with a contact cantilever (Nanosensors, PPP-CONT, k = 0.17 N m −1 , f = 13 kHz). The samples were scanned at a speed of 4 µm s −1 at different normal forces F N , ranging from 6 to 20 nN (within the linear relationship between friction and normal force). The scanning areas were selected to be 2 × 2 µm 2 , and each sample was scanned at ten random areas (P1 to P10) with the same cantilever. In this way, contact-mode topographic images and friction loops are obtained at the same time. Additionally, the surface roughness S (here, we take the root mean square roughness of a full image) was determined during the experiments.
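The root mean square roughness of a full image, as used for S, corresponds to a simple formula. The sketch below is a minimal illustration, assuming that subtracting the mean height of the image is sufficient (no additional plane or tilt correction is described in the text).

```python
import numpy as np

def rms_roughness(height_map_nm):
    """Root mean square roughness S of a full AFM topography image.

    height_map_nm: 2D array of heights in nm (e.g., one 2 x 2 um^2 scan).
    """
    z = np.asarray(height_map_nm, dtype=float)
    z = z - z.mean()                 # remove the mean height (no tilt correction here)
    return np.sqrt(np.mean(z ** 2))
```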
Force Calibration
The torsional bending of the cantilever is proportional to the friction (lateral) force and is given by the AFM system as a voltage signal V L . To obtain calibrated friction force values, V L is converted into the lateral force F L using the lateral spring constant c L and the sensitivity of the photodetector S z (S z = 298 nm V −1 ) [22], which was determined beforehand by recording force-distance curves on a hard surface. The lateral spring constant c L can be calculated from the dimensions of the cantilever [22] as c L = G w t 3 /(3 l h 2 ), where G is the shear modulus, which is 50 GPa for silicon [20], l, w and t are the length, width and thickness of the cantilever, respectively, and h is the height of the tip. In this experiment, l = 450 µm, w = 50 µm, t = 2.0 µm and h = 12.5 µm are taken from the manufacturer's values, so that c L is calculated to be 94.8 N m −1 . Furthermore, the tip radius was measured to be ∼20 nm by scanning electron microscopy (Nova NanoSEM 230, Basel, Switzerland). The friction coefficient µ is obtained from the slope of the dependence of the friction (lateral) force on the normal force, which allows the effect of adhesive forces to be excluded (see Supplementary Figure S1). These attractive forces between the tip apex and the sample surface are a significant effect and were also measured by force-distance curves; the adhesive force is taken as the maximum attractive force when the tip retracts from the sample surface.
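As a numerical cross-check, the sketch below reproduces the quoted lateral spring constant from the cantilever dimensions and outlines how the friction coefficient follows from the slope of lateral versus normal force. The conversion F_L = c_L · S_z · V_L is an assumption about the calibration procedure of ref. [22], and the voltage values are placeholders.

```python
import numpy as np

# Cantilever parameters from the text (SI units)
G, l, w, t, h = 50e9, 450e-6, 50e-6, 2.0e-6, 12.5e-6

c_L = G * w * t**3 / (3 * l * h**2)        # lateral spring constant
print(f"c_L = {c_L:.1f} N/m")              # ~94.8 N/m, matching the stated value

S_z = 298e-9                               # photodetector sensitivity, m/V

def lateral_force(V_L):
    # Assumed linear conversion of the lateral voltage signal to a force (N)
    return c_L * S_z * V_L

F_N = np.array([6, 8, 10, 12, 14, 16, 18, 20]) * 1e-9   # normal forces used, N
# V_L = np.array([...])                    # measured lateral voltage signals at each F_N (V)
# mu, offset = np.polyfit(F_N, lateral_force(V_L), 1)   # slope = friction coefficient
```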
TEM Characterization
The structures of the FL-C:H films were characterized by high-resolution transmission electron microscopy (FEI Tecnai F30, Lanzhou, China) at an acceleration voltage of 300 kV. The samples for HRTEM observation were deposited on a freshly cleaved NaCl wafer and then thinned down by dissolving the NaCl wafer in water.
Raman Spectroscopy
The FL-C:H film of 5% H was analyzed in situ using Raman spectroscopy (LABRAM HR 800 Microspectrometer, Lanzhou, China) at an excitation wavelength of 532 nm. To avoid the unintentional damage of the sample, we chose a laser power of 0.5 MW m −2 for the Raman spectroscopy experiments.
Extract from Wear Debris
In order to investigate the composition of the surface structures, friction experiments at the macroscale were also carried out with a reciprocating ball-on-disc tester (CSM Tribometer III, Lanzhou, China) at 25 °C and a relative humidity of 40%. The sliding velocity and normal force were set to 10 cm s −1 and 10 N, respectively, with Al 2 O 3 (φ = 5 mm) as the counterpart material. After testing, the wear debris on the films was collected and analyzed by HRTEM to characterize the extracted structures.
The Binary Structure via TEM and Raman Spectra
Low friction is one of the representative features of amorphous hydrogenated carbon films (a-C:H), especially in vacuum or dry atmospheres [23][24][25][26]. However, this value strongly depends on the humidity and the preparation conditions. Especially in the low-humidity regime, the friction coefficient is known to depend on the precursor used to make the a-C:H films [27]. It was reported that the friction coefficient decreases with increasing hydrogen concentration in inert atmosphere [9]. Interestingly, the films produced here are also in the low-friction range but show the opposite trend of hydrogen dependence compared with a-C:H films. For a low concentration of 5% H, the high-resolution transmission electron microscopy image in Figure 1a indicates nearly full coverage by a heavily bent and cross-linked structure. In contrast, the 20% H sample looks more hybrid, as shown in Figure 1b, where more crystalline areas are highlighted by the yellow squares and more amorphous areas by the red ones. This suggests different surface roughnesses of the two areas, which will be discussed further in the following Raman analysis.
Raman spectroscopy is a fast and convenient method for the characterization of carbon film materials [28,29]. The result for the 5% H sample is shown in Figure 1c. In the first-order region of 1000-2000 cm −1 , clearly overlapping D and G bands are observed, which were fitted at 1360 and 1560 cm −1 with Gaussian functions, respectively. The G band around 1560 cm −1 is related to the E 2g optical mode with the in-plane stretching of C=C bonds, such as C atoms in aromatic rings and olefinic chains [30], while the D band around 1350 cm −1 is assigned to an A 1g breathing mode of sp 2 atoms arranged in rings, whose intensity is linked to the amount of defects in these graphitic sheets [31]. The fitted D and G bands correspond well to those of the amorphous carbon films of [32]. Notably, owing to the obvious shoulder peak at ∼1250 cm −1 (marked with a grey circle), two further bands were fitted at 1200 and 1470 cm −1 , respectively. It has been discussed that the band at ∼1200 cm −1 can be attributed to the formation of fullerene or onion-like structures [33,34]. Furthermore, the second-order bands are observed in the range from 2500 to 3200 cm −1 . They are formed by a mixture of several band signals, including the 2D band (also called the G' band, ∼2700 cm −1 ), the D + G band (∼2940 cm −1 ) and the 2G band (∼3170 cm −1 ) [35,36]. Consequently, on the basis of all the Raman data, the FL-C:H films not only possess amorphous structures with the typical G and D bands, but also display fullerene-like structures at the peak of ∼1191 cm −1 . Moreover, to verify these crystalline fullerene-like structures, pin-on-disc friction experiments with a considerable number of reciprocating cycles were performed on sample 5% H. Finally, a group of fullerene-like structures was extracted from the wear debris, as shown in Figure 1d. The fullerene-like carbon ball with a diameter of ∼30 nm is assumed to be formed by tens of individual flakes. This indirectly proves that the carbon film prepared in this work is composed of a binary structure of fullerene-like-carbon-rich (FL-C) and amorphous-carbon-rich (a-C) areas, varying with the hydrogen concentration. A speculative structure is proposed in which the coiling FL-C structures pile up while the a-C structures form the substrate. Thus, the friction properties of this binary structure are highly interesting, as are the effect of the hydrogen concentration and the comparison with a-C:H films. Figure 2a shows the friction coefficient determined from a measured area of 2 × 2 µm 2 and averaged over ten random sites (P1 to P10) for each sample. With hydrogen concentrations in the precursor from 0% to 20%, the friction coefficient shows a slight increase from 0.081 to 0.150, which is still considered to be in the low-friction range but presents the opposite trend of hydrogen dependence compared with the a-C:H films [7][8][9]. This unexpected behavior of the friction coefficient with hydrogen concentration suggests that there is another dominant factor for the low friction of the FL-C:H layers. Also in the measurements under ambient conditions, where a water film plays an important role between the two sliding surfaces, the friction coefficient still shows a small increase with the hydrogen concentration, although over a narrower range (see Supplementary Figure S2). In contrast to the increasing friction coefficient, the surface roughness of the films with different hydrogen concentrations stays nearly constant, as visible in Figure 2a.
This is contrary to the common assumption at the nanoscale, and to the situation at the macroscale [37], that a smoother surface gives a lower friction coefficient. To compensate for the influence of surface roughness on friction in our data, we therefore determined the ratio µ/S between the measured friction coefficient µ and the surface roughness S, which is shown in Figure 2b as a function of the hydrogen concentration. The introduction of this ratio is an attempt to verify indirectly whether the friction coefficient depends on the hydrogen concentration once the effect of the irregular average surface roughness values in Figure 2a is excluded. A linear behavior is observed, which indicates a correlation of the increasing friction with the hydrogen concentration.
Friction Experiments at the Microscale
To illustrate the correlation between the friction coefficient and the surface roughness, the topographic images measured by AFM provide a detailed view. Figure 3 shows two typical images of identical size of the 0% H (Figure 3a) and the 20% H (Figure 3b) samples. In both images, topographic variations of the order of a few nanometers are observed; however, the lateral extent of these variations becomes larger with increasing hydrogen concentration. The specific surface roughnesses of the two measurements are 0.45 nm and 0.36 nm for the 0% H and the 20% H sample, respectively. On the other hand, the friction coefficient increases from 0.08 to 0.17, indicating the evolution of lubricity with microscopic morphology. Figure 4 shows the friction coefficient, surface roughness and their ratio for all ten random areas of sample 5% H, sorted by the µ/S ratio. The data from these measured areas can unambiguously be divided into two parts: region A, with high surface roughness and low friction coefficient (e.g., P1 to P7), and region B, with high friction coefficient and low surface roughness (e.g., P9 to P10). The values of the ratio µ/S (the black curve in Figure 4) also reveal a transition around P8. To clarify this behavior, we compared the topography images in region A (P2, µ = 0.087, S = 0.75 nm) and region B (P9, µ = 0.152, S = 0.46 nm) on sample 5% H, as shown in Figure 5. A clearly distinct height variation in the range of ∆z = 3 nm can be observed in both images. This is consistent with the observation in Figure 5b that specific areas extracted either on peaks (black square) or in valleys (white square) show opposite behavior of friction coefficient and surface roughness, i.e., peaks: lower friction coefficient and higher surface roughness; valleys: higher friction coefficient and lower surface roughness. If we use several colored contour lines to partition Figure 5a,b on the basis of topography, the nature of regions A and B becomes clearer. As shown in Figure 5c, P2 possesses more high-lying areas (ranging from 1.25 nm to 2.75 nm), while more flat, low-lying areas (ranging from 0.25 nm to 1.25 nm) are observed for P9. Since the measured friction coefficient of the higher areas is smaller than that of the flatter ones, this finally leads to the lower friction coefficient of P2 compared with P9. We assume that the separated µ/S ratios of regions A and B are the result of distinct structures with higher and lower roughness, respectively, on this hydrogenated carbon film. To generalize the influence of the µ/S ratio for different hydrogen concentrations, we performed a similar analysis, i.e., the µ/S calculation for 10 different areas, for the samples from 0% H to 20% H. In Figure 6, all measured areas are shown by the ratio between friction coefficient and surface roughness (µ/S). The measurements for each hydrogen concentration are sorted so that P1 always corresponds to the lowest µ/S ratio and P10 to the highest. Apart from sample 0% H, which shows smaller differences in µ/S than the others, an obvious transition from region A to region B occurs, indicating that whether or not hydrogen is added to the feedstock has a strong effect on the formation and distribution of the different structures. Figure 6. The ratio between friction coefficient and surface roughness (µ/S) of ten random areas for each hydrogen concentration.
Discussion
Comparing the results of the TEM and Raman analysis in Figure 1 with the AFM measurements in Figure 3, one can conclude that the FL-C:H films are composed of different topographies in separated regions: flat areas with an amorphous appearance and rough areas with more crystalline structures. The structures extracted from the wear debris in the macroscale friction experiments indicate that these FL-C:H pieces are peeled off from the higher parts of the films. Besides the higher roughness, the results in Figure 4 indicate that the friction coefficient of the crystalline structures is lower than that of the flat amorphous areas with smaller roughness. This is the case not only for sample 5% H but also for the other hydrogen concentrations, with the largest µ/S for sample 20% H and the lowest µ/S for sample 0% H. Additionally, the increasing adhesive force from 0% H to 20% H results not only from the surface curvature and surface roughness, but also from the variable lubrication properties (see Supplementary Figure S3). This shows that region A involves shorter contact times and less energy dissipation during sliding than region B; the two regions are representative of FL-C-rich and a-C-rich accumulated areas, respectively.
It is proposed that this phenomenon of friction properties opposite to the roughness stems from the variable hydrogen concentration in the precursor gas. Hydrogen is known to have an important effect on the lubrication of hydrogenated DLC films in inert atmospheres, which is attributed to the large number of hydrogen-terminated dangling bonds extending from the amorphous carbon [38]. Even so, there is a maximum threshold of hydrogen to avoid the formation of hydrocarbon polymers [39,40]. In contrast to hydrogenated DLC films, the FL-C:H films prepared in this work do not seem to functionalize through these dangling bonds. Related studies on the effect of hydrogen addition during the deposition of FL-CN x films have shown that the precursor gas enables the termination of bonds by hydrogen atoms so that the extension of ring structures is prohibited [41,42]. This demonstrates that a small amount of hydrogen in the precursor is beneficial for initiating the formation of fullerene-like, rather than graphitic, structures. Yet, on the other hand, an increasing hydrogen concentration rapidly interrupts this process, which is then replaced by the formation of amorphous carbon.
Furthermore, several other considerations help to explain the lower friction coefficient of the FL-C:H films. First, the excellent mechanical properties, such as the high hardness (∼20.9 GPa) and elastic recovery (∼85%) of the FL-C:H films measured in other articles [15,16], suggest that the contact during sliding is likely to behave more elastically than plastically [43,44]. The main difference between elastic and plastic behavior is whether an accumulated, unrecoverable deformation remains on the surface. In our case, the film surfaces do not appear to be scratched or damaged with increasing normal load during the scanning procedures. This demonstrates that the fullerene-like structures do not produce plastic accumulation at the end of the frictional path, which would otherwise disrupt such smooth sliding and occasionally cause sticking with rather high friction forces. Second, the rather weak interlayer van der Waals forces and the large lattice spacing of the fullerene-like structures provide a high compressibility between the sliding surfaces [45]. When the surfaces are in elastic contact, the FL-C structures prefer to deform or coil rather than break. Thus, the damage always occurs in the a-C:H areas with dangling C−H bonds, while the FL-C:H regions tend to be peeled off in whole pieces, as shown in Figure 1c. Sometimes the debris can transform into multilayer nano-onion balls after repeated friction cycles, which behave like nano-bearings for lower friction and may become another interesting topic concerning these FL-C:H films [46,47]. Finally, another robust explanation was given by Rachel J. Cannara et al.: lighter atoms vibrate faster and induce more rapid energy dissipation, resulting in higher friction than for heavier atoms [48]. This corresponds to the respective roles of the amorphous carbon with its hydrogen termination (light atoms) and the fullerene-like carbon (heavy atoms) on the surface in our experiment.
Conclusions
In this paper, FL-C:H films were prepared by PECVD, and their tribological properties at the nanoscale were investigated by FFM in a dry nitrogen atmosphere and under ambient conditions. The film is considered to be composed of a binary structure: FL-C and a-C areas. The former structure induces a lower friction coefficient, lower adhesive force and higher surface roughness, while the latter behaves in the opposite way.
The low friction is evidently determined by the ratio of the FL-C and a-C structures. To understand the lubrication mechanism, attention should be paid to the excellent hardness and elastic modulus of this film, which make it behave elastically with less energy dissipation during frictional sliding.
Since a higher hydrogen fraction has been verified to impede the formation of FL-C structures, the further plan is to reduce the hydrogen concentration during film preparation, for example by utilizing unsaturated hydrocarbons as the reaction gas. In contrast to DLC films, the FL-C:H films are rather independent of hydrogen in achieving excellent lubricity as well as durability, which broadens the application range of this special carbon film and provides further impetus toward superlubricity in ambient environments.
Supplementary Materials: The following are available online at http://www.mdpi.com/2079-6412/10/7/643/s1, Figure S1: The normal-friction force curve with the slope of friction coefficient. Figure S2: Friction coefficient and surface roughness of the FL-C:H films measured in ambient air. Figure S3. Measured adhesive force of each hydrogen concentration in nitrogen averaged over all ten areas by force spectroscopy curve. | 5,468.4 | 2020-07-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
A Single Nucleotide Change in the Mouse Genome Accelerates Breast Cancer Progression
Introduction
In recent years, it has become increasingly clear that in addition to somatic mutations, germ-line alterations such as single nucleotide polymorphisms (SNP) have clinical significance for the development and progression of diseases such as cancer as well as for the definition of a patient's individual response to therapeutic agents (1,2).
In the human FGFR4, a polymorphic nucleotide change in codon 388 converts glycine to arginine in the transmembrane region of the receptor (3).This SNP was shown to be implicated in progression and poor prognosis of various types of human cancer (3)(4)(5)(6)(7)(8).Discovered by Bange and colleagues (3), the FGFR4Arg388 allele could be associated with tumor progression in breast and colon cancer patients.Similarly, soft tissue sarcoma patients, who carried the FGFR4Arg388 allele, had a poor clinical outcome (9).In melanoma, the FGFR4Arg388 allele is associated with increased tumor thickness, whereas in head and neck squamous cell carcinoma the glycine-arginine substitution correlates with reduced overall patient survival and advanced tumor stage (6,7).Furthermore, a recent study on prostate cancer patients associated the FGFR4Arg388 allele not only with tumor progression but also with initiation (8).Breast cancer studies correlate the FGFR4Arg388 allele with both accelerated disease progression and higher resistance to adjuvant systemic or chemotherapy, leading to a significantly shorter disease-free and overall survival (3,10).
The main conclusion of these studies was that the presence of one or two FGFR4Arg388 alleles in the genome does not initiate cancer development but predisposes the carrier to a more aggressive form.Unfortunately, due to the highly complex and heterogeneous genetic background of the patients, statistical analysis yielded at times marginal results, and because of differences in patient stratification and statistical evaluation, diverging results led to controversies (11).Because of that and in spite of the strong association of the FGFR4Arg388 allele with disease progression, this genetic configuration is not yet established as progression marker for clinical outcome or as basis for individual patient treatment decisions.
Here, we show in a genetically "clean" system the effect of a single nucleotide difference in codon 385 of the mouse FGFR4 gene (corresponding to codon 388 in the human) on breast cancer progression.We generated an FGFR4Arg385 knock-in (KI) mouse model to investigate the effect of the FGFR4 isotypes on the physiology of mouse embryonic fibroblasts (MEF) and on mammary cancer progression in vivo.For this purpose, we crossed the FGFR4Arg385 KI mice to WAP-TGFα and MMTV-PymT transgenic mice.In the WAP-TGFα model, transforming growth factor α (TGFα) overexpression is controlled by the whey acidic protein (WAP) promoter, which specifically activates the transgene in mammary epithelial cells in mid-pregnancy, thereby promoting mammary carcinogenesis (12,13).In the MMTV-PymT model, overexpression of PymT leads to the constitutive activation of the proto-oncogene src, which in turn causes the development of breast cancer (14).
Here, we report that the mouse FGFR4Arg385 allele enhances the transformation rate of isolated MEFs; promotes their migration, anchorage independence, and invasion after stable transformation; and, above all, enhances the progression of breast cancer in the WAP-TGFα mouse mammary carcinoma model.Furthermore, the FGFR4Arg385 allele accelerates lung metastatic lesions in vivo.
These results support previous clinical correlation studies on cancer patients with diverse genetic profiles and highlight the importance of the FGFR4Arg388 allele in cancer progression.This SNP may therefore serve as a prognostic marker of clinical outcome for breast cancer patients.
Materials and Methods
Mouse embryonic fibroblasts.MEFs were isolated from 13.5-day postcoitum embryos and maintained following the 3T3 protocol (15).To stably overexpress epidermal growth factor receptor (EGFR), v-src, and empty pLXSN, MEFs were selected with G418 24 h after infection.Transmigration of MEFs was analyzed in Boyden chambers (Schubert & Weiss).Cells (1.5 × 10 4 ) were seeded in DMEM containing 0% FCS.Migration was performed to DMEM containing 4% FCS for 16 h.Afterward, cells were stained with crystal violet and migrated cells were analyzed microscopically.For quantification, Boyden chamber membranes were destained in 5% acetic acid and analyzed in an ELISA reader.For the soft agar assay, cells (1 × 10 5 ) were added to 3 mL DMEM supplemented with 10% fetal bovine serum and 0.3% agar and layered onto 6 mL of 0.5% agar beds in 60-mm dishes.After 24 to 96 h, anchorage independence of cells was calculated and quantified microscopically.To perform a Matrigel assay, 5 × 10 3 cells were seeded on Matrigel-coated (BD Biosciences) 96-well plates.After 24 to 96 h, Matrigel outgrowth was calculated and quantified microscopically.
RNA and reverse transcription-PCR. Total RNA of minced murine tissues was isolated using the RNeasy kit (Qiagen) according to the manufacturer's recommendation. cDNA was produced with the first-strand cDNA kit (Boehringer Mannheim) according to the manufacturer's protocol. Raw data were quantified via ImageJ software, normalized to the expression levels of glyceraldehyde-3-phosphate dehydrogenase (GAPDH), and plotted relative to the control, which was set to 100%.
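The normalization described here amounts to a simple calculation: each target signal is divided by its GAPDH signal and then expressed relative to the control sample, which is set to 100%. The sketch below is a minimal illustration with placeholder intensity values, not data from the study.

```python
def relative_expression(target, gapdh, control_target, control_gapdh):
    """Return GAPDH-normalized expression relative to the control (control = 100%)."""
    normalized = target / gapdh
    control = control_target / control_gapdh
    return 100.0 * normalized / control

# Illustrative band intensities (arbitrary units): result is ~140% of the control.
print(relative_expression(1.8, 1.2, 1.5, 1.4))
```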
Immunoprecipitation and Western blotting. Tumor samples were minced with an Ultra-Turrax homogenizer (Janke & Kunkel, IKA Labortechnik). Tumor samples and cultured cells were lysed in radioimmunoprecipitation assay lysis buffer containing phosphatase and proteinase inhibitors for 30 min and precleared by centrifugation. For immunoprecipitation, lysates (1,000 μg protein) were incubated with protein A-Sepharose beads (GE Healthcare) and the primary antibody at 4°C overnight and subjected to Western blot analysis. Raw data were quantified via ImageJ software, normalized to the expression levels of actin/tubulin, and plotted relative to the control. The following primary antibodies were used: FGFR4 (Santa Cruz Biotechnology), 4G10 (Upstate), and α-actin/α-tubulin (Sigma). The following secondary antibodies were used: horseradish peroxidase (HRP)-conjugated α-rabbit (Bio-Rad) and HRP-conjugated α-mouse (Sigma).
Tumor analysis.Mice were sacrificed by cervical dislocation and opened ventrally.Mammary glands and tumors were excised for measurement.The area and mass of tumors or normal mammary glands were analyzed by metrical measurement and weighing of the tumor tissue and the mammary gland tissue independently.Raw data were normalized on body weight.
Statistical analysis. All data are shown as mean ± SD; all P values were calculated using the Student's t test, and values ≤0.03 were considered statistically significant.
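The comparison described here is an unpaired two-sample Student's t test with the stated significance cutoff. The sketch below illustrates it with placeholder values; the group labels and numbers are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

group_a = np.array([0.42, 0.51, 0.47, 0.55])   # e.g., tumor mass, genotype A (illustrative)
group_b = np.array([0.68, 0.73, 0.61, 0.70])   # e.g., tumor mass, genotype B (illustrative)

# Classic two-sample Student's t test (equal variances assumed by default)
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"P = {p_value:.4f}, significant: {p_value <= 0.03}")
```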
Histology and immunohistochemistry. Murine tissues were fixed in 70% ethanol at 4°C overnight.Samples were embedded in paraffin and sectioned (4-8 μm) on a microtome (HM355S, Microm).Sections were deparaffinized in xylene and rehydrated in a graded series of ethanol.Antigen retrieval was achieved by cooking in citrate buffer (pH 6) in a microwave.Immunohistochemistry was done with the Vectastain staining kit (Vector Laboratories) following the manufacturer's protocol.After blocking with 10% horse serum in PBS/3% Triton X-100 for 1 h, the sections were incubated with the primary antibody at 4°C overnight.The secondary antibody (α-rabbit; Vector Laboratories) was incubated for 1 h in PBS/3% Triton X-100.Mayer's hematoxylin (Fluka) was used as a counterstain.
For analysis of metastases, lungs were sectioned and analyzed at 800-to 1,000-μm intervals.Sections were stained with H&E (Fluka).Metastatic burden was calculated microscopically.
Results
The FGFR4Arg385 allele facilitates cell transformation, migration, anchorage-independent growth, and branching of EGFR-transformed MEFs. To ultimately prove the influence of the FGFR4Arg388 allele on tumor progression in vivo, we generated an FGFR4Arg385 KI model in the genetic background of SV/129 mice, which represents the first directly targeted KI mouse model to investigate the effect of a SNP on the progression of cancer (Supplementary Fig. S1).
Similar to the human situation (3), the FGFR4Arg385 KI mouse model displays no obvious phenotype that distinguishes it from FGFR4Gly385-carrying mice, and FGFR4 expression and localization does not differ between the different FGFR4 isotypes (Supplementary Fig. S2).
MEFs represent a useful in vitro system for investigating the effect of altered genetic loci. Thus, we investigated the effect of the FGFR4Arg385 allele on biological functions of isolated MEFs. Western blot analysis shows neither overexpression nor hyperactivation of FGFR4Arg385 in MEFs (Supplementary Fig. S3A). We further analyzed the effect of the FGFR4Arg385 allele on several processes, including migration, cell proliferation, and cellular life span; however, FGFR4Arg385-carrying MEFs do not show an increased proliferative capacity, life span, or migration compared with FGFR4Gly385 MEFs (Supplementary Fig. S3B and C). Previous reports of clinical studies do not implicate the FGFR4Arg388 allele in tumor initiation but rather associate it with enhanced disease progression (3,6). For that reason, we performed focus formation assays with HER2, EGFR, and v-kit to determine whether a certain FGFR4 genotype could enhance the transformation and progression of MEFs together with different receptor tyrosine kinases acting as the transformation-initiating oncogenes. Infection with the oncogene v-src or the empty pLXSN vector served as a positive or negative control, respectively, in all experiments. FGFR4Gly/Arg385 MEFs display a significantly increased number of foci in cooperation with HER2 and EGFR, whereas FGFR4Arg/Arg385 MEFs show significantly enhanced focus formation with all three investigated oncogenes (Supplementary Fig. S3D). Furthermore, a time course analysis showed that FGFR4Arg/Arg385-carrying MEFs not only transform considerably faster but also generate an increased number of foci over time. These results indicate that the FGFR4Arg385 isotype clearly promotes transformation initiated by different oncogenes and seems to facilitate the transformation of MEFs, resulting in a higher number of foci that appear at earlier time points.
Next to v-kit, the transformation of FGFR4Arg385 MEFs with EGFR displays an unusually high activity in the focus formation assay.Therefore, we aimed to investigate the involvement of the FGFR4Arg385 allele on several physiologic processes using stable EGFR-overexpressing FGFR4Gly385 and Arg385 MEFs.To ensure equal expression among the infected MEFs, overexpression of EGFR and v-src was analyzed via immunoblot analysis and quantification (Supplementary Fig. S4A).Interestingly, FGFR4 is clearly upregulated in EGFR-transformed cells compared with v-src-transformed MEFs and even more upregulated and hyperactivated in FGFR4Arg385 relative to Gly385 MEFs.We further investigated if this upregulation of the FGFR4Arg385 allele compared with FGFR4Gly385 in EGFR-transformed MEFs influences certain biological processes.About proliferation, EGFR-transformed MEFs display no differences between the FGFR4 isotypes (Supplementary Fig. S4B).In contrast, FGFR4Arg385 MEFs transformed with EGFR display a significantly increased migratory capacity compared with Gly385 MEFs (Fig. 1A).Next, we analyzed anchorage independence of EGFR-transformed MEFs in soft agar colony formation assays.FGFR4Arg/Arg385 EGFRtransformed MEFs display a significantly accelerated anchorage-independent growth after 24 and 96 hours (Fig. 1B).Subsequently, we analyzed the effect of the FGFR4Arg385 allele on invasivity in a Matrigel assay (Fig. 1C).EGFRtransformed MEFs display significantly accelerated branching in Matrigel after 24 and 96 hours in the presence of the FGFR4Arg385 allele.In contrast, cell migration, soft agar colony formation, and branching in Matrigel were not promoted by the FGFR4Arg385 allele in v-src-transformed MEFs (Supplementary Fig. S5).
These results show that the FGFR4Arg385 allele accelerates physiologic processes in EGFR-transformed MEFs, including migration, invasion, and anchorage independence, which all contribute to tumor progression.Furthermore, the effect of the FGFR4Arg385 allele is dependent on the genetic background that triggers malignant transformation.
The FGFR4Arg385 allele promotes breast cancer progression in the WAP-TGFα mouse mammary carcinoma model.The in vitro experiments in primary and transformed MEFs show the effect of the FGFR4Arg385 allele on tumor progression on several cell biological processes.Furthermore, the effect of the FGFR4Arg385 isotype seems to be dependent on the oncogenic background.To ultimately clarify the influence of the FGFR4Arg385 allele on tumor progression and accelerated aggressiveness, we investigated the effect of the FGFR4Arg385 allele on breast cancer progression in vivo.Similar to the experiments in vitro, we analyzed the involvement of the FGFR4Arg385 allele on tumor progression in combination with the well-established WAP-TGFα and the MMTV-PymT transgenes (Supplementary Fig. S6A and B).
To investigate the effect of the FGFR4Arg385 allele on tumor progression in the WAP-TGFα model, we analyzed tumors of 6-month-old female WAP-TGFα;FGFR4Gly/Gly385, WAP-TGFα;FGFR4Gly/Arg385, and WAP-TGFα;FGFR4Arg/ Arg385 mice.The analyzed criteria for tumor progression are the mass, area, and the percentage of mass and area compared with the whole mammary gland of the analyzed tumors.As shown in Fig. 2A, the tumor mass is significantly increased in WAP-TGFα;FGFR4Arg/Arg385 mice when compared with WAP-TGFα;FGFR4Gly/Gly385 controls.Furthermore, the percentage of tumor mass is significantly promoted in WAP-TGFα;FGFR4Arg/Arg385 mice (Fig. 2A).Moreover, the tumor area and the percentage of tumor area are significantly accelerated in WAP-TGFα;FGFR4Arg/Arg385 mice (Fig. 2B).These results indicate that the FGFR4Arg385 allele is a potent enhancer of WAP-TGFα−induced mammary tumors.Additionally, the higher significance in the area of tumors suggests that the FGFR4Arg385 allele is not an enhancer of cancer cell proliferation but seems to accelerate migration, resulting in an increased invaded area of the mammary gland.The analyzed control mammary glands of FGFR4Gly/Gly385, Gly/Arg385, and Arg/Arg385 mice without an oncogenic background do not display any changes in their mass, size, or pathology (Supplementary Fig. S6C and D).
The potent tumor-enhancing effect of the FGFR4Arg385 allele is apparent when comparing WAP-TGFα;FGFR4Arg/Arg385 mice with WAP-TGFα;FGFR4Gly/Gly385 controls sacrificed after 8 months of tumor progression (Fig. 2C). Mice transgenic for WAP-TGFα display more as well as larger tumors in the presence of the FGFR4Arg385 allele (white arrows).
In addition to the WAP-TGFα mouse model, we also investigated the tumor-promoting effect of the FGFR4Arg385 allele in the MMTV-PymT mouse mammary carcinoma model. Because of the in vitro results with v-src-transformed MEFs, we aimed to investigate whether the tumor-promoting action of the FGFR4Arg385 allele is, as in vitro, also absent in vivo in this model.
Therefore, we analyzed the tumors of 3-month-old female MMTV-PymT;FGFR4Gly/Gly385, MMTV-PymT;FGFR4Gly/Arg385, and MMTV-PymT;FGFR4Arg/Arg385 mice. As seen in Supplementary Fig. S7A and B, there is no significant difference in either tumor size or tumor mass between the FGFR4 isotypes in mice transgenic for MMTV-PymT. Thus, the tumor-promoting effect of the FGFR4Arg385 allele in vivo is, as in vitro, dependent on the genetic background that triggers oncogenesis.
The FGFR4Arg385 allele is hyperactivated and promotes a more aggressive phenotype in the expression pattern of WAP-TGFα−derived tumors.To further investigate the underlying mechanism of the tumor-promoting effect of the FGFR4Arg385 allele, we studied molecular differences of the FGFR4 alleles.In many human cancers, overexpression of FGFR4 is a commonly observed feature of tumors (16,17).Therefore, we examined FGFR4 expression in tumors derived from 6-month-old WAP-TGFα;FGFR4Gly/Gly385, WAP-TGFα; FGFR4Gly/Arg385, and WAP-TGFα;FGFR4Arg/Arg385 mice.Here, the FGFR4 protein is clearly overexpressed in tumors compared with normal mammary glands; however, there was no detectable difference in the presence of the FGFR4Arg385 alleles (Supplementary Fig. S8A).Contrarily, the FGFR4Arg/ Arg385 displays a significantly enhanced activation compared with the FGFR4Gly/Gly385 (Fig. 3A), suggesting that the tumor-promoting potential of the FGFR4Arg385 allele is possibly due to an enhanced kinase activity.Because of the higher FGFR4Arg/Arg385 activity, we aimed to determine the expression and activation of phosphorylated extracellular signal-regulated kinase and phosphorylated Akt; however, we could not detect a difference in the activation of these molecules (Supplementary Fig. S8B).We further investigated the expression levels of FGFR4 in 3-month-old hyperplastic mammary glands and 6-month-old adenocarcinomas from WAP-TGFα;FGFR4Gly/Gly385 and WAP-TGFα;FGFR4Arg/ Arg385 mice immunohistochemically.Interestingly, the expression of FGFR4 in WAP-TGFα;FGFR4Arg/Arg385 hyperplasias is clearly increased compared with WAP-TGFα; FGFR4Gly/Gly385 mice (Fig. 3B), indicating that the FGFR4Arg/Arg385 allele potentially affects the onset of mammary tumor progression.Similarly to the Western blot analysis, the expression of FGFR4 in adenocarcinomas does not alter the presence of the FGFR4Arg385 allele (Supplementary Fig. S8C).
These data strongly suggest that WAP-TGFα;FGFR4Arg/ Arg385-induced tumors display a more aggressive behavior resulting in an accelerated tumor progression.
The FGFR4Arg385 allele decreases the time point of tumor incidence and promotes tumor progression over time in the WAP-TGFα transgenic model.To further analyze the tumor-promoting effect of FGFR4Arg385 in the WAP-TGFα model, we followed tumor progression by sacrificing the female mice at defined periods.
As shown in Fig. 4A, the visible time point of tumor incidence is significantly decreased in WAP-TGFα;FGFR4Arg385 mice, suggesting that the FGFR4Arg385 allele facilitates neoplastic transformation and thereby decreases the time point of tumor incidence.To ensure that these data are independent of the genetic background, we backcrossed the WAP-TGFα and the FGFR4Arg385 KI mice to the FVB background.Here, we also analyzed the visible tumor incidence in WAP-TGFα;FGFR4Gly/Gly385, WAP-TGFα;FGFR4Gly/Arg385, and WAP-TGFα;FGFR4Arg/Arg385 mice.Like in the C57BL/6 background, the visible tumor incidence is significantly decreased in mice carrying the FGFR4Arg385 allele (Supplementary Fig. S7C).
In the C57BL/6 background, we further investigated tumor progression over time by analyzing the number of tumors, mass, area, and the percentage of mass and area of the dissected tumors of WAP-TGFα;FGFR4Gly/Gly385, WAP-TGFα; FGFR4Gly/Arg385, and WAP-TGFα;FGFR4Arg/Arg385 mice.As shown in Fig. 4B, WAP-TGFα;FGFR4Arg/Arg385 mice just partly establish a significant larger amount of tumors at very late points of tumor progression.However, FGFR4Arg385carrying mice seem to induce a larger amount of tumors but, importantly, increase their number of tumors over time faster than the FGFR4Gly/Gly385 mice.In addition, WAP-TGFα;FGFR4Arg/Arg385 mice establish only a significantly higher tumor mass at very early time points (Fig. 4C).Nevertheless, the FGFR4Arg385 allele seems to clearly progress tumor mass over time.In contrast, the percentage of tumor mass is significantly increased in WAP-TGFα;FGFR4Arg/ Arg385 mice.The tumor area is mostly significantly increased in WAP-TGFα;FGFR4Arg/Arg385 mice (Fig. 4D).The most significant difference is shown in the percentage of tumor area, where both WAP-TGFα;FGFR4Gly/Arg385 mice and WAP-TGFα;FGFR4Arg/Arg385 mice display a significant increase in the percentage of tumor area compared with WAP-TGFα; FGFR4Gly/Gly385 mice (Fig. 4D).
In summary, the FGFR4Arg385 allele promotes breast tumor progression over time and facilitates the initiation of oncogenesis and thereby shortens the time point of tumor onset.
The FGFR4Arg385 allele promotes cancer cell metastasis.As clinical outcome of cancer is strongly dependent on the invasive stage of the primary tumor, it is essential to investigate the effect of the FGFR4Arg385 allele on metastases of WAP-TGFα-derived tumors.Importantly, the expression of genes involved in metastasis and invasion is significantly upregulated in WAP-TGFα;FGFR4Arg/Arg385derived tumors.Therefore, we investigated the occurrence of distant metastases in the lungs of WAP-TGFα;FGFR4Gly/ Gly385, WAP-TGFα;FGFR4Gly/Arg385, and WAP-TGFα; FGFR4Arg/Arg385 mice.Strikingly, FGFR4Arg385-carrying mice display a significantly earlier incidence of lung metastases when compared with WAP-TGFα;FGFR4Gly/Gly385 mice (Fig. 5A).However, as seen in Fig. 5B, the mice display no pathohistologic alterations of lung metastases in the presence of the FGFR4Arg385 allele (black arrows).Furthermore, we investigated the number and size of metastases in the invaded lungs of WAP-TGFα;FGFR4Gly/Gly385, WAP-TGFα;FGFR4Gly/ Arg385, and WAP-TGFα;FGFR4Arg/Arg385 mice after 8 months of tumor progression.Here, WAP-TGFα;FGFR4Gly/Arg385 mice show significantly more metastases that are bigger than 320 μm, whereas WAP-TGFα;FGFR4Arg/Arg385 mice show significantly more metastases that are smaller than 80 μm or bigger than 320 μm (Fig. 5C).
These results suggest that the FGFR4Arg385 allele contributes to accelerated tumor cell invasion as well as an earlier incidence and faster growth of metastases.In the MMTV-PymT breast cancer model, pulmonary metastasis was visible after 3 months of tumor development (data not shown); however, no acceleration of tumor progression was observed (Supplementary Fig. S7).We therefore did not further investigate the effect of the FGFR4Arg385 allele on metastasis to the lung.
Discussion
In this study, we investigated for the first time the effect of the change of a single nucleotide in the mouse genome in the gene encoding the receptor tyrosine kinase FGFR4 on breast cancer progression in the WAP-TGFα model.Here, we first analyzed the differential effect of the FGFR4 alleles in MEFs.First, we investigated the effect of the FGFR4Arg385 allele on MEF transformation by oncogenes such as HER2, EGFR, and v-kit, as the most prominent effect of the FGFR4Arg388 allele is the disease progression rate once the cancer has been initiated (3,6,7,9).Consistently, MEFs expressing the Arg385 allele showed a significantly higher transformation rate than control fibroblasts.We then investigated if the FGFR4Arg385 allele contributes to EGFR-driven transformation.To this end, we stably transformed the MEF FGFR4 genotype variants by EGFR overexpression.Interestingly, FGFR4 was upregulated in EGFR-transformed MEFs and the FGFR4Arg385 was found to be hyperactivated in EGFR-transformed MEFs.These results indicated the possibility of cross-talk between these two receptors as it has been shown for HER2 and FGFR4 (21).In EGFR-transformed MEFs, the FGFR4Arg385 isotype was significantly associated with accelerated cell motility, soft agar colony formation, as well as invasivity and branching in Matrigel.Furthermore, as a migratory effect is not detectable in any FGFR4 genotype MEFs not transformed by EGFR overexpression, these data clearly indicate that the FGFR4Arg385 allele is not an oncogene per se, but rather support oncogenes by the enhancement of transformation-characteristic biological processes.Moreover, no effect of the FGFR4Arg385 allele could be detected when MEFs were transformed with v-src.These results suggest that the effect of FGFR4Arg385 is clearly dependent on the specific oncogenic background that triggers the neoplastic transformation, and indicate a supportive rather than autonomous action of the FGFR4Arg385 isotype.
After this clear implication of the FGFR4 and its Arg385 variant in biological processes that are involved in definition of tumor progression and aggressiveness, we investigated the effect of FGFR4 on tumor progression in vivo.We reasoned that the FGFR4Arg385 KI mouse would overcome the problem of heterogenetic patient cohorts to clarify the possible effect of the FGFR4Arg385 allele on tumor progression (11).As FGFR4 is upregulated in diverse cancers including that of the breast and, furthermore, the FGFR4Arg388 allele was shown to promote mammary carcinoma in humans, we determined the effect of this allele on mammary cancer progression in the mouse (3,17).Similar to the experiments in vitro, we analyzed the involvement of FGFR4Arg385 in tumor progression in combination with the well-established WAP-TGFα and the MMTV-PymT transgenic cancer models.We showed that the FGFR4Arg385 allele promotes WAP-TGFα-induced mammary tumors in mass and area.In addition, these tumors displayed faster progression with a significant increase of tumor size and metastases over time depending on the different FGFR4 genotypes.Furthermore, FGFR4Arg385 decreased the time point of visible tumor incidence and therefore seems to facilitate tumor initiation.Moreover, the analysis of the criteria of tumor progression displayed a more significant difference in the tumor area rather than tumor mass, suggesting that the effect of the FGFR4Arg385 is rather on cell motility than proliferation.This is in line with the results obtained with EGFR-transformed FGFR4Arg385 MEFs.
We further analyzed the molecular consequences of FGFR4Arg385 isotype expression in tumors to investigate the underlying mechanism of accelerated tumor progression.Although FGFR4Arg385 is not overexpressed in primary tumors relative to FGFR4Gly385, its activity is enhanced.As the SNP in FGFR4 results in the conversion of a neutral to a hydrophilic amino acid, the structure of the FGFR4Arg385 allele possibly alters receptor regulation.Furthermore, Wang and colleagues (22) showed increased stability of the FGFR4Arg388 receptor in prostate cancer cell lines, which could result in a relatively higher phosphorylation status.
In this study, we further analyzed differences in the expression of several genes involved in tumor progression between WAP-TGFα;FGFR4Arg/Arg385 and WAP-TGFα; FGFR4Gly/Gly385 tumors.Here, FGFR4Arg385-carrying WAP-TGFα-derived tumors display a more "aggressive" gene expression pattern.The significant downregulation of the tumor suppressor p21 is known to predict the poorest prognosis together with high EGFR expression (23), and the upregulation of CDK1 involves FGFR4 in an enhanced migratory capacity of cancer cells (18).The unchanged expression of the other cell cycle proteins confirms the lack of involvement of FGFR4Arg385 in cell proliferation.Moreover, genes associated with cell invasivity were upregulated in FGFR4Arg385expressing WAP-TGFα-derived tumors, such as flk-1, CD44, and MMPs, contributing to a higher metastatic potential (20).Previous studies also identified changes in the cellular gene expression profile in the presence of FGFR4Arg388.Here, FGFR4Arg388 promoted the upregulation of the metastasisassociated gene Ehm2 in prostate cancer and the promigratory gene LPA receptor EDG-2 in MDA-MB-231 cells that is suppressed by FGFR4Gly388 (24,25).
Besides changes in gene expression, the FGFR4 isotypes could differ in their affinity toward other functionally relevant proteins. To address this possibility, we performed a SILAC-based mass spectrometry analysis of immunoprecipitates of FGFR4Gly388 and Arg388 in MDA-MB-231 breast cancer cells (3). Here, we identified the EGFR as a strong interaction partner of FGFR4. Subsequent experiments interestingly showed a significantly higher affinity of the EGFR for the FGFR4Arg388 variant, resulting in enhanced downstream signaling. This interaction may well be the key mechanism of the tumor progression-promoting effect of FGFR4Arg388, which is supported by our results in the FGFR4Arg385 KI WAP-TGFα mouse model, in which a hyperactive EGFR drives mammary carcinogenesis. Consistent with the gene expression differences and our preliminary EGFR interaction hypothesis, mouse cancer cells expressing the FGFR4Arg385 allele display an enhanced potential for invading the lung to form distant metastases in vivo. These data strongly associate the FGFR4Arg388 allele with poor prognosis and thereby highlight this receptor as a marker of breast cancer progression. Our in vivo results are consistent with several clinical reports published since the discovery of the FGFR4Arg388 allele by our laboratory (3), which associate the FGFR4Arg388 allele with poor clinical outcome in various cancers, including head and neck, breast, and melanoma (6)(7)(8).
In contrast, FGFR4Arg385 was not able to promote mammary cancer progression in mice transgenic for MMTV-PymT neither in tumor mass nor in area.This is well in line with the results obtained with v-src-transformed MEFs.In this case, FGFR4Arg385 could not enhance any of the analyzed biological properties.These findings underline the dependency of the FGFR4Arg385 isotype on a specific oncogenic background.
Our data support the conclusion that the FGFR4Arg388 allele is a potent enhancer of human breast tumor development, progression, and metastasis formation.As recent publications correlate the FGFR4Arg388 allele with various types of human cancer, our mouse KI model strongly supports an exclusive effect of the FGFR4Arg388 allele on a broad spectrum of cancers with respect to the rate of disease progression and outcome.The strong effect of FGFR4 on disease outcome is further underlined by Roidl and colleagues (26), who could show that upregulation of FGFR4 results in chemoresistance in breast cancer cell lines, and by the work of Meijer and colleagues (27), which shows that FGFR4 predicts failure in tamoxifen treatment of breast cancer patients.This notion is strongly supported by our previous findings-that the time of mammary cancer relapse after different drug treatments is associated with the two FGFR4-388 alleles (10).In summary, we have characterized FGFR4 as an allelespecific "oncogene enhancer" that significantly accelerates neoplastic progression driven by classic oncogenes such as EGFR or kit.This novel scenario of oncogenesis is of high clinical relevance as the FGFR4-388 genotype of a patient may provide a diagnostic marker for the individual determination of therapy decisions.These data present opportunities for the further use of the FGFR4Arg385 KI model for the investigation of FGFR4 genotype-selective cancer treatment strategies and mechanisms of resistance in targeted therapies.
Our findings strongly support a role of the FGFR4Arg388/385 allele as a marker for poor clinical outcome in breast cancer progression and metastasis. On this account, these data further validate FGFR4 and its isotypes as a target for the development of prototypical drugs and emphasize the validity and importance of individualized therapy regimens for cancer patients. Above all, our findings highlight the effect of germ-line alterations, including the great variety of SNPs in tyrosine kinase genes (28), on the clinical progression of cancer and thereby emphasize the individual nature of this deadly disease. Future cancer therapy decisions will have to include individual genetic characteristics, such as the FGFR4Arg388 allele, in addition to the histologic and molecular pathologic data of every individual tumor.
Figure 4. The FGFR4Arg385 allele decreases the time point of tumor incidence and promotes tumor progression over time in the WAP-TGFα transgenic model. In B to D, every data point represents the values of three or more female WAP-TGFα;FGFR4Gly/Gly385, WAP-TGFα;FGFR4Gly/Arg385, or WAP-TGFα;FGFR4Arg/Arg385 mice. A, WAP-TGFα;FGFR4Arg385 mice show a significantly earlier time point of visible tumor incidence compared with WAP-TGFα;FGFR4Gly385 mice. B, WAP-TGFα;FGFR4Arg/Arg385 mice partly establish a significantly higher number of tumors over time compared with WAP-TGFα;FGFR4Gly385 mice. C, WAP-TGFα;FGFR4Arg/Arg385 mice partly display a significant increase in tumor mass and the percentage of tumor mass over time compared with WAP-TGFα;FGFR4Gly385 mice. D, WAP-TGFα;FGFR4Arg385 mice partly display a significant increase in tumor area and the percentage of tumor area over time compared with WAP-TGFα;FGFR4Gly/Gly385 mice.
Figure 5. The FGFR4Arg385 allele promotes cancer cell metastasis in the WAP-TGFα mouse mammary tumor model. A, WAP-TGFα;FGFR4Arg385 mice show a significantly earlier time point of metastasis onset compared with WAP-TGFα;FGFR4Gly385 mice. B, black arrows, metastases display no obvious pathohistologic differences among the different FGFR4 genotypes after 6 mo of tumor progression (H&E staining). C, WAP-TGFα;FGFR4Arg385 mice partly display a significantly increased number of metastases compared with WAP-TGFα;FGFR4Gly/Gly385 mice after 8 mo of tumor progression. Size is plotted against number of metastases. | 6,389.6 | 2010-01-15T00:00:00.000 | [ "Biology" ] |
Increased Cell Traction-Induced Prestress in Dynamically Cultured Microtissues
Prestress is a phenomenon present in many cardiovascular tissues and has profound implications on their in vivo functionality. For instance, the in vivo mechanical properties are altered by the presence of prestress, and prestress also influences tissue growth and remodeling processes. The development of tissue prestress typically originates from complex growth and remodeling phenomena which yet remain to be elucidated. One particularly interesting mechanism in which prestress develops is by active traction forces generated by cells embedded in the tissue by means of their actin stress fibers. In order to understand how these traction forces influence tissue prestress, many have used microfabricated, high-throughput, micrometer scale setups to culture microtissues which actively generate prestress to specially designed cantilevers. By measuring the displacement of these cantilevers, the prestress response to all kinds of perturbations can be monitored. In the present study, such a microfabricated tissue gauge platform was combined with the commercially available Flexcell system to facilitate dynamic cyclic stretching of microtissues. First, the setup was validated to quantify the dynamic microtissue stretch applied during the experiments. Next, the microtissues were subjected to a dynamic loading regime for 24 h. After this interval, the prestress increased to levels over twice as high compared to static controls. The prestress in these tissues was completely abated when a ROCK-inhibitor was added, showing that the development of this prestress can be completely attributed to the cell-generated traction forces. Finally, after switching the microtissues back to static loading conditions, or when removing the ROCK-inhibitor, prestress magnitudes were restored to original values. These findings show that intrinsic cell-generated prestress is a highly controlled parameter, where the actin stress fibers serve as a mechanostat to regulate this prestress. Since almost all cardiovascular tissues are exposed to a dynamic loading regime, these findings have important implications for the mechanical testing of these tissues, or when designing cardiovascular tissue engineering therapies.
INTRODUCTION
Cardiovascular tissues display significant levels of prestress. This prestress is an intrinsic stress which is relieved when the tissues are isolated from their in vivo environment. The presence of prestress has profound implications for the in vivo functioning of cardiovascular tissues. First, prestress directly influences the apparent in vivo mechanical properties of, for example, heart valves (Amini et al., 2012; Rausch and Kuhl, 2013) and arteries (Dobrin et al., 1975; Chuong and Fung, 1986; Cardamone et al., 2009). It therefore largely dictates the functioning of these cardiovascular tissues. Second, prestress development has been shown to increase tissue extracellular matrix (ECM) alignment and matrix deposition in tissue-engineered (TE) sheets (Grenier et al., 2005) and heart valves (Mol et al., 2005), respectively, hence influencing structural adaptation in the long run. Finally, abnormal levels of prestress can give rise to serious pathologies, which, among others, include vascular hypertension caused by excessive prestress-induced vasoconstriction (Fagan et al., 2004) and aneurysm formation caused by insufficient levels of prestress in tissue-engineered vascular grafts (Tara et al., 2015). In this context, gaining insight into the factors influencing the development of tissue prestress is of paramount importance.
The development of tissue prestress in cardiovascular tissues typically arises due to complex growth and remodeling phenomena, which are only partially understood (Ambrosi et al., 2011). One particularly interesting mechanism for prestress development is the ability of cells to apply traction forces to their surroundings. These forces are generated by contraction of cellular actin stress fibers. Subsequently, these actively generated forces are transferred to the surrounding ECM by means of focal adhesions, leading to the development of tissue prestress. Van Vlimmeren et al. (2012) showed that these cell-mediated traction forces are accountable for roughly 40% of the prestress present in statically cultured tissue-engineered strips.
Many previous studies have investigated the effect of cellular traction forces on the development of tissue prestress. For instance van Loosdregt et al. (2018) studied the relationship between intrinsically generated cell stress and cellular organization in 2D, and found the two to be independent from each other. In addition, Legant et al. (2009) developed a platform in which micrometer-scale cantilevers were used to simultaneously culture 3D microtissues and measure the generated stress. This stress increased with higher cantilever stiffness, but decreased with increasing collagen concentrations. Kural and Billiar (2014) used similar microtissues to study the effect of boundary stiffness, and TGF-β exposure to the developed cell-generated forces. Finally, Boudou et al. (2012) also created a microfabricated platform to measure the dynamic contraction of cardiac microtissues, which was later adapted by van Spreeuwel et al. (2014), who studied the influence of matrix (an)isotropy on this intrinsic contraction. The main advantages of these micrometer scale setups over conventional platforms are the relatively short culture times, and the option of accommodating a large number of samples. However, these particular setups were only used to study cell-generated stress under static external loading which is not physiological for cardiovascular tissues, since these are constantly being exposed to dynamic loading conditions.
There is evidence that external dynamic loading can alter (micro)tissue organization and potentially the degree of developed prestress. Like Legant et al. (2009), Foolen et al. (2012) also used cantilever-suspended tissues, but in their case the cantilevers were mounted on top of a stretchable membrane, enabling dynamic loading of the constructs. They found that uniaxial and biaxial cyclic stretch differentially affected active actin and collagen (re)organization in 3D. However, as the cantilevers were relatively stiff, tissue prestress could not be quantified. A similar study by Gould et al. (2012) found that dynamic loading of collagen hydrogels, in addition to regulating collagen fiber alignment and cellular orientation, is a potent regulator of cellular phenotype. Finally, Zhao et al. (2013) cyclically loaded microtissues for 15 min using electromagnetic tweezers and found increased tissue stiffness due to increased cellular traction forces. However, it remains unclear whether long-term exposure to dynamic mechanical stimuli also induces a cell traction-mediated increase in prestress. This can be especially important in cardiovascular tissue-engineering therapies, which introduce a previously unloaded construct into a continuously dynamically loaded in vivo environment. If this is the case, prestress-induced changes in TE construct mechanical properties can alter its in vivo functionality, ultimately determining the success or failure of the therapy. Delvoye et al. (1991) showed that in constrained collagen gels, seeded fibroblasts will compact the ECM until the tensile stress reaches a steady state. After subsequent perturbations in the gel, the cells will again strive to restore the same mechanical steady state. In analogy with this phenomenon, we hypothesize that cells will also mediate tissue prestress in response to dynamic stimulation by increasing their actin-generated cell traction forces.
To investigate the validity of this hypothesis, in this study 3D microtissues were exposed to long-term dynamical loading, after which the cell traction-induced prestress was quantified. To this end, a microfabricated tissue gauge (µTUG) platform (van Spreeuwel et al., 2014) was combined with the commercial available Flexcell system to create a µFlex-TUG setup and facilitate 24 h cyclic stretching of the microtissues. First, the setup was validated by measuring the microtissue stretches during dynamic culture using digital image correlation. Subsequently, microtissues were dynamically cultured for 24 h, which increased the cell traction-induced tissue prestress almost two-fold. Next, the origin of the increased prestress levels was investigated. First, a ROCK-inhibitor, temporary inhibiting stress fiber contractility, was added after 48 h to both dynamically and statically cultured experimental groups. In both groups, prestress levels were comparable and almost completely abated after ROCKinhibition, showing that the elevated prestress levels after 24 h dynamical culture can solely be attributed to increased stress fiber contraction. Second, after subsequent removal of the ROCKinhibitor, prestress magnitudes returned to static control levels. In addition, in another dynamically cultured group, the elevated prestress levels returned to magnitudes comparable to static controls after removal of the dynamic cue. These findings show that intrinsic tissue prestress is a highly regulated parameter, in which the actin stress fibers serve as a mechanostat to control this prestress.
Since cardiovascular tissues are experiencing everlasting hemodynamic loading, and the fact that prestress influences a tissue's mechanical behavior, these findings have important implications for accurately determining (in vivo) mechanical properties. Additionally, tissue-engineering therapies aimed at replacing such cardiovascular tissues often use cells with a contractile phenotype. The findings suggest that dynamic stimulation after implantation of the TE constructs could alter their in vivo function and subsequent success of the therapy.
µFlex-TUG and µTUG Fabrication
The setup consists of eighty microfabricated tissue gauges (µTUGs), where each of these µTUGs contains four compliant polydimethylsiloxane (PDMS) microposts embedded in a microwell (Figures 1A,B). Fabrication of the µTUGs was performed according to van Spreeuwel et al. (2014). Briefly, positive masters were created by spin-coating SU-8 photoresist (Microchem, Berlin, Germany) on a silicon wafer, which was subsequently UV-exposed. Alignment of different layers was performed using a Suss MJB3 mask aligner (Suss Microtec, Garching, Germany). The masters were then made non-adhesive through overnight silanization with (tridecafluoro-1,1,2,2-tetrahydrooctyl)-1-trichlorosilane (Abcr, Karlsruhe, Germany) under vacuum. Negative PDMS templates were made by casting PDMS, with a prepolymer-to-curing-agent ratio of 10:1 w/w (Sylgard 184; Dow-Corning, Midland, USA), on the masters, followed by overnight incubation at 65 °C. These negative PDMS templates were then treated in a plasma oxidizer (1 min at 100 W) and again made non-adhesive by overnight silanization. Subsequently, PDMS was cast on these templates and degassed in a vacuum oven. The PDMS-covered negative templates were either stamped into Flexcell BioFlex plates (Flexcell International Corporation, Burlington, NC, USA) to create µFlex-TUGs for dynamic culture, or into regular petri dishes to create static µTUGs. Finally, both setups were cured overnight at 65 °C, followed by careful removal of the negative template.
Cell and Microtissue Culture
Human vena saphena cells (HVSCs) were cultured until passage 7 using culture medium containing advanced Dulbecco's Modified Eagle Medium (DMEM, Invitrogen, Carlsbad, CA, USA), supplemented with 10% Fetal Bovine Serum (FBS, Greiner Bio One, Frickenhausen, Germany), 1% GlutaMAX (Invitrogen) and 1% penicillin/streptomycin (Lonza, Basel, Switzerland). These cells have previously been characterized as myofibroblasts (Mol et al., 2006) and exhibit a contractile phenotype. Microtissues were created according to the protocol described by van Spreeuwel et al. (2014). In short, first the µFlex-TUGs and µTUGs were sterilized by immersion in 70% ethanol for 15 min, followed by 15 min of UV radiation. To impair cell adhesion, the PDMS was treated with 0.2% Pluronic F127 (BASF, Ludwigshafen am Rhein, Germany) in PBS for 15 min. A gel mixture containing 50% collagen (rat tail collagen type 1, BD Biosciences, New Jersey, USA, 3.2 mg ml−1), 39% culture medium, 8% growth factor-reduced Matrigel (BD Biosciences) and 3% 0.25 M NaOH was prepared and centrifuged into the microwells (1 min, 2,000 RPM). Residual gel which was not in the microwells was used to resuspend harvested HVSCs, after which this suspension was centrifuged again into the microwells (1 min, 1,000 RPM) (Figure 2A). Excess gel was removed and the remaining cell/gel suspension in the microwells was allowed to polymerize for 10 min at 37 °C. Finally, standard culture medium supplemented with 0.25 mg/mL L-ascorbic acid 2-phosphate (Sigma-Aldrich, St. Louis, MO, USA) was added on top of the microwells. After seeding, the setups were placed in an incubator for 24 h at 37 °C, 100% humidity and 5% CO2 to allow for initial microtissue formation (Figure 2B).
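As a quick sanity check of the gel recipe above, the per-component volumes for a chosen batch size follow directly from the stated percentages. The sketch below is only illustrative; the 500 µl batch volume is an assumed example, not a value from the protocol.

```python
# Illustrative helper for the collagen/Matrigel gel recipe described above.
# The 500 µl total volume is an assumed example batch size, not from the protocol.
FRACTIONS = {
    "rat tail collagen type 1 (3.2 mg/ml)": 0.50,
    "culture medium": 0.39,
    "growth factor-reduced Matrigel": 0.08,
    "0.25 M NaOH": 0.03,
}

def gel_volumes(total_ul: float) -> dict:
    """Return the volume (µl) of each component for a given total gel volume."""
    return {name: round(frac * total_ul, 1) for name, frac in FRACTIONS.items()}

if __name__ == "__main__":
    for component, vol in gel_volumes(500.0).items():
        print(f"{component}: {vol} µl")
```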
Validation of Microtissue Strain in µFlex-TUG
To cyclically stretch the microtissues, the seeded µFlex-TUGs were placed in the Flexcell FX-4000 system, supported by rectangular posts (Figure 1B). This system enables the application of uniaxial dynamic loading by applying a vacuum to the flexible membrane of the Flexcell plates and stretching it over the posts (Figure 1C). The PDMS microposts connected to the membrane translate these stretches to the connected microtissue (Figure 2C). It is unknown how the Flexcell membrane stretches translate to actual microtissue stretches. In this regard, Colombo et al. (2008) showed that accurate calibration and measurement of Flexcell strains are recommended given the viscoelastic nature of the Flexcell system. Therefore, the strains in the microtissues in the µFlex-TUG were validated by means of digital image correlation (DIC). Toward this end, microtissues were seeded in the µFlex-TUG system and 5, 10, 15 and 20% Flexcell strains were applied at a frequency of 0.5 Hz. Videos of the stretched microtissues were recorded with a camera mounted on a Zeiss Stereo Discovery V8 (Oberkochen, Germany) and analyzed using the DIC software previously developed by Neggers et al. (2012) to obtain the Green-Lagrange strains (in the constrained tissue direction) in one rectangular middle section of the microtissue for each of the applied Flexcell strains.
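To make the strain measure explicit: for a uniaxial stretch ratio λ along the constrained tissue direction, the Green-Lagrange strain is E = (λ² − 1)/2. The sketch below is a minimal illustration of that conversion from tracked gauge lengths; it is not the DIC software of Neggers et al. (2012), and the example lengths are hypothetical.

```python
import numpy as np

def green_lagrange_strain(l0: float, l: float) -> float:
    """Uniaxial Green-Lagrange strain from reference length l0 and deformed length l."""
    stretch = l / l0                 # stretch ratio lambda
    return 0.5 * (stretch**2 - 1.0)  # E = (lambda^2 - 1) / 2

# Hypothetical gauge lengths (µm) tracked over one loading cycle.
reference_length = 100.0
deformed_lengths = np.array([100.0, 101.0, 102.0, 101.0, 100.0])
strains = [green_lagrange_strain(reference_length, l) for l in deformed_lengths]
print(strains)
```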
Confocal Microscopy
To visualize microtissue structure at the end of the experiments, the microtissues were incubated with a collagen-specific CNA35 probe (Boerboom et al., 2007) for 30 min, after which Z-stack images were made using a confocal laser scanning microscope (TCS SP5X; Leica Microsystems, Wetzlar, Germany, excitation 488 nm, emission 520 nm, magnification 10x, Z-step= 3 µm). Next, microtissues were fixed for 10 min using 10% formalin, followed by permeabilization with 0.5% Triton X-100 in PBS. The cell nuclei and actin network were stained with DAPI and Atto 488 Phalloidin (Sigma-Aldrich) dyes, respectively, and imaged using the confocal microscope.
Prestress Measurements
In order to quantify the tissue prestress (Figure 2D), first the forces generated on the four PDMS microposts were determined. Toward this end, brightfield images of the microtissues were made at every time point on an EVOS XL Core microscope (Thermo Fisher, Waltham, MA, USA). Using a semi-automatic, custom-made Matlab script (Mathworks, Natick, USA), the four strongest circles in each image were detected using Matlab's imfindcircles function. The positions of these circles corresponded to the tops of the four microposts. The displacement of the center of each circle with respect to its original position was used to determine the displacement of the top of each post (u). Next, the spring constant K (= 1.22 N m−1) of a single micropost was determined by means of finite element simulations (Abaqus, Dassault Systèmes Simulia Corp., Providence, RI, USA, version 6.14-1). First, a force was applied to the part of a single micropost where the tissue is attached, which is just below the relatively large cap of the post. Next, the displacement of the top middle node in the direction of the force was determined (for more information see the Supplementary Material). Second, using this spring constant and the micropost displacement, the force exerted on one micropost was determined as F_post^i = K u_i (1). Since the force generated by the microtissue is equal and opposite on the microposts on both sides of the tissue, the total microtissue force F_tissue is the sum of the individual forces on each of the four posts divided by two. It needs to be noted that only the component of the total force in the constrained direction was considered. To acquire the prestress in the microtissues, the measured forces need to be translated into stresses. Toward this end, the cross-sectional area (CSA) of each microtissue was obtained. First, from all the images made during the force measurements, Matlab's imdistline function was used to obtain the width in the middle part of the microtissue, which we define as the "cross-sectional length" (A_CSL). Second, at the end of the experiment the "real" CSA (A_CSA) of the collagen-stained microtissues (section Confocal Microscopy) was determined. All slices of the Z-stack were imported into Matlab, and from the middle image the main orientation of the largest connected component (which is the microtissue) was obtained using the regionprops Matlab function. This main orientation was used to rotate all individual slices so that the images align in the horizontal direction (Figure 3A). Next, each lateral slice from the 3D image was binarized. A convex hull was wrapped around the images, where the pixels within this convex hull compose the CSA (Figure 3B). The lateral slice with the smallest convex hull was obtained and used together with the dimension data of each voxel to obtain the microtissue CSA in µm². This real CSA was plotted against the microtissue width at the end of the experiment, for which a strong correlation (R² = 0.91) was found when fitting a linear model (Figure 4). Finally, the real CSA at each previous time point of the experiment was estimated by applying the obtained linear model to the measured microtissue widths. All forces (F_tissue, µN) were divided by the CSA (A_CSA, µm²) to obtain the microtissue prestress (σ) in kPa, i.e., σ = F_tissue / A_CSA (2).
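The force-to-stress conversion described above can be summarized in a few lines. The sketch below assumes the spring constant K = 1.22 N m−1 from the finite-element analysis; the width-to-CSA model coefficients and the post displacements are placeholders for illustration, not the fitted values from the study.

```python
import numpy as np

K_POST = 1.22  # spring constant of a single micropost, N/m (from the FE simulations)

def tissue_force(post_displacements_um):
    """Total tissue force (µN) from the displacements (µm) of the four post tops.

    F_post_i = K * u_i; the tissue pulls on two opposing pairs of posts,
    so the tissue force is the sum of the four post forces divided by two.
    """
    u = np.asarray(post_displacements_um) * 1e-6   # µm -> m
    forces = K_POST * u                            # N per post
    return forces.sum() / 2.0 * 1e6                # N -> µN

def cross_sectional_area(width_um, slope=25.0, intercept=500.0):
    """Estimate CSA (µm²) from tissue width via a linear model.

    The slope/intercept here are placeholders; in the study they were obtained by
    regressing confocal-derived CSA against width (R² = 0.91).
    """
    return slope * width_um + intercept

def prestress_kpa(post_displacements_um, width_um):
    """Prestress in kPa: sigma = F_tissue / A_CSA (µN/µm² equals MPa, so x1000 for kPa)."""
    f = tissue_force(post_displacements_um)        # µN
    a = cross_sectional_area(width_um)             # µm²
    return f / a * 1e3                             # kPa

print(prestress_kpa([4.0, 3.8, 4.1, 3.9], width_um=120.0))
```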
Number of Cells
To determine the prestress magnitude per cell, the number of cells per microtissue was counted. This was done by importing the Z-stacks of the DAPI channel from the confocal images into ImageJ (NIH, Bethesda, MD, USA) (Figure 3C). First, the stacks were filtered with a 3D watershed algorithm (Ollion et al., 2013) (Figure 3D). Next, the number of cells in the binarized images was counted using a 3D object counter plugin (Bolte and Cordelières, 2006) (Figure 3E).
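A rough Python analogue of this ImageJ workflow is sketched below; it simply labels connected components in a binarized DAPI stack with scipy.ndimage, so unlike the 3D watershed used in the study it will undercount touching nuclei. The threshold and the synthetic stack are illustrative only.

```python
import numpy as np
from scipy import ndimage

def count_nuclei(dapi_stack: np.ndarray, threshold: float) -> int:
    """Count connected 3D objects in a DAPI z-stack (z, y, x) above a threshold.

    Note: plain connected-component labelling merges touching nuclei; the study
    used a 3D watershed plus object counter in ImageJ to separate them.
    """
    binary = dapi_stack > threshold
    _, n_objects = ndimage.label(binary)
    return n_objects

# Synthetic example: two bright blobs in an otherwise dark stack.
stack = np.zeros((10, 64, 64))
stack[3:6, 10:14, 10:14] = 1.0
stack[5:8, 40:45, 40:45] = 1.0
print(count_nuclei(stack, threshold=0.5))  # -> 2
```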
Experimental Design
Microtissues were seeded in two µTUGs as a static control, and additionally two wells in a µFlex-TUG were seeded as a dynamic group. An overview of the experimental design can be found in Figure 5. Initially, all four groups were cultured under static conditions for 24 h to allow the formation of microtissues, after which the force and CSA were determined.
To determine the effect of dynamic loading on the generated prestress, subsequently the two dynamic groups were switched to dynamic culture conditions by applying 10% strain to the Flexcell plate at 0.5 Hz (Foolen et al., 2012). After another 24 h, the force and CSA was determined again for all groups.
To analyse the cell traction-mediated fraction of the total prestress, a ROCK-inhibitor (Y-27632, Sigma-Aldrich) was added to one static and one dynamic group, and forces and CSA were measured again after 30 min. Finally, to determine whether differences in prestress were reversible, the ROCK-inhibitor was removed by adding fresh culture medium and switching all dynamic groups back to static culture conditions. After a final 24 h the prestress was measured again for all groups.
Statistical Analysis
Only microtissues which were still attached to all four posts at the end of the experiment (72 h) were included in the analysis. For the static control group and the ROCK-inhibited static group, sample numbers were 21 and 24, respectively, while for the dynamic and ROCK-inhibited dynamic groups the sample sizes were 10 and 13, respectively. All data are reported as mean ± standard error of the mean. Statistical analysis of the data was performed with a many-to-one Dunnett test, comparing all conditions to one control group, accounting for heterogeneous variances and unequal sample sizes using the methods and implementation in the statistical software package R (R Core Team, Vienna, Austria) described in
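For readers without the R setup, a rough analogue of the many-to-one comparison is available in SciPy (version 1.11 or later) via scipy.stats.dunnett. Note that this implementation assumes homogeneous variances, unlike the heteroscedasticity-robust procedure used in the study, and the arrays below are synthetic placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic prestress values (kPa) for a control group and two treatment groups.
control  = rng.normal(1.0, 0.3, size=21)
dynamic  = rng.normal(2.0, 0.5, size=10)
rock_inh = rng.normal(0.3, 0.1, size=24)

# Many-to-one comparison of each treatment group against the control.
result = stats.dunnett(dynamic, rock_inh, control=control)
print(result.pvalue)
```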
Microtissue Strains Are One-Third of the Applied Flexcell Strains
The applied Flexcell strains clearly translated to a strain in the microtissues (Figures 6A,B). During one cycle (2 s), these microtissues strains followed a homogeneous inverse parabolic profile (Figure 6C), reaching a maximum value halfway through the cycle. When quantifying the maximal microtissue Green-Lagrange strains, it was found that they increased with the applied Flexcell strains (Figures 6C,D). However, the applied Flexcell strains did not equal the microtissue strains. As a rule of thumb, the actual microtissue strain was assessed to be one-third of the applied Flexcell strain.
The Microtissues Have a Uniform Distribution of Cells, and Collagen and Stress Fibers Are Aligned in the Constrained Direction
The cells are uniformly spaced inside the microtissues ( Figure 7A). Also the collagen ( Figure 7B) and actin ( Figure 7C) are homogenously distributed in the microtissue, where the fibers are oriented in the longitudinal direction.
Increased Prestress in Dynamically Cultured Microtissues
The prestress in the static control group remained constant during 72 h of culture (Figure 8A, blue). Although initially the prestress was similar after 24 h, following dynamic stimulation, the microtissue prestress increased significantly to levels over twice that of the statically cultured controls after 48 h of culture ( Figure 8A, red). Upon removal of the dynamic mechanical cue, prestress levels went back to equal magnitudes as that of static controls at 72 h of culture.
Prestress Increase Is Caused by the Stress Fibers and Is Reversible
The statically cultured microtissues which were exposed to the ROCK-inhibitor showed a similar prestress magnitude at 24 and 48 h of culture. However, after addition of the inhibitor, the prestress dropped significantly (Figure 8B, red), leaving only a small amount of residual stress. After subsequent removal of the ROCK-inhibitor, within 24 h the prestress again reached levels comparable to the control group. A similar phenomenon was observed in the ROCK-inhibited dynamically cultured microtissues. Upon dynamic culture, the stretched microtissue prestress again increased significantly to levels over twice that of the statically cultured controls at 48 h of culture (Figure 8C, blue). Addition of the ROCK-inhibitor diminished this elevated prestress almost completely, leading to residual stress levels comparable to those of the statically ROCK-inhibited microtissues (Figure 8B). In line with earlier observations, removal of the ROCK-inhibitor and subsequent static culture resulted in prestress levels similar to those of statically cultured controls after 72 h.
Prestress Per Cell Is Similar in Statically and Dynamically Cultured Groups at the End of Culture
The number of cells per individual microtissue was determined using a DAPI staining (section Number of cells) at the end of culture. The number of cells differed substantially within the experimental groups for both conditions (Figure 9). However no statistical differences between the statically and dynamically cultured groups were found (p = 0.5614). When plotting the stress in each microtissue against the number of cells, an increasing linear relationship was found for both the statically (R 2 =0.55) and dynamically (R 2 =0.62) cultured groups (Figure 10).
DISCUSSION
In this study it was investigated how long-term exposure to dynamic loading will influence cell traction-induced prestress in 3D microtissues. To facilitate this, a microfabricated tissue gauge platform was combined with the commercially available Flexcell system to enable 24 h cyclic stretching of the microtissues.
Setup Validation
First, the developed system was validated by measuring the actual microtissue strains with DIC. As already stated by Colombo et al. (2008), the applied Flexcell strains do not necessarily translate one-to-one into actual microtissue strains. On average, the actual microtissue strain was one-third of the applied Flexcell strain. Possible reasons for this are two-fold: first, the addition of a PDMS layer to the Flexcell membrane makes the entire base of the µFlex-TUGs stiffer, so that the same vacuum magnitude yields less membrane displacement. Second, the microtissues are connected to compliant microposts, which, in contrast to for example Foolen et al. (2012), bend when the cyclic stretch is applied, making the strain in the microtissues even smaller.
Cell Traction Forces Are a Mechanostat for Tissue Prestress
After validation of the setup, experiments were performed to determine the effect of dynamic loading on cell traction-mediated prestress. In the first 24 h, all groups were cultured statically to ensure microtissue formation. The prestress after 24 h was similar for all four groups. This confirms that merely culturing the microtissues in a different setup (µFlex-TUG or static µTUGs) without applying cyclic stretch did not affect the prestress. For static controls, the prestress remains constant over the entire experiment. Upon dynamic culture, the prestress increased roughly two-fold compared to the static controls. This prestress increase is entirely caused by cell traction forces, as ROCK-inhibition almost completely diminishes the prestress. In fact, both the statically and dynamically cultured microtissues returned to similar values of prestress (0.25 and 0.34 kPa, respectively) after ROCK-inhibition. This passive residual stress contributes 19% to the total prestress in the static group and 14% in the dynamic group. When comparing the percentage contribution in the static group with that reported for statically cultured TE strips by Van Vlimmeren et al. (2012) (60%), our value is considerably lower. However, in the present study the culture times were considerably shorter, so very little matrix production could have taken place. Therefore, this discrepancy could be explained by the fact that the only residual matrix prestress is caused by the relatively compliant collagen gel, which is far less stiff compared to the fibrous collagenous matrix in TE strips.
After switching the dynamically cultured microtissues back to static conditions, or upon removal of the ROCK-inhibitor, the prestress in all groups is again normalized and comparable to the control group after 72 h. The typical timescales (less than 24 h) over which the prestress increases and decreases are compatible with the typical turnover rate of actin stress fibers, which is on the order of minutes (Peterson et al., 2004; Mbikou et al., 2006; Livne and Geiger, 2016). These results show that the cyclic load-induced elevated prestress levels, and the restoration to baseline levels after removal of the ROCK-inhibitor or the dynamic cue, are regulated only by the cell traction forces generated by the actin stress fibers. It appears that in these relatively immature microtissues, the actin stress fibers serve as a mechanostat to regulate the prestress in response to perturbations in the environment.
Microtissue Structure and Increasing Prestress With Cell Number
At the experimental timepoints, both statically and dynamically cultured microtissues were stained for actin stress fibers, collagen and cell nuclei. Figures 7B,C show that the collagen, and cellular actin fibers are present uninterruptedly between the four microposts. This means that all cell-generated forces can be translated within the tissue. Both the actin and collagen fibers are aligned in the longitudinal direction of the microtissues. This phenomenon is not surprising since this is the only constrained direction of the constructs.
The DAPI-stained microtissues affirm a homogeneous distribution of cells within the microtissue (Figure 7A). However, cell numbers among microtissues varied up to 10-fold (approximately 150-1,500). This is possibly caused by the fact that during seeding, the cells are centrifuged into 80 different microwells at once. This makes it impossible to control how many cells end up in each microtissue. Regardless of this fact, successful microtissue formation (attached to all four posts) could occur with such a varying number of cells per construct. The effect of the varying number of cells per microtissue is also noticeable in the microtissue prestress. Since prestress differences in this study are mainly caused by cell traction forces, larger cell numbers in microtissues cause higher prestress magnitudes. However, a two-fold increase in cell number, for instance, does not yield a two-fold increase in prestress. It appears that the cells cooperatively regulate the tissue prestress in response to altering cell numbers. This coincides with the findings by Canovi et al. (2016), who showed that cellular traction forces in 2D culture decrease with increasing confluency. Again in this case, cellular traction forces serve as a mechanostat for the total tissue prestress in response to changing numbers of cells.
Challenges and Future Directions
In these experiments, the microtissues were loaded uniaxially. Since many cardiovascular tissues are loaded in multiple directions, a biaxial loading regime as used in Foolen et al. (2012) would mimic the in vivo situation more closely. Moreover, this setup could enable testing prestress development in anisotropically organized tissues. In addition, it was currently not investigated how the dynamic loading might affect cellular morphology and phenotype. Regardless of the relatively short duration of the experiments, such morphological and phenotypic changes are known to occur, for instance under the influence of micropost stiffness (Kural and Billiar, 2016). A thorough investigation of morphology and/or phenotype could be conducted using this platform. Moreover, although the large range of cells per tissue can at first be perceived as a drawback of the current method, it could be utilized to investigate the effect of cell number on generated tissue force and prestress more thoroughly. For instance, a comprehensive analysis of the force or stress per cell could help to determine the cooperative behavior of cells in reaching a tensional homeostasis. Also, in the current experiments the applied microtissue strains were relatively low (3.01 ± 0.26%), whereas higher levels of strain would be more realistic for the native situation of many cardiovascular tissues. Additionally, tracking prestress levels in real time, instead of at discrete time points, can give more insight into the rate of change in prestress. Currently, it is unknown how prestress changes in between the discrete time points (hence the dotted lines in Figure 8). Finally, the microtissues were created using a collagen gel, supplemented with Matrigel. Such collagenous gels are subject to significant degradation and remodeling, which can alter the mechanical properties, and therefore the cellular response (Allison et al., 2009; Smithmyer et al., 2014). However, since collagen gels tend to be stable for culture times shorter than 1 week (Smithmyer et al., 2014), the degree of degradation is probably negligible in the current experiments, which only last 72 h. In that regard, little to no matrix production could occur during these experiments, which explains the relatively small contribution of the residual stress to the total tissue prestress. Yet, it is known that prolonged dynamic loading induces elevated levels of ECM production and cross-linking (Boerboom et al., 2008; van Geemen et al., 2013), which is expected to generate more prestress. It would therefore be interesting to increase the culture time of the experiments to study the effect of ECM-induced prestress.
Implications
The implications of increased cell traction-induced prestress under dynamic loading conditions and subsequent relatively quick recovery, are numerous. This phenomenon should for instance be considered when mechanically testing cardiovascular tissues. Rausch and Kuhl (2013) already reported that neglecting tissue prestress explained differences in reported stiffness values in literature of up to four orders of magnitude for the same types of tissues, showing the importance of accurate prestress quantification. Usually, these prestress measurements involve isolating the tissue in question from its native situation and determining the retraction (Chuong and Fung, 1986;Amini et al., 2012;Horny et al., 2013). However since this study shows that the prestress drops relatively quickly after removal of the dynamic cues, prestress measurements in these dynamically loaded cardiovascular tissues could yield different outcomes depending on the time of the measurement after excision.
Increased cell-induced prestress could also influence tissue-engineering therapies. Since these therapies rely on cell-seeded scaffolds, where the cells often have a contractile phenotype (Mol et al., 2006; Tara et al., 2015), substantial prestress magnitudes are to be expected. Introducing such a construct from a previously unloaded environment into a continuously dynamically loaded in vivo environment can lead to prestress-induced changes in the construct's mechanical behavior. This can lead to subsequent alterations in its in vivo functionality, determining the success or failure of the therapy.
Several factors associated with tissue-engineering strategies can influence these cell-generated prestress levels. For instance, the mechanical properties of the scaffold dictate the biomechanical behavior of the construct, thus also influence the mechanical strains experienced by the cells. A relatively compliant scaffold would induce larger cellular strains, and subsequent higher tissue prestress, whereas a stiff scaffold could shield the cells from the dynamic loads, yielding a lower prestress. Furthermore, temporal changes in scaffold degradation, ECM accumulation and organization will influence dynamic cues sensed by the cells, and hence the tissue prestress, which then again alters the functioning of the TE construct.
Conclusions
This study investigated how 24 h of dynamic loading influences cell traction-induced prestress in 3D microtissues. Toward this end, a setup to culture microtissues was combined with the commercially available Flexcell system to facilitate dynamic culture of the constructs. First, the setup was validated to determine the peak microtissue stretches during the experiment, after which the effect of the applied dynamic microtissue stretch on the generated prestress was quantified. After 24 h, the prestress increased significantly compared to static controls. However, after subsequent removal of the dynamic cue, the prestress again dropped to levels comparable to static controls. With the addition of a ROCK-inhibitor, the prestress in these microtissues vanished almost completely, confirming that the prestress in these microtissues can be completely attributed to the cellular traction forces. Finally, after removal of the ROCK-inhibitor, prestress magnitudes were restored to baseline levels. In conclusion, this study systematically and quantitatively investigated the effect of dynamic loading on cell traction-mediated tissue prestress. The results indicate that intrinsic tissue prestress is a highly controlled parameter, where the actin stress fibers serve as a mechanostat by regulating tissue prestress levels in response to perturbations in the environment. These findings can have important implications for the mechanical testing of native cardiovascular tissues and for tissue-engineering therapies.
DATA ACCESSIBILITY
All data and numerical code have been stored at SURFdrive, a personal cloud storage service for the Dutch education and research community, and are available upon request.
DATA AVAILABILITY
The datasets generated for this study are available on request to the corresponding author.
AUTHOR CONTRIBUTIONS
MvK, NK, SL, and CB conceptualized the idea behind this study. MvK and NK conducted the experiments and numerical simulations and analyzed the data. MvK wrote the manuscript. JF was involved in developing the experimental setup. SL and CB supervised the project. All authors reviewed the manuscript. | 7,688.8 | 2019-03-12T00:00:00.000 | [ "Biology", "Engineering" ] |
Evidence for the expression of vasorin in the human female reproductive tissues
Abstract Objective: To investigate the expression and localization of Vasorin (Vasn) in the human female reproductive system. Methods: The presence of Vasorin was evaluated by RT-PCR and immunoblotting analyses in patient-derived endometrial, myometrial and granulosa cell (GC) primary cultures. Immunostaining analyses were performed to detect Vasn localization in primary cultures and in ovarian and uterine tissues. Results: Vasn mRNA was detected in patient-derived endometrial, myometrial and GC primary cultures without significant differences at the transcript level. In contrast, immunoblotting analysis showed that Vasn protein levels were significantly higher in GCs than in proliferative endometrial stromal cells (ESCs) and myometrial cells. Immunohistochemistry performed on ovarian tissues revealed that Vasn was expressed in the GCs of ovarian follicles at different stages of development, with a higher immunostaining signal in mature ovarian follicles, such as the antral follicle, or on the surface of cumulus oophorus cells than in early-stage follicles. The immunostaining of uterine tissues showed that Vasn was expressed in the proliferative endometrial stroma, while it was significantly less expressed in the secretory endometrium. Conversely, no protein immunoreactivity was revealed in healthy myometrial tissue. Conclusions: Our results reveal the presence of Vasn in the ovary and the endometrium. The pattern of Vasn expression and distribution suggests that this protein may have a role in the regulation of processes such as folliculogenesis, oocyte maturation, and endometrial proliferation.
Introduction
Vasorin (Vasn) is a type I transmembrane glycoprotein of 673 amino acids; it is also known as Slit-like 2 (Slitl2) due to its strong structural similarities with the Slit family of proteins [1]. Vasn has been identified in numerous species such as rodents, zebrafish, and humans [2]. Although the high degree of structural similarity between the murine and human protein suggests a highly conserved function of Vasn throughout evolution, to date no human disease or phenotype has been directly associated with variants in the vasn gene [1].
According to recent evidence, Vasn seems to be a potential biomarker for nephropathy [3][4][5] and tumorigenesis. In adult human tissues, Vasn is expressed at the highest levels in the aorta, at intermediate levels in the kidney and placenta, and much less so in other tissues such as liver, brain, heart, lung, and skeletal muscle [6]. Ikeda et al. [6] first reported that Vasn modulates the vascular response to injury by counteracting Transforming growth factor beta (TGF-β) signaling in vivo. TGF-β plays an important role in vascular pathophysiology, regulating cell growth and differentiation, extracellular matrix accumulation and inflammation [7]. Moreover, TGF-β1 signaling contributes to the development of smooth muscle cells from embryonic stem cells [8]. It has been demonstrated that the Vasn extracellular domain can be released from the cell surface by metalloprotease ADAM17-mediated cleavage [9]. The soluble extracellular domain of Vasn (sVasn) negatively modulates TGF-β signaling by sequestering the ligand in a complex, thereby preventing ligand-receptor interactions and subsequent downstream signaling events such as SMAD-2/3 phosphorylation and collagen production [10].
Several reports have also shown Vasn overexpression in different cancer cells, such as hepatocellular carcinoma [11,12], breast cancer, and human gliomas, especially glioblastoma multiforme [13]. More recently, studies performed in human lung cancer tissues and cell lines revealed that the increase in Vasn expression was inversely associated with lung cancer patient survival [14]. In light of these data, the role of Vasn in the carcinogenesis process has been deeply investigated, and it has been shown that its expression promotes tumor progression [15]. Furthermore, the protein seems to be involved in escaping cell death, which represents a crucial event promoting tumor growth. In particular, Vasn is also referred to as ATIA (anti-TNFα-induced apoptosis) since it may act as an antiapoptotic factor by suppressing reactive oxygen species (ROS) production and protecting cells against TNFα- and hypoxia-induced apoptosis [14]. In addition to being expressed on the cell membrane and being secreted, Vasn is also able to translocate into the mitochondria, where it binds to thioredoxin-2 (TRX2) and suppresses ROS production [16].
The involvement of TGF-β superfamily members in the regulation of ovarian folliculogenesis and ovulation has been widely described [17][18][19]. However, the expression of the TGF-β inhibitory binding protein Vasn and its function in ovarian physiology and female fertility have been poorly investigated. In particular, the role of this protein in the reproductive tissues has only been investigated in a mouse model system by Rimon-Dahari et al. [20]. Their studies demonstrated that somatic cells of mouse ovarian follicles express the protein, whose expression is upregulated by Luteinizing Hormone. For the first time, that study presented Vasn as a new potential regulator of murine folliculogenesis, participating in the regulation of antral follicle survival and the establishment of the ovarian follicle pool. Interestingly, the presence of the protein was also highlighted in a proteomic analysis of human follicular fluid from fertile women [21]. All these promising data and the lack of information on the topic in human tissues prompted us to investigate Vasn expression in the human female reproductive system, using ovarian, endometrial, and myometrial tissues and the respective primary cultures.
Human female reproductive tissues collection
Human specimens were collected from healthy patients (mean age 41 ± 6 years), who signed informed consent for this study, at the Department of Obstetrics and Gynecology of Modena University Hospital. Myometrium, endometrium, and ovarian tissues were obtained from eight women who underwent myomectomy (n = 3), hysterectomy (n = 3) or ovariectomy (n = 3) for benign conditions. This study was approved by the Local Institutional Review Board.
Primary myometrial and endometrial stromal cell cultures
Myometrium and endometrium fragments were manually cut into small pieces, followed by incubation on a shaker at 37 °C for 3 to 5 h in DMEM containing 1% penicillin/streptomycin and 1 mg/mL collagenase type II. Sigma-Aldrich (St. Louis, MO, USA) supplied all reagents for tissue digestion. The digested tissues were filtered through a sterile 100 μm polyethylene mesh filter to remove undigested tissue and to obtain a single-cell suspension. To ensure the purity of the endometrial stromal culture, trypsinization was carried out to eliminate contaminating epithelial cells (Trypsin-EDTA 10X, GIBCO, Waltham, MA, USA). Immunostaining for the stromal marker Vimentin was performed to confirm the purity of the stromal preparation. Cells were cultured in DMEM medium supplemented with 10% fetal bovine serum (GIBCO), 1% glutamine (Sigma-Aldrich) and 1% Antibiotic-Antimycotic solution (GIBCO).
Tissue slice preparation and VENTANA immunohistochemistry
Tissue samples of ovary, endometrium and myometrium, obtained from the above healthy patients, were fixed in 4% paraformaldehyde/PBS (PFA) (Electron Microscopy Sciences, Hatfield, PA, USA) for 24 h, dehydrated and then paraffin-embedded. Sections of 4-5 µm thickness were investigated by immunohistochemical analysis. Immunohistochemistry was carried out on a fully automated VENTANA Benchmark XT (VENTANA Medical Systems; Roche Group, Tucson, AZ, USA) using the pre-diluted VENTANA anti-VASN rabbit polyclonal primary antibody (1:500, Thermo Fisher Scientific, Waltham, MA, USA, PA5-98236), together with the XT ULTRAVIEW DAB detection kit (VENTANA Medical Systems). Negative controls were performed by omitting the primary antibody and by replacing the primary antibody with isotype IgG. Nuclei were counterstained with Carazzi Hematoxylin (Merck KGaA, Milan, Italy). Pictures were acquired at 10X, 20X and 40X magnification using an Olympus BX53 optical microscope.
GCs isolation and primary cell culture
Patients included in this non-clinical study represent the average population of women of advanced maternal age (36 ± 5 years) undergoing in-vitro fertilization (IVF) at the Department of Obstetrics and Gynecology of Modena University Hospital. All patients (n = 20) consented to the donation of GCs, which would otherwise have been discarded.
hGCs were purified from follicular aspirates by centrifugation through a discontinuous Percoll (Amersham, Sweden) gradient [22], and plated in 35-mm dishes at a density of 8×10^5 cells/dish in DMEM medium supplemented with 5% fetal bovine serum, 1% glutamine and 1% antibiotic-antimycotic solution. The cells were used for experiments after 6 days of culture to eliminate the hormonal effects experienced by the patient during the pre-IVF in vivo stimulation protocol.
RNA extraction and RT-PCR analysis
Total RNA was extracted from proliferative ESCs, myometrial cells and GCs using TRI-Reagent (Sigma-Aldrich). RNA concentration and purity were determined using the Nanodrop ND-1000. Total RNA was reverse transcribed in a final volume of 20 μl using the High-Capacity RNA-to-cDNA™ Kit according to the manufacturer's instructions. Expression of human Vasn was evaluated by semiquantitative RT-PCR using DreamTaq Green PCR Master Mix (2X). Thermo Fisher Scientific supplied all reagents for RNA analysis. All reactions were carried out in triplicate using the following protocol: hot start for 2 min at 95 °C, followed by 35 cycles of 30 s at 95 °C, annealing for 30 s at 60 °C and extension for 1 min at 72 °C. For each PCR reaction, 15 μl samples were loaded on a 2% agarose gel. Ribosomal protein S7 (RpS7) was used to normalize the intensity of the amplified fragment bands.
Primers' specificity was confirmed by melting curve analysis; the sequences of the primers used were:
Statistical analyses
Statistical analyses were performed using one-way analysis of variance (ANOVA) followed by the Student-Newman-Keuls method to compare multiple groups. Values with p < 0.05 were considered statistically significant.
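A minimal Python sketch of the same analysis pattern is shown below; it uses a one-way ANOVA followed by Tukey's HSD as a commonly available stand-in for the Student-Newman-Keuls post-hoc test used in the study, and the data are synthetic placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic densitometric values (arbitrary units) for the three cell types.
gcs  = rng.normal(1.8, 0.2, size=3)   # granulosa cells
escs = rng.normal(1.0, 0.2, size=3)   # proliferative endometrial stromal cells
myo  = rng.normal(0.9, 0.2, size=3)   # myometrial cells

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(gcs, escs, myo)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post-hoc comparison (Tukey HSD as a stand-in for Student-Newman-Keuls).
posthoc = stats.tukey_hsd(gcs, escs, myo)
print(posthoc.pvalue)
```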
Vasn mRNA and protein expression in human primary cultures
Characterization of Vasn in human primary cultures may be a starting point for more in-depth future studies on the function of this protein in the ovary and uterus. As experimental models we used GCs isolated from human follicular fluid of women undergoing oocyte retrieval for an IVF protocol, as well as myometrial cells and proliferative ESCs isolated from healthy myometrium and endometrium, respectively. To obtain a pure endometrial stromal culture, trypsinization was carried out to eliminate contaminating epithelial cells. Cells from passages 0 and 1 were evaluated by immunofluorescence assay for the stromal marker Vimentin. At passage 0 both cell types were present: epithelial cells, which did not show staining for the stromal marker, and stromal cells, which were positive for Vimentin. The cells at passage 1 were all positive for the Vimentin marker, demonstrating the presence of stromal cells only (Figure 1A). Vasn mRNA expression levels were evaluated by RT-PCR analysis. Vasn mRNA was detected in all cell types without significant differences at the transcript level (Figure 1B). In contrast, western blot analysis showed that Vasn protein levels were significantly higher in GCs than in ESCs and myometrial cells, thereby indicating that Vasn expression is regulated at the post-transcriptional level (Figure 1C).
Localization of Vasn in primary cultures of GCs and ESCs
To obtain more information about the Vasn protein, we performed a localization analysis on the cellular models in which Vasn was most expressed, as demonstrated by the previous analysis. We stained GCs and proliferative ESCs with an anti-vasorin antibody and, as shown in Figure 2, Vasn was located on the cell surface of both cell types.
Vasn protein expression in human female reproductive tissues
Immunohistochemical staining of the protein was performed in both ovarian and uterine tissues. Regarding human ovarian tissue, the immunostaining revealed the presence of Vasn in follicles at different stages of development, including early-stage follicles. In particular, the immunoreactivity was detected at the level of pre-granulosa cells, as shown in Figure 3A'-A'', and it was also maintained during follicle maturation at the surface of GCs, as observed in follicles at an advanced stage of development (Figure 3B'-B'' and C'-C''). Interestingly, the analyses also revealed a high Vasn expression on the surface of cumulus oophorus cells surrounding the oocyte, thus suggesting a role for the protein during the whole process of follicle maturation. In contrast, no Vasn expression was observed at the level of theca cells in either antral or Graafian follicles.
The presence of the protein was also investigated in uterine tissues. In detail, we detected Vasn immunoreactivity only in the stromal compartment of both proliferative and secretory phase endometria, with a marked difference in the immunoreactivity signal, which was much higher in the proliferative stroma compared to the secretory one (Figure 4A'-A'' and B'-B''). Conversely, expression of the protein was observed neither in the proliferative nor in the secretory glandular epithelium, and no immunoreactivity for the protein was revealed in healthy myometrial tissue (Figure 4C'-C'').
Discussion
Our work aimed to investigate for the first time the expression of Vasn in the human female reproductive system. Immunohistochemical analyses performed in human ovarian tissue indicated that Vasn was expressed in the GCs of ovarian follicles at different stages of development. Interestingly, the immunostaining signal was much higher in mature ovarian follicles, such as the antral follicle, or on the surface of cumulus oophorus cells than in early-stage follicles.
In recent years, increasing interest has been devoted to the complex intraovarian control mechanisms that, together with other systemic signals, coordinate follicle recruitment, selection, and growth from the primordial stage through ovulation and corpus luteum formation. Several growth factors, many belonging to the TGF-β superfamily, are expressed by ovarian somatic cells and oocytes, and they act as intraovarian regulators of folliculogenesis [17]. Among these factors, activin, Growth differentiation factor-9 (GDF9), and Bone morphogenetic protein 15 (BMP15) are required for the early stages of folliculogenesis. Rimon-Dahari and colleagues reported that Vasn is expressed in mouse ovaries [20]. Specifically, they found expression of Vasn mRNA and protein in GCs of all follicular developmental stages: primordial, primary, secondary, antral, and preovulatory follicles.
Studies performed in bovine follicles proved that the proper development and function of ovarian follicles is maintained by continuous angiogenesis and that follicular vascularization appears to be crucial in achieving follicle dominance [23]. Furthermore, it has been demonstrated that GCs play an important role in this process by producing angiogenic factors [24]. Therefore, investigating how GCs contribute to the development of the blood vessel network during folliculogenesis represents an interesting study goal. In this regard, Vasn, which is known as an important angiogenic factor, may be expressed by GCs to promote and support follicular development and oocyte maturation.
The endometrium is mainly composed of two different cell types: epithelial cells and stromal cells. Stromal cells, displaying intrinsically migratory and proliferative characteristics, contribute to the regenerative capacity of the endometrium. During the follicular phase of the menstrual cycle, it has been observed that stromal cells proliferate, thus inducing a thickening of the endometrium. On the other hand, during the luteal phase, these cells respond to progesterone exposure and decrease their replication rate. By undergoing a process of differentiation, the stromal cells provide the proper environment for eventual embryo implantation [25]. We observed that Vasn was highly expressed in the proliferative endometrial stroma, and that it was significantly less expressed in the secretory endometrium. Therefore, we hypothesize that Vasn contributes to the proliferative process of stromal cells so that the endometrium reaches the right thickness for a possible pregnancy. In the transition from proliferative to secretory endometrium, cell proliferation is reduced in favor of the differentiation process, and this event would explain the observed downregulation of its expression.
Vasn expression, on the other hand, was absent in the myometrium, the middle layer of the uterine wall, consisting mainly of uterine smooth muscle cells. The myometrial compartment changes significantly during pregnancy, mainly due to muscle hypertrophy and an increase in lymphatic and blood vessels [26]. Therefore, it could be of interest to investigate a possible Vasn expression in the myometrium selectively during pregnancy.
In conclusion, our data show for the first time Vasn expression in human female reproductive tissues such as ovarian tissue and the endometrium. The in vitro analysis located the protein on the cell surface of GCs and ESCs, and further investigations are currently ongoing on primary cell cultures to establish the function and molecular mechanisms by which Vasn acts in the human female reproductive system. Identifying the role of Vasn may have important clinical relevance considering the already well-known involvement of many members of the TGF-β superfamily in the pathophysiology of the ovary and endometrium.
Figure 1. (a) Characterization of proliferative endometrial stromal cells by immunofluorescence assay for the stromal marker Vimentin (red). Scale bar = 20 µm. (b) Representative RT-PCR analysis of Vasn transcript expression in proliferative endometrial stromal cells, granulosa cells, and myometrial cells from three independent cell cultures. (c) Representative Western blot analysis of Vasn protein expression in proliferative endometrial stromal cells, granulosa cells, and myometrial cells. Densitometric absorbance values from three independent experiments were averaged (± SEM) and expressed as arbitrary units (a.u.). *p ≤ 0.001. ESCs = endometrial stromal cells; GCs = granulosa cells.
Figure 2. Localization of Vasn in primary cultures of granulosa and endometrial stromal cells by immunofluorescence staining with anti-vasorin (red) and with DAPI (blue). Scale bar = 20 µm.
Figure 3. Vasn immunoreactivity in the human ovarian tissue. (A–C) Histological images of human ovarian tissue stained with hematoxylin and eosin (H&E). Vasn expression was observed on the surface of pregranulosa cells (A'–A'', arrowhead) and was also maintained during follicle maturation at the surface of granulosa cells (B'–B'', arrowhead). High Vasn expression was also present on the surface of cumulus oophorus cells surrounding the oocyte (C'–C'', arrowhead). Scale bar = 200 µm.
Figure 4. Vasn immunoreactivity in normal proliferative- and secretory-phase endometrial tissue and normal myometrium. (A–C) Histological images of human uterine tissues stained with hematoxylin and eosin (H&E). Vasn expression was evident in the stromal cells of the proliferative endometrium (A'–A'', arrow). Vasn expression was extremely weak in the stroma of the secretory endometrium (B'–B'', arrow). Vasn expression was absent in the myometrium (C'–C''). Scale bar = 200 µm.
Western blot analysis. Cells were washed with PBS 1X, suspended in 150 μl of lysis buffer 1X (50 mM Tris-EDTA, 150 mM NaCl, 0.1% SDS, 0.5% DOC, 0.1% Triton X-100) supplemented with the Protease and Phosphatase Inhibitor Cocktail (Thermo Fisher Scientific). After incubation for 30 min on ice, protein concentration was measured by the BCA Protein Assay Kit (Thermo Fisher Scientific). LDS Sample Buffer (4X) and Sample Reducing Agent (10X) (Thermo Fisher Scientific) were added to the protein sample and boiled for 5 min at 100 °C for denaturing gel electrophoresis. Samples containing equal amounts of protein (40 μg) were run on a 4–12% SDS-polyacrylamide gel (SDS-PAGE) and transferred onto a nitrocellulose membrane with a 0.2 µm pore size (Thermo Fisher Scientific). Membranes were blocked for 2 h in 5% non-fat milk powder (Sigma-Aldrich) in PBS 1X with 0.05% [v/v] Tween® 20 Detergent (Sigma-Aldrich), and then incubated with the antibodies overnight at 4 °C. Primary antibodies used were: rabbit
"Medicine",
"Biology"
] |
Weighted Steklov Problem Under Nonresonance Conditions
abstract: We deal with the existence of weak solutions of the nonlinear problem −∆_p u + V|u|^{p−2}u = 0 in a bounded smooth domain Ω ⊂ R^N, subject to the boundary condition |∇u|^{p−2} ∂u/∂ν = f(x, u). Here V ∈ L^∞(Ω) may exhibit both signs, which leads to an extension of particular cases in the literature, and f is a Carathéodory function that satisfies some additional conditions. Finally, we prove, under and between nonresonance conditions, existence results for the problem.
Introduction
Let Ω be a bounded smooth domain in R^N with outward unit normal ν on the boundary ∂Ω. For a given number p > 1, a bounded function V on Ω, and a certain Carathéodory function f, we consider the following nonlinear problem with Steklov boundary condition (P_f), written out below, where −∆_p u = −div(|∇u|^{p−2}∇u). This work is mainly motivated by the study of asymmetric elliptic problems with sign-changing weights carried out in [10]. The problem was recently considered in [7] for the p-Laplacian operator (in the case V ≡ 0), where the existence of p-harmonic solutions was proved. In [5], the case V ≡ 1 was treated under and between the first two eigenvalues.
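In display form, the problem reads as follows (the (p−2)-exponents are the standard homogeneous normalization suggested by the abstract and are assumed here):

\[
(P_f)\qquad
\begin{cases}
-\Delta_p u + V(x)\,|u|^{p-2}u = 0 & \text{in } \Omega,\\[4pt]
|\nabla u|^{p-2}\,\dfrac{\partial u}{\partial \nu} = f(x,u) & \text{on } \partial\Omega .
\end{cases}
\]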
In the present paper, we adapt and extend the approach in [7] in order to derive our main results for a homogeneous perturbation of −∆_p, the p-Laplacian operator, which is a prototype of quasilinear differential operator. When V ≡ 0, it is known that (1.1) admits solutions; see e.g. [7], [3] and their references for the case p = 2. On the other hand, the existence of solutions for (1.1) can also be proved when one introduces a positive potential V. Allowing V to change sign makes the problem more interesting and challenging, since to ensure nontrivial solutions for (P_f), several facts have to be taken into consideration: the regularity of the domain and the lack of coercivity of the energy functional.
The paper is organized as follows. In Section 2, we review certain tools and established results relevant to our purpose. We thereby state properties of the first nonprincipal eigenvalue of an asymmetric Steklov problem with respect to its weights. In Section 3, we solve the problem under nonresonance conditions, namely conditions that involve not only a kind of nonresonance between the first two eigenvalues but also conditions below the first eigenvalue.
Relevant background
2.1. The functional framework
Let Ω ⊂ R^N be an open set. The p-Laplace operator (p > 1) is the partial differential operator which to every function u : Ω −→ R assigns the function ∆_p u(x) := div(|∇u(x)|^{p−2}∇u(x)), x ∈ Ω. (2.1) We simply write ∆ instead of ∆_2 and call the 2-Laplace operator simply the Laplace operator. Throughout this paper, Ω will be a bounded smooth domain of class C^{2,α}, where 0 < α < 1, with outward unit normal ν on the boundary ∂Ω. For a given p > 1, W^{1,p}(Ω) denotes the usual Sobolev space equipped with its norm ||·||. It is well known that (W^{1,p}(Ω), ||·||) is a separable and reflexive Banach space (see H. Brezis [6]). The value of any u ∈ W^{1,p}(Ω) on ∂Ω is to be understood in the sense of the trace, i.e., there is a unique linear and continuous operator γ from W^{1,p}(Ω) into L^p(∂Ω). We then consider the asymmetric Steklov eigenvalue problem (2.4), denoted (P_{V,m,n}) and recalled in display form below, where Ω ⊂ R^N is a bounded smooth domain of class C^{2,α}, 0 < α < 1, with outward unit normal ν on the boundary ∂Ω, and λ ∈ R is regarded as an eigenvalue. We assume that m, n ∈ C^α(∂Ω) for some 0 < α < 1. Finally, V is a given function in L^∞(Ω) which may change sign, and u = u^+ − u^−, where u^± := max{±u, 0}. To solve (2.4), the authors in [10] considered C^1 functionals on W^{1,p}(Ω) and introduced real parameters in order to bypass the coerciveness difficulties of the energy functional due to the possible sign changes of the potential V. In brief, these are the principal eigenvalues of (P_{V,m,m}) if and only if certain conditions, stated in [14], hold. It can therefore be seen that problem (P_{V,m,n}) has nontrivial, one-signed solutions under suitable assumptions (see details in [10]). Let ϕ_m and −ϕ_n be the corresponding one-signed eigenfunctions associated, respectively, with λ_1(V, m) and λ_1(V, n).
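For the reader's convenience, the asymmetric Steklov eigenvalue problem and the two functionals E_V and B_{m,n} used in [10] are recalled here in their usual form (this normalization is the standard one and is assumed throughout):

\[
(P_{V,m,n})\qquad
\begin{cases}
-\Delta_p u + V(x)\,|u|^{p-2}u = 0 & \text{in } \Omega,\\[4pt]
|\nabla u|^{p-2}\,\dfrac{\partial u}{\partial \nu} = \lambda\,\bigl(m(x)\,(u^+)^{p-1} - n(x)\,(u^-)^{p-1}\bigr) & \text{on } \partial\Omega ,
\end{cases}
\]
with
\[
E_V(u) \;=\; \int_\Omega |\nabla u|^{p}\,dx + \int_\Omega V(x)\,|u|^{p}\,dx ,
\qquad
B_{m,n}(u) \;=\; \int_{\partial\Omega} \bigl(m(x)\,(u^+)^{p} + n(x)\,(u^-)^{p}\bigr)\,d\sigma .
\]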
Remark 2.1.Since the boundary weights lie in C α (∂Ω), every solution of (2.4) belongs to C 1,α ( Ω), for 0 < α < 1 (see [12,14]).We note that if an eigenfunction u is positive in Ω, it is shown that u remains positive on ∂Ω (see the first part of the proof of Theorem 3.1 in [14]).Furthermore, one can state using Proposition 5.8 in [15] that if u changes sign in Ω then it is also a sign-changing function on ∂Ω.
As some facts break down when (at least) one of the values β(V, m) and β(V, n) vanishes, the authors in [10] proved that, in this case, there is still hope of obtaining existence of solutions for (P_{V,m,n}). Indeed: 1. There exist u_1 ≥ 0 and u_2 ≤ 0 in M_{m,n} such that E_V(u_1) < c(m, n, V) and E_V(u_2) < c(m, n, V).
Define
Continuity and monotonicity properties concerning c(m, n, V ) with respect to its first two arguments (boundary weights) are given in [10].
, we then assume in addition that β(V k , m k ) ≥ 0 for all k ∈ N and m − ≡ 0 (resp.β(V k , n k ) ≥ 0 for all k ∈ N and n − ≡ 0).Hence, the following relations hold: for at least one eigenfunction u associated to c(m, n, V ), then c(m, n, V ) > c( m, n, V ).
We now recall some basic results on the Nemytskii operator. Simple proofs of these facts can be found, for instance, in Kavian [11] or de Figueiredo [8].
On the Nemytskii operator
Let Ω be as in the beginning of Section 2 and let g : ∂Ω × R −→ R be a Carathéodory function, i.e.: • for every s ∈ R, the function x −→ g(x, s) is measurable on ∂Ω; • for a.e. x ∈ ∂Ω, the function s −→ g(x, s) is continuous in R.
In the case of a Carathéodory function, the assertion x ∈ ∂Ω is to be understood in the sense a.e. x ∈ ∂Ω. Let M be the set of all measurable functions u : ∂Ω −→ R.
In light of this proposition, a Carathéodory function g : ∂Ω × R −→ R defines an operator N_g : M −→ M, which is called the Nemytskii operator. The result below states sufficient conditions under which a Nemytskii operator maps an L^{q_1} space into another L^{q_2} space. Proposition 2.4. Assume g : ∂Ω × R −→ R is Carathéodory and the growth condition recalled after this proposition is satisfied. Then N_g(L^{p_1 r}(∂Ω)) ⊂ L^{p_1}(∂Ω). In addition, N_g is continuous from L^{p_1 r}(∂Ω) into L^{p_1}(∂Ω) and maps bounded sets into bounded sets.
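The growth condition in Proposition 2.4 is the usual polynomial bound; the exponents written here are an assumption chosen so as to be consistent with the conclusion N_g(L^{p_1 r}(∂Ω)) ⊂ L^{p_1}(∂Ω):

\[
|g(x,s)| \;\le\; a(x) + b\,|s|^{r}
\qquad \text{for a.e. } x \in \partial\Omega \text{ and all } s \in \mathbb{R},
\]
with a ∈ L^{p_1}(∂Ω), b ≥ 0 and r > 0. Indeed, if u ∈ L^{p_1 r}(∂Ω), then |u|^{r} ∈ L^{p_1}(∂Ω), so N_g u ∈ L^{p_1}(∂Ω).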
We now give some important results concerning the Nemytskii operator that will be used later.
Proposition 2.5.Suppose g : ∂Ω × R −→ R is Carathéodory and it satisfies the growth condition: where (2.16) Then we have: 1. the function G is Carathéodory and there exist is continuously Fréchet differentiable and Ψ ′ (u) = N G u for all u ∈ L p (∂Ω).
Assumptions and nonresonance results
The present article deals explicitly with a well-known type of problem, where Ω ⊂ R^N is a bounded smooth domain of class C^{2,α}, 0 < α < 1, with outward unit normal ν on the boundary ∂Ω.
The functions V (x) and f (x, s) satisfy the following conditions: for some 0 < r < 1.
We define the following functions and make the assumption that they have nontrivial positive parts. We also assume that (H_S) k_±, K_±, l_±, and L_± are in C^r(∂Ω) for some 0 < r < 1, and we note that the aforementioned limits hold uniformly with respect to x ∈ ∂Ω; that is, for every ε > 0, there exist a_ε ∈ L^{p′}(∂Ω) and b_ε ∈ L^1(∂Ω) such that the corresponding bounds hold for a.e. x ∈ ∂Ω and all s ∈ R. According to (H_S) and (H_1), we conclude that analogous bounds hold for a.e. x ∈ ∂Ω and all s ∈ R.
In addition, we require the above functions to satisfy where λ D 1 (V ), β(V, •), λ 1 (V, •) and c(•, •, V ) are related to the asymmetric Steklov problem (2.4).Remark 3.1.One easily checks from (H f ) and (H F ) that We state our first result concerning the strict monotonicity of λ 1 (V, •) as a principal positive eigenvalue of (P V,m,m ).
on ∂Ω (where this means that one has a non-strict inequality a.e. on ∂Ω and a strict inequality on a subset of positive measure), then λ_1(V, m_1) > λ_1(V, m_2).
Nonresonance between the first two eigenvalues
We study a nonresonance problem related to Steklov boundary conditions and, in addition, we deal with an indefinite weight as a new feature. To fix ideas, problem (P_f) can be found in [4,7], where particular cases of the weight were considered. Throughout this subsection, we gather the properties needed to apply a version of the classical "Mountain Pass Theorem" for a C^1 functional restricted to a C^1 manifold (see [1,8]). Our purpose is of course to obtain existence results for (P_f) and, by doing so, to extend some of the known results in [4,5,7]. In order to have things well defined in the context of the variational approach, we consider, for u ∈ W^{1,p}(Ω), the C^1 functional Φ (a standard form of which is recalled below, after the Mountain Pass Theorem), which yields the weak formulation of (3.1). It follows readily that the critical points of Φ are precisely the weak solutions of (P_f). So the search for solutions of (3.1) is transformed into the investigation of critical points of Φ, relying on standard arguments. For convenience, we recall a version of the well-known "Mountain Pass Theorem" in a useful and popular form (see [1]).
and that f satisfies the (PS) condition on M. Then c is a critical value of f.
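A natural candidate for the functional Φ introduced above, with F(x,s) = ∫_0^s f(x,t) dt, is the following standard Euler–Lagrange functional (given here as a sketch of the usual form for such problems; the growth assumption on f makes it well defined and of class C^1):

\[
\Phi(u) \;=\; \frac{1}{p}\Bigl(\int_\Omega |\nabla u|^{p}\,dx + \int_\Omega V(x)\,|u|^{p}\,dx\Bigr) \;-\; \int_{\partial\Omega} F(x,u)\,d\sigma ,
\qquad u \in W^{1,p}(\Omega),
\]
whose critical points satisfy the weak formulation
\[
\int_\Omega |\nabla u|^{p-2}\nabla u\cdot\nabla\varphi\,dx \;+\; \int_\Omega V(x)\,|u|^{p-2}u\,\varphi\,dx \;=\; \int_{\partial\Omega} f(x,u)\,\varphi\,d\sigma
\qquad \text{for all } \varphi \in W^{1,p}(\Omega).
\]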
We state our first main result (Theorem 3.1) as follows: assume that the hypotheses above, together with (H_4), are satisfied.
Then the problem (P f ) admits a solution in W 1,p (Ω).
We will use Proposition 3.2 for the proof of Theorem 3.1, and we start with the following result, which establishes the required Palais-Smale condition. Proposition 3.3. Φ satisfies the (PS) condition on W^{1,p}(Ω); that is, for any sequence (u_n) satisfying (3.10) with c a real constant and ε_n → 0, (u_n) admits a convergent subsequence.
Proof: The proof adopts the scheme in [7]. Let (u_n) be a Palais-Smale sequence, i.e., (3.10) is satisfied. Since W^{1,p}(Ω) is a reflexive Banach space, to prove that (u_n) has a convergent subsequence it suffices to prove its boundedness. To this end, let us assume by contradiction that (u_n) is unbounded, i.e., ||u_n|| → ∞, and set v_n = u_n/||u_n||. We now show that this is not the case, thus arriving at a contradiction.
As (v n ) is bounded in the same space W 1,p (Ω), one can find some v 0 in W 1,p (Ω) such that v n ⇀ v 0 in W 1,p (Ω) and v n → v 0 in L p (Ω) and then in L p (∂Ω).Using (H 3 ) with s = u n (x) and divide it by ||u n || p−1 , one deduces that f (x, u n )/||u n || p−1 is bounded in L p ′ (∂Ω) and then converges weakly to some f 0 .Rewriting the second inequality of (3.10) by setting ϕ = (v n − v 0 ), we reach Applying Hölder inequality and taking into account the fact that and Applying the (S + ) property stated in Lemma 3.1 below and Hölder inequality, one easily derives that (v n ) converges strongly to v 0 in W 1,p (Ω) with ||v 0 || = 1.From (3.11), we can write Based on (H f ) (see [9]), there exist α 1 and α 2 in L q (∂Ω) such that and almost for every x ∈ ∂Ω, (3.17) In view of (3.16) and since the values of α 1 (x) (resp.α 2 (x)) on {x ∈ ∂Ω : v 0 (x) ≤ 0} (resp.on {x ∈ ∂Ω : v 0 (x) ≥ 0}) are irrelevant, we follow [7] by assuming that Relying on Remark 2.1, we will distinguish the two cases where v 0 ≥ 0 a.e. on ∂Ω or v 0 changes sign on ∂Ω and prove that v 0 ≥ 0 almost everywhere on ∂Ω or v 0 changes sign on ∂Ω, both lead to a contradiction and thereby get expected conclusion.
2. Suppose now that v 0 changes sign on ∂Ω and still consider (3.15).Then v 0 verifies (3.15) which means that v 0 is a solution of the following Steklov problem Let us show that B α 1 ,α2 (v 0 ) = 0. Assume by contradiction that Repeating similar arguments from the proof of [10, Proposition 3.10], we reach a contradiction and one can infer c(α 1 , α 2 , V ) ≤ 1.Moreover, monotonicity of c(•, •, V ) together with (3.17) and (3.2) lead to Adapt ideas from the previous case, we have Let assume by contradiction that Therefore ) by the strict monotonicity of c(•, •, V ).We then reach a contradiction since we have established that c(α 1 , α 2 , V ) = c(K + , K−, V ).Finally, (3.23) reads as and then from Remark 3.1 and (3.17), which contradicts the strict inequality in (3.2) and put an end to the proof.
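The (S_+) property invoked above is, in its standard form for this operator (the statement of Lemma 3.1 is assumed here to be the usual one), the following: the operator A : W^{1,p}(Ω) → (W^{1,p}(Ω))^* defined by

\[
\langle A u, \varphi\rangle \;=\; \int_\Omega |\nabla u|^{p-2}\nabla u\cdot\nabla\varphi\,dx \;+\; \int_\Omega V(x)\,|u|^{p-2}u\,\varphi\,dx ,
\qquad u,\varphi \in W^{1,p}(\Omega),
\]
satisfies (S_+): whenever u_n ⇀ u in W^{1,p}(Ω) and lim sup_{n→∞} ⟨A u_n, u_n − u⟩ ≤ 0, one has u_n → u strongly in W^{1,p}(Ω). Since the embedding W^{1,p}(Ω) ↪ L^p(Ω) is compact, the V-term tends to zero along such a sequence, so the property reduces to the classical (S_+) property of the p-Laplacian part.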
We now turn to the study of the geometry of Φ and first look for directions along which Φ goes to −∞.
As in [1] and [7], we recall for simplicity, the meaning of (H F ) being holding uniformly with respect to x ∈ ∂Ω.That is for every ε > 0, there exists b ε ∈ L 1 (∂Ω) such that for almost every x ∈ ∂Ω and all s ∈ R. Let take r > 0 and get back to Φ to write As λ 1 (V, l + ) < 1, we just have to choose ε less than (1−λ1(V,l+))EV (δ+) There exists r 0 > 0 such that for all r ≥ r 0 and for all γ ∈ Γ r with Once Proposition 3.4 is proved, we can pick r ≥ r 0 and apply Proposition 3.2 by setting H = Γ r and f ≡ Φ in Proposition 3.2 to conclude on the solvability of (P f ).(3.28) Thus let r > r 0 and take γ ∈ Γ r .We now face the two cases that arise here that is either B L+,L− (γ(t)) > 0 for all t ∈ [0, 1] or there exists t 0 ∈ [0, 1] such that B L+,L− (γ(t 0 )) ≤ 0.
On the other hand which means that one can find some w 0 in γ([0, 1]) such that Now suppose rather that γ(t 0 ) = 0 in Ω. Hence we can normalize the path γ(t 0 ) and get as an admissible function for the definition of λ D 1 (V ) and write .
This leads to E V (γ(t 0 )) > 0 and we can conclude that As a consequence, imply that for all ε > 0, for a.e.x in ∂Ω and ∀s ∈ R.
(H_S) and (3.34) are satisfied, then the problem (P_f) has (at least) one solution in M_{g,g}.
Proof: Let us show first that the energy functional Φ is coercive. Indeed, assume by contradiction that there exists a sequence (u_n) in M_{g,g} such that
(where the norm ||·||_{W^{1,p}} is equivalent to the usual norm on W^{1,p}(Ω)) and for some constant K. One shows by contradiction that t_n := ... Indeed, assume by contradiction that (t_n) is bounded. We first deduce that ∫_{∂Ω} |u_n|^p dσ is bounded; in addition, we write from (3.7), and using (3.35), the corresponding estimate. Choosing ε > 0 such that λ_1(V, g + ε) > 1, it follows that v ∈ W^{1,p}_0(Ω), and v is therefore admissible for λ^D_1(V). This leads to an estimate which contradicts the assumption λ^D_1(V) > 0.
• In the remaining case, this leads to a contradiction with the assumption β(V, g) > 0, and we get the expected result, that is, Φ is coercive on M_{g,g}. Since Φ is sequentially weakly lower semicontinuous, Φ attains a minimum value, and this ends the proof. ✷
∂Ω 1 .
|v| p dσ > 0, we have ∂Ω |v n | p dσ > 0 and then ∂Ω |u n | p dσ > 0. Let us set s n := ∂Ω |u n | p dσ 1/p and show that s n → ∞.By contradiction, assume that the sequence (s n ) is bounded.Then one shows (using (3.39)) that it is so for Ω |∇u n | p dx but this, once again, contradicts (3.36) and the result follows.Now we define w n := u n s n and get ||w n || L p (∂Ω) = We show by contradiction that Ω |u n | p dx s p n is bounded and as ∂Ω g(x)|w n | p dσ ≤ 1 s p n −→ 0 when n → ∞, (3.43) we conclude by dividing (3.39) by s p n that Ω |∇w n | p dx is bounded andas a consequence (w n ) is a bounded sequence in W 1,p (Ω) and there exists w ∈ W 1,p (Ω) such that w n ⇀ w.By standard argument, one reaches w n → w in L p (Ω) ∩ L p (∂Ω) and write ||w|| L p (∂Ω) = 1 and B g,g (w) = 0 which mean that w can be seen as an admissible function in the definition of β(V, g).Furthermore, dividing (3.41) by s p n and passing to the limit, one gets lim inf n→∞ E V (w n ) ≤ 0 and consequentlyβ(V, g) ≤ E V (w) ≤ lim inf n→∞ E V (w n ) ≤ 0.(3.44) Define v n = u n t n and note that ||v n || L p (Ω) = 1.Dividing (3.39) by t p n , we can easily see that Ω |∇v n | p dx becomes bounded and then (v n ) is a bounded sequence in W 1,p (Ω) and by standard arguments, one derives that (v n ) converges weakly to some v in W 1,p (Ω) and v n → v in L p (Ω) ∩ L p (∂Ω).
x)|u_n|^p dx and ∫_{∂Ω} F(x, u_n) dσ bounded, and therefore ∫_Ω |∇u_n|^p dx is also bounded. Thus this contradicts (3.36) and we conclude that t_n = ∫_Ω |u_n|^p dx
"Mathematics"
] |
Sectorial Mertens and Mirsky formulae for imaginary quadratic number fields
We extend formulae of Mertens and Mirsky on the asymptotic behaviour of the standard Euler function to the Euler functions of principal rings of integers of imaginary quadratic number fields, giving versions in angular sectors and with congruences.
Introduction
Let K be a number field of degree n_K, with ring of integers O_K, number of real places r_1, number of complex conjugate places r_2, regulator R_K, class number h_K, number of units ω_K, discriminant D_K and Dedekind zeta function ζ_K (see for instance [Nar]). Let I_K be the semigroup of nonzero ideals of O_K, let ϕ_K : I_K → N be the Euler function of K, and let N : I_K → N be the norm, with ϕ_K(a) = ϕ_K(aO_K) and N(a) = N(aO_K) for every a ∈ O_K \ {0}. As usual, p below ranges over prime ideals in I_K. The functions O(·) below depend only on K.
Our first result (see Section 2) is a Mertens formula with congruences for number fields. Though probably well-known at least when m = O_K, we provide a proof for lack of reference (compare with [Gro, Satz 2], [Cos, §4.3], [PP1, Theo. 3.1]), since arguments of its proof will be useful for our next result. For every m ∈ I_K, let c_m = N(m) ∏_{p | m} (1 + 1/N(p)).
Theorem 1.1 For every m ∈ I_K, if n_K ≥ 2, then as x → +∞, we have
Assume in the remaining part of this introduction that K is imaginary quadratic and that O_K is principal. By Dirichlet's unit theorem, these assumptions are more or less necessary (besides K = Q) for the following sums to be well defined and finite.
We give in Section 3 a version in angular sectors of the Mertens formula given by Theorem 1.1, that will be needed in [PP3]. For all z ∈ C^×, θ ∈ ]0, 2π] and R ≥ 0, we consider the truncated angular sector C(z, θ, R) = {ρ e^{it} z : …}. It is important that the function O(·) in the following result is uniform in m, z and θ.
Theorem 1.2 Assume that K is imaginary quadratic with O_K principal. For all m ∈ I_K, z ∈ C^× and θ ∈ ]0, 2π], as x → +∞, we have
Lastly, we give a uniform asymptotic formula for the sum in angular sectors in C of angle θ of the products of two shifted Euler functions with congruences, that will be needed in [PP3]. When K = Q (the sectorial restriction is then meaningless), this formula is due to Mirsky [Mir, Thm. 9, Eq. (30)] without congruences, and to Fouvry [PP2, Appendix] with congruences. For simplicity, we give a version without congruences and without an error term in this introduction; see Section 4, Theorem 4.1 for the general statement.
Theorem 1.3 For all z ∈ C^×, θ ∈ ]0, 2π] and k ∈ O_K, as x → +∞, we have
Theorems 1.2 and 1.3 are used in [PP3] in order to study the correlations of pairs of complex logarithms of Z-lattice points in the complex line at various scalings, when the weights are defined by the Euler function, proving the existence of pair correlation functions. We prove in op. cit. that at the linear scaling, the pair correlations exhibit level repulsion, as it sometimes occurs in statistical physics. A geometric application is given in op. cit. to the pair correlation of the lengths of common perpendicular geodesic arcs from the maximal Margulis cusp neighborhood to itself in the Bianchi manifolds PSL_2(O_K)\H³_R.
Recall that I_K is the semigroup of nonzero (integral) ideals of the Dedekind ring O_K (with unit O_K). For all I, J ∈ I_K, we write J | I if I ⊂ J; we denote by (I, J) = I + J the greatest common ideal divisor of I and J, by [I, J] = I ∩ J the least common ideal multiple of I and J, and by IJ the product ideal of I and J.
We denote by N(I) = Card(O_K/I) the (absolute) norm of I ∈ I_K, which is completely multiplicative. The norm of a ∈ O_K \ {0} is N(a) = N(aO_K). It coincides with the (relative) norm N_{K/Q}(a) of a (see for instance [Nar]), and in particular is equal to |a|² if K is imaginary quadratic.
Recall that the Dedekind zeta function ζ_K : {s ∈ C : Re(s) > 1} → C of K is defined (see for instance [Nar, §7.1]) equivalently by its Dirichlet series over ideals or by its Euler product. We denote by ϕ_K : I_K → N the Euler function of K, defined (see for instance [Nar, page 13]) equivalently as recalled below. For every a ∈ O_K \ {0}, we define ϕ_K(a) = ϕ_K(aO_K). Note that the Euler function ϕ_K is multiplicative by the Chinese remainder theorem. We have the product formula recalled below, as checked by telescopic sum when a is a power of a prime ideal, and by multiplicativity. We denote by µ_K : I_K → Z the Möbius function of K, defined by µ_K(O_K) = 1, µ_K(a) = 0 if p² | a for some prime ideal p, and µ_K(a) = (−1)^m if a = p_1 . . . p_m for pairwise distinct prime ideals p_1, . . ., p_m and m ∈ N \ {0}.
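Explicitly, the classical expressions alluded to above are (see [Nar]):

\[
\zeta_K(s) \;=\; \sum_{\mathfrak a \in I_K} \frac{1}{N(\mathfrak a)^{s}}
\;=\; \prod_{\mathfrak p} \bigl(1 - N(\mathfrak p)^{-s}\bigr)^{-1}
\qquad (\operatorname{Re} s > 1),
\]
and
\[
\varphi_K(\mathfrak a) \;=\; \operatorname{Card}\bigl((O_K/\mathfrak a)^{\times}\bigr)
\;=\; N(\mathfrak a)\,\prod_{\mathfrak p \mid \mathfrak a}\Bigl(1 - \frac{1}{N(\mathfrak p)}\Bigr)
\qquad (\mathfrak a \in I_K).
\]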
For every a P O K ´t0u, we define µ K paq " µ K paO K q.We have (see for instance [Sha]) the Möbius inversion formula: for all f, g : In particular, since the norm is completely multiplicative and by Equation (2), we have Proof of Theorem 1.1.In this proof, all functions Op¨q depend only on K. Let Recall (see for instance [MO,Theo. 5]) that, as x Ñ `8, we have By Abel's summation formula, as y Ñ `8, we have Furthermore, we have This formula implies since Nppb, mqq ď Npmq that Let us denote by S m pxq the sum on the left hand side in the statement of Theorem 1.1.Note that by the Gauss lemma, for all m, b, c P I K , we have m | bc if and only if mpm, bq ´1 | c.Then by Equation ( 4), by the change of variable c " mpm, bq ´1a, by the complete multiplicativity of the norm, by Equation ( 7) with y " Nppb,mqqx Npbq Npmq , since Nppb, mqq ď Npmq, and by Equation ( 8), we have By decomposing a nonzero integral ideal b into powers of prime ideals, by the definition of the Möbius function, and by the Euler product formula for the Dedekind zeta function, we have Equations ( 9) and ( 5) hence imply Theorem 1.1.l
A sectorial Mertens formula
Assume in the remaining part of this paper that K is imaginary quadratic and that O K is principal (or equivalently factorial (UFD)).By Dirichlet's unit theorem, the group of units O K , whose order we denote by |O K |, is finite if and only if pr 1 , r 2 q is equal to p1, 0q or p0, 1q.This justifies our restriction, the case K " Q being well-known.With the notation of the beginning of the introduction, we then have (see for instance [Nar]) D K P t´4, ´8, ´3, ´7, ´11, ´19, ´43, ´67, ´163u, and Given a Z-lattice Λ in the Euclidean space C (that is, a discrete (free abelian) subgroup of pC, `q), we denote by covol Λ " VolpC{ Λq the area of a fundamental parallelogram F Λ for Λ and by diam Λ the diameter of F Λ .Note that every element m With the notation of Equation ( 1), note that for every z 1 P C ˆ, we have Proof of Theorem 1.2.Let z P C ˆ, θ P s0, 2πs and y ą 0. Since AreapCpz, θ, yqq " θ 2 y 2 , the standard Gauss counting argument, the finiteness of the number of imaginary quadratic number fields with class number 1, and the equality on the left of Formula (11) give ¯.
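The standard Gauss counting argument referred to here gives, up to a universal multiplicative constant in the error term (the exact error term of Formula (11) is assumed to be of this shape), the following estimate: for every Z-lattice Λ in C and all z ∈ C^×, θ ∈ ]0, 2π], y > 0,

\[
\operatorname{Card}\bigl(\Lambda \cap C(z,\theta,y)\bigr)
\;=\; \frac{\operatorname{Area}\bigl(C(z,\theta,y)\bigr)}{\operatorname{covol}\Lambda}
\;+\; \operatorname{O}\Bigl(1 + \frac{y\,\operatorname{diam}\Lambda}{\operatorname{covol}\Lambda}\Bigr)
\;=\; \frac{\theta\,y^{2}}{2\,\operatorname{covol}\Lambda}
\;+\; \operatorname{O}\Bigl(1 + \frac{y\,\operatorname{diam}\Lambda}{\operatorname{covol}\Lambda}\Bigr),
\]
since the boundary of the truncated sector has length O(y) and only the fundamental parallelograms meeting this boundary contribute to the error.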
Let us denote by S_{m,z,θ}(x) the sum on the left-hand side in the statement of Theorem 1.2. Then by Equation (4), we have
The proof then proceeds exactly as in the proof of Theorem 1.1. □
A sectorial Mirsky formula
We now give a uniform asymptotic formula for the sum in angular sectors of the products of shifted Euler functions with congruences.For all z P C ˆ, θ P s0, 2πs, k P O K , m P I K and x ě 1, let Theorem 4.1 Assume that K is imaginary quadratic with O K principal.There exists a universal constant C ą 0 such that for all k P O K and m P I K , there exists c m,k P s0, 1s such that for all z P C ˆ, θ P s0, 2πs and x ě 1, we have ˇˇSz,θ,k,mpxq We will prove Theorem 4.1 at the end of this Section after giving a number of Lemmas required for the proof.We fix k P O K and m " mO K P I K , and we define h " kO K , which is a possibly zero integral ideal.We start by giving the first definition and a simpler formula for the constant c m,k that appears in the statement of Theorem 4.1.We define and Lemma 4.2 The series in Equation (15) defining c m,k converges absolutely.We have c m,k ď 1 and c 1 m ą 0. Furthermore, we have where In the special case m " O K , Equation ( 16) becomes Theorem 1.3 in the introduction follows from Theorem 4.1 and the above computation.
Proof.Let us prove that uniformly in x ě 1, we have This implies, by taking x " 1, that the first claim of Lemma 4.2 is satisfied, since the Möbius function has values in t0, ˘1u.Let us denote by Z m,h pxq the above sum.Since N `pcpb, mq, mpb, cqq ˘ď Npmpb, cqq, we have Equation ( 19) follows, since there are only finitely many fields K satisfying the assumptions of Theorem 4.1.
The proof of Equation ( 16) that we now give is similar to Fouvry's proof of Equation ( 21) in [PP2,Appendix].
For every b P I K , let χ b : I K Ñ t0, 1u be the characteristic function of the set of elements c P I K such that pc, bq | h.Let us define a map ψ b : Note that the assertion pcpb, mq, mpb, cqq | b h is equivalent to the assertion For every b P I K , let χ b : I K Ñ t0, 1u be the characteristic function of the set of elements c P I K such that the above divisibility assertion is satisfied.Let us finally define a map C ˚: I K Ñ R (which depends on m and h) by By the absolute convergence property, Equation ( 15) then becomes In order to transform the series C ˚pbq defined by Formula ( 21) into an Eulerian product and in order to analyse it, we will use the following two lemmas.
Lemma 4.3 For every b P I K , the maps χ b , χ b and ψ b on I K are multiplicative.
Proof.We have ψ b pO K q " O K and χ b pO K q " χ b pO K q " 1.Let I, J P I K be coprime.The equality pIJ, bq " pI, bqpJ, bq and the fact that pI, bq and pJ, bq are coprime imply that χ b pIJq " χ b pIqχ b pJq.
In order to prove the multiplicativity of the map ψ b , we write Since I is coprime to pJ, bq and since J is coprime to pI, bq, we obtain as wanted the equality ψ b pIJq " ψ b pIq ψ b pJq.21) may be written as an Eulerian product By Equations ( 22) and ( 23), and by Lemma 4.4, we have This equation writes c m,k as a series N pbq 2 where f : I K Ñ R is a multiplicative function, which vanishes on the nontrivial powers of prime ideals.By Eulerian product, we have therefore proved Equation ( 16).
Let us now prove that 0 ď c m,k ď 1.Note that for every prime ideal p, we have In particular all the factors of the two products over p in Equation ( 16) belong to r0, 1s, hence 0 ď c m,k ď 1 Npmq ď 1.Let us finally prove that c 1 m ą 0. For every prime ideal p, let w p " κ m,h ppq κ 1 h ppq Nppp,mqq Nppq 2 . By Formula (17), if Nppq " 2, we have In particular 1 ´wp ‰ 0 if Nppq " 2. From the inequalities (24) and by Equation ( 16), we have Since there are only finitely many primes ideals p dividing m, the term on the right hand side is bounded from below by a positive constant c 1 m " min kPOK c m,k ą 0. This concludes the proof of Lemma 4.
Recall that a system of two congruences " n " α 0 mod α n " β 0 mod β with unknown n P O K , where α, β, α 0 , β 0 P O K and α, β ‰ 0, has a solution if and only if α 0 ´β0 " 0 mod pα, βq.Furthermore, if this congruence condition is satisfied, that is, if there exists n 0 , m 0 P O K such that α 0 ´β0 " βm 0 ´αn 0 , then n is a solution if and only if n ´α0 ´αn 0 P αO K X βO K " rα, βsO K .
This is equivalent to asking n to belong to the translate Λ α,β,α0,β0 " α 0 `αn 0 ` Λ α,β of the Z-lattice Λ α,β " rα, βsO K .Applying this with α " m pb,mq , β " c pb,cq , α 0 " 0 and β 0 " ´k Thus Equation ( 27) becomes, using Equation ( 12), Let b, c be as in the index of the first sum above.Using again the standard Gauss counting argument, using Formula (11) for the second equality and the equation Nprα, βsq " Npαq Npβq Nppα,βqq for the last equality, we have, uniformly in b, c, m P O K ´t0u, k P O K , z P C ˆ, θ P s0, 2πs and y ě 1, CardpΛ α,β,α0,β0 Using this with y " x |b| , which is at least 1 since |b| ď x, we have N `pmpb, cq, cpb, mqq ˘1{2 By Equation (19) (replacing therein x by x 2 ), completing the first sum of the above equation with the indices b P O K ´t0u such that |b| ą x introduces an error of the form Op 1 x q (uniformly in m P O K ´t0u, k P O K and x ě 1).A computation similar to the one done for Equation (19) gives that the second sum in Equation ( 30) is actually bounded by 1 |O K | 2 ζ K p 3 2 q 2 ζ K p2q, which is uniform since there are only finitely many such fields K.
By the definition of the constant c m,k in Equation ( 15), this proves Equation ( 26 Replacing f `by f ´gives the same minoration to S z,θ,k,m pxq, hence Theorem 4.1 follows.l Finally, the multiplicativity property χ b pIJq " χ b pIqχ b pJq of the function χ b is a consequence of the multiplicativity of the map ψ b and of the fact that ψ b pIq and ψ b pJq are coprime.l Lemma 4.4 For every prime ideal p and every b P I K , we have ψ b ppq " " p if p | b, pp, mq otherwise, and χ b ppq χ b ppq " 1 ô $ & % p | pb, hq or p ∤ b and pp, mq | h .Proof.The first formula follows from the definition of ψ b ppq (see Formula (20)) by considering the three cases ‚ p | b, ‚ p ∤ b and p | m, and ‚ p ∤ b and p ∤ m.The second formula follows from the first one, from the definitions of χ b ppq and χ b ppq, and from the fact that χ b ppq χ b ppq " 1 if and only if χ b ppq " χ b ppq " 1, by considering the two cases ‚ p | b and ‚ p ∤ b. l The arithmetic function c Þ Ñ µ K pcq χ b pcq χ b pcq Npψ b pcqq being multiplicative by Lemma 4.3 and the complete multiplicativity of the norm, and vanishing on the nontrivial powers of primes, the series defining C ˚pbq in Formula ( 2. lNow that we understand the constant c m,k , we continue towards the proof of Theorem 4.1 by giving an asymptotic formula for the sum Uniformly in m P I K , k P O K , z P C ˆ, θ P s0, 2πs and x ě 1, we haver Spxq " θ c m,k a |D K x 2 `Opxq .(26)Proof.For all nonzero elements a and b in the factorial ring O K , we denote by pa, bq any fixed choice of gcd of a and b, and by ra, bs any fixed choice of lcm of a and b.By Equation (4), for every a P O K ´t0u, we haveϕ K paq Npaq " 1 |O K | ÿ bPOK ´t0u : b | a µ K pbq Npbq .Let x ě 1. Applying twice this equality, since Npbq ď Npaq when b | a, we have by Fubini'c P O K ´t0u.The system of three congruences $ & % a " 0 mod m a " 0 mod b a " ´k mod c has a solution a P O K ´t0u such that |a| ď x if and only if there exists an element n P O K ´t0u such that a " bn, |n| ď x |b| and " bn " 0 mod m bn " ´k mod c .(28) When pb, cq ∤ k, no solution exists.Assume that pb, cq | k.Since b pb,cq is invertible modulo c pb,cq , we denote by b pb,cq a multiplicative inverse of b pb,cq modulo c pb,cq .Then the system of congruences (28) is equivalent to # b pb,mq n " 0 mod m pb,mq b pb,cq n " ´k pb,cq mod c pb,cq ô # n " 0 mod m pb,mq n " ´k pb,cq b pb,cq mod c pb,cq . | 4,477.2 | 2022-05-31T00:00:00.000 | [
"Mathematics"
] |